Paper metadata: DOI 10.1145/3331053.3335032 | arXiv:1905.05453 | https://arxiv.org/pdf/1905.05453v1.pdf
Multiservice UAVs for Emergency Tasks in Post-disaster Scenarios
F Malandrino
C Rottondi
C.-F Chiasserini
A Bianco
I Stavrakakis
UAVs are increasingly being employed to carry out surveillance, parcel delivery, communication-support and other specific tasks. Their equipment and mission plan are carefully selected to minimize the carried load and overall resource consumption. Typically, several single-task UAVs are dispatched to perform different missions. In certain cases, (part of) the geographical area of operation may be common to these single-task missions (such as those supporting post-disaster recovery) and it may be more efficient to have multiple tasks carried out as part of a single UAV mission using common or even additional specialized equipment. In this paper, we propose and investigate a joint planning of multitask missions leveraging a fleet of UAVs equipped with a standard set of accessories enabling heterogeneous tasks. To this end, an optimization problem is formulated yielding the optimal joint planning and deriving the resulting quality of the delivered tasks. In addition, a heuristic solution is developed for large-scale environments to cope with the increased complexity of the optimization framework. The developed joint planning of multitask missions is applied to a specific post-disaster recovery scenario of a flooding in the San Francisco area. The results show the effectiveness of the proposed solutions and the potential savings in the number of UAVs needed to carry out all the tasks with the required level of quality.
I. INTRODUCTION
The usage of Unmanned Aerial Vehicles (UAVs) to accomplish different kinds of tasks in post-disaster recovery scenarios has recently become the subject of investigation [1], [2]. Fleets of UAVs performing environmental monitoring [3], [4], dispatching medicines in rural/hardly accessible areas [5], or ensuring mobile connectivity [6] have already been envisioned. As a relevant example, UAVs are employed in Rwanda to deliver blood packs to 21 hospitals located in remote and isolated areas on a regular basis, even in the presence of harsh weather conditions [7].
However, such critical tasks have up to now been considered in isolation, thus requiring separate fleets with equipment, computational resources, and capabilities dimensioned for the specific mission to be performed [8]. In this study, we adopt a different approach and investigate a joint planning of multitask missions leveraging a fleet of UAVs equipped with a standard set of accessories (i.e., a video-monitoring system [3], a cellular communication interface and a mounting frame for parcel carriage), which enables them to perform heterogeneous tasks (i.e., medicine/blood delivery, aerial monitoring, and mobile coverage).
To show the benefits achieved by the usage of multi-purpose UAVs, we develop an optimization framework based on Integer Linear Programming (ILP) to optimally schedule their tasks in a post-disaster environment and apply it to a scenario of a simulated flooding event in the San Francisco area, where UAVs depart from one of the depots surrounding the emergency area and must return to a depot after completion of their task to change/recharge batteries. In addition, a heuristic solution is developed for larger scale environments to cope with the increased complexity of the optimization framework.
Results show that our heuristic provides a performance closely approaching the optimum. Furthermore, fully equipping all UAVs, e.g., providing all of them with cameras and radios, allows for a greater flexibility that outweighs the resulting lower payload available for parcel delivery missions, and further increases performance.
II. RELATED WORK
Beside military and security operations, the usage of UAVs is envisioned in a plethora of civil applications, ranging from agriculture to environmental monitoring and disaster management (see [9] for a thorough taxonomy and survey). In the following, we focus on the three types of tasks encompassed in the scenario under study. a) UAV placement for wireless coverage: UAVs can be leveraged in a number of wireless networking applications, e.g., complementing existing cellular systems by providing additional capacity where needed, or to ensure network coverage in emergency or disaster scenarios (see [10] for a comprehensive overview). Differently from the works in [10], our model jointly optimizes the scheduling of the UAV mobility and actions. b) UAV-based post-disaster monitoring systems: As overviewed in [11], fleets of UAVs operating as distributed processing systems can be adopted for various monitoring tasks, including, e.g., surveillance, object detection, movement tracking, support to navigation. A prototype of UAV-based architecture for sensing operations has been described in [12]. In our paper, we consider a conceptually similar set of hardware and software modules. c) UAVs for parcel delivery: Several recent studies have already investigated optimization strategies for drone-assisted delivery models (see [13] for a literature review). In particular, variations of the Travelling Salesman Problem leveraging UAVs for last-mile delivery have been introduced [14].

III. SYSTEM MODEL AND OPTIMIZATION PROBLEM

a) Space and time: Time is discretized into a set $\mathcal{K} = \{k\}$ of epochs, while space is discretized into a set $\mathcal{L} = \{l\}$ of locations. The notation we use is summarized in Tab. I. The distance between two locations $l_1, l_2$ is indicated as $v(l_1, l_2)$ (clearly, $v(l,l) = 0$). Some locations $\hat{\mathcal{L}} \subseteq \mathcal{L}$ host depots. Binary variables $\lambda(d,k,l)$ indicate whether UAV d is at location l in epoch k. Clearly, UAVs can only be in one location at a time and can only travel between locations closer than the maximum distance V UAVs can cover in an epoch. This translates into the following constraints:
$$\sum_{l \in \mathcal{L}} \lambda(d,k,l) = 1, \quad \forall d \in \mathcal{D}, k \in \mathcal{K}. \eqno(1)$$
$$\lambda(d,k,l) \le \sum_{m \in \mathcal{L}:\, v(m,l) \le V} \lambda(d,k-1,m), \quad \forall d \in \mathcal{D}, k \in \mathcal{K}, l \in \mathcal{L}. \eqno(2)$$
b) Payload: UAVs have a payload capacity C and can carry zero or more payload items $p \in \mathcal{P}$, each weighing w(p). Examples of payload items (payloads for short) are blood packs or cameras. Binary decision variables $\omega(d,k,p)$ express whether payload p is carried by UAV d at time k.
$$\sum_{p \in \mathcal{P}} w(p)\,\omega(d,k,p) \le C, \quad \forall d \in \mathcal{D}, k \in \mathcal{K}. \eqno(3)$$
UAV payload can only change at depot locations:
$$\omega(d,k,p) = \omega(d,k-1,p), \quad \forall d \in \mathcal{D}, k \in \mathcal{K}, p \in \mathcal{P} \colon L(d,k) \notin \hat{\mathcal{L}}. \eqno(4)$$
(4) implies that we do not account for the fact that some payloads, e.g., medicine packs, will be dropped somewhere as a part of the mission. This accounts for the worst-case event that one or more drops fail, due to a variety of potential reasons (e.g., ground conditions are not adequate for UAV landing): in such a case, UAVs must have enough energy to bring all payloads back, if need be. c) Energy and battery: Real variables β(d, k) express the battery level of UAV d at epoch k. Clearly, such variables shall be positive and can never exceed the battery capacity E, i.e.,
0 ≤ β(d, k) ≤ E, ∀d ∈ D, k ∈ K.(5)
Next, we need to account for power consumption:
$$\beta(d,k) \le \beta(d,k-1) - e\big(L(d,k-1), L(d,k)\big)\Big(W + \sum_{p \in \mathcal{P}} \omega(d,k,p)\,w(p)\Big), \quad \forall d \in \mathcal{D}, k \in \mathcal{K} \colon L(d,k) \notin \hat{\mathcal{L}}. \eqno(6)$$
In (6), the energy consumed at time k is given by the product between a factor $e(l_1,l_2)$, accounting for the distance between the locations, hence, for how far the UAV had to travel 2 , and the total weight of the UAV. Such a weight is given by the weight W of the UAV itself and the sum of the weights of the payload items it carries. Note that (6) does not hold at depot locations in $\hat{\mathcal{L}}$, as UAVs can recharge or swap their batteries therein. d) Delivery missions: Some payload items $\hat{\mathcal{P}} \subseteq \mathcal{P}$ must be delivered at certain locations and times. Specifically, parameters $f(p) \in \mathcal{L}$, $a(p) \in \mathcal{K}$, $b(p) \in \mathcal{K}$ indicate the target location (final point), as well as the earliest and latest times at which the delivery can take place. The following constraint imposes that all deliveries are carried out:
$$\sum_{d \in \mathcal{D}} \sum_{k=a(p)}^{b(p)} \omega(d,k,p)\,\lambda(d,k,f(p)) \ge 1, \quad \forall p \in \hat{\mathcal{P}}. \eqno(7)$$
(7) can be read as follows: there must be at least one epoch between a(p) and b(p) during which an UAV d visits the target location f (p) while carrying payload p. e) Additional missions: We consider a set M = {m} of additional missions, e.g., wireless network coverage and monitoring. For the purposes of such missions, we partition the topology into zones z ∈ Z, and express their demand for mission m at epoch k through parameters n(k, m, z), e.g., the traffic offered by the users 3 . Parameters q(l, m, z) express how well an UAV in location l can perform mission m for zone z, e.g., the quality of coverage it can provide. Furthermore, parameters r(m, p) ∈ {0, 1} express the fact that some payload items p, e.g., radios, are needed for mission m. Finally, parameters s(m) express how much data is generated by performing one unit of work in mission m.
The main decision to make is how long UAVs perform additional missions, and for the benefit of which zones. This is conveyed by variables µ(d, k, m, z) ∈ [0, 1], expressing the fraction of epoch k that UAV d uses to perform mission m for the benefit of zone z. The first constraint we need to impose is that UAVs do not perform missions that they are not equipped for:
µ(d, k, m, z) ≤ ω(d, k, p) ∀d ∈ D, k ∈ K, m ∈ M, p ∈ P : r(m, p) = 1, z ∈ Z. (8)
Also, we cannot exceed the need of zones:
$$\sum_{d \in \mathcal{D}} \mu(d,k,m,z)\,q(L(d),m,z) \le n(k,m,z), \quad \forall k \in \mathcal{K}, m \in \mathcal{M}, z \in \mathcal{Z}. \eqno(9)$$
Note that (9) also accounts for the quality with which UAVs at different locations can perform the missions.
Next, we need to ensure that all the data traffic generated by additional missions is transferred to the in-field deployed cellular network (denoted with Ω), so that it can be offloaded to the backbone network infrastructure. We model such transfer to happen in a multi-hop fashion, without store-carry-andforward. We have a set of parameters t(l 1 , l 2 ) expressing the throughput that can be achieved between UAVs staying at locations l 1 and l 2 . If location l is covered by a traditional network, then t(l, Ω) expresses the amount of traffic that can be delivered to such a network in an epoch. Decision variables τ (d 1 , d 2 , k) express the amount of data transferred from UAV d 1 to UAV d 2 at epoch k.
We need to impose a flow-like constraint, expressing that the incoming traffic to every UAV d, plus the one generated at d itself, must be transferred to either other UAVs or the traditional network:
$$\sum_{d' \in \mathcal{D}} \tau(d',d,k) + \sum_{m \in \mathcal{M}} \sum_{z \in \mathcal{Z}} \mu(d,k,m,z)\,q(L(d),m,z)\,s(m) = \sum_{d' \in \mathcal{D}} \tau(d,d',k) + \tau(d,\Omega,k), \quad \forall d \in \mathcal{D}, k \in \mathcal{K}. \eqno(10)$$
We also need to account for the fact that only UAVs with specific equipment, e.g., a cellular radio, can act as relays.
To this end, we add to the set of missions M an element called relay, ensure that it requires the radio payload (i.e., r(relay, radio) = 1), and then impose that only UAVs performing the relay mission act as relays:
τ (d 1 , d 2 , k) ≤ t(L(d 1 ), L(d 2 ))µ(d 1 , k, relay, ·), ∀d 1 ∈ D, d 2 ∈ D ∪ {Ω}, k ∈ K. (11)
In (11), the $\cdot$ symbol in lieu of a zone indicates that the relay mission is not tied to any particular zone. Also, (11) ensures that the maximum quantity of data that can be transferred, $t(L(d_1),L(d_2))$, is not exceeded. Finally, all traffic generated by all missions must make its way to Ω:
As a first step, we define the satisfaction σ(k, m, z) of zone z at epoch k for mission m. Such a value is the ratio between how much service the zone was provided and how much it needed; importantly, it is not defined with reference to epoch k alone, but also to the previous H ones. Given the σ(k, m, z) variables defined in (13), we can define the mission-wise satisfaction as the minimum satisfaction across all zones and epochs:
$$\sigma(m) = \min_{k \in \mathcal{K}} \min_{z \in \mathcal{Z}} \sigma(k,m,z), \quad \forall m \in \mathcal{M}.$$
Finally, we can define our objective as maximizing the minimum satisfaction across all missions:
$$\max \min_{m \in \mathcal{M}} \sigma(m). \eqno(15)$$
IV. HEURISTIC ALGORITHM

Focusing only on the blood/medicine delivery tasks, the problem described in Sec. III can be modelled as a Vehicle Routing Problem with Time Windows (VRPTW), which has been extensively studied in the literature (see [15] for a thorough overview on heuristic and meta-heuristic approaches for VRPTW). In light of this, here we present a heuristic algorithm aimed at tackling large instances of the considered post-disaster scenario, which builds upon the insertion method first proposed in [16]. To incorporate additional tasks such as monitoring and connectivity coverage, we leverage the multiobjective enhancement of the insertion approach described in [17].
The insertion heuristic aims at sequentially building the tours of each UAV by adding one delivery location at a time.
To do so, a graph is created where every delivery location $f(p) \in \mathcal{L} : p \in \hat{\mathcal{P}}$ is identified by a graph node l (an additional node $l^*$ is added to identify the UAV depot in $\hat{\mathcal{L}}$ 4 ) and arcs $(l,l')_g$ represent the routes $g \in G_{l,l'}$ connecting delivery locations l, l'. Note that we consider a set $G_{l,l'}$ of alternative routes for each location pair (i.e., every node pair is connected by $|G_{l,l'}|$ arcs). More specifically, between each two locations $l_1, l_2 \in \mathcal{L}$, the following routes are considered (a small enumeration sketch follows the list):
• the shortest path from l 1 to l 2 ; • all paths going from l 1 to an intermediate location l 3 and thence to l 2 , taking the shortest path between l 1 and l 3 and the one between l 3 and l 2 , provided that their length does not exceed twice that of the shortest path from l 1 to l 2 ; • all paths including two intermediate locations l 3 and l 4 , subject to the same aforementioned conditions.
Each arc is associated with multiple weights: $\psi(l,l')_g$ and $e(l,l')_g$ respectively express the time (in number of epochs) and energy spent by the UAV to travel from node l to l' along route $g \in G_{l,l'}$, whereas $c(l,l')_g$ and $\nu(l,l')_g$ respectively quantify the satisfaction level of coverage and monitoring tasks achieved by the UAV while travelling along route g from l to l'. As tour initialization criterion, the insertion of the delivery task with earliest deadline has been chosen among the criteria proposed in [16]. Then the algorithm iteratively operates as follows. Let $[l_0, l_1, \ldots, l_m]$ be the current route, with $l_0, l_m = l^*$. For each unserved delivery at $l \in \mathcal{L}_u$ (where $\mathcal{L}_u \subseteq \mathcal{L}$ is the set of delivery locations not yet inserted in any tour), the best insertion position $\hat{i}_l \in \{1, \ldots, m\}$ is evaluated by minimizing the function
$$\varphi_1(l_{i-1}, l, l_i) = \min_{g \in G_{l_{i-1},l},\, g' \in G_{l,l_i}} \Big[ (1-\alpha_1-\alpha_2)\big(\psi(l_{i-1},l)_g + \psi(l,l_i)_{g'} - \psi(l_{i-1},l_i)_{\hat g}\big) - \alpha_1\big(c(l_{i-1},l)_g + c(l,l_i)_{g'} - c(l_{i-1},l_i)_{\hat g}\big) - \alpha_2\big(\nu(l_{i-1},l)_g + \nu(l,l_i)_{g'} - \nu(l_{i-1},l_i)_{\hat g}\big) \Big]$$
, where $\hat g$ is the route from $l_{i-1}$ to $l_i$ included in the current tour and $\alpha_1, \alpha_2$ are weights such that $\alpha_1 + \alpha_2 \le 1$. The closer $\alpha_1$ (resp. $\alpha_2$) approaches 1, the more predominant the satisfaction of coverage (resp. monitoring) tasks becomes w.r.t. the minimization of the total duration of the tour. Note that, if inserting the route pair $g, g'$ in the tour leads the overall energy consumption to exceed the UAV battery capacity, or if the arrival epoch of the UAV at each delivery location does not meet the time window constraint of the corresponding delivery task, the insertion of the route pair $g, g'$ is considered as infeasible. Once the value $\hat{i}_l = \arg\min_{i \in \{1,\ldots,m\}} \varphi_1(l_{i-1}, l, l_i)$ has been found, in order to choose the best unserved delivery to be inserted in the tour, the function
$$\varphi_2(l_{\hat{i}_l-1}, l, l_{\hat{i}_l}) = \max_{g \in G_{l^*, l_{\hat{i}_l}}} \Big[ (1-\alpha_1-\alpha_2)\,\psi(l^*, l_{\hat{i}_l})_g - \alpha_1\, c(l^*, l_{\hat{i}_l})_g - \alpha_2\, \nu(l^*, l_{\hat{i}_l})_g \Big] - \varphi_1(l_{\hat{i}_l-1}, l, l_{\hat{i}_l})$$
is computed for every unserved delivery $l \in \mathcal{L}_u$. This function quantifies the savings obtained by adding delivery l to the current tour, as opposed to serving delivery l directly in a new, dedicated tour starting from the depot. If $\max_{l \in \mathcal{L}_u} \varphi_2(l_{\hat{i}_l-1}, l, l_{\hat{i}_l}) \ge 0$, then delivery $\bar{l} = \arg\max_{l \in \mathcal{L}_u} \varphi_2(l_{\hat{i}_l-1}, l, l_{\hat{i}_l})$ is added to the current tour and the insertion procedure is repeated from the start. Otherwise, a new tour is initialized. The algorithm ends when all the delivery tasks are inserted in a tour.
V. REFERENCE SCENARIO
As our reference scenario, we consider a flooding over San Francisco, depicted in Fig. 1 and simulated through the software Hazus [18]. Over the disaster area, we identify |L| = 40 locations and |Z| = 50 zones, with each zone reachable from an average of two locations. UAVs have to perform a total of |P| = 20 deliveries of blood or medicine packs, due at randomly-selected locations (the f-parameters) over a time window of 10 epochs for medicine packs and 5 epochs for blood packs (the a- and b-parameters).

UAVs can also perform |M| = 2 additional missions: i) providing network coverage for users escaping from the disaster, whose mobility is simulated through the MATSim simulator [19], as detailed in [20]; ii) video monitoring, e.g., to assess the level of the flooding in a certain area.

The quantity of needed service (the n-parameters) is determined as follows. For the coverage mission, the values computed in [20], based on the expected flow of vehicles, are used. For video monitoring, a subset of 50 randomly-selected zones is deemed to need the service, hence n(z) = 1, while all others have n(z) = 0. Coverage and monitoring missions require additional payloads, respectively, the software radio [21] and the camera system [3], each weighing 1 kg. The maximum throughput values achievable between any two locations, i.e., the t-parameters, are obtained with reference to LTE micro-cells through the methodology in [20]; furthermore, it is assumed that UAVs can communicate with the ordinary network from all locations.
We consider a set of UAVs of variable cardinality, whose features mimic those of lightweight Amazon UAVs [22]. Specifically, they have an empty weight of W = 4 kg and a maximum payload of C = 2.5 kg. They are equipped with a battery of capacity B = 200 Wh, and the energy consumed to fly between locations is e(l1, l2) = 3.125 Wh/km/kg. As a result, the range of an UAV carrying its maximum payload is around B/(e(C+W)) = 9.8 km. Interestingly, such a figure matches the 10-km range envisioned for lightweight UAVs in [23, Tab. 1]. Finally, we consider |K| = 20 epochs, each corresponding to 10 minutes, and a time horizon of H = 10 epochs.
VI. NUMERICAL ASSESSMENT

First, we seek to assess whether flexibility in the assignment of capabilities to drones, i.e., in deciding whether or not individual drones should carry a radio or a camera, translates into better performance. To this end, we consider the small-scale scenario represented by the shadowed area in Fig. 1 and solve the problem presented in Sec. III to the optimum through an off-the-shelf solver. We consider two cases: (i) "flexible", where the equipment of drones is chosen by the optimizer, and (ii) "fixed", where additional constraints impose that one third of drones only carry the radio, one third only carry the camera, and one third carry both.

To this end, Fig. 2(a) reports the fraction of the demand for coverage and monitoring that can be satisfied, as the number of available UAVs changes, and it clearly shows that flexibility results in substantially better performance. Interestingly, Fig. 2(b) shows that, in the flexible case, drones are much more likely to carry both cameras and radios; indeed, recalling that cameras and radios weigh 1 kg each and that the maximum payload is C = 2.5 kg, we can conclude that drones virtually always carry both. Additionally, as shown in Fig. 2(c), the global energy consumption is very similar in both cases: under the fixed strategy, the few drones equipped with cameras or radios are forced to take longer trips to provide a lower performance. This confirms our intuition that multiservice drones, equipped in a flexible manner, do indeed result in better performance.
Based on the results in Fig. 2, we now configure our heuristic, described in Sec. IV, to always equip drones with both cameras and radios and assess its performance against the optimum. In Fig. 3, we consider the large-scale scenario depicted in Fig. 1, and study how the heuristic performs under different parameter settings. More in detail, we consider three settings: α1 = 0, α2 = 0 (save time), α1 = 1, α2 = 0 (privilege coverage), α1 = 0, α2 = 1 (privilege monitoring). As reported in Fig. 3(a), privileging coverage or monitoring leads to similar overall performance in terms of service satisfaction, whereas saving time substantially lowers the amount of offered coverage and monitoring. The different heuristic settings show only minor differences in terms of payload (Fig. 3(b)), while the "save time" approach consumes substantially less energy than its counterparts, due to the shorter trips it results in.
Based on the above results, we now focus on the "privilege coverage" heuristic approach and compare its performance to the optimum, considering again the small-scale scenario depicted in Fig. 1. As we can see from Fig. 4(a), the performance yielded by the heuristic is remarkably close to the optimum, a significant fact given the heuristic's low complexity and high speed. It is also interesting to observe how the difference between the coverage and monitoring missions is smaller in the optimum than in the heuristic; indeed, when decisions are made in a greedy fashion, it is harder to achieve a perfect balance between coverage and monitoring. Fig. 4(b), presenting the average weight of UAV payloads and the breakdown thereof, provides an explanation for the performance difference we can see in Fig. 4(a). Under the optimum strategy, the payload carried by UAVs is always close to their capacity C; conversely, the heuristic tends to leave more free space. It follows that UAVs can perform more deliveries in the same mission, visiting more locations on the way. Interestingly, under the optimal strategy UAVs virtually always carry both cameras and radios, which validates our decision to equip all UAVs with both radio and camera in the heuristic approach. Fig. 4(c) shows the total quantity of used energy, expressed in battery charges. Such a value is slightly higher under the optimal strategy than under the heuristic, a confirmation that the heuristic's trips tend to be shorter and visit fewer locations, thus performing fewer coverage and monitoring missions.
VII. CONCLUSIONS
We addressed the challenging problem of jointly planning the missions of multitask UAVs and applied it to a post-disaster scenario. In such cases, tasks are expected to be associated with a common geographical area (i.e., the disaster area), hence UAVs carrying out such tasks would largely overlap geographically. We showed that assigning multiple, instead of single, tasks to UAVs can lead to savings in the number of UAVs required to carry out all the tasks, provided that the problem of jointly planning the multiple tasks is effectively addressed. To this end, we developed an optimization formulation and then a heuristic approach that effectively copes with the computational complexity posed by the scenario. Using a realistic model of a flooding in the San Francisco area and realistic parameters for the operational equipment and tasks, we showed that our heuristic is a good match for the optimum; moreover, the flexibility obtained by providing all UAVs with the same equipment translates into better performance.
Fig. 1. The reference topology we consider. Blue dots correspond to locations in L, while orange ones correspond to zones in Z. Blue lines connect locations between which UAVs can travel in one epoch; orange lines connect zones with the locations from which UAVs can provide coverage to them. The shadowed area corresponds to the small-scale topology we use for our comparison against the optimum.

Fig. 2. Small-scale scenario, optimal decisions: performance (a), payload (b), and used energy (c) yielded by flexible and fixed payload assignment strategies. Performance is normalized by the total demand, payload by the total capacity C, and used energy by the battery capacity E.

Fig. 3. Large-scale scenario: performance (a), payload (b), and used energy (c) yielded by the heuristic strategy under different parameter settings. Performance is normalized by the total demand, payload by the total capacity C, and used energy by the battery capacity E.

Fig. 4. Small-scale scenario: performance (a), payload (b), and used energy (c). Performance is normalized by the total demand, payload by the total capacity C, and used energy by the battery capacity E.
TABLE I: NOTATION

Symbol | Type | Meaning
a(p) ∈ K | parameter | Earliest epoch at which to deliver payload p
b(p) ∈ K | parameter | Latest epoch at which to deliver payload p
C | parameter | Payload capacity of UAVs
E | parameter | Battery capacity of UAVs
e(l1, l2) | parameter | Energy consumed when traveling between locations l1 and l2, per unit of weight
f(p) ∈ L | parameter | Location at which payload p shall be delivered
H | parameter | Horizon over which satisfaction is computed
K | set | Epochs
L | set | Locations
L̂ ⊆ L | set | Locations with depots
L(d, k) ∈ L | shorthand | Location of UAV d at epoch k
M | set | Non-delivery missions, e.g., coverage or monitoring
n(k, m, z) | parameter | Work for mission m needed by users in zone z at epoch k
q(l, m, z) | parameter | Work for mission m that an UAV at location l can perform for users in zone z, in one epoch k
r(m, p) ∈ {0, 1} | parameter | Whether payload p is necessary to perform mission m
s(m) | parameter | Data generated by performing one unit of work of mission m
P | set | Payload items
P̂ ⊆ P | set | Payload items to be delivered
t(l1, l2) | parameter | Traffic that can be transferred between locations l1 and l2, per epoch
V | parameter | Maximum distance an UAV can cover in one epoch
W | parameter | UAV weight
w(p) | parameter | Weight of payload p
v(l1, l2) | parameter | Distance between locations l1 and l2
Z | set | Zones
β(d, k) | real variable | Battery level of UAV d at epoch k
λ(d, k, l) | binary variable | Whether UAV d is in location l at epoch k
μ(d, k, m, z) ∈ [0, 1] | real variable | Fraction of epoch k that UAV d spends in mission m for zone z
σ(k, m, z) ∈ [0, 1] | real aux. variable | Satisfaction of users in zone z concerning mission m at epoch k
σ(m) ∈ [0, 1] | real aux. variable | Mission-wide satisfaction concerning mission m
τ(d1, d2, k) | real variable | Traffic transferred from UAV d1 to UAV d2 at epoch k
ω(d, k, p) | binary variable | Whether UAV d carries payload p at epoch k
The notation we use is summarized in Tab. I. Lower-case Greek letters indicate decision variables, lower-case Latin ones indicate parameters. Uppercase, calligraphic Latin letters indicate sets. Upper-case, regular Latin letters with indices indicate a specific element of the corresponding set, e.g., the location of an UAV. Upper-case, regular Latin letters without indices indicate design choices, e.g., UAV range, or system-wide parameters.
Note that e(l, l) > 0, i.e., energy is also consumed by hovering over the same location.
Notice that, for simplicity and without loss of generality, in this paper we only focus on uplink traffic.
For the sake of simplicity, we assume that a single depot is used for all drones, i.e., |L̂| = 1.
REFERENCES

[1] M. Erdelj, M. Król, and E. Natalizio, "Wireless sensor networks and multi-UAV systems for natural disaster management," Computer Networks, vol. 124, pp. 72-86, 2017.
[2] M. Erdelj, E. Natalizio, K. R. Chowdhury, and I. F. Akyildiz, "Help from the sky: Leveraging UAVs for disaster management," IEEE Pervasive Computing, vol. 16, no. 1, pp. 24-32, 2017.
[3] F. Kurz, D. Rosenbaum, J. Leitloff, O. Meynberg, and P. Reinartz, "Real time camera system for disaster and traffic monitoring," in International Conference on Sensors and Models in Photogrammetry and Remote Sensing, 2011.
[4] A. Saeed, A. Abdelkader, M. Khan, A. Neishaboori, K. A. Harras, and A. Mohamed, "On realistic target coverage by autonomous drones," arXiv:1702.03456, 2017.
[5] D. Bamburry, "Drones: Designed for product delivery," Design Management Review, vol. 26, no. 1, pp. 40-48, 2015.
[6] A. Fotouhi et al., "Survey on UAV cellular communications: Practical aspects, standardization advancements, regulation, and security challenges," IEEE Communications Surveys & Tutorials, 2019.
[7] E. Ackerman and E. Strickland, "Medical delivery drones take flight in East Africa," IEEE Spectrum, vol. 55, no. 1, pp. 34-35, 2018.
[8] J. Lee, "Optimization of a modular drone delivery system," in 2017 Annual IEEE International Systems Conference (SysCon). IEEE, 2017, pp. 1-8.
[9] A. Otto, N. Agatz, J. Campbell, B. Golden, and E. Pesch, "Optimization approaches for civil applications of unmanned aerial vehicles (UAVs) or aerial drones: A survey," Networks, vol. 72, no. 4, pp. 411-458, 2018.
[10] M. Mozaffari et al., "A tutorial on UAVs for wireless networks: Applications, challenges, and open problems," arXiv:1803.00680, 2018.
[11] G. Chmaj and H. Selvaraj, "Distributed processing applications for UAV/drones: A survey," in Progress in Systems Engineering, H. Selvaraj, D. Zydek, and G. Chmaj, Eds. Cham: Springer International Publishing, 2015, pp. 449-454.
[12] E. Yanmaz, S. Yahyanejad, B. Rinner, H. Hellwagner, and C. Bettstetter, "Drone networks: Communications, coordination, and sensing," Ad Hoc Networks, vol. 68, pp. 1-15, 2018.
[13] W. Yoo, E. Yu, and J. Jung, "Drone delivery: Factors affecting the publics attitude and intention to adopt," Telematics and Informatics, vol. 35, no. 6, pp. 1687-1700, 2018.
[14] C. C. Murray and A. G. Chu, "The flying sidekick traveling salesman problem: Optimization of drone-assisted parcel delivery," Transportation Research Part C: Emerging Technologies, vol. 54, pp. 86-109, 2015.
[15] J.-F. Cordeau et al., The VRP with time windows. Montréal: Groupe d'études et de recherche en analyse des décisions, 2000.
[16] M. M. Solomon, "Algorithms for the vehicle routing and scheduling problems with time window constraints," Oper. Research, vol. 35, no. 2, pp. 254-265, 1987.
[17] K. G. Zografos and K. N. Androutsopoulos, "A heuristic algorithm for solving hazardous materials distribution problems," European Journal of Operational Research, vol. 152, no. 2, pp. 507-519, 2004.
[18] U.S. FEMA, Hazus program. https://www.fema.gov/hazus.
[19] A. Horni, K. Nagel, and K. W. Axhausen, The multi-agent transport simulation MATSim. Ubiquity Press London, 2016.
[20] L. Chiaraviglio, L. Amorosi, F. Malandrino, C. F. Chiasserini, P. Dell'Olmo, and C. E. Casetti, "Optimal throughput management in UAV-based networks during disasters," in IEEE INFOCOM MiSARN Workshop, 2019.
[21] Ettus, N200 software radio datasheet.
[22] J. Xu, Design perspectives on delivery drones. RAND, 2017.
[23] J. K. Stolaroff, C. Samaras, E. R. O'Neill, A. Lubers, A. S. Mitchell, and D. Ceperley, "Energy use and life cycle greenhouse gas emissions of drones for commercial package delivery," Nature Communications, 2018.
Paper metadata: DOI 10.1016/j.jmaa.2022.126124 | arXiv:2109.01070 | https://arxiv.org/pdf/2109.01070v1.pdf
Self-shrinkers with bounded HA
2 Sep 2021
Zhen Wang
We study integral and pointwise bounds on the second fundamental form of properly immersed self-shrinkers with bounded HA. As applications, we discuss gap and compactness results for self-shrinkers.
Introduction
A hypersurface Σ ↪ R^{n+1} is said to be a self-shrinker if it is the time t = −1 slice of a mean curvature flow moving by rescalings, with $\Sigma_t = \sqrt{-t}\,\Sigma$, or equivalently if it satisfies the equation
$$H = \frac{\langle x, \mathbf{n}\rangle}{2},$$
where n and H denote the unit normal vector and the mean curvature, respectively. Self-shrinkers play an important role in the study of mean curvature flow, not least because they are models for type-I singularities of the flow by Huisken [8,9]. It is interesting to compare the HA tensor in mean curvature flow with the Ricci curvature in Ricci flow since they describe the corresponding metric evolution, respectively. In [2,10] Chen-Wang and Kotschwar-Munteanu-Wang showed the Ricci curvature blows up at the rate of type-I at the first finite singularity. In [13] Sesum proved the type-I blowup of mean curvature at the finite type-I singularity.
In [16] Li-Wang studied the flow with type-I mean curvature and confirmed the multiplicity-one conjecture in this case. The present paper follows the method of [18] and can be seen as an attempt to understand more about the asymptotic behaviour of self-shrinkers in terms of HA.
Let f = |x|²/4. By integral estimates and the Moser iteration we get the following pointwise growth estimate of the second fundamental form. Theorem 1.1. (Theorem 3.2) Let x : Σ^n → R^{n+1} be a properly immersed self-shrinker with sup_Σ |HA| ≤ K. Then for any p > max{n, 4} there exist positive constants $C = C\big(n, p, K, \int_\Sigma e^{-f}, \int_{B(0,r_0)\cap\Sigma} |A|^p\big)$, where r_0 = c(n, p)(1 + K), and a = a(n, p, K) such that
|A|(x) ≤ C(|x| + 1) a , ∀ x ∈ Σ,
i.e., the second fundamental form grows at most polynomially in the distance.
Based on the polynomial growth of volume and second fundamental form, we find that a selfshrinker with sufficiently small |HA| must be a hyperplane.
Theorem 1.2. (Corollary 4.2) Let x : Σ^n → R^{n+1} be a smooth properly embedded self-shrinker. There exists a constant $\varepsilon_n = \frac{1}{\sqrt{n}(n+5)^4}$ such that if sup_Σ |HA| ≤ ε_n then Σ is a hyperplane through 0.
By similar argument we find that a local energy bound implies a global energy bound.
Theorem 1.3. (Proposition 4.3) Let x : Σ^n → R^{n+1} be a properly immersed self-shrinker with n ≥ 4 and sup_Σ |HA| ≤ K. Then there exists $r_1 = c_n\sqrt{K}$ such that if $\int_{B(0,r_1)\cap\Sigma} |A|^n \le E$, then for any r > 0 we have $\int_{B(0,r)\cap\Sigma} |A|^n \le 3E\, e^{r^2/4}$.
By virtue of the energy estimate above and the ǫ-regularity from Li-Wang [15] we find that the space of properly embedded self-shrinkers with uniformly bounded entropy, uniformly bounded |HA| and uniformly bounded local energy is compact.

Theorem 1.4. (Theorem 4.4) Let {Σ_i^n} be a sequence of properly embedded self-shrinkers with n ≥ 4 normalized by $\int_{\Sigma_i} e^{-f} \le (4\pi)^{n/2}$. Assume that $\sup_i \sup_{\Sigma_i} |HA| \le K$ and $\sup_i \int_{B(0,r_1)\cap\Sigma_i} |A|^n < \infty$, where $r_1 = c_n\sqrt{K}$ is the positive constant in Proposition 4.3. Then a subsequence of {Σ_i} converges smoothly to a smooth properly embedded self-shrinker Σ_∞.

The organization of this paper is as follows. In Sect. 2 we recall some results on self-shrinkers and differential equations. In Sect. 3 we develop the L^p estimate of A and derive a pointwise estimate by the standard Moser iteration. In Sect. 4 we obtain the gap theorem using a weighted integral estimate and get the convergence result by ǫ-regularity.
Acknowledgements: The author would like to thank his advisor H. Z. Li for suggesting this problem. Z. Wang is very grateful to I. Khan and H. B. Fang for their insightful discussions.
Preliminaries
Let x : Σ^n → R^{n+1} be a hypersurface without boundary. Σ is called a self-shrinker if it satisfies $H = \frac{\langle x, \mathbf{n}\rangle}{2}$. Throughout this paper, we set the potential function f := |x|²/4. On a self-shrinker the following identities hold:
$$\Delta f - |\nabla f|^2 = \frac{n}{2} - f, \eqno(2.1)$$
$$HA + \nabla^2 f = \frac{1}{2}\, g, \eqno(2.2)$$
$$H^2 + \Delta f = \frac{n}{2}, \eqno(2.3)$$
$$|\nabla f|^2 + H^2 = f, \eqno(2.4)$$
$$\nabla^2 H - \nabla f \cdot \nabla A = \frac{1}{2} A - HA^2, \eqno(2.5)$$
$$\Delta A - \nabla f \cdot \nabla A = \Big(\frac{1}{2} - |A|^2\Big) A, \eqno(2.6)$$
$$\frac{1}{2}\Delta |A|^2 - \nabla f \cdot \nabla |A|^2 = |\nabla A|^2 + \Big(\frac{1}{2} - |A|^2\Big)|A|^2, \eqno(2.7)$$
$$\Delta H - \nabla f \cdot \nabla H = \Big(\frac{1}{2} - |A|^2\Big) H, \eqno(2.8)$$
$$\frac{1}{2}\Delta H^2 - \nabla f \cdot \nabla H^2 = |\nabla H|^2 + \Big(\frac{1}{2} - |A|^2\Big) H^2. \eqno(2.9)$$
From [3] one sees the equivalence of weighted volume finiteness, polynomial volume growth and properness of an immersed self-shrinker in Euclidean space.

Lemma 2.3. (Theorem 1.1 of [3]) Let Σ^n be a complete noncompact properly immersed self-shrinker in Euclidean space R^{n+1}. Then Σ has finite weighted volume $\mathrm{Vol}_f(\Sigma) = \int_\Sigma e^{-f}\, dv < +\infty$ and $\mathrm{Vol}(B(0,r)\cap\Sigma) \le C r^n$ for all r > 0, where C is a positive constant depending only on $\int_\Sigma e^{-f}\, dv$.

Provided bounded mean curvature, we can also get a volume ratio lower bound; see Lemma 3.5 in Li-Wang [15].

Lemma 2.4. ([15]) Let Σ^n ↪ R^{n+1} be a properly immersed hypersurface in B(x_0, r_0) with x_0 ∈ Σ and sup_Σ |H| ≤ Λ. Then for any s ∈ (0, r_0) we have
$$\frac{\mathrm{Vol}_\Sigma(B(x_0,s)\cap\Sigma)}{\omega_n s^n} \le e^{\Lambda r_0}\, \frac{\mathrm{Vol}_\Sigma(B(x_0,r_0)\cap\Sigma)}{\omega_n r_0^n}.$$
In particular, $\mathrm{Vol}(B(x_0,r)\cap\Sigma) \ge e^{-\Lambda r}\, \omega_n r^n$ for all r ∈ (0, r_0].
In order to obtain the growth rate of the second fundamental form from the L^p estimate we will apply the standard Moser iteration. Recall that the Michael-Simon inequality involves the mean curvature. Here we present a precise estimate for the elliptic case, derived from [12] and [14].
Lemma 2.5 (Moser iteration). Let Σ n ֒→ R n+1 be a hypersurface without boundary. Consider the differential inequality −∆u ≤ ϕu, u ≥ 0. Fix x 0 ∈ Σ and denote D r := B(x 0 , r) ∩ Σ.
Then for any r > 0, q > n 2 and β ≥ 2 there exists a positive constant C = C(n, q, β) such that
$$\|u\|_{L^\infty(D_{r/2})} \le C\, r^{-\frac{2n^2}{\beta}} \Big( \|\varphi\|_{L^q(D_r)}^{\frac{2q}{2q-n}} + \|H\|_{L^{n+2}(D_r)}^{n+2} \Big)^{\frac{n^2}{\beta}} \|u\|_{L^\beta(D_r)}.$$
Finally we recall some technical results on interior estimates and compactness of immersed hypersurfaces in R^{n+1}. Note that if Σ is a self-shrinker then $\{\Sigma_t = \sqrt{-t}\,\Sigma,\ -\tfrac{3}{2} \le t \le -\tfrac{1}{2}\}$ is a mean curvature flow, i.e., $\partial_t x = -H\mathbf{n}$.
We obtain some kind of ǫ−regularity from Corollary 3.11 and Theorem 3.7 in Li-Wang [15] and the interior estimates of Ecker and Huisken in [17].
Lemma 2.6 (ǫ-regularity). There exist constants ǫ = ǫ(n) > 0, δ = δ(n) > 0, η = η(n) > 0 and {D k (n, θ)} k≥1 satisfying the following properties. Let Σ n ֒→ R n+1 be a properly immersed self-shrinker satisfying sup Σ |H| ≤ Λ. If B(x 0 ,r)∩Σ |A| n ≤ ǫ
for some x 0 ∈ R n+1 and some 0 < r ≤ 1 Λ , then we have
sup B(x 0 ,r/2)∩Σ |A| ≤ 1 r ; sup B(x 0 ,r/32)∩Σt |A| ≤ 2 δr , ∀ t + 1 ∈ [− ηr 2 16 , ηr 2 16 ] ∩ [− 1 2 , 1 2 ]; sup B(x 0 , √ θR)∩Σ |∇ k A| ≤ 2D k (n, θ) δr , ∀ θ ∈ (0, 1 2 ], ∀ k ≥ 1, where R := min{ r 32 , √ nη 2 r, √ 2n}.
The following compactness result of mean curvature flow is well-known. See [1] for a detailed proof.
Lemma 2.7 (Compactness of mean curvature flow). Let {(Σ n i , x i (t)), −1 < t < 1} be a sequence of mean curvature flow properly immersed in B(0, R) ⊂ R n+1 . Suppose that sup B(0,R)∩Σ i,t |A|(·, t) ≤ Λ, ∀ t ∈ (−1, 1) for some Λ > 0. Then a subsequence of {(B(0, R)∩Σ i,t ), −1 < t < 1} converges in smooth topology to a smooth mean curvature flow {Σ ∞,t , −1 < t < 1} in B(0, R).
L p estimate and growth rate
Throughout the section we set
sup Σ |HA| ≤ K. Proposition 3.1 (L p estimate). Let x : Σ n → R n+1 be a properly immersed self-shrinker with sup Σ |HA| ≤ K.
Then for any p ≥ 4 there exist positive constants a = a(n, p, K) and $C = C\big(n, p, K, \int_\Sigma e^{-f}, \int_{B(0,r_0)\cap\Sigma} |A|^p\big)$, where r_0 = c(n, p)(1 + K), such that
$$\int_\Sigma |A|^p (|x|^2 + 1)^{-a} \le C.$$
Moreover, for any x ∈ Σ,
$$\int_{B(x,1)\cap\Sigma} |A|^p \le C(|x|^2 + 1)^a.$$
Proof. We always use c to denote a nonnegative constant depending only on n and p. For a > 0 and p > 1, integrating by parts we have
a Σ |∇f | 2 (f + 1) −a−1 |A| p φ = − Σ ∇f · ∇(f + 1) −a |A| p φ (3.1) = Σ ∆f (f + 1) −a |A| p φ + Σ ∇f · ∇|A| p · (f + 1) −a φ + Σ ∇f · ∇φ · (f + 1) −a |A| p .
Note that
a|∇f | 2 (f + 1) −a−1 − ∆f (f + 1) −a = a f − H 2 f + 1 + H 2 − n 2 (f + 1) −a .
Since H is bounded by n
r 0 = 8 n (1 + n 1/2 K)a 1/2 (3.2)
such that
a f − H 2 f + 1 + H 2 − n 2 ≥ a − n, ∀ |x| ≥ r 0 . Let φ(x) := η(|x| 2 ) be a cutoff where η : [0, ∞) → R is a nonnegative decreasing Lipschitz function. Thus ∇f · ∇φ = 4η ′ |∇f | 2 ≤ 0.
Moreover, for any r > 0 and 0 < δ < 1 fixed, let
η ≡ 1 on [0, r 2 ] ; η ≡ 0 on [4r 2 , ∞) ; |η ′ | ≤ 1 3δr 2 η 1−δ , which implies φ −1 |∇φ| 2 ≤ 16 9δ 2 r 2 φ 1−2δ . Back to (3.1), (a − n) Σ |A| p (f + 1) −a φ (3.3) ≤ Σ ∇f · ∇|A| p · (f + 1) −a φ + Σ 4η ′ |∇f | 2 (f + 1) −a |A| p + C 1 , where C 1 = C 1 (p) (3.4) := {|x|≤r 0 }∩Σ − a f − H 2 f + 1 − H 2 + n 2 + a − n (f + 1) −a |A| p φ ≤ {|x|≤r 0 }∩Σ a 1 + H 2 f + 1 − n 2 (f + 1) −a |A| p ≤ ca(1 + K) {|x|≤r 0 }∩Σ |A| p (f + 1) −a−1 < ∞.
Using (2.5) and integrating by parts we get
Σ ∇f · ∇|A| p · (f + 1) −a φ = p Σ 1 2 x, ∇A A|A| p−2 (f + 1) −a φ (3.5) ≤ p Σ ∇ 2 H + HA 2 − 1 2 A A|A| p−2 (f + 1) −a φ ≤ p Σ ∇ 2 H · A · |A| p−2 (f + 1) −a φ + (pK − p 2 ) Σ |A| p (f + 1) −a φ ≤ p(p − 1) Σ |∇H||∇A||A| p−2 (f + 1) −a φ + ap Σ |∇H||∇f ||A| p−1 (f + 1) −a−1 φ +p Σ |∇H||∇φ||A| p−1 (f + 1) −a + (pK − p 2 ) Σ |A| p (f + 1) −a φ.
Using the Caucuy-Schwarz inequality and the Young's inequality, we estiamte the right hand side of (3.5) above as follows:
p(p − 1) Σ |∇H||∇A||A| p−2 (f + 1) −a φ ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + cK Σ |∇A| 2 |A| p−4 (f + 1) −a φ, ap Σ |∇H||∇f ||A| p−1 (f + 1) −a−1 φ ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + ca 2 K Σ |∇f | 2 |A| p−2 (f + 1) −a−2 φ ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + ca 2 K Σ |A| p−2 (f + 1) −a−1 φ, ca 2 Σ |A| p−2 (f + 1) −a−1 φ = Σ |A| p−2 (f + 1) − p−2 p a · ca 2 (f + 1) − 2 p a−1 · φ ≤ c Σ |A| p (f + 1) −a φ + ca p Σ (f + 1) −a− p 2 φ, p Σ |∇H||∇φ||A| p−1 (f + 1) −a ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + cK Σ φ −1 |∇φ| 2 |A| p−2 (f + 1) −a ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + cK δ 2 r 2 Σ φ 1−2δ |A| p−2 (f + 1) −a , c δ 2 r 2 Σ φ 1−2δ |A| p−1 (f + 1) −a = Σ |A| p−1 φ p−1 p · c δ 2 r 2 φ 1−2pδ p · (f + 1) −a ≤ c Σ |A| p (f + 1) −a φ + c δ 2p r 2p Σ φ 1−2pδ (f + 1) −a .
Let δ = 1 4p so that 1 − 2pδ = 1 2 > 0. Then plugging the estimates above into (3.5) yidles
Σ ∇f · ∇|A| p · (f + 1) −a φ (3.6) ≤ cK −1 Σ |∇H| 2 |A| p (f + 1) −a φ + cK Σ |∇A| 2 |A| p−4 (f + 1) −a φ +cK Σ |A| p (f + 1) −a φ + c(a p + r −2p )K Σ (f + 1) −a .
Furthermore, using (2.9) we have
Σ |∇H| 2 |A| p (f + 1) −a φ = Σ 1 2 ∆H 2 − 1 2 ∇f · ∇H 2 + (|A| 2 − 1 2 )H 2 |A| p (f + 1) −a φ ≤ − p 2 Σ ∇H 2 · ∇A · A · |A| p−2 (f + 1) −a φ + a 2 Σ ∇H 2 · ∇f · |A| p (f + 1) −a−1 φ − 1 2 Σ ∇H 2 · ∇φ · |A| p (f + 1) −a − 1 2 Σ ∇f · ∇H 2 · |A| p (f + 1) −a φ +K 2 Σ |A| p (f + 1) −a φ.
Then,
Σ |∇H| 2 |A| p (f + 1) −a φ ≤ pK Σ |∇H||∇A||A| p−2 (f + 1) −a φ + (a + 1)K Σ |∇H||∇f ||A| p−1 (f + 1) −a φ +K Σ |∇H||∇φ||A| p−1 (f + 1) −a + K 2 Σ |A| p (f + 1) −a φ ≤ 1 4 Σ |∇H| 2 |A| p (f + 1) −a φ + p 2 K 2 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + 1 4 Σ |∇H| 2 |A| p (f + 1) −a φ + (a + 1) 2 K 2 Σ |∇f | 2 |A| p−2 (f + 1) −a φ + 1 4 Σ |∇H| 2 |A| p (f + 1) −a φ + K 2 Σ φ −1 |∇φ| 2 |A| p−2 (f + 1) −a +K 2 Σ |A| p (f + 1) −a φ.
Thus,
Σ |∇H| 2 |A| p (f + 1) −a φ (3.7) ≤ 4p 2 K 2 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + 4(a + 1) 2 K 2 Σ |A| p−2 (f + 1) −a+1 φ +cr −2 K 2 Σ φ 1−2δ |A| p−2 (f + 1) −a + 4K 2 Σ |A| p (f + 1) −a φ ≤ cK 2 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + 4K 2 Σ |A| p (f + 1) −a φ +cK 2 Σ |A| p (f + 1) −a φ + c(a + 1) p K 2 Σ (f + 1) −a+ p 2 +cK 2 Σ |A| p (f + 1) −a φ + cr −p K 2 Σ (f + 1) −a ≤ cK 2 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + cK 2 Σ |A| p (f + 1) −a φ +c (a + 1) p + r −p K 2 Σ (f + 1) −a+ p 2 .
On the other hand, for any p ≥ 4 by (2.7) we have
Σ |∇A| 2 |A| p−4 (f + 1) −a φ = Σ 1 2 ∆|A| 2 − 1 2 ∇f · ∇|A| 2 + (|A| 2 − 1 2 )|A| 2 |A| p−4 (f + 1) −a φ ≤ a 2 Σ ∇|A| 2 · ∇f · |A| p−4 (f + 1) −a−1 φ − 1 2 Σ ∇|A| 2 · ∇φ · |A| p−4 (f + 1) −a − 1 2 Σ ∇f · ∇|A| 2 · |A| p−4 (f + 1) −a φ + Σ |A| p (f + 1) −a φ.
Then,
Σ |∇A| 2 |A| p−4 (f + 1) −a φ ≤ (a + 1) Σ |∇A||∇f ||A| p−3 (f + 1) −a φ + Σ |∇A||∇φ||A| p−3 (f + 1) −a + Σ |A| p (f + 1) −a φ ≤ 1 4 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + (a + 1) 2 Σ |∇f | 2 |A| p−2 (f + 1) −a φ + 1 4 Σ |∇A| 2 |A| p−4 (f + 1) −a φ + Σ φ −1 |∇φ| 2 |A| p−2 (f + 1) −a + Σ |A| p (f + 1) −a φ, Σ |∇A| 2 |A| p−4 (f + 1) −a φ (3.8) ≤ 2(a + 1) 2 Σ |A| p−2 (f + 1) −a+1 φ + cr −2 Σ φ 1−2δ |A| p−2 (f + 1) −a + Σ |A| p (f + 1) −a φ = Σ |A| p−2 (f + 1) − p−2 p a · 2(a + 1) 2 ( |x| 2 4 + 1) − 2 p a+1 · φ + Σ |A| p−2 φ p−2 p · cr −2 φ 2(1−pδ) p · (f + 1) −a + Σ |A| p (f + 1) −a φ ≤ c Σ |A| p (f + 1) −a φ + c(a + 1) p Σ (f + 1) −a+ p 2 φ +c Σ |A| p (f + 1) −a φ + cr −p Σ (f + 1) −a φ 1−pδ + Σ |A| p (f + 1) −a φ ≤ c Σ |A| p (f + 1) −a φ + c (a + 1) p + r −p Σ (f + 1) −a+ p 2 .
Combining (3.6), (3.7) and (3.8) we conclude
Σ ∇f · ∇|A| p · (f + 1) −a φ ≤ cK Σ |∇A| 2 |A| p−4 (f + 1) −a φ + cK Σ |A| p (f + 1) −a φ +c r −2p + r −p + (a + 1) p K Σ (f + 1) −a+ p 2 ≤ cK Σ |A| p (f + 1) −a φ + c r −2p + r −p + (a + 1) p K Σ (f + 1) −a+ p 2 , which together with (3.3) implies (a − n − cK) Σ |A| p (f + 1) −a φ ≤ c r −2p + r −p + (a + 1) p K Σ (f + 1) −a+ p 2 + C 1 .
Recall the volume estimate in Lemma 2.3. Take a = n + p + cK + 1 so that the right hand side above makes sense. Recall the settings (3.2) and (3.4) one sees
r 0 ≤ c(1 + K), C 1 (p) ≤ c(1 + K) 2 {|x|≤r 0 }∩Σ |A| p .
Letting r → ∞ yields
Σ |A| p (f + 1) −a ≤ C(n, p, K) Σ (f + 1) −a+ p 2 + c(1 + K) 2 {|x|≤r 0 }∩Σ |A| p < ∞.
In particular, we restrict the integration on B(x 0 , 1) ∩ Σ for any x 0 ∈ Σ, then
(|x 0 | + 1) 2 4 + 1 −a B(x 0 ,1)∩Σ |A| p ≤ B(x 0 ,1)∩Σ |A| p (f + 1) −a φ ≤ C,
i.e.,
B(x 0 ,1)∩Σ |A| p ≤ C(|x 0 | 2 + 1) a ,
where a = a(n, p, K), C = C(n, p, K,
Σ e −f , B(0,r 0 )∩Σ |A| p ).
Theorem 3.2 (growth rate). Let x : Σ^n → R^{n+1} be a properly immersed self-shrinker with sup_Σ |HA| ≤ K. Then for any p > max{n, 4} there exist positive constants $C = C\big(n, p, K, \int_\Sigma e^{-f}, \int_{B(0,r_0)\cap\Sigma} |A|^p\big)$, where r_0 = c(n, p)(1 + K), and a = a(n, p, K) such that
|A|(x) ≤ C(|x| + 1) a , ∀ x ∈ Σ,
i.e., the second fundamental form grows at most polynomially in the distance.
Proof. Fix q > max{n/2, 2}. From (2.7) we know
∆|A| 2 = ∇f · ∇|A| 2 + 2|∇A| 2 + (1 − 2|A| 2 )|A| 2 ≥ − 1 2 |∇f | 2 + 1 − 2|A| 2 |A| 2 ≥ − |x| 2 8 + 2|A| 2 |A| 2 .
If we set ϕ := |x| 2 8 + 2|A| 2 , then
−∆|A| 2 ≤ ϕ|A| 2 .
Recall that sup Σ |H| ≤ c n K 1 2 . Fix x 0 ∈ Σ. Applying the standard Moser iteration Lemma 2.5 yields that for any β = n 2 < q sup B(x 0 ,1)∩Σ
|A| 2 ≤ C(n, q) ϕ 2q 2q−n L q (B(x 0 ,1)∩Σ) + H n+2 L n+2 (B(x 0 ,1)∩Σ) 2n |A| 2 L n 2 (B(x 0 ,1)∩Σ) ,
where by Lemma 2.3 and Proposition 3.1
ϕ L q (B(x 0 ,1)∩Σ) ≤ 1 8 |x| 2 L q (B(x 0 ,1)∩Σ) + 2 |A| 2 L q (B(x 0 ,1)∩Σ) ≤ C(|x 0 | 2 + 1) 1+ n 2q + C(|x 0 | 2 + 1) a ′ 2q , H n+2 L n+2 (B(x 0 ,1)∩Σ) ≤ CK n+2 2 (|x 0 | 2 + 1) n 2 , |A| 2 L n 2 (B(x 0 ,1)∩Σ) ≤ C(|x 0 | 2 + 1) 2a ′′ n .
Finally we conclude for any x ∈ Σ,
|A|(x) ≤ C(|x| + 1) a ,
for constants a = a(n, q, K), C = C(n, q, K, Σ e −f , C 1 (2q), C 1 (n)) = C(n, q, K, Σ e −f ,
B(0,r 0 )∩Σ |A| 2q ),
where r 0 = c(n, q)(1 + K). So is Theorem 3.2 proved.
Gap and compactness theorems
Since Lemma 2.3 and Theorem 3.2 show the polynomial growth, we can now consider global integrations with the natural weight e^{-f}, which leads to simpler calculations. As in the previous section, we set sup_Σ |HA| ≤ K.
We will use the notation ∆_f = ∆ − ∇f·∇, which is self-adjoint with respect to the weighted volume e^{-f} dv.

Theorem 4.1 (gap theorem). Let x : Σ^n → R^{n+1} be a properly immersed self-shrinker. If $\sup_\Sigma |HA| \le \frac{1}{\sqrt{n}(n+5)^4}$, then A ≡ 0.

Proof. By virtue of (2.3) and (2.4), we have
$$\int_\Sigma \Big(f - \frac{n}{2}\Big)|A|^p e^{-f} = \int_\Sigma \big(|\nabla f|^2 - \Delta f\big)|A|^p e^{-f} = \int_\Sigma \Delta(e^{-f})\,|A|^p = -\int_\Sigma \nabla(e^{-f}) \cdot \nabla|A|^p = \int_\Sigma \nabla f \cdot \nabla|A|^p\, e^{-f}. \eqno(4.1)$$
Note that sup Σ |HA| ≤ K. Using (2.5) and integrating by parts, we have
Σ ∇f · ∇|A| p e −f = p Σ ∇f · ∇A · A|A| p−2 e −f = p Σ ∇ 2 H + HA 2 − 1 2 A A|A| p−2 e −f ≤ p Σ ∇ 2 H · A|A| p−2 e −f + (pK − p 2 ) Σ |A| p e −f ≤ p(p − 1) Σ |∇H||∇A||A| p−2 e −f + p Σ |∇H||∇f ||A| p−1 e −f +(pK − p 2 ) Σ |A| p e −f .
By (2.4) and Schwarz's inequality,
Σ ∇f · ∇|A| p e −f (4.2) ≤ p(p − 1) Σ |∇H||∇A||A| p−2 e −f + 1 2 Σ |∇f | 2 |A| p e −f + p 2 2 Σ |∇H| 2 |A| p−2 e −f + (pK − p 2 ) Σ |A| p e −f ≤ p 2 (1 + √ n 2 ) Σ |∇H||∇A||A| p−2 e −f + 1 2 Σ |∇f | 2 |A| p e −f +(pK − p 2 ) Σ |A| p e −f ≤ p 2 (1 + √ n 2 )K Σ |∇A| 2 |A| p−4 e −f + 1 4 p 2 (1 + √ n 2 )K −1 Σ |∇H| 2 |A| p e −f + 1 2 Σ f |A| p e −f + (pK − p 2 ) Σ |A| p e −f .
Plugging (4.2) into (4.1) yields
1 2 Σ f |A| p e −f ≤ (pK + n 2 − p 2 ) Σ |A| p e −f + p 2 (1 + √ n 2 )K Σ |∇A| 2 |A| p−4 e −f (4.3) +p 2 (1 + √ n 2 )K −1 Σ |∇H| 2 |A| p e −f .
Furthermore, by (2.7) we get, for p ≥ 4,
$$\int_\Sigma |\nabla A|^2 |A|^{p-4} e^{-f} = \int_\Sigma \Big[\frac{1}{2}\Delta_f |A|^2 - \Big(\frac{1}{2} - |A|^2\Big)|A|^2\Big]|A|^{p-4} e^{-f} \le -\frac{1}{2}\int_\Sigma \nabla|A|^2 \cdot \nabla|A|^{p-4}\, e^{-f} + \int_\Sigma \Big(|A|^p - \frac{1}{2}|A|^{p-2}\Big) e^{-f} \le \int_\Sigma |A|^p e^{-f},$$
$$e^{r^2}\int_{\{f \ge 3b\}\cap\Sigma} |A|^p e^{-f} + \int_{\{f \le 3b\}\cap\Sigma} |A|^p \le (2e^{r^2} + 1)\int_{\{f \le 3b\}\cap\Sigma} |A|^p,$$
i.e.,
$$\int_{\{|x| \le r\}\cap\Sigma} |A|^p \le (2e^{r^2/4} + 1)\int_{\{|x| \le 2\sqrt{3b}\}\cap\Sigma} |A|^p \le 3e^{r^2/4}\int_{\{|x| \le 2\sqrt{3cK + 3(n-p)/2}\}\cap\Sigma} |A|^p.$$
Letting p = n yields the energy estimate we want.
Combining the volume estimate and the energy bound, we derive the following compactness theorem for self-shrinkers, which largely follows the techniques on minimal surfaces. Remark that here we only need a local energy bound instead of a global bound.

Theorem 4.4 (Compactness). Let {Σ_i^n} be a sequence of properly embedded self-shrinkers with n ≥ 4 normalized by $\int_{\Sigma_i} e^{-f} \le (4\pi)^{n/2}$. Assume that $\sup_i \sup_{\Sigma_i} |HA| \le K$ and $\sup_i \int_{B(0,r_1)\cap\Sigma_i} |A|^n < \infty$, where $r_1 = c_n\sqrt{K}$ is the positive constant in Proposition 4.3. Then a subsequence of {Σ_i} converges smoothly to a smooth properly embedded self-shrinker Σ_∞.

Proof. Fix B(0, ρ) ⊂ R^{n+1}. Assume that $\int_{B(0,r_1)\cap\Sigma_i} |A|^n \le E < \infty$. By Proposition 4.3 we can define the measures ν_i on B(0, ρ) by
$$\nu_i(U) := \int_{U \cap B(0,\rho) \cap \Sigma_i} |A|^n \le 3E\, e^{\rho^2/4}, \quad \forall\, U \subset B(0,\rho).$$
Then a subsequence converges weakly to a Radon measure ν with ν(B(0, ρ)) ≤ 3Ee ρ 2 /4 . We define the set
S ρ := {x ∈ B(0, ρ) | ν(x) ≥ ǫ},
where ǫ = ǫ(n) is the positive constant in Lemma 2.6, and we have the number estimate $\sharp S_\rho \le \frac{3E}{\epsilon}\, e^{\rho^2/4}$. For any x_0 ∈ B(0,ρ) \ S_ρ there exists some $r \in \big(0, \tfrac{1}{c_n\sqrt{K}}\big)$ such that B(x_0, r) ⊂ B(0,ρ) \ S_ρ with ν(B(x_0, r)) < ǫ. For i sufficiently large we have $\int_{B(x_0,r)\cap\Sigma_i} |A|^n \le \epsilon$, so Lemma 2.6 yields uniform bounds on |A| and its derivatives for Σ_i near x_0, where R = R(r, n) < r/32 and D_k = D_k(n). By Lemma 2.7 and a diagonal sequence argument we find a subsequence of {B(0,ρ)∩Σ_i} converges smoothly, away from S_ρ, to a properly embedded self-shrinker Σ_∞ in B(0,ρ). Furthermore, we find a subsequence of {Σ_i} converges in smooth topology, away from S := ∪_{ρ>0} S_ρ, to a properly embedded self-shrinker Σ_∞. By Lemma 2.3 and Lemma 2.4 the multiplicity of the convergence is bounded by N_0 = N_0(n, K) < ∞. Note that Σ_∞ is a minimal hypersurface with some conformal metric. Following the argument in the proof of Proposition 7.14 of [5] we see that Σ_∞ ∪ S is a smooth properly embedded self-shrinker and the convergence is also in Hausdorff distance.
Finally it boils down to the multiplicity-one convergence as in [6]. Suppose, for the sake of contradiction, that the convergence has multiplicity at least two, and let u_i denote the normalized height-difference between the top and bottom sheets. In fact {u_i} satisfies Lu_i = 0 up to higher order correction terms and converges on Σ_∞ \ S to a solution u with Lu = 0. Applying the foliation argument and local maximum principle in a cylindrical neighbourhood of a singularity y ∈ S, we see that u_i is bounded on a neighbourhood of y by a multiple of its supremum on the boundary. Hence u is bounded over each y ∈ S and then extends to a smooth positive solution on Σ_∞ ∪ S, which actually implies L-stability. However, there are no L-stable smooth properly embedded self-shrinkers according to Lemma 2.3 and Theorem 0.5 of [6]. See more details in [6, 20].
i.e.,
$$\int_\Sigma |\nabla A|^2 |A|^{p-4} e^{-f} \le \int_\Sigma |A|^p e^{-f}. \eqno(4.4)$$
Similarly, by (2.9) we get (4.5), which implies
$$\int_{\{f \ge 3b\}\cap\Sigma} |A|^p e^{-f} \le 2\int_{\{f \le 3b\}\cap\Sigma} |A|^p.$$
Moreover, for any r > 0,
$$\int_{\{|x| \le 2r\}\cap\Sigma} |A|^p = \int_{\{f \le r^2\}\cap\Sigma} |A|^p \le e^{r^2}\int_{\{f \ge 3b\}\cap\Sigma} |A|^p e^{-f} + \int_{\{f \le 3b\}\cap\Sigma} |A|^p.$$
Finally, combining (4.3), (4.4) and (4.5), we conclude. Now we take p = n + 4. Then $c \le 2\sqrt{n}(n+5)^4$. If $K \le \frac{1}{\sqrt{n}(n+5)^4}$, i.e., an upper bound which depends only on n, then the above inequality implies that A ≡ 0.
References

[1] B. L. Chen and L. Yin, "Uniqueness and pseudolocality theorems of the mean curvature flow," Comm. Anal. Geom. 15 (2007), no. 3, 435-490.
[2] X. X. Chen and B. Wang, "On the conditions to extend Ricci flow (III)," Int. Math. Res. Not. IMRN 2013, no. 10, 2349-2367.
[3] X. Cheng and D. T. Zhou, "Volume estimate about shrinkers," Proc. Amer. Math. Soc. 141 (2013), no. 2, 687-696.
[4] A. Cooper, "A characterization of the singular time of the mean curvature flow," Proc. Amer. Math. Soc. 139 (2011), no. 8, 2933-2942.
[5] T. H. Colding and W. P. Minicozzi II, A course in minimal surfaces, Graduate Studies in Mathematics, 121, American Mathematical Society, Providence, RI, 2011, xii+313 pp.
[6] T. H. Colding and W. P. Minicozzi II, "Smooth compactness of self-shrinkers," Comment. Math. Helv. 87 (2012), no. 2, 463-475.
[7] T. H. Colding and W. P. Minicozzi II, "Generic mean curvature flow I: generic singularities," Ann. of Math. (2) 175 (2012), no. 2, 755-833.
[8] G. Huisken, "Asymptotic behavior for singularities of the mean curvature flow," J. Differential Geom. 31 (1990), no. 1, 285-299.
[9] G. Huisken, "Local and global behaviour of hypersurfaces moving by mean curvature," in Partial differential equations on manifolds (Los Angeles, CA, 1990), 175-191, Proc. Sympos. Pure Math., 54, Part 1, Amer. Math. Soc., Providence, RI, 1993.
[10] B. Kotschwar, O. Munteanu, and J. P. Wang, "A local curvature estimate for the Ricci flow," J. Funct. Anal. 271 (2016), no. 9, 2604-2630.
[11] N. Q. Le and N. Sesum, "The mean curvature at the first singular time of the mean curvature flow," Ann. Inst. H. Poincaré Anal. Non Linéaire 27 (2010), no. 6, 1441-1459.
[12] N. Q. Le and N. Sesum, "On the extension of the mean curvature flow," Math. Z. 267, 583-604 (2011).
[13] N. Q. Le and N. Sesum, "Blow-up rate of the mean curvature during the mean curvature flow and a gap theorem for self-shrinkers," Commun. Anal. Geom. 19 (2011), no. 4, 633-659.
[14] P. Li, Lecture notes on geometry analysis, RIMGARC Lecture Notes Series 6, Seoul National University, 1993.
[15] H. Z. Li and B. Wang, "The extension problem of the mean curvature flow (I)," Invent. Math. 218, 721-777 (2019).
[16] H. Z. Li and B. Wang, "On Ilmanen's multiplicity-one conjecture for mean curvature flow with type-I mean curvature," arXiv:1811.08654.
. C Mantegazza, Lecture notes on mean curvature flow. Progress in Mathematics. 290Birkhäuser/Springer Basel AGxii+166 ppC. Mantegazza, Lecture notes on mean curvature flow. Progress in Mathematics, 290. Birkhäuser/Springer Basel AG, Basel, 2011. xii+166 pp.
The curvature of gradient Ricci solitons. O Munteanu, M T Wang, Reviewer: Bo Yang) 53C21 (53C25). 18O. Munteanu, M. T. Wang, The curvature of gradient Ricci solitons. Math. Res. Lett. 18 (2011), no. 6, 1051-1069. (Reviewer: Bo Yang) 53C21 (53C25).
Curvature tensor under the Ricci flow. N Sesum, Amer. J. Math. 1276N. Sesum, Curvature tensor under the Ricci flow. Amer. J. Math. 127 (2005), no. 6, 1315-1324.
Compactness of minimal hypersurfaces with bounded index. B Sharp, J. Differential Geom. 1062B.Sharp, Compactness of minimal hypersurfaces with bounded index. J. Differential Geom. 106 (2017), no. 2, 317-339.
On the conditions to extend Ricci flow(II). B Wang, Int. Math. Res. Not. IMRN. 201214B. Wang, On the conditions to extend Ricci flow(II). Int. Math. Res. Not. IMRN 2012, no. 14, 3192-3223.
| []
|
[
"Chaos and Correspondence in Classical and Quantum Hamiltonian Ratchets: A Heisenberg Approach",
"Chaos and Correspondence in Classical and Quantum Hamiltonian Ratchets: A Heisenberg Approach"
]
| [
"Jordan Pelc \nDepartment of Chemistry, and Centre for Quantum Information and Quantum Control\nChemical Physics Theory Group\nUniversity of Toronto\nM5S 3H6TorontoCanada\n",
"Jiangbin Gong \nDepartment of Physics\nCentre of Computational Science and Engineering\nNational University of Singapore\n117542Singapore\n\nNUS Graduate School for Integrative Sciences and Engineering\n117597Singapore\n",
"Paul Brumer \nDepartment of Chemistry, and Centre for Quantum Information and Quantum Control\nChemical Physics Theory Group\nUniversity of Toronto\nM5S 3H6TorontoCanada\n"
]
| [
"Department of Chemistry, and Centre for Quantum Information and Quantum Control\nChemical Physics Theory Group\nUniversity of Toronto\nM5S 3H6TorontoCanada",
"Department of Physics\nCentre of Computational Science and Engineering\nNational University of Singapore\n117542Singapore",
"NUS Graduate School for Integrative Sciences and Engineering\n117597Singapore",
"Department of Chemistry, and Centre for Quantum Information and Quantum Control\nChemical Physics Theory Group\nUniversity of Toronto\nM5S 3H6TorontoCanada"
]
| []
| Previous work [Gong and Brumer, Phys. Rev. Lett., 97, 240602 (2006)] motivates this study as to how asymmetry-driven quantum ratchet effects can persist despite a corresponding fully chaotic classical phase space. A simple perspective of ratchet dynamics, based on the Heisenberg picture, is introduced. We show that ratchet effects are in principle of common origin in classical and quantum mechanics, though full chaos suppresses these effects in the former but not necessarily the latter. The relationship between ratchet effects and coherent dynamical control is noted. | 10.1103/physreve.79.066207 | [
"https://arxiv.org/pdf/0906.1435v1.pdf"
]
| 38,224,327 | 0906.1435 | 3940afa0ddf7dced8164cf667f7adad3f61a19b4 |
Chaos and Correspondence in Classical and Quantum Hamiltonian Ratchets: A Heisenberg Approach
8 Jun 2009 (Dated: June 8, 2009)
Jordan Pelc
Department of Chemistry, and Centre for Quantum Information and Quantum Control
Chemical Physics Theory Group
University of Toronto
M5S 3H6TorontoCanada
Jiangbin Gong
Department of Physics
Centre of Computational Science and Engineering
National University of Singapore
117542Singapore
NUS Graduate School for Integrative Sciences and Engineering
117597Singapore
Paul Brumer
Department of Chemistry, and Centre for Quantum Information and Quantum Control
Chemical Physics Theory Group
University of Toronto
M5S 3H6TorontoCanada
Chaos and Correspondence in Classical and Quantum Hamiltonian Ratchets: A Heisenberg Approach
8 Jun 2009 (Dated: June 8, 2009). PACS numbers: 05.45.-a, 32.80.Qk, 05.60.Gg
Previous work [Gong and Brumer, Phys. Rev. Lett., 97, 240602 (2006)] motivates this study as to how asymmetry-driven quantum ratchet effects can persist despite a corresponding fully chaotic classical phase space. A simple perspective of ratchet dynamics, based on the Heisenberg picture, is introduced. We show that ratchet effects are in principle of common origin in classical and quantum mechanics, though full chaos suppresses these effects in the former but not necessarily the latter. The relationship between ratchet effects and coherent dynamical control is noted.
I. INTRODUCTION
Originally proposed by Smoluchowski [1] and Feynman [2], and motivated by an application to biological molecular motors [3], studies of ratchet transport, that is, asymmetrydriven directed transport without external bias, are now the subject of an expanded range of theoretical interest [4,5]. While earlier investigations depended on external noise to rationalize these directional effects, recent work has shown that they can persist even in its absence [6,7,8,9,10,11,12,13,14], thereby raising questions about the origin of transport in isolated Hamiltonian systems. Many studies have therefore focused on the relationship between ratchet dynamics and deterministic chaos (see, for instance, [7,8,10]), relating ratchet transport to the typical questions of chaology, including, naturally, the complex relationship between quantum systems and their corresponding chaotic classical counterparts. In the context of recent cold-atom testing of Hamiltonian ratchet transport in classically chaotic systems [15,16,17], investigations of ratchet transport are interesting both as a method of exploring quantum and classical transport properties as well as a means of addressing general questions in quantum and classical chaos.
It has been shown [9,10,11] that quantum ratchet transport is possible even when the corresponding classical dynamics is completely chaotic. In such a case, the classical system displays no appreciable current. Hence, these systems show a novel qualitative divergence between quantum and classical dynamical properties, motivating this study of the relationship between quantum and classical ratchet transport.
Below we show that ratchet effects emerge, both quantum mechanically and classically, via an asymmetry-induced distortion of the spatial distribution, leading to a net effective force. Classically, full chaos diminishes this distortion, and hence suppresses ratchet effects.
Quantum mechanically, by contrast, the distortion generally persists, except at very small values of the effective Planck constant.
Hamiltonian ratchet dynamics is also directly related to laser-induced coherent control of directional transport [18,19]. Symmetry-breaking schemes have been used in coherent control since its inception [20], and so studying quantum vs. classical ratchet transport also lends insight into quantum control scenarios. Recent results [19] suggest that such control, once thought to be exclusively quantum mechanical, is possible classically, as well. Quantum ratchet transport in the presence of full classical chaos, by contrast, is an excellent example of controlled transport that may not be possible in classical dynamics. As such, this topic is also of interest to two related, more general issues: quantum controllability of classically chaotic systems; and survival conditions for quantum control in the classical limit.
The case of quantum ratchet transport with full classical chaos discussed below further strengthens the view that quantum control of classically chaotic systems is often feasible [21,22]. Indeed, this interesting possibility has already attracted some interest, both theoretically and experimentally [23,24,25].
We consider here spatially-periodic quantum systems with Hamiltonian Ĥ = Ĥ_0(p̂) + V̂(q̂, t), where V̂(q̂, t) is a time-periodic operator representing an external potential imposed on the system, q̂ and p̂ are conjugate position and momentum operators, respectively, and operators are denoted by a circumflex. These systems display ratchet transport, that is, despite being initially distributed uniformly in space, having zero initial momentum, and being driven by a force without bias, they organize to show an increase in the current or the average momentum, denoted ⟨p⟩ below for both quantum and classical mechanics. The absence of a biased force means that upon averaging over all space, denoted by an overbar:
\overline{−∂V(q, t)/∂q} ≡ \overline{F(q, t)} = 0,   (1)
where V (q, t) and F (q, t) are the coordinate-space representation of the applied potential and force, respectively [26]. Significantly, this zero average, which is the standard definition of the absence of bias, is entirely independent of the structure or state of the physical system upon which F (q, t) acts. As such, as emphasized below, F (q, t) is conceptually distinct from the expectation value of a net force F (t) actually felt by an evolving system, which, of course, is a function of the system evolution. The significance of this distinction will become apparent in what follows.
While the discussion presented below is quite general, we continue to employ our modification of the kicked Harper paradigm [10,27,28] as an illustrative example and motivator of this study. The quantum modified Harper Hamiltonian is given by [29]
Ĥ = J cos(p̂) + K V̂_r(q̂) Σ_n δ(t − n);   (2)
V̂_r(q̂) = cos(2πq̂) + sin(4πq̂),   (3)
where the system potential is V̂(q̂, t) = K V̂_r(q̂) Σ_n δ(t − n), and we define F̂_r(q̂) ≡ −∂V̂_r(q̂)/∂q̂. Here t is the time, n is an integer, and J and K are system parameters. The associated q-space is periodic in [0, 1]. All system variables here should be understood to be appropriately scaled and hence dimensionless. In particular, the scaled, dimensionless Planck constant is denoted as ℏ, and hence p̂ = −iℏ ∂/∂q. The unitary time evolution operator associated with one kick from time t = 0 to t = 1 + ǫ in Eq. (2) is given by
Û(1, 0) = e^{−(iJ/ℏ) cos(p̂)} e^{−(iK/ℏ) V̂_r(q̂)},   (4)
with the cumulative time evolution operator from t = 0 to t = m given by
Û(m, 0) = [Û(1, 0)]^m.   (5)
We also stress that the initial quantum state used here is always assumed to be a zero-momentum eigenstate, which is time-reversal symmetric and spatially uniform. As shown in Ref. [28], the ratchet transport can be a sensitive function of the initial state. However, our analyses below can easily be adapted to other initial states.
The properties of this model discussed below hold in general for the regime where the kicked Harper does not show dynamical localization [30]. We consider the case where K = 2J, although this choice is arbitrary. Since this system can be exactly mapped onto the problem of a kicked charge in a magnetic field [31], or can be related to cold-atom experiments [28,32] or to driven electrons on the Fermi surface [33], it has a realistic physical and experimental interpretation. In particular, though the unkicked part of the Hamiltonian in Eq. (2) is given by J cos(p̂), the underlying dispersion relation in the cold-atom and kicked-charge realizations of the kicked Harper model is still given by E = p²/2 [28,31,32].
That is, the momentum variable in our abstract model can still be directly linked to the mechanical momentum of a moving particle. Hence the current of particles can indeed be calculated via the momentum expectation value.
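To make the propagator of Eqs. (4)-(5) concrete, the sketch below (ours, not from the original study) evolves the zero-momentum initial state by a split-step FFT scheme and records the current ⟨p⟩ after every kick. It assumes that, after the spatial rescaling to q ∈ [0, 1) noted in [29], the momentum eigenvalues can be taken as p_m = ℏm with integer m; the grid size, number of kicks, and parameter values are illustrative choices rather than those used for the published figures.

```python
import numpy as np

# Illustrative parameters (K = 2J, as in the text); hbar, M and n_kicks are arbitrary choices.
K, J, hbar = 4.0, 2.0, 1.0
M = 2048                        # number of q-grid points on [0, 1)
n_kicks = 200

q = np.arange(M) / M
m = np.fft.fftfreq(M, d=1.0 / M)        # integer indices of the plane waves exp(2*pi*i*m*q)
p = hbar * m                            # assumed momentum eigenvalues (see the note above)

V_r = np.cos(2 * np.pi * q) + np.sin(4 * np.pi * q)     # Eq. (3)
kick_phase = np.exp(-1j * K * V_r / hbar)               # exp(-i K V_r(q) / hbar)
free_phase = np.exp(-1j * J * np.cos(p) / hbar)         # exp(-i J cos(p) / hbar)

psi = np.ones(M, dtype=complex)          # zero-momentum eigenstate: uniform in q
current = []
for _ in range(n_kicks):
    psi = kick_phase * psi               # kick part of Eq. (4), applied in the q representation
    c = np.fft.fft(psi)                  # switch to the momentum representation
    c = free_phase * c                   # free evolution between kicks
    psi = np.fft.ifft(c)                 # back to the q representation
    prob_p = np.abs(c) ** 2
    prob_p /= prob_p.sum()
    current.append(np.sum(prob_p * p))   # <p> after this kick

print("<p> after kicks 1, 10, %d: %.4f, %.4f, %.4f"
      % (n_kicks, current[0], current[9], current[-1]))
```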
The quantum dynamics associated with the propagator in Eq. (4) shows unbounded [34] acceleration of the ratchet current [10]. Typical results are shown in Fig. 1, where panel (a) shows the current ⟨p⟩ for K = 4 and panel (b) shows the mean current acceleration rate as a function of K. Here the acceleration is defined approximately as ⟨p(t = 1000)⟩/1000.
The classical comparison with the quantum dynamics considers ensembles of trajectories that are analogous to the quantum systems discussed above: initially, the trajectories have zero momentum and are uniformly distributed in coordinate space, and are driven by a force of zero spatial mean at all times. Again, our discussion is quite general for such systems, although we consider the classical analogue of the modified kicked Harper system as an illustrative example, obtained by replacing the quantum operators in Eqs. (2) and (3) with their respective classical observables. Specifically, the evolution of a classical trajectory through one kick is then given by
p_N = p_{N−1} + K F_r(q_{N−1}),    q_N = q_{N−1} − J sin(p_N).   (6)
This system has been shown to display virtually no classical ratchet transport [10] if the system parameters are in the regime of full classical chaos.
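For comparison, the classical map of Eq. (6) can be iterated for an ensemble that mirrors the quantum initial state: zero momentum, uniform in q. The sketch below is ours; the ensemble size and number of kicks are arbitrary, and q is folded back into [0, 1), which is immaterial because V_r is periodic.

```python
import numpy as np

K, J = 4.0, 2.0
n_traj, n_kicks = 100_000, 1000
rng = np.random.default_rng(0)

q = rng.uniform(0.0, 1.0, n_traj)        # uniform in coordinate space
p = np.zeros(n_traj)                     # zero initial momentum

def F_r(q):
    # F_r(q) = -dV_r/dq for V_r(q) = cos(2*pi*q) + sin(4*pi*q)
    return 2 * np.pi * np.sin(2 * np.pi * q) - 4 * np.pi * np.cos(4 * np.pi * q)

mean_p = []
for _ in range(n_kicks):
    p = p + K * F_r(q)                   # kick, Eq. (6)
    q = (q - J * np.sin(p)) % 1.0        # free motion, folded into [0, 1)
    mean_p.append(p.mean())

print("<p>_C after kick 1:   ", mean_p[0])    # ~0: a flat ensemble feels no net force on the first kick
print("<p>_C after kick 1000:", mean_p[-1])
```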
Throughout this discussion, it will often be convenient to consider quantum and classical arguments simultaneously. We distinguish quantum and classical objects by respective subscripts Q and C, and refer to both dynamics when these subscripts are omitted.
This paper is organized as follows. Section II analyzes, from a new perspective based on the Heisenberg picture of the dynamics, the origin of asymmetry-driven ratchet transport. Section III considers the difference and correspondence between classical and quantum ratchet transport. Section IV summarizes the conclusions of this study.
II. ASYMMETRY AND RATCHET EFFECTS
A. The Heisenberg Force
Classically and quantum mechanically, the rate of the ratchet current increase, here termed the acceleration, at time t is given by the expectation value of the net force at that time:
d⟨p⟩/dt = ⟨F(t)⟩.   (7)
Evidently, ⟨p⟩ must remain zero if it begins at zero and ⟨F(t)⟩ = 0 at all times. Hence, when ratchet acceleration occurs, it follows that the expectation value of the net force must be nonzero. This result calls for analysis of how ratchet acceleration is possible in the absence of a biased force.
To facilitate comparison of quantum and classical mechanics, it is convenient to cast this discussion in terms of the density matrix formalism. The expectation value of the quantum force at time t is given by
⟨F(t)⟩_Q = Tr[ρ̂_Q(t) F̂_Q(q̂, 0)] = Tr[ρ̂_Q(0) T̂ e^{−(i/ℏ) ∫_0^t dt′ L̂_Q(t′)} F̂_Q(q̂, 0)],   (8)
where ρ̂_Q(0) is the (pure state) density matrix at time zero and ρ̂_Q(t) is the propagated density at time t [35]. Here, time evolution is mediated by the quantum Liouville operator L̂_Q · = (i/ℏ)[Ĥ, ·], where the bracket [ , ] is the commutator, and T̂ denotes the time-ordering operator. For the Hamiltonian in Eq. (2), F̂_Q(q̂, 0) = −K ∂V̂_r(q̂)/∂q̂.
The effect of the time-ordered exponential is given in terms of the evolution operator as [36]

F̂_{Q,H}(q̂, t = n) = Û^{−1}(n, 0) F̂_Q(q̂, 0) Û(n, 0).   (9)
Equation (8) can be rewritten as [36]
⟨F(t)⟩_Q = Tr[ρ̂_Q(t) F̂_Q(q̂, 0)] = Tr[ρ̂_Q(0) F̂_{Q,H}(q̂, t)],   (10)

where

F̂_{Q,H}(q̂, t) ≡ T̂ e^{−(i/ℏ) ∫_0^t L̂_Q(t′) dt′} F̂_Q(q̂, 0)   (11)
defines the Heisenberg force, the focus of attention below.
The classical, ensemble-averaged value of the force at time t is similarly given by
⟨F(t)⟩_C = ∫ dp dq ρ_C(0) T e^{−i ∫_0^t dt′ L_C(t′)} F_C(q, 0),   (12)
where ρ_C(0) is the initial classical density distribution, L_C · = i{H, ·} is the classical Liouville operator with { , } the classical Poisson bracket, and F_C(q, 0) = −K ∂V_r(q)/∂q.
The time evolution of q, and hence of F C (q, 0), is carried out via Eq. (6).
For either the quantum or the classical ensemble average F (t) to be nonzero, and hence induce ratchet acceleration, some system attribute needs to break the positive-negative symmetry to "choose" a direction. From Eqs. (8) and (12) it is clear that asymmetries in either the initial distribution, force, or evolution operator are essentially equivalent as the origin of bias. Since, for classical and quantum ratchets, the initial distribution and force are chosen to be symmetric, the asymmetry in the evolution operator, and hence asymmetry in dynamics induced by the Hamiltonian, must be responsible for the nonzero net current.
Specifically, consider the q-representation of Eq. (10). Noting that ρ̂_Q(0) describes a spatially uniform state, in normalized coordinates ρ_Q(q, 0) ≡ ⟨q|ρ̂_Q(0)|q⟩ = 1, so that

⟨F(t)⟩_Q = ∫ dq ⟨q|ρ̂_Q(t)|q⟩ ⟨q|F̂_Q(q̂, 0)|q⟩ = ∫ dq ⟨q|ρ̂_Q(0)|q⟩ ⟨q|F̂_{Q,H}(q̂, t)|q⟩   (13)
         = ∫ dq ⟨q|F̂_{Q,H}(q̂, t)|q⟩ = \overline{F_{Q,H}(q, t)}.   (14)
That is, the average force is dictated by the uniform spatial average over the Heisenberg force, as distinguished from the Schrödinger forceF Q (q, 0) = −K∂V (q)/∂q. Correspondingly, since ρ C (0) is chosen to be normalized and spatially uniform, Eq. (12) indicates that
⟨F(t)⟩_C = \overline{T e^{−i ∫_0^t dt′ L_C(t′)} F_C(q, 0)} = \overline{F_{C,H}(q, t)},   (15)
i.e., a spatial average over the time-evolving classical force F C,H (q, t), analogous to the quantum case. Since Eqs. (14) and (15) show that the expectation value of the force is given by an average over the evolving force, a nonzero net force as a result of an asymmetry in the dynamics becomes possible, even if the spatial average of the bare force F (q, t) itself remains zero at all times.
Note that, since the force is diagonal in q in quantum mechanics, and not a function of p classically, the evolving force distribution F_H(q, t) is adequately described entirely in q in both mechanics. This allows simple, direct comparisons of quantum and classical mechanics, as shown in the following section. Below, we term F_{Q,H}(q, t) ≡ ⟨q|F̂_{Q,H}(q̂, t)|q⟩ the force distribution in q associated with the Heisenberg force. Similar terminology applies in classical mechanics. The diagonal element of the q-representation of the Schrödinger density, ⟨q|ρ̂_Q(t)|q⟩, is denoted ρ_Q(q, t), so that ⟨F(t)⟩_Q = ∫ dq ρ_Q(q, t) F_Q(q, 0).
The classical object analogous to ρ Q (q, t) is the q-component of the evolving density,
ρ_C(q, t) ≡ ∫ dp ρ_C(p, q, t), where ρ_C(p, q, t)
is the classical evolving density. In both mechanics, the initial spatial distribution is assumed uniform. As a result, in the quantum case for example, and in accord with Eqs. (8) and (10),
F_{Q,H}(q, t) = ⟨q|F̂_{Q,H}(q̂, t)|q⟩ = ⟨q|ρ̂_Q(0) F̂_Q(q̂, t)|q⟩ = ⟨q|ρ̂_Q(t)|q⟩ ⟨q|F̂_Q(q̂, 0)|q⟩ = ρ_Q(q, t) F_Q(q, 0).   (16)
That is, the evolving force distribution is given by the bare force weighted by the evolving density. The analogous result holds in classical mechanics.
Given that, in either mechanics, F H (q, t) = ρ(q, t)F (q, 0), with a uniform initial distribution ρ(0) and unbiased force F (q, 0), a net nonzero F H (q, t) requires that ρ(q, t) weights F (q, 0) so as to break the directional symmetry. Minimally, the system evolution must be such that each point q i in the q-space does not in general have a complement q j such that both ρ(q i , t) = ρ(q j , t) and F (q i , 0) = −F (q j , 0). This is the simplest asymmetry condition on the dynamics necessary for the generation of a ratchet current. The modified Harper
Hamiltonian [Eq.
(2)] clearly satisfies this condition.
This Heisenberg approach thus gives a simple picture of ratchet current generation. The origin of a current arising from a net force can be understood either as (a) a distortion in the density ρ(q, t), which will weight the bare force F(q, 0) non-uniformly, giving rise to a nonzero average, or (b) as a distortion in the evolving force F_H(q, t) itself, whose average ⟨F(t)⟩ is nonzero due to this distortion, even if the bare force has zero mean. The advantage of using the evolving force picture is that it resolves the intuitive puzzle of how directional transport in momentum space emerges in the absence of a biased force. Whether the force itself, that is, the bare force, is biased or not is irrelevant. Rather, the intrinsic asymmetry in the dynamics permits the evolving force F_H(q, t) to develop a nonzero mean, and hence a nonzero ratchet acceleration rate.
Computationally, the Heisenberg picture is easily applied to the modified kicked Harper model to examine F Q,H (q, t). For example, Fig. 2 shows ρ Q (q, t) and F Q,H (q, t) for parameters associated with an appreciable and unbounded ratchet current acceleration. Despite starting with a flat distribution in q, ρ Q (q, t) in Fig. 2(a) is now clearly unevenly distributed.
Accordingly, the distribution of the Heisenberg force F Q,H (q, t) shown in Fig. 2(b) is strongly biased compared to the symmetric bare force distribution (plus symbols).
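Continuing the quantum split-step sketch given earlier (run it first), the diagonal density ρ_Q(q, t) = |ψ(q, t)|² and the Heisenberg force distribution of Eq. (16) can be tabulated directly on the grid, and the spatial mean of the latter gives the net force of Eq. (14). The variable names below refer to that earlier sketch and are illustrative.

```python
# Continuation: assumes psi, q and K from the quantum split-step sketch above.
import numpy as np

rho_q = np.abs(psi) ** 2
rho_q /= rho_q.mean()                    # normalize so that a uniform state gives rho_q(q) = 1

F_bare = K * (2 * np.pi * np.sin(2 * np.pi * q)
              - 4 * np.pi * np.cos(4 * np.pi * q))      # F_Q(q, 0) = -K dV_r/dq
F_heis = rho_q * F_bare                  # Heisenberg force distribution, Eq. (16)

print("spatial mean of the bare force:      ", F_bare.mean())   # essentially zero by construction
print("spatial mean of the Heisenberg force:", F_heis.mean())   # generally nonzero once rho_q is distorted
```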
B. Two Roles of the Force
Implicitly, we have considered the force in two capacities: acting on the structure of the ensemble and thereby producing a nonzero net Heisenberg force; and the net force itself, acting within an ensemble average to generate ratchet acceleration, i.e. ⟨F(t)⟩ = d⟨p⟩/dt. To further elucidate how this relates to ratchet transport, consider any δ-kicked quantum ratchet model with an arbitrary kicking potential operator K V̂_r(q̂) and kinetic energy operator J T̂(p̂). The evolution of this type of system is mediated by a propagator Û like Eq. (4), such that a Heisenberg observable Ô_{Q,H} after N kicks is given by Ô_{Q,H}(N) = (Û^{−1})^N Ô Û^N.
Consider the current ⟨p(1)⟩_Q after the first kick:

⟨p(1)⟩_Q = Tr[ρ̂_Q(0) Û^{−1} p̂ Û] = Tr[ρ̂_Q(0) e^{(iK/ℏ)V̂_r(q̂)} e^{(iJ/ℏ)T̂(p̂)} p̂ e^{−(iJ/ℏ)T̂(p̂)} e^{−(iK/ℏ)V̂_r(q̂)}].   (17)
Using p̂ = −iℏ ∂/∂q and that the initial state is assumed uniform in q, one obtains

⟨p(1)⟩_Q = −K Tr[ρ̂_Q(0) e^{(iK/ℏ)V̂_r(q̂)} e^{−(iK/ℏ)V̂_r(q̂)} ∂V̂_r(q̂)/∂q̂] + Tr[ρ̂_Q(0) p̂] = −K \overline{∂V_r(q)/∂q} + 0 = 0.   (18)
This illustrates the distinction between the force's role in distorting its own distribution and its role in inducing a current. That is, although no current develops after the first kick, subsequent kicks produce current. Therefore, even though the net force remains zero for the first kick, that kick distorts the system so that it will subsequently experience a net force.
More generally, for N kicks,
⟨p(N)⟩_Q = −K Σ_{j=0}^{N−1} Tr[ρ̂_Q(0) (Û^{−1})^j (∂V̂_r(q̂)/∂q̂) Û^j] = K Σ_{j=0}^{N−1} ⟨F_r(j)⟩_Q.   (19)
It follows that the change in p on each step is
Δ⟨p⟩_Q ≡ ⟨p(N)⟩_Q − ⟨p(N − 1)⟩_Q = K ⟨F_r(N − 1)⟩_Q,   (20)
showing that the change in momentum induced at every kick is a result of the net force from the previous kick. This makes clear the general case: the force first acts on an ensemble to generate a distortion, and then a net ratchet force can develop. Exactly the same arguments apply in classical mechanics.
Thus far, this discussion has supported the view [19] that symmetry-breaking induced transport can be achieved both classically and quantum mechanically, both arising via a distortion originating from an asymmetry in the dynamics. However, despite the existence of this analogous ratchet transport mechanism in the classical modified Harper model, classical ratchet transport behaves very differently in the regime of classically chaotic motion, where the classical current quickly saturates at a value close to zero. This is clear in Fig. 3: panel (a) shows the saturating current p C for a typical chaotic case; and panel (b) shows the classical mean acceleration rates for a range of parameters. When K is greater than approximately 3.7, the classical dynamics develops full chaos and the mean acceleration rate is generally negligible. (The occasional isolated nonzero mean acceleration rates seen above K ≈ 3.7 are likely due to some remnants of pre-chaotic structure in phase space). Therefore, even though the relevant symmetry properties are the same classically and quantum mechanically, some other important distinction must exist. Fig. 3 demonstrate that the behavior of the classical modified kicked Harper system is quite different from the quantum result, where unbounded ratchet effects persist. If indeed ratchet effects emerge by the same mechanism in quantum and classical mechanics, it remains to be explained why that mechanism generates different results for different mechanics. Specifically, ratchet effects diminish classically in the regime associated with classical chaos. We therefore examine how the onset of chaos affects classical ratchet dynamics, in a way that does not occur quantum mechanically. We also discuss the peculiar long-time behavior of the quantum modified kicked Harper model, as well as quantum-classical correspondence.
Results in Fig. 3 demonstrate that the behavior of the classical modified kicked Harper system is quite different from the quantum result of Fig. 1, where unbounded ratchet acceleration persists; the following subsections examine the origin of this difference.
A. Chaos and the Heisenberg Force
From a trajectory perspective, classical chaos is characterized by exponential sensitivity to initial conditions. However, the conventional interpretation of quantum mechanics does not describe individual trajectories. Hence, a comparison of quantum and classical dynamics demands comparison of quantum and classical distributions [37,38,39,40,41,42,43,44]. Although KAM theory and finite-time limitations suggest deviations in the properties of distribution functions of typical classically chaotic systems from theoretical ideals [45], such systems are still expected to exponentially develop increasingly fine structure. Upon coarse graining on the scale of interest, the classical phase space distribution in a fully chaotic system uniformly fills the phase space almost everywhere, with additional structure detectable only on an increasingly fine scale. Indeed, this is what is termed full chaos in most numerical studies of this type: when no structure is visible in the phase space on a pre-set fine scale, it is considered operationally chaotic.
Consider then how this applies to the Heisenberg force for the ratchet systems considered here, where the phase space is always bound or periodic in q. Ensemble averages are computed by integrating over the distribution. Since complete chaos implies no structure in ρ C (q, t) on the scale of interest, such averages will look like unweighted averages in q (provided that the scale on which the variable of interest varies is much larger than the scale of structure in ρ C (q) remaining in the chaotic phase space). That is, we can essentially ignore the q-component of the density when taking spatial averages. In the case of the force,
⟨F(t)⟩_C = ∫ dp dq ρ_C(p, q, t) F_C(q, t) ≈ ∫ dq F_C(q, t) ∫ dp ρ_C(p, t) ∝ ∫ dq F_C(q, t) = 0,   (21)

where ρ_C(p, t) = ∫ dq ρ_C(p, q, t) is the classical momentum density distribution. Hence, for all times when the phase space is operationally chaotic, the ensemble average of the classical force is proportional to the spatial average of the bare force: i.e., zero. Chaotic dynamics here implies no spatial distortion of the system on the scale of interest, and hence no creation of a net evolving force. This is consistent with a result of the "classical sum rule" [7], which predicts that there will be no classical ratchet current in fully chaotic systems.

The comparison with quantum mechanics is straightforward. If the quantum q-distribution ρ_Q(q, t) is flat, or if the scale of structure remaining in this distribution is far smaller than that over which the bare force F_Q(q, 0) varies, then by an argument analogous to the classically chaotic case, the net quantum force ⟨F(t)⟩_Q will be essentially zero. As in the classical case, the spatial distortion giving rise to a net force would not be appreciable on the scale of interest.
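The collapse expressed by Eq. (21) can be checked with the classical ensemble sketch given earlier (run it first): once the evolved ensemble is effectively uniform in q on the scale of V_r, the ensemble-averaged force is indistinguishable from the (zero) unweighted spatial average of the bare force. Parameters and grid choices are again illustrative.

```python
# Continuation: assumes q (evolved trajectory positions), K and F_r from the classical sketch above.
import numpy as np

q_flat = np.linspace(0.0, 1.0, 10_000, endpoint=False)

ensemble_force = K * F_r(q).mean()        # <F(t)>_C over the chaotic ensemble
flat_average   = K * F_r(q_flat).mean()   # unweighted spatial average of the bare force

print("ensemble-averaged force:", ensemble_force)   # ~0 in the fully chaotic regime
print("flat spatial average:   ", flat_average)     # 0 up to quadrature error
```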
However, a quantum ratchet system is not expected to display such behavior. The Fourier relationship between ρ_Q(q, t) and ρ_Q(p, t) ≡ ⟨p|ρ̂_Q(t)|p⟩ implies that a uniform distribution in space, ρ_Q(q, t) = 1, corresponds to the lowest momentum state ρ_Q(p, t) = δ_{p,0}. Once the system is driven by a force, other momentum states will of course be populated. Correspondingly, ρ_Q(q, t) = Σ_{k,k′} c_k c*_{k′} e^{−(i/ℏ)(p_k − p_{k′})q}, where the c_k are constants and p_k are momenta. This density is not flat. For fixed p_k and p_{k′}, a sufficiently large ℏ can always be found so that the e^{−(i/ℏ)(p_k − p_{k′})q} terms oscillate sufficiently slowly, giving ρ_Q(q, t) structure in q on the scale of interest. Therefore, sufficiently far into the quantum regime, driven quantum systems are expected to retain coarse structure in q-space; there is a limit to the fineness of scale in quantum mechanics [46]. Consequently, the net force is not in general expected to reduce to the average bare force.
This provides a qualitative explanation for the difference in behavior between quantum and classical dynamics in the regime of full classical chaos. This perspective also accounts for the difference in controllability between classical and quantum mechanics. That is, asymmetry-driven transport control is in principle possible in both. Since it relies on a distortion of the system distribution function, distributions without structure on the relevant scale show diminished control. Classically, control is therefore lost to chaos, whereas it can survive in quantum mechanics.
B. Quantum Long-Time Dynamics
To achieve stable, unbounded acceleration of the ratchet current, as observed in the modified Harper system, requires that F (t) Q continually operate in the same direction, driving a current with essentially the same bias for all time. This implies that the profile of the time-evolving density ρ Q (q, t), and hence of the Heisenberg force distribution F Q,H (q, t),
does not change appreciably in time (or that it changes in the highly unlikely way that always maintains the same bias). If the quasienergy spectrum of the system is purely discrete, this cannot be the case. Specifically, from Floquet theory we have that for any time-periodic, bounded quantum system with a discrete quasienergy spectrum, the density is given by ρ_Q(q, t) = Σ_{l,l′} d_l d*_{l′} e^{(i/ℏ)(E_l − E_{l′})t} ρ_Q(q, 0), where the d_l are constants and the E_l are the quasienergies [47]. Since this density is a sum of periodic functions, it is itself quasiperiodic. Therefore, ensemble averages in such systems are also quasiperiodic, and hence do not continuously increase in time [47,48]. This is true as well for the Heisenberg force F_{Q,H}(q, t), which would be quasiperiodic and hence eventually reverse its direction.
For this reason, earlier quantum ratchet models without current saturation occurred for kicked-rotor systems with quantum resonance conditions [9,49], displaying a continuous quasi-energy spectrum. The behavior of the modified kicked Harper model here, which apparently does not satisfy a quantum resonance condition, and for which extended computational results (not shown here) have suggested unbounded directional current, therefore requires explanation.
In fact, it can be shown that all kicked Harper systems can be exactly mapped onto the problem of a kicked charge in a magnetic field, although only at resonance [31].
Consequently, the quasienergy spectrum of this model is not necessarily purely discrete, the system evolution need not be quasiperiodic, and the modified kicked Harper system need not necessarily show dynamical saturation in time.
As an example, Fig. 4 shows ρ Q (q, t) after 50 and 200 kicks for typical parameters, and Fig. 5 shows F Q,H (q, t) compared to the bare force F Q (q, 0) for the same circumstances.
Indeed, there is no appreciable change in the qualitative shape of either ρ_Q(q, t) or F_{Q,H}(q, t)
after the first few kicks, although the very fine details of the oscillatory structure increase.
C. Quantum-Classical Correspondence
Given the above-mentioned quantum-classical differences, it is natural to ask how the classical results emerge from the quantum mechanics as the effective Planck constanth decreases.
Before resorting to computational studies, let us first examine how the quantum dynamics may appear more classical for small ℏ. Consider a time-evolving quantum density ρ_Q(q, t) = Σ_{k,k′} c_k c*_{k′} e^{−(i/ℏ)(p_k − p_{k′})q}, where the c_k are constants and p_k are momenta. For large ℏ the interference between different momentum components induces large-scale patterns in the density. However, for sufficiently small ℏ relative to (p_k − p_{k′})q, the exponential factor will rapidly oscillate; the smallest scale of structure can be much finer than the scale over which the bare force changes. Hence, at a given time, and for smaller and smaller ℏ, the quantum limit on the fineness of scale diminishes. As in classical mechanics, coarse-scale structure can persist, but it no longer has to. Therefore, it becomes possible for the ensemble-averaged quantum force to either maintain an appreciable bias, as in the classically partially-integrable regime, or to approach its average over a flat distribution, as in the classically chaotic regime. Qualitatively, then, the coarse-scale structure in q imposed by quantum coherence can diminish as ℏ → 0. Figure 6 shows the q-representation of ρ_Q(q, t), as well as a comparison of the quantum Heisenberg and Schrödinger force distributions, F_{Q,H}(q, t) and F_Q(q, 0), for a typical case in a semiclassical regime, represented by ℏ = 0.0001 (a computationally intensive regime). The system parameters here are associated with classical chaos. The density in Fig. 6(a) shows clear, truly drastic oscillations, with a roughly uniform oscillation amplitude. Further, it is evident from Fig. 6(b) that on this scale the overall distribution of the Heisenberg force is similar to that of the initial Schrödinger force distribution, justifying the loss of directional effects in going from quantum to classical mechanics. Figure 7 shows the quantum ratchet current ⟨p⟩_Q in the semiclassical regime of ℏ = 0.0001, as compared with the corresponding classical current ⟨p⟩_C. The quantum current ⟨p⟩_Q remains close to zero, and mimics the classical current ⟨p⟩_C almost exactly.
IV. CONCLUSION
We sought here to explain certain general features of ratchet transport in Hamiltonian systems, and in particular to explain the quantum vs classical behavior of the ratchet accelerator model developed in Ref. [10].
Here we have introduced, and applied, the concept of a Heisenberg or evolving force, in both quantum and classical mechanics, to ratchet transport. This showed that whether the bare force (i.e., the external force applied to the system) is unbiased is irrelevant, since it is the evolving force that actually affects net transport. In both mechanics, asymmetry in the dynamical evolution can cause asymmetric spatial distortion which leads to the development of a net force and a nonzero current. Symmetry-breaking-based control of quantum and classical transport is hence of the same origin. However, quantum and classical ratchet systems behave differently due to chaos. Classical systems fail to generate ratchet current when their phase space is fully chaotic, as the system distortion is effectively canceled, and the asymmetry that leads to directionality is lost. A completely chaotic phase space forces ensemble averages to reduce to phase-space means that are independent of the detailed aspects of the dynamics. In such cases the ensemble-averaged net force remains zero for a non-biased external force. By contrast, the equivalent effect is prevented in quantum mechanics, where coarse-scale structure is preserved. Symmetry-breaking-based quantum control of transport in classically chaotic systems is hence possible.
For the same reason, quantum ratchet transport with full classical chaos becomes a strong indication of non-chaotic properties of the quantum dynamics.
The peculiar feature of the modified kicked Harper system, that it shows unbounded linear transport for a wide parameter regime, is explained by its mapping onto a resonant system, and hence having a continuous spectrum. Its dynamics therefore is not necessarily quasiperiodic. Further, we computationally showed that if the quantum system is sufficiently close to the classical limit, then quantum ratchet behavior smoothly approaches classical ratchet behavior.
The advantage of using the Heisenberg force to gain insight into the ratchet dynamics is
FIG. 1: (a) Time dependence of the quantum current ⟨p⟩_Q of the modified kicked Harper system for K = 4, J = 2, and ℏ = 1, shown here for the first 1000 kicks, with the initial state given by a momentum eigenstate with zero momentum. (b) The mean acceleration rate of the quantum current for a range of K values, with K = 2J and ℏ = 1. Note that, as seen in Ref. [28], the transport direction may change erratically with the initial condition. As explained in the text, all quantities here and those in other figures are in dimensionless units.

FIG. 2: (a) ρ_Q(q, t) and (b) F_{Q,H}(q, t) compared to F_Q(q, t) (plus symbols) for the modified kicked Harper model after the first 50 kicks for K = 4, J = 2 and ℏ = 1. Distortion in the density ρ_Q(q, t), and hence bias in the Heisenberg force distribution function F_{Q,H}(q, t), are evident.

FIG. 3: (a) Time dependence of the classical current ⟨p⟩_C of the modified kicked Harper system for K = 3 and J = 1.5, shown here for the first 1000 kicks. (b) The mean acceleration rate of the classical current for a range of K values, with K = 2J.

FIG. 4: ρ_Q(q, t) of the modified kicked Harper system for K = 4, J = 2, and ℏ = 1, after (a) the first 50 kicks, and (b) the first 200 kicks. Note that the probability distribution function in (b) oscillates more drastically than in (a), but their overall shape remains roughly the same. This is consistent throughout the parameter space.

FIG. 5: F_{Q,H}(q, t) of the modified kicked Harper system compared to F_Q(q, t) (plus symbols) for K = 4, J = 2, and ℏ = 1, after (a) the first 50 kicks, and (b) the first 200 kicks. Note that the Heisenberg force distribution function in (b) oscillates more drastically than in (a), but their overall shape remains roughly the same. This is consistent throughout the parameter space.

FIG. 6: (a) ρ_Q(q, t) and (b) F_{Q,H}(q, t) compared to F_Q(q, t) (plus symbols) for the modified kicked Harper model after the first 50 kicks for K = 4, J = 2 and ℏ = 0.0001.

FIG. 7: Comparison of the (a) ℏ = 0.0001 quantum current ⟨p⟩_Q with (b) its classical analogue ⟨p⟩_C in the modified kicked Harper system for the first 50 kicks for K = 4 and J = 2.
Acknowledgments: J.P. and P.B. are supported by a grant from the National Sciences and Engineering Research Council of Canada. J.G. is supported by the start-up fund (WBS No. R-144-050-193-101 and No. R-144-050-193-133) and the NUS "YIA" fund (WBS No. R-...).
. M V Smoluchowski, Physik Zeitschr, 131069M.V. Smoluchowski, Physik Zeitschr. 13, 1069 (1912).
. R P Feynman, R B Leighton, M Sands, The Feynman Lectures on Physics. 146Addison-WesleyR.P. Feynman, R.B. Leighton and M. Sands, The Feynman Lectures on Physics, (Addison- Wesley, Reading, MA 1969), Vol. 1, Chap.46.
. F Jülicher, A Ajdari, J Prost, Rev. Mod. Phys. 691269F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. 69, 1269 (1997).
. P Reimann, Phys. Rep. 36157P. Reimann, Phys. Rep. 361, 57 (2002).
. R D Astumian, P Hanggi, Phys. Today. 551133R.D. Astumian and P. Hanggi, Phys. Today 55 (11), 33 (2002).
. S Flach, O Yevtushenko, Y Zolotaryuk, Phys. Rev. Lett. 842358S. Flach, O.Yevtushenko, and Y. Zolotaryuk, Phys. Rev. Lett. 84, 2358 (2000).
. H Schanz, M F Otto, R Ketzmerick, T Dittrich, Phys. Rev. Lett. 8770601H. Schanz, M.F. Otto, R. Ketzmerick, and T. Dittrich, Phys. Rev. Lett. 87, 070601 (2001);
. H Schanz, T Dittrich, R Ketzmerick, Phys. Rev. E. 7126228H. Schanz, T. Dittrich, and R. Ketzmerick, Phys. Rev. E 71, 026228 (2005).
. J Gong, P Brumer, Phys. Rev. E. 7016202J. Gong and P. Brumer, Phys. Rev. E 70, 016202 (2004).
. E Lundh, M Wallin, Phys. Rev. Lett. 94110603E. Lundh and M. Wallin, Phys. Rev. Lett. 94, 110603 (2005).
. J Gong, P Brumer, Phys. Rev. Lett. 97240602J. Gong and P. Brumer, Phys. Rev. Lett. 97, 240602 (2006).
. D Poletti, G G Carlo, B Li, Phys. Rev. E. 7511102D. Poletti, G.G. Carlo, and B. Li, Phys. Rev. E 75, 011102 (2007).
. S Denisov, L Morales-Molina, S Flach, P Hänggi, Phys. Rev. A. 7563424S. Denisov, L. Morales-Molina, S. Flach, and P. Hänggi, Phys. Rev. A 75, 063424 (2007).
. J Gong, D Poletti, P Hänggi, Phys. Rev. A. 7533602J. Gong, D. Poletti, and P. Hänggi, Phys. Rev. A 75, 033602 (2007).
. A Kenfack, J B Gong, A K Pattanayak, Phys. Rev. Lett. 10044104A. Kenfack, J.B. Gong, and A.K. Pattanayak, Phys. Rev. Lett. 100, 044104 (2008).
. P H Jones, M Goonasekera, D R Meacher, T Jonckheere, T S Monteiro, Phys. Rev. Lett. 9873002P.H. Jones, M. Goonasekera, D.R. Meacher, and T. Jonckheere, and T.S. Monteiro, Phys. Rev. Lett. 98,073002 (2007).
. M Sadgrove, M Horikoshi, T Sekimura, K Nakagawa, Phys. Rev. Lett. 9943002M. Sadgrove, M. Horikoshi, T. Sekimura, and K. Nakagawa, Phys. Rev. Lett. 99, 043002 (2007).
. I Dana, V Ramareddy, T Talukdar, G S Summy, Phys. Rev. Lett. 10024103I. Dana, V. Ramareddy, T. Talukdar, and G.S. Summy, Phys. Rev. Lett. 100, 024103 (2008).
M Shapiro, P Brumer, Principles of the Quantum Control of Molecular Processes. New YorkJohn WileyM. Shapiro and P. Brumer, Principles of the Quantum Control of Molecular Processes (John Wiley, New York, 2003).
. I Franco, P Brumer, Phys. Rev. Lett. 9740402I. Franco and P. Brumer, Phys. Rev. Lett. 97, 040402 (2006).
. G Kurizki, M Shapiro, P Brumer, Phys. Rev. B. 393435G. Kurizki, M. Shapiro and P. Brumer, Phys. Rev. B 39, 3435 (1989).
. J Gong, P Brumer, Phys. Rev. Lett. 861741J. Gong and P. Brumer, Phys. Rev. Lett. 86, 1741 (2001);
. J. Chem. Phys. 1153590J. Chem. Phys. 115, 3590 (2001).
. J Gong, P Brumer, Annu. Rev. Phys. Chem. 561J. Gong and P. Brumer, Annu. Rev. Phys. Chem. 56, 1 (2005).
. T Takami, H Fujusaki, Phys. Rev. E. 7536219T. Takami and H. Fujusaki, Phys. Rev. E 75, 036219 (2007).
. M Sadgrove, M Horikoshi, T Sekimura, K Nakagawa, Euro. Phys. Journal D. 45229M. Sadgrove, M. Horikoshi, T. Sekimura, and K. Nakagawa, Euro. Phys. Journal D 45 229 (2007).
. T Takami, H Fujusaki, preprint arXiv 0806.4217T. Takami and H. Fujusaki, preprint arXiv 0806.4217.
A rocking ratchet constitutes a different case, where the absence of a biased force means that the average of the force is zero when averaging over an interval of time. A rocking ratchet constitutes a different case, where the absence of a biased force means that the average of the force is zero when averaging over an interval of time.
A similar system was proposed by D.L. Shepelyansky in Quantum Chaos -Quantum Measure. P. Cvitanovic, I. Percival, and A. WirzbaDordrecht-Boston-LondonKluwer Academic PublishersA similar system was proposed by D.L. Shepelyansky in Quantum Chaos -Quantum Measure- ment eds. P. Cvitanovic, I. Percival, and A. Wirzba (Kluwer Academic Publishers, Dordrecht- Boston-London, 1992).
For a more recent and even simpler quantum ratchet model with full classical chaos, see. J Wang, J B Gong, Phys. Rev. E. 7836219For a more recent and even simpler quantum ratchet model with full classical chaos, see J. Wang and J.B. Gong, Phys. Rev. E 78, 036219 (2008).
This model was originally [10] proposed withV r (q) = cos(q) + sin(2q). The spatial period has been rescaled for convenience. This model was originally [10] proposed withV r (q) = cos(q) + sin(2q). The spatial period has been rescaled for convenience.
For kicked Harper models, specifying the exact parameter ranges where dynamical localization appears is subtle. Roughly speaking, one needs K > L to have delocalization in the modified kicked Harper model considered here. For more a detailed study of this issue for the original kicked Harper model, see. R Artuso, Phys. Rev. Lett. 693302For kicked Harper models, specifying the exact parameter ranges where dynamical localization appears is subtle. Roughly speaking, one needs K > L to have delocalization in the modified kicked Harper model considered here. For more a detailed study of this issue for the original kicked Harper model, see R. Artuso, et al., Phys. Rev. Lett. 69, 3302 (1992).
. I Dana, Phys. Lett. A. 197413I. Dana, Phys. Lett. A 197, 413 (1995).
. J Wang, J Gong, Phys. Rev. A. 7731405J. Wang and J. Gong, Phys. Rev. A 77, 031405(R) (2008).
. A Iomin, S Fishman, Phys. Rev. Lett. 811921A. Iomin and S. Fishman, Phys. Rev. Lett. 81, 1921 (1998).
Throughout this paper we use the term unbounded acceleration to denote the results of computational studies carried out for long times (and indeed longer than those shown in this paper). However, there is no formal proof that such dynamics shows unbounded acceleration for times longer than those that were computed. Throughout this paper we use the term unbounded acceleration to denote the results of com- putational studies carried out for long times (and indeed longer than those shown in this paper). However, there is no formal proof that such dynamics shows unbounded acceleration for times longer than those that were computed.
If, however, the operator of interest has an explicit time dependence, then the time evaluation of the Heisenberg representation of the operator is obtained by treating the explicit and implicit time dependence independently. In general, an operator in the Schrödinger representation does not evolve in time as is assumed to be the case for the force in Eq. and combining the resultsIn general, an operator in the Schrödinger representation does not evolve in time as is assumed to be the case for the force in Eq. (8). If, however, the operator of interest has an explicit time dependence, then the time evaluation of the Heisenberg representation of the operator is obtained by treating the explicit and implicit time dependence independently, and combining the results.
. D N Zubarev, Tr P J Thermodynamics, Sheperd, R. Gray and P.J. Sheperd73Consultans Bureau, N.Y.D.N. Zubarev, Nonequilibrium Statistical Thermodynamics, Tr. P.J. Sheperd, ed. R. Gray and P.J. Sheperd, (Consultans Bureau, N.Y., 1974), pp 73.
. R Kosloff, S A Rice, J. Chem. Phys. 721340R. Kosloff and S.A. Rice, J. Chem. Phys. 72, 1340 (1981).
. Y Gu, Phys. Lett. A. 14995Y. Gu, Phys. Lett. A 149, 95 (1990).
. L E Ballentine, Yumin Yang, J P Zibin, Phys. Rev. A. 502854L.E. Ballentine, Yumin Yang, and J.P. Zibin, Phys. Rev. A 50, 2854 (1994).
. L E Ballentine, J P Zibin, Phys. Rev. A. 543813L.E. Ballentine and J.P. Zibin, Phys. Rev. A. 54, 3813 (1996).
. A K Pattanayak, P Brumer, Phys. Rev. Lett. 7759A. K. Pattanayak and P. Brumer, Phys. Rev. Lett. 77, 59 (1996)
. A K Pattanayak, P Brumer, Phys. Rev. E. 565174A. K. Pattanayak and P. Brumer, Phys. Rev. E 56, 5174 (1997)
. J Wilkie, P Brumer, Phys. Rev. A. 5527J. Wilkie and P. Brumer, Phys. Rev. A 55, 27 (1997);
. Phys. Rev. A. 5543Phys. Rev. A 55, 43 (1997).
. J Gong, P Brumer, Phys Rev. A. 6862103J. Gong and P. Brumer, Phys Rev. A 68, 062103 (2003).
Hamiltonian Chaos and Fractional Dynamics. G Zaslavsky, Oxford University PressOxfordG. Zaslavsky, Hamiltonian Chaos and Fractional Dynamics, (Oxford University Press, Oxford, 2005).
Fractals in Quantum Mechanics?. B Eckhardt, E.R. Pike and L.A. LugiatoAdam Hilger, BristolB. Eckhardt, "Fractals in Quantum Mechanics?", in: Fractals, Noise and Chaos, eds. E.R. Pike and L.A. Lugiato, (Adam Hilger, Bristol, 1987).
. T Hogg, B A Huberman, Phys. Rev. Lett. 48711T. Hogg and B.A. Huberman, Phys. Rev. Lett. 48, 711 (1982).
. A Peres, Phys. Rev. Lett. 49118A Peres, Phys. Rev. Lett. 49, 118 (1982).
. F M Izrailev, D L Shepelyanskii, Theo. Math. Phys. 433F.M. Izrailev and D.L. Shepelyanskii, Theo. Math. Phys., 43, 3 (1980).
| []
|
[
"The Turtleback Diagram for Conditional Probability",
"The Turtleback Diagram for Conditional Probability"
]
| [
"Donghui Yan \nDepartment of Mathematics and Program in Data Science\nUniversity of Massachusetts Dartmouth\nMA\n",
"Gary E Davis \nDepartment of Mathematics and Program in Data Science\nUniversity of Massachusetts Dartmouth\nMA\n"
]
| [
"Department of Mathematics and Program in Data Science\nUniversity of Massachusetts Dartmouth\nMA",
"Department of Mathematics and Program in Data Science\nUniversity of Massachusetts Dartmouth\nMA"
]
| []
| We elaborate on an alternative representation of conditional probability to the usual tree diagram. We term the representation "turtleback diagram" for its resemblance to the pattern on turtle shells. Adopting the set theoretic view of events and the sample space, the turtleback diagram uses elements from Venn diagrams-set intersection, complement and partition-for conditioning, with the additional notion that the area of a set indicates probability whereas the ratio of areas for conditional probability. Once parts of the diagram are drawn and properly labeled, the calculation of conditional probability involves only simple arithmetic on the area of relevant sets. We discuss turtleback diagrams in relation to other visual representations of conditional probability, and detail several scenarios in which turtleback diagrams prove useful. By the equivalence of recursive space partition and the tree, the turtleback diagram is seen to be equally expressive as the tree diagram for representing abstract concepts. We also provide empirical data on the use of turtleback diagrams with undergraduate students in elementary statistics or probability courses. | 10.4236/ojs.2018.84045 | [
"https://arxiv.org/pdf/1808.06884v1.pdf"
]
| 52,055,433 | 1808.06884 | 00e804390f4c207bb0867fd465f6ac315c5d58c0 |
The Turtleback Diagram for Conditional Probability
Donghui Yan
Department of Mathematics and Program in Data Science
University of Massachusetts Dartmouth
MA
Gary E Davis
Department of Mathematics and Program in Data Science
University of Massachusetts Dartmouth
MA
The Turtleback Diagram for Conditional Probability
Visualizationgraph representationrecursive space partitionVenn diagram
We elaborate on an alternative representation of conditional probability to the usual tree diagram. We term the representation "turtleback diagram" for its resemblance to the pattern on turtle shells. Adopting the set theoretic view of events and the sample space, the turtleback diagram uses elements from Venn diagrams-set intersection, complement and partition-for conditioning, with the additional notion that the area of a set indicates probability whereas the ratio of areas for conditional probability. Once parts of the diagram are drawn and properly labeled, the calculation of conditional probability involves only simple arithmetic on the area of relevant sets. We discuss turtleback diagrams in relation to other visual representations of conditional probability, and detail several scenarios in which turtleback diagrams prove useful. By the equivalence of recursive space partition and the tree, the turtleback diagram is seen to be equally expressive as the tree diagram for representing abstract concepts. We also provide empirical data on the use of turtleback diagrams with undergraduate students in elementary statistics or probability courses.
Introduction
Conditional probability [1,2,3,4] is an important concept in probability and statistics. It has been widely acknowledged that the concept of conditional probability, and particularly its application in practical contexts, is difficult for students [5,6,7,8,9,10,11,12] and especially those without much background or previous training in mathematics at the college level.
Let A and B be two events, then the conditional probability of A given B is defined as
P(A|B) = P(A ∩ B) / P(B).   (1)
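As a small numerical illustration of definition (1) (ours, with made-up numbers), conditional probabilities can be read off from joint outcome counts:

```python
# Hypothetical joint counts of outcomes classified by events A and B.
counts = {("A", "B"): 30, ("A", "not B"): 10,
          ("not A", "B"): 20, ("not A", "not B"): 40}
total = sum(counts.values())

p_B = (counts[("A", "B")] + counts[("not A", "B")]) / total
p_A_and_B = counts[("A", "B")] / total

p_A_given_B = p_A_and_B / p_B    # Eq. (1)
print("P(A|B) =", p_A_given_B)   # 30 / 50 = 0.6
```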
Our experience with undergraduate students is that a major difficulty in understanding and working effectively with conditional probability lies in the level of abstraction involved in the concepts of "event" and "conditioning"; see also [7].
The focus of this article is on productive visual representations for the understanding and application of conditional probability. The significant role of visual representation in mathematics is well-established; see, for example, [13,14]. While visualization is an important topic in statistics (see, e.g., [15,16]), the role of visualization in statistics education or practice is not as well documented. In particular, there is actually not much research into productive visualization of conditional probability [17,18]; popular books such as [19] do not dedicate much effort to visual explanations of the Bayes theorem. There has been some research on school student difficulties with conditional probability [6,8,10,11,12] but much less so for undergraduates. Our aim in discussing turtleback diagrams is to provide a visual tool for the representation of conditional probability that may, additionally, be used in further research on student understanding of conditional probability.
Student difficulties in understanding conditional probability
Tomlinson and Quinn [9], in discussing their graphic model for representing conditional probability (see Section 3.2.1), state:
"Conditional probability is a difficult topic for students to master. Often counter-intuitive, its central laws are composed of abstract terms and complex equations that do not immediately mesh with subjective intuitions of experience. If students are to acquire the mathematical skills necessary for rational judgement, teaching must focus on challenging the personal biases and cognitive heuristics identified by psychologists, and demonstrate in the most accessible way-the power of probabilistic reasoning." (p.7)
Documented student difficulties with conditional probability can be summarized as one of three main types [7]:
1. Interpreting conditionality as causality.
2. Identifying and describing the conditioning event.
3. Confusing P(A|B) and P(B|A).
Tarr and Jones [8] developed a valid and reliable framework for addressing student difficulties with conditional probability, in the context of sampling without replacement. This framework is particularly valuable in carrying out research as to which visual representation of conditional probability is most useful in assisting students and teachers.
Visual representations of conditional probability
Tree diagrams
Tree diagrams have been used by many to help understand conditional probability. The idea of a tree diagram is to use nodes for events, the splitting of a node for sub-events, and the edges in the tree for conditioning. For example, Figure 1 is an illustration of conditional probability. Node * indicates the sample space Ω, and we will use the two interchangeably throughout. Two possible events, either B or its complement B̄, may happen. This is represented by two tree nodes B and B̄. The splitting of node B into two nodes A and Ā indicates that, given B, two possible events, A and Ā, may occur. The edges B → A and B → Ā indicate the conditional probabilities P(A|B) and P(Ā|B), respectively.

Figure 1: The tree diagram approach for conditional probability.

Tree diagrams help many students to understand the concept of conditional probability and apply it for problem solving, but they are not as effective for many others, especially those who are less prepared. Basically, these students find the following two aspects non-intuitive. One is representing events by tree nodes, which usually appear as dots or small circles; events are sets and are more naturally represented by Venn diagram [20] style notation. The other is the idea of representing conditional probability by tree edges; it is hard to see any straightforward connection between this and (1).
To address these issues with the tree diagram, let us re-examine the idea of graphical visualization. There are two important ingredients (or steps) in visualizing an abstract mathematical concept. The first is a concrete graphical representation of the target mathematical objects. This step offloads part of the brain's burden onto concrete graphical objects; without it, one has to keep the relevant abstract objects in mind while preparing for the subsequent mathematical operations. The second is that the mathematical concept or operation can be understood or carried out by a simple operation on the graphical objects. This is the step still performed in the brain, and it should be simple (or at least conceptually simple). If a balance is achieved between these two ingredients, the graphical tool will be successful. This explains why the Venn diagram has been so successful since it was introduced, and has now become the standard graphical tool for set theory. Essentially, the Venn diagram converts set objects to graphical objects in such a way that many set relationships and operations can be accomplished by 'reading' the diagram: the mathematical operation is done directly by the human visual system, instead of having to invoke both the visual system and the brain. For the tree diagram, on the other hand, each of the two ingredients does some of the job, but there is room for improvement.
The turtleback diagram we propose tries to optimize the two steps involved in the design of a graphical tool for conditional probability. In particular, it views events and the sample spaces as sets, and uses elements from Venn diagrams-set intersection, complement and partition-for conditioning, with the additional notion that the area of a set indicates probability whereas the ratio of areas associated with relevant sets indicates conditional probability. Once parts of the diagram are drawn and properly labelled, the calculation of conditional probability involves just simple arithmetic on the area of relevant sets. This makes it particularly easy to understand and use for problem solving.
Other visual representations
There have been several prior attempts to represent conditional probability visually [21,9,22,23], and we discuss briefly three of these below.
Tomlinson-Quinn graphical model
This graphical model, for facilitating a visually moderated understanding of conditional probability, described in [9], is a modified tree diagram.
Tomlinson and Quinn visualize compound events A ∩ B and Ā ∩ B as nodes of a tree (see Figure 2 of [9]), so essentially their idea is still a tree diagram in which they carry out a Venn-diagram-like visualization at each tree node.
Roulette-wheel diagrams
Yamagishi [22] introduces roulette-wheel diagrams as a visual representation tool; see Fig. 1, p. 98 of [22]. He argues that "the graphical nature of [roulette-wheel diagrams] take advantage of people's automatic visual computation in grasping the relationship between the prior and posterior probabilities" (p. 105), and he provides experimental evidence that the use of roulette-wheel diagrams increases understanding of conditional probability beyond that achieved with tree diagrams. In this regard, Sloman et al. [24] state:
"The studies reported support the nested-sets hypothesis over the natural frequency hypothesis. . . . The nested-sets hypothesis is the general claim that making nested-set relations transparent will increase the coherence of probability judgment." (p. 307)
Iconic diagrams
"Iconicity" is the lowest of Terrence Deacon's three levels of symbolic interpretation 1 [25], as it is for Peirce on whose semiotic work Deacon's theory is based.
An icon is a form of graphical representation that requires no significant depth of interpretation: an icon brings to mind, without any apparent intermediate thought, something that it resembles. For example, the diagram in Figure 2 is universally iconic for human beings.

Figure 2: A diagram that is universally iconic for humans.

Brase [23] carried out a number of experiments from which he inferred that an iconic representation of a Bayesian probability question is more effective in eliciting correct responses than either no visual aids or Venn diagrams. A typical question describes a new cancer test: Janine is tested for cancer with this new test; Janine has some probability of a positive result from the test, and a corresponding probability of actually having cancer. An iconic representation for this problem is shown in Figure 3. The strength of such iconic representations is that they reduce the calculation of probabilities to simple counting problems and, as Brase [23] demonstrates, they are effective in assisting students to get correct answers. A weakness of iconic representations such as these is that they rely on counting discrete items and so are quite limited in representing more realistic probabilities.
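To make the counting idea concrete, the short Python sketch below works through a Brase-style test question using whole counts of icons. The base rate, sensitivity and false-positive rate used here (1%, 80% and 10%) are made-up illustrative numbers, not figures taken from [23]; the code is only an added illustration of how, once the population is expressed as counts, the conditional probability becomes a ratio of two counts.

```python
# Counting version of a Bayes-type question, with hypothetical numbers.
population = 1000          # total number of icons drawn
has_cancer = 10            # 1% base rate (assumed for illustration)
no_cancer = population - has_cancer

true_positives = round(0.80 * has_cancer)    # assumed 80% sensitivity
false_positives = round(0.10 * no_cancer)    # assumed 10% false-positive rate

positives = true_positives + false_positives
p_cancer_given_positive = true_positives / positives

print(positives, p_cancer_given_positive)    # 107 icons test positive; 8/107 ≈ 0.075
```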
Turtleback diagrams
Our focus is on how to represent an event graphically, how to relate it to the sample space, how to express the notion of conditioning such that it would be easy to understand the concept of conditional probability, to gather pieces of information together, and to solve problems accordingly.
We start by treating the sample space (denoted by Ω) and events as sets, and in terms of graph, as a region and its sub-regions, similarly as in a Venn diagram. Assume the region representing the original sample space Ω has an area of 1. To simplify our discussion (or to abuse the notation), we will use a label, say "B", to denote the region associated with event B. Note that here the label can be either a single letter, or several letters (such a case indicates the intersection of events. For example, a label "AB" indicates the intersection of events A and B and thus that of regions A and B). Similarly we can use the union of two regions (viewed as sets) to represent the union of two events. Other operations of events can also be defined accordingly in terms of set operations; we omit the details here. To quantify the chance of an event, we associate it with the area of the relevant region. For example, P(B) is indicated by the area of region B.
The centerpiece in 'graphing' conditional probability is to express the notion of conditioning. This can be achieved by re-examining the definition of conditional probability as given in (1). It can be interpreted as follows. Let A be the event of interest. Upon conditioning, say, on event B, both the new effective sample space and event A in this new sample space can be viewed as their restriction on B, that is, Ω becomes Ω ∩ B = B and A becomes A ∩ B, respectively. The conditional probability P(A|B) can now be interpreted as the proportion of the part of A that is inside B (i.e., A ∩ B) out of region B, that is,
P(A|B) = (area of region A ∩ B) / (area of region B).    (2)
Now we can describe how to sketch a turtleback diagram. We start by drawing a circular disk which represents the sample space Ω. Then we represent events by partitioning the circular disk and the resulting subregions. To facilitate our discussion, we define the partition of a set [26].
P = {S_i | i ∈ I} is a partition of a set S if S = ∪_i S_i and S_i ⊆ S, S_i ∩ S_j = ∅ for all i ≠ j ∈ I.
We will use Figure 4 to assist our description. To represent the partition Ω = B ∪ B̄, we use a straight line "adc" to split the circular disk into two halves, i.e., the regions surrounded by "abcda" and "adcea", which stand for events B and B̄, respectively. The regions corresponding to events B and B̄ can be further split for a more refined representation involving other events. To represent conditional probability as defined by (1), event B is written as

B = (A ∩ B) ∪ (Ā ∩ B),    (3)
which can be represented by splitting the region for B, i.e., "abcda", with a straight line "db". The conditional probability P(A|B) can then be calculated as the ratio of the area for region "bcdb" and that for region "abcda".
The turtleback diagram leads to a partition of the sample space Ω as follows
Ω = B ∪ B̄    (4)
  = B̄ ∪ (A ∩ B) ∪ (Ā ∩ B).    (5)
Continuing this process, we can define events as complicated as we like in a simple hierarchical fashion, as a nested sequence of partitions P_0, P_1, P_2, ..., where P_0 = {Ω}, P_1 = {B, B̄}, and P_{i+1} is a refinement of P_i for each index i ≥ 0, in the sense that each element in P_{i+1} is a subset of some element in P_i.
We can now assign labels to each of the sub-regions, e.g., by the name of the relevant events to indicate that a particular region is associated with that event.
For example, in Figure 4 we assign labels "AB" and "ĀB" to regions "bcdb" and "abda", respectively. Here "AB" means A ∩ B and "ĀB" indicates Ā ∩ B, and the same convention carries over throughout. Accordingly, the turtleback diagram simplifies to the right panel in Figure 4. Note that here an event need not be a connected region; rather, it could be a collection of patches (i.e., small regions), with each of them capturing information from a different source. This adds a little burden to the calculation but costs essentially nothing conceptually or in terms of visualization.
One advantage of such a recursive-partition representation of the sample space Ω is that the data are now highly organized and we can easily operate on them, for example to find the probability of a certain event. The idea of organizing data via recursive space partition and manipulating it through labels has been explored in CART (classification and regression trees [27]) and, more recently, in random projection trees [28], as well as in recent work by one author and his colleagues [29]. Note that dividing a region into a number of small patches also entails the total probability formula, an important ingredient in conditional probability to which formula (3) is related. We will use the 'Lung disease and smoking' example to illustrate the use of turtleback diagrams for conditional probability.

Figure 5: The turtleback diagram for the 'Lung disease and smoking' example. The letters "L" and "L̄" stand for "with lung disease" and "without lung disease", and "S" and "S̄" for "smoking" and "nonsmoking", respectively.
The lung disease and smoking example
This example is taken from online sources (see [30]). It is described as follows.
"According to the Arizona Chapter of the American Lung Association, 6.0% of population have lung disease. Of those having lung disease, 92.0% are smokers; of those not having lung disease, only 24.0% are smokers. Answer the following questions.
(1) If a person is randomly selected in the population, what is the chance that she is a smoker having lung disease?
(2) If a person is randomly selected in the population, what is the chance that she is a smoker?
(3) If a person is randomly selected and is discovered to be a smoker, what is the chance that she has lung disease?"
According to the information given in the problem, we can sketch a graph as Figure 5. Labels and area information to each sub-regions are assigned properly.
Assume the circular disk has an area of 1. Now we can answer the questions quickly as follows.
(1) The answer is simply the area of region "adba", which is 6% · 92% = 0.0552. (2) The answer is the area of region "edbae", which is 6% · 92% + 94% · 24% = 0.2808. This is, in essence, the total probability formula P(S) = P(L ∩ S) + P(L̄ ∩ S).
(3) Recognizing that this involves conditional probability and is the ratio of two relevant areas, (area of "adba"/area of "edbae")=0.0552/0.2808=0.1966.
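As a quick numerical check, the answers above can be reproduced by recording the area of each labelled patch of Figure 5 and doing the same arithmetic in Python. The snippet is an added verification sketch (the lowercase letters in the labels stand for the complements), not part of the original example.

```python
# Areas of the four patches of the turtleback diagram in Figure 5.
patch = {
    "LS": 0.06 * 0.92,   # lung disease and smoker
    "Ls": 0.06 * 0.08,   # lung disease and non-smoker
    "lS": 0.94 * 0.24,   # no lung disease and smoker
    "ls": 0.94 * 0.76,   # no lung disease and non-smoker
}

p_smoker_with_disease = patch["LS"]                # (1) 0.0552
p_smoker = patch["LS"] + patch["lS"]               # (2) 0.2808, total probability formula
p_disease_given_smoker = patch["LS"] / p_smoker    # (3) ratio of two areas

print(p_smoker_with_disease, p_smoker, round(p_disease_given_smoker, 4))  # 0.0552 0.2808 0.1966
```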
Difficulty with the Venn diagram
The Venn diagram is known as the standard graphical tool for set theory. Both the Venn diagram and the turtleback diagram use regions to represent sets. However, there is a major difference. In a turtleback diagram, as illustrated in Figure 4, straight lines, such as lines "adc", "db", etc., are used to split the sample space and regions. In contrast, the Venn diagram represents events by drawing circular disks. Partitioning the sample space Ω in such a way causes substantial difficulty in handling the complement operation, one crucial ingredient in conditional probability. One has to deal with a setting where the complement of a region surrounds the region itself; for example, in Figure 6, "S̄" and "L̄" surround "S" and "L", respectively. This causes an extra burden for the human brain or the visual system. We will illustrate with the 'Lung disease and smoking' example.

Figure 6: The Venn diagram approach to the 'Lung disease and smoking' example.
In Figure 6, one would find it tricky to label the region and put area information for "L̄" (which is 94%) without causing confusion. Moreover, it may require some extra work (versus simply "reading" from the graph) to assign the label "LS", or to calculate the area of this region. In contrast, the turtleback diagram (cf. Figure 5) introduces straight lines, e.g., "adc", "ed", and "db", which readily avoid the obstacles caused by set intersections or complements in a Venn diagram.
Semantic equivalence of the turtleback and the tree diagram
Given a graphical representation, it is natural to ask questions about its expressive power-will it be expressive enough to represent a complicated or very abstract concept? We will show that the turtleback diagram is equally expressive as the tree diagram.
The way that the turtleback diagram progressively refines the partition over the sample space is essentially a recursive space partition, where the sets involved in the partition are organized as a chain of enclosing sets. For example, in Figure 4, we have
(A ∩ B) ⊆ B ⊆ Ω,   and   (Ā ∩ B) ⊆ B ⊆ Ω.    (6)
By equivalence (see, for example, [27]) between the recursive space partition and the tree structure, we can actually show the "semantic" equivalence between the turtleback diagram and the tree diagram. The remaining of this section is dedicated to this. Let a tree node correspond to a set in a recursive space partition with the following three properties:
1) The root node corresponds to the sample space Ω; 2) All the child nodes of a node form a decomposition of this node; 3) Down from the root node, the nodes along any path form a chain of enclosing sets.
Property 2) entails the total probability formula, and property 3) corresponds to a refinement of a partition. This allows one to turn the turtleback diagram in Figure 4 into a tree representation, that is, the left panel of Figure 7. The "chain" property forces a child node to be a restriction of its parent node. We can use this to simplify the labels for the tree nodes, e.g., the left panel becomes the right in Figure 7. Note that in the right panel, really node A corresponds to the set Ω ∩ B ∩ A, that is, the intersection of all sets along the path from the root to node A (i.e., the tree path * → B → A). For real world conditional probability problems, often the following formula is used instead of (1), due to availability of information from multiple sources
P(A|B) = P(A ∩ B) / Σ_i P(B ∩ A_i),    (7)

where ∪_i A_i = Ω. This requires the calculation of probabilities of the form P(B ∩ A_i), or in other words, the probability of the intersection of multiple events.
In Figure 7, by construction node A, through the path * → B → A, has size P(Ω ∩ B ∩ A), and node B has size P(B). We can now assign the weight of edge B → A according to the proportion of node A (treated as a subset of B) out of B, or the probability of transition to node A given that one has reached node B from the root. This equals P(A|B). Such a definition is valid as the sizes of nodes A, Ā and B satisfy P(A ∩ B) + P(Ā ∩ B) = P(B). Thus, in Figure 8, the probability that one arrives at a node, say A, along the path Ω → B → A is given by
P(ΩBA) = P(Ω ∩ B ∩ A) = P(BA) = P(B) · P(A|B),(8)
which is simply the product of edge weights along the path Ω → B → A (the edge weight for Ω → B is P(B)). Same reasoning extends to any node in a tree. Thus we have provided a tree-based interpretation of the turtleback diagram for conditional probability. Such an algebraic system on the tree has the following two properties:
1. The probability of arriving at any node equals the product of edge weights along the path.
2. An edge H → L has weight P(L | *, ..., H).
This is exactly what a tree diagram would represent. The above properties extend readily to a series of events. For example, the probability of a series of events B → C → D can be computed as the probability of arriving at node D along the tree path * → B → C → D (cf. Figure 8):

P(B ∩ C ∩ D) = P(* → B) · P(B → C) · P(C → D) = P(B) · P(C|B) · P(D|B, C).
This approach applies even for non-sequential events, as one can artificially attach an order to the events according to the "arrival" of relevant information.
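The path-product rule is easy to mirror in code: store one conditional probability per edge and multiply the weights along a path from the root. The numbers below are placeholders chosen only for illustration; they are not tied to any particular example in the text.

```python
import math

# Edge weights: edges[(parent, child)] holds the conditional probability of the child
# given that the path has already reached the parent.
edges = {
    ("*", "B"): 0.10,   # P(B); placeholder value
    ("B", "C"): 0.90,   # P(C | B); placeholder value
    ("C", "D"): 0.30,   # P(D | B, C); placeholder value
}

def path_probability(path):
    """Probability of arriving at the last node of `path`, starting from the root '*'."""
    return math.prod(edges[(a, b)] for a, b in zip(path, path[1:]))

print(path_probability(["*", "B", "C", "D"]))   # P(B ∩ C ∩ D) = 0.10 * 0.90 * 0.30 = 0.027
```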
Thus, we have shown the semantic equivalence between the turtleback diagram and the tree diagram. Their difference is mainly on the visual representation, which matters as visual tools.
The tree diagram appears to be less intuitive than the turtleback diagram as there is no longer an association between the area of a region and its probability (one may use the thickness of an edge to indicate the probability, but that is less attractive too). However, the tree diagram seems to scale better to large problems.
Case studies
We consider four examples in our case studies: the 'Lung disease and smoking' example, the 'History and war' example, the 'Lucky draw' example, and the 'Urn model' example [1]. As a matter of fact, very few students (about 10-15%) could do the 'History and war' example completely correctly in an in-class practice after we had explained to them the non-graph-based treatment of conditional probability. That motivated us to adopt the graph-based approach. In the following, we provide the details of the examples.

Figure 9: The tree diagram approach for the 'Lung disease and smoking' example. The letters "L" and "L̄" stand for "with lung disease" and "without lung disease", and "S" and "S̄" for "smoking" and "nonsmoking", respectively.
The lung disease and smoking example
With the tree diagram, the answer to (1) is the probability of reaching node S along the path * → L → S, which is the product of edge weights along this path and is calculated as 6% · 92% = 0.0552. The solution to (2) is the sum of the products of edge weights along the two paths * → L → S and * → L̄ → S, that is, 6% · 92% + 94% · 24% = 0.2808. The answer to (3) is the ratio of the product of edge weights along the path * → L → S to the sum over the two paths, which is 0.0552/0.2808 = 0.1966.
The History and War example
This example is artificially created so that it has a similar problem structure as the 'Lung disease and smoking' example. It is described as follows.
"According to a market research about the preference of movies, 10% of the population like movies related to history. Of those who like movies related to history, 90% also like movies related to wars; of those who do not like movies related to history, only 30% like movies related to wars. Answer the following questions.
(a) If a person is randomly selected in the population, what is the chance that she likes both movies related to wars and movies related to history?
(b) If a person is randomly selected in the population, what is the chance that she likes movies related to wars?
(c) If a person is randomly selected and is discovered to like movies related to wars, what is the chance that she likes movies related to history?"
We can construct a turtleback diagram as in the left panel of Figure 10. One can quickly answer the questions as follows. (a) is the area of region "adba", which is given by 10% · 90% = 0.09, (b) is the total area of region "edbae", which is given by 10% · 90% + 90% · 30% = 0.36, and (c) is the ratio of (a) and (b), which is 0.09/0.36 = 0.25.

Figure 10: Solving the 'History and War' example with the turtleback diagram and the tree diagram, respectively. The letters "H" and "H̄" stand for "like movies related to history" and "do not like movies related to history", and "W" and "W̄" for "like movies related to wars" and "do not like movies related to wars", respectively.
Similarly, the right panel of Figure 10 is a tree diagram. One can answer the questions as follows. (a) is the product of edge weights along the path * → H → W, which is given by 10% · 90% = 0.09, (b) is the sum of the products of edge weights along the two paths * → H → W and * → H̄ → W, which is given by 10% · 90% + 90% · 30% = 0.36, and (c) is the ratio of (a) and (b), which is 0.09/0.36 = 0.25.
The lucky draw example
The lucky draw example is taken from the popular lucky draw game. This example is especially useful as many sampling without replacement problems can be converted to this and solved easily. Here we take a simplified version with the total number of tickets being 5 and there is only one prize ticket. The description is as follows.
"There are 5 tickets in a box with one being the prize ticket. 5 people each randomly draws one ticket from the box without returning the drawn ticket to the box. Is this a fair game (i.e., each draws the prize ticket with the same chance)?" Figure 11: The tree diagram for the 'Luck draw' game. The letters "P " and "N " denote the prize ticket and non-prize ticket, respectively. Figure 11 depicts the process of ticket drawing. As here our interest is the prize ticket, the tree branch that has already seen the prize ticket will not grow further. Easily the probability of getting the prize ticket at the first draw is 1/5. Following Figure 11, the probability of getting the prize ticket at the second draw is the product of edge weights along the path → N → P , which is 0.8 · 0.25 = 0.2. Similarly, the probability of getting the prize ticket at the third draw is given by 0.8 · 0.75 · 1/3 = 0.2, and so on. Figure 12 is the turtleback diagram for the 'Luck draw' game. Easily the probability of getting the prize ticket at the first draw is the area of the region labelled Figure 12: The turtleback diagram for the 'Luck draw' game. The letters in the labels indicates status of each attempt, "P " for prize and "N " for a non-prize ticket. For example, "NNP" means getting non-prize tickets for the first two draws and the prize ticket at the third draw. The percentage next to the label indicates the probability of a prize at the last draw, conditional on the outcome of all preceding draws. For example, "25%" next to "NP" means the conditional probability of getting a prize is 25% in the second draw if the first draw is not a prize. Or in other words, that is the ratio of the area of the slice containing "NP" to all slices after the first slice is taken away.
as "P", which is 0.2. Following the figure, the probability of getting the prize ticket at the second draw is the area of the region labelled as "NP", which is 0.8 · 25% = 0.2. Similarly, the probability of getting the prize ticket at the third draw is given by 0.8 · 75% · 1/3 = 0.2, and so on.
An urn model example
This can be viewed as an extension of the lucky draw problem, in the sense that there is more than one prize ticket here. Note that this example mainly serves to demonstrate that both the tree and the turtleback diagram can be used to solve problems of such complexity (one can solve this problem quickly by distinguishing the two green balls and applying the result of the lucky draw game 2 ). Assume there are 2 green balls and 3 red balls. The problem is described as follows.
"There are 2 green balls and 3 red balls in an urn. One randomly picks one ball for five times from the urn without returning. Will each draw have the same chance of getting the green ball?" Figure 13: The tree-based approach for an urn model with 2 green balls and 3 red balls. We use the to denote the root node, and "R" and "G" for red ball and green ball, respectively. As our interest is the green balls, so tree branches that have seen 2 green balls will not grow any further. Figure 13 is the tree diagram for the urn model. We are not going to calculate the probability of getting a green ball for each draw, instead we only do it for the third draw. The probability of getting a green ball at the third draw is give by the sum of the product of edge weights along three paths
* → G → R → G,   * → R → R → G,   * → R → G → G,
which is (2/5)(3/4)(1/3) + (3/5)(1/2)(2/3) + (3/5)(1/2)(1/3) = 2/5. One can similarly calculate that the probability of getting a green ball at other draws all equal to 2/5. Figure 14 is the turtleback diagram for the urn model. To calculate the probability that the third draw gets a green ball, we simply sum up the area of all regions with a label such that the third letter is "G". That is, the total area of regions labelled as "RGG", "GRG", "RRGG", "RRGRG", which is
(3/5) · (1/2) · (1/3) + (2/5) · (3/4) · (1/3) + (3/5) · (1/2) · (2/3) · (1/2) + (3/5) · (1/2) · (2/3) · (1/2) = 0.4.
The calculation seems a little tedious, but conceptually very simple, as long as one could follow the way the regions are partitioned.
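The same brute-force idea verifies the 2/5 answer for the urn model: treat the two green and three red balls as distinguishable, enumerate all orderings, and count how often a green ball occupies a given position. This is an added verification sketch rather than part of the original calculation.

```python
from itertools import permutations
from fractions import Fraction

balls = ["G1", "G2", "R1", "R2", "R3"]                   # distinguishable for counting purposes
orders = list(permutations(balls))

def prob_green_at(draw):
    """Probability that the ball drawn at position `draw` (1-based) is green."""
    hits = sum(1 for order in orders if order[draw - 1].startswith("G"))
    return Fraction(hits, len(orders))

print([prob_green_at(d) for d in range(1, 6)])           # [2/5, 2/5, 2/5, 2/5, 2/5]
```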
Empirical data
Figure 14: The turtleback diagram for an urn model with 2 green balls and 3 red balls. "R" and "G" are used to denote a red ball and a green ball, respectively. As our interest is in the green balls, tree branches that have seen 2 green balls do not grow any further. Each letter, "G" or "R", indicates the outcome of a particular draw. For example, "RGRG" indicates that the first draw gets a red ball, the second draw a green ball, the third a red and the fourth a green ball.

We carried out case studies on over 200 students. This includes students in the elementary statistics class, STAT235 (non-calculus based), at University of … Table 1 gives a summary of students involved in the case studies.
The study is carried out as follows. First we explain to students the concept of conditional probability with a non-graph-based approach. Then we continue with two exercises. In the first exercise, we explain to students the 'Lung disease and smoking' example, with both the turtleback and the tree diagram, and have students solve the 'History and war' problem, or vice versa (for different classes we were teaching). In the other exercise, we explain the 'Lucky draw' example and have students solve the 'Urn model' problem, or vice versa. Due to time constraints on the course schedule, we did not ask students to solve problems using a particular technique followed by its discussion. Rather, we discussed both the turtleback and the tree diagrams, and let students choose one of them for problem solving. Table 2 is a breakdown of the number of students involved.
We collect two types of data from the case studies, one on students' preference between graph and non-graph based approach, and the other on students' preference between the turtleback and the tree diagram. Here, except for the case of non-graph based approach, by preference we mean the students actually used the technique for problem solving, and nearly in all such cases they could apply it correctly in solving the assigned problem; so we use this as measurement of learning outcome (with an understanding that further experiments may be needed to validate this). The results are reported in Table 3. The data collected are quite encouraging. About 78-88% students found a graph tool helpful. For the 'Lucky draw' and the 'Urn model', fewer students found it helpful. This is possibly because these two problems appear to be harder to students: even a graphical tool may not help them much. Further experiments are needed to validate or understand this.
In terms of a preference for which graphical tool, the results show an interesting pattern. For the 'Lung disease and smoking' and the 'War and history' example, more students prefer the turtleback diagram to the tree diagram, around 53-54% vs 33-34%. The 'Lucky draw' and the 'Urn model' examples exhibit an opposite pattern, more students prefer the tree diagram to the turtleback diagram, around 46-48% vs 31-34% 3 . This is probably due to the fact that, in the first two examples, the sample spaces and events involve populations in the usual sense, while the last two examples involve sequential decisions, for which a tree structure that represents the decision dichotomy may be more natural (although in such cases, the concept of conditional probability is not as natural as that in the turtleback diagram). Further experiments are needed to confirm this. The advantage of the turtleback diagram over the tree diagram appears to decrease as the problem becomes harder, but this is not a serious problem for beginning students as those who most need help from a graphical representation are just those who could not solve simple problems. Moreover, we do not expect one single graphical tool can help solve all the problems, rather different people may use different tools for a particular problem.
Potential research questions
Many instances of conditional probability occur in sampling without replacement. Tarr and Jones [8] describe a framework for assessing middle school students' thinking in conditional probability and independence, which is elaborated in [12]. This framework is a levels model, with 4 levels-Subjective, Transitional, Informal Quantitative, and Numerical -subject to all the difficulties such a model has as students transition from one level to another.
Research Question 1: Are turtleback diagrams, as compared to tree diagrams, helpful to students, at any or all of the Tarr-Jones framework levels, in understanding conditional probability? If so, how can we measure and assess the comparative utility of turtleback diagrams compared to tree diagrams?

Research Question 2: Related to Research Question 1, specifically, how helpful are turtleback diagrams in helping students understand conditional probability in the context of sampling without replacement?

Conditional probability is increasingly being introduced into middle school in the United States. The Conference Board of the Mathematical Sciences [31] stated:

Of all the mathematical topics now appearing in middle grades curricula, teachers are least prepared to teach statistics and probability. Many prospective teachers have not encountered the fundamental ideas of modern statistics in their own K-12 mathematics courses... Even those who have had a statistics course probably have not seen material appropriate for inclusion in middle grades curricula. (p. 114)

Research Question 3: Are turtleback diagrams helpful to middle school teachers of probability and statistics in (a) enhancing their own understanding of conditional probability and (b) assisting them to better teach conditional probability? If so, how and to what extent?
Conclusions
Motivated by difficulties encountered by many undergraduate students new to statistics, we re-examined the definition and representation of conditional probability, and presented a Venn-diagram like approach: the turtleback diagram. We discussed our graphical tool in the context of other graphical models for conditional probability, and carried out case studies on over 200 students of elementary statistics or probability classes. Our case study results are encouraging and the graph-based approaches could potentially lead to significant improvements in both the students' understanding of conditional probability and problem solving. While the existing tree diagram is preferred to the turtleback diagram on problems that involve a sequential decision, the turtleback diagram is considered more helpful in settings where the underlying population resembles the usual human population; it is exactly in such situations that weaker students are more likely to need help. Though the turtleback diagram appears very different from the tree diagram, we are able to unify them and show their equivalence in terms of semantics.
Our discussion suggests a simple framework for visualizing abstract concepts, that is, a suitable graph representation of the abstract concept followed by a simple post-processing in the visual-brain system. A good visualization idea needs to balance both. We are able to use such a framework to interpret the difficulty encountered by the tree diagram, and aid our development of the turtleback diagram. Further studies are expected to validate or to adopt such a framework to general visualization tasks. Given the increasingly important role played by data visualization in data science and exploratory data analysis [15,32,16,33], it would be worthwhile to give a few remarks here comparing the graph representation of abstract concepts and data visualization. These two concepts are different yet closely related. Graphical representation aims to understand an abstract (or complicated) concept by representing elements of the concept with a graph, while data visualization seeks to understand the data or the information behind by displaying aspects (i.e., descriptive statistics) of the data. In terms of implementation, as both aim to help understanding or reasoning, the used graphical objects need to be simple (though simple in different ways in the two cases). In data visualization, the graphical objects need to be simple so that people can quickly grasp the information conveyed or to understand the concept behind without resorting to paper and pencil; in graphical representation of concepts, the objects need to be conceptually simple and easy to manipulate for applications of the concepts.
Our case studies suggest that it is worthwhile to introduce such graphical tools to students whose success would seem to depend on them. We hope that this will benefit our statistics colleagues who are teaching elementary statistics and students who are struggling with the concept of conditional probability and its application to problem solving. The potential savings in time can be huge.
Figure 3: An iconic representation of the effectiveness of a cancer test.

Figure 4: Illustration of the turtleback diagram for conditional probability. The left panel shows the partition of Ω by Ω = B ∪ B̄, the middle panel shows that event B is further partitioned by B = (A ∩ B) ∪ (Ā ∩ B), and the right panel is a simplified version of the middle panel where "AB" stands for A ∩ B and "ĀB" stands for Ā ∩ B. The conditional probability P(A|B) is the ratio of the area of region "bcdb" to that of region "abcda".

Figure 7: The tree diagram representation of the turtleback diagram in Figure 4.

Figure 8: The tree diagram approach illustrated.
As a conservative estimate, assume each year there are about 1.5 million bachelor's degrees awarded in US (about 1.67 million awarded in 2009). Assume there are about 200, 000 of them have taken an elementary statistics class, and about 10% of them need help and succeed with our proposed approach, and further assume an average class size of 40. If each instructor saves 2 hours of time in each ele-mentary statistics class and each student who benefits from our approach saves 1 hour, then the estimated total amount of time saved is at least 30, 000 hours per year in the U.S. alone.
Table 2: Number of students involved in the empirical study, broken down by course and problem.

Table 3: Data collected in the case studies on whether graphs help understand the concept of conditional probability, and the preference between the turtleback diagram and the tree diagram.

Question                    Neither helpful   Either one helpful   Prefer Turtleback   Prefer Tree
Lung disease and smoking    13.7%             86.3%                53.0%               33.3%
War and history movies      11.1%             88.9%                54.6%               34.3%
Lucky draw                  17.1%             82.9%                34.2%               48.6%
Urn model                   21.9%             78.1%                31.6%               46.5%
In Deacon's framework, there are three levels of referential relationship in a cognitive process, including iconic, indexical, and symbolic reference, where higher levels are built hierarchically upon lower levels.
Label the two green balls as G 1 and G 2 , respectively. Then the probability of getting a green ball at each draw is simply that of getting G 1 or G 2 . Either G 1 or G 2 can be treated as the only prize ticket in the lucky draw game thus the probability of getting either one is 1/5, and so the probability of getting a green ball at any draw is always 2/5.
Since in all cases, the sample size is large enough and the difference between contrast groups is significant, we did not carry out a hypothesis testing using the reported data.
Acknowledgement
The authors are grateful to Professor Yong Zeng at UMKC for kindly pointing to the 'Lung disease and smoking' example, and for encouragement and support on some of the case studies.
References

[1] Rice, J. (1995) Mathematical Statistics and Data Analysis (Second Edition). Duxbury Press.
[2] Johnson, R. and Tsui, K. (2003) Statistical Reasoning and Methods. John Wiley.
[3] Ancker, J. S. (2006) The language of conditional probability. Journal of Statistics Education, 14, 1-5.
[4] Mann, P. (2003) Introductory Statistics. John Wiley.
[5] Tversky, A. and Kahneman, D. (1980) Causal schemas in judgments under uncertainty. Progress in Social Psychology, 1, 49-72.
[6] Fischbein, E. and Gazit, A. (1984) Does the teaching of probability improve probabilistic intuitions? Educational Studies in Mathematics, 15, 1-24.
[7] Falk, R. (1986) Conditional probabilities: insights and difficulties. Proceedings of the Second International Conference on Teaching Statistics, 292-297.
[8] Tarr, J. E. and Jones, G. A. (1997) A framework for assessing middle school students' thinking in conditional probability and independence. Mathematics Education Research Journal, 9, 39-59.
[9] Tomlinson, S. and Quinn, R. (1997) Understanding conditional probability. Teaching Statistics, 19, 2-7.
[10] Tarr, J. E. (2002) The confounding effects of the phrase '50-50 Chance' in making conditional probability judgments. Focus on Learning Problems in Mathematics, 24, 35-53.
[11] Yáñez, G. C. (2002) Some challenges for the use of computer simulations for solving conditional probability problems. The 6th International Conference on Teaching Statistics, Cape Town, South Africa.
[12] Tarr, J. E. and Lannin, J. K. (2005) How can teachers build notions of conditional probability and independence? In Exploring Probability in School (pp. 215-238), Springer, US.
[13] Arcavi, A. (2003) The role of visual representations in the learning of mathematics. Educational Studies in Mathematics, 52, 215-241.
[14] Presmeg, N. C. (2006) Research on visualization in learning and teaching mathematics. In Handbook of Research on the Psychology of Mathematics Education, Sense Publishers, Rotterdam, 205-235.
[15] Tukey, J. (1977) Exploratory Data Analysis. Addison-Wesley.
[16] Cleveland, W. S. (1993) Visualizing Data. Hobart Press.
[17] Morris, D. (2016) Bayes' Theorem Examples: A Visual Introduction For Beginners. ISBN-13: 978-1549761744.
[18] Collins, R. (2017) Bayes Theorem Examples: Visual Book for Beginners. CreateSpace Independent Publishing Platform. ISBN-13: 978-1547270385.
[19] Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A. and Rubin, D. B. (2013) Bayesian Data Analysis (3rd Edition). Chapman and Hall, London.
[20] Edwards, A. W. F. (2004) Cogwheels of the Mind: The Story of Venn Diagrams. Johns Hopkins University Press, Baltimore, MD.
[21] Gigerenzer, G. and Hoffrage, U. (1995) How to improve Bayesian reasoning without instruction: frequency formats. Psychological Review, 102(4), 684-705.
[22] Yamagishi, K. (2003) Facilitating normative judgments of conditional probability: Frequency or nested sets? Experimental Psychology, 50, 97-106.
[23] Brase, G. L. (2009) Pictorial representations in statistical reasoning. Applied Cognitive Psychology, 23, 369-381.
[24] Sloman, S. A., Over, D., Slovak, L. and Stibel, J. M. (2003) Frequency illusions and other fallacies. Organizational Behavior and Human Decision Processes, 91, 296-309.
[25] Deacon, T. W. (1998) The Symbolic Species: The Co-evolution of Language and the Brain. WW Norton & Company, NY.
[26] Chartrand, G. and Zhang, P. (2011) Discrete Mathematics. Waveland Press, Inc.
[27] Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984) Classification and Regression Trees. Wadsworth, CA.
[28] Dasgupta, S. and Freund, Y. (2008) Random projection trees and low dimensional manifolds. Fortieth ACM Symposium on Theory of Computing (STOC), 537-546.
[29] Yan, D., Huang, L. and Jordan, M. (2009) Fast approximate spectral clustering. Proceedings of the 15th International Conference on Knowledge Discovery and Data Mining (SIGKDD), 907-916.
[30] Weiss, N. A. (2012) Introductory Statistics. Addison-Wesley, 193.
[31] Conference Board of the Mathematical Sciences (2001) The Mathematical Education of Teachers. American Mathematical Society, Providence, RI.
[32] Tufte, E. (1983) The Visual Display of Quantitative Information. Graphics Press, CT.
[33] Yau, N. (2011) Visualize This: The FlowingData Guide to Design, Visualization, and Statistics. Wiley.
QUANTUM KNOT INVARIANTS

STAVROS GAROUFALIDIS

To Don Zagier, with admiration

Abstract. This is a survey talk on one of the best known quantum knot invariants, the colored Jones polynomial of a knot, and its relation to the algebraic/geometric topology and hyperbolic geometry of the knot complement. We review several aspects of the colored Jones polynomial, emphasizing modularity, stability and effective computations. The talk was given in the Mathematische Arbeitstagung, June 24-July 1, 2011.

Key words and phrases: Jones polynomial, knots, Quantum Topology, volume conjecture, Nahm sums, stability, modularity, modular forms, mock-modular forms, q-holonomic sequence, q-series. 1991 Mathematics Classification: Primary 57N10, Secondary 57M25. The author was supported in part by NSF.
The Jones polynomial of a knot
Quantum knot invariants are powerful numerical invariants defined by Quantum Field theory with deep connections to the geometry and topology in dimension three [Wit89]. This is a survey talk on the various limits the colored Jones polynomial [Jon87], one of the best known quantum knot invariants. This is a 25 years old subject that contains theorems and conjectures in disconnected areas of mathematics. We chose to present some old and recent conjectures on the subject, emphasizing two recent aspects of the colored Jones polynomial, Modularity and Stability and their illustration by effective computations. Zagier's influence on this subject is profound, and several results in this talk are joint work with him. Of course, the author is responsible for any mistakes in the presentation. We thank Don Zagier for enlightening conversations, for his hospitality and for his generous sharing of his ideas with us.
The Jones polynomial J_L(q) ∈ Z[q^{±1/2}] of an oriented link L in 3-space is uniquely determined by linear skein relations [Jon87]. The Jones polynomial has a unique extension to a polynomial invariant J_{L,c}(q) of links L together with a coloring c of their components by positive natural numbers; in the defining rules, (L ∪ K, c ∪ {N}) denotes a link with a distinguished component K colored by N and K^{(2)} denotes the 2-parallel of K with zero framing. Here, a natural number N attached to a component of a link indicates the N-dimensional irreducible representation of the Lie algebra sl(2, C). For a detailed discussion of the polynomial invariants of links that come from quantum groups, see [Jan96, Tur88, Tur94]. The above relations make clear that the colored Jones polynomial of a knot encodes the Jones polynomials of the knot and its 0-framed parallels.

Three limits of the colored Jones polynomial

In this section we will list three conjectures, the MMR Conjecture (proven), the Slope Conjecture (mostly proven) and the AJ Conjecture (less proven). These conjectures relate the colored Jones polynomial of a knot with the Alexander polynomial, with the set of slopes of incompressible surfaces and with the PSL(2, C) character variety of the knot complement.

2.1. The colored Jones polynomial and the Alexander polynomial. We begin by discussing a relation of the colored Jones polynomial of a knot with the homology of the universal abelian cover of its complement. The homology H_1(M, Z) ≃ Z of the complement M = S^3 \ K of a knot K in 3-space is independent of the knot K. This allows us to consider the universal abelian cover M̃ of M with deck transformation group Z, and with homology H_1(M̃, Z) a Z[t^{±1}]-module. As is well known, this module is essentially torsion and its order is given by the Alexander polynomial Δ_K(t) ∈ Z[t^{±1}] of K [Rol90]. The Alexander polynomial does not distinguish knots from their mirrors and satisfies Δ_K(1) = 1.
There are infinitely many pairs of knots (for instance (10_22, 10_35) in the Rolfsen table [Rol90, BN05]) with equal Jones polynomial but different Alexander polynomial. On the other hand, the colored Jones polynomial determines the Alexander polynomial. This so-called Melvin-Morton-Rozansky Conjecture was proven in [BNG96], and states that

(1)   Ĵ_{K,n}(e^ℏ) = Σ_{i ≥ j ≥ 0} a_{i,j} ℏ^i n^j,   with   Σ_{j ≥ 0} a_{j,j} (ℏ n)^j = 1 / Δ_K(e^{ℏ n}).

Here Ĵ_{K,n}(q) = J_{K,n}(q)/J_{Unknot,n}(q) ∈ Z[q^{±1}] is a normalized form of the colored Jones polynomial. The above conjecture is a statement about formal power series. A stronger analytic version is known [GL11a, Thm. 1.3], namely for every knot K there exists an open neighborhood U_K of 0 ∈ C such that for all α ∈ U_K we have

lim_n Ĵ_{K,n}(e^{α/n}) = 1 / Δ_K(e^α),

where convergence is uniform with respect to compact sets. More is known about the summation of the series (1) along a fixed diagonal i = j + k for fixed k, both on the level of formal power series and on the analytic counterpart. For further details the reader may consult [GL11a] and references therein.
2.2. The colored Jones polynomial and slopes of incompressible surfaces. In this section we discuss a conjecture relating the degree of the colored Jones polynomial of a knot K with the set bs_K of boundary slopes of incompressible surfaces in the knot complement M = S^3 \ K. Although there are infinitely many incompressible surfaces in M, it is known that bs_K ⊂ Q ∪ {1/0} is a finite set [Hat82]. Incompressible surfaces play an important role in geometric topology in dimension three, often accompanied by the theory of normal surfaces [Hak61]. From our point of view, incompressible surfaces are a tropical limit of the colored Jones polynomial, corresponding to an expansion around q = 0 [Gar11c]. The Jones polynomial of a knot is a Laurent polynomial in one variable q with integer coefficients. Ignoring most information, one can consider the degree δ_K(n) of Ĵ_{K,n+1}(q) with respect to q. Since (Ĵ_{K,n}(q)) is a q-holonomic sequence [GL05], it follows that δ_K is a quadratic quasi-polynomial [Gar11a]. In other words, we have

δ_K(n) = c_K(n) n^2 + b_K(n) n + a_K(n),

where a_K, b_K, c_K : N → Q are periodic functions. For the (−2, 3, 7) pretzel knot, for instance, a_K(n) is a periodic sequence of period 4 given by 0, −1/8, −1/2, −1/8 if n ≡ 0, 1, 2, 3 mod 4 respectively; in addition, we have bs_{(−2,3,7)} = {0, 16, 37/2, 20}.
In all known examples, c K (N) consists of a single element, the so-called Jones slope. How the colored Jones polynomial selects some of the finitely many boundary slopes is a challenging and interesting question. The Slope Conjecture is known for all torus knots, all alternating knots and all knots with at most 8 crossings [Gar11b] as well as for all adequate knots [FKP11] and all 2-fusion knots [DG12].
2.3. The colored Jones polynomial and the PSL(2, C) character variety. In this section we discuss a conjecture relating the colored Jones polynomial of a knot K with the moduli space of SL(2, C)-representations of M, restricted to the boundary of M. Ignoring 0-dimensional components, the latter is a 1-dimensional plane curve. To formulate the conjecture we need to recall that the colored Jones polynomialĴ K,n (q) is q-holonomic [GL05] i.e., it satisfies a non-trivial linear recursion relation
(2)   Σ_{j=0}^{d} a_j(q, q^n) Ĵ_{K,n+j}(q) = 0   for all n,

where a_j(u, v) ∈ Z[u^{±1}, v^{±1}] and a_d ≠ 0. q-holonomic sequences were introduced by Zeilberger [Zei90], and a fundamental theorem (multisums of q-proper hypergeometric terms are q-holonomic) was proven in [WZ92] and implemented in [PWZ96]. Using two operators M and L which act on a sequence f(n) by (Mf)(n) = q^n f(n) and (Lf)(n) = f(n + 1), we can write the recursion (2) in operator form
P · Ĵ_K = 0,   where   P = Σ_{j=0}^{d} a_j(q, M) L^j.
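As a small aside, the commutation rule satisfied by these two operators (stated just below as LM = qML) can be checked symbolically in a few lines. The sketch below uses sympy with an arbitrary sequence f; it is an added illustration and not part of the survey.

```python
import sympy as sp

q, n = sp.symbols("q n")
f = sp.Function("f")                       # an arbitrary sequence f(n)

def M(g):                                  # (M g)(n) = q**n * g(n)
    return lambda k: q**k * g(k)

def L(g):                                  # (L g)(n) = g(n + 1)
    return lambda k: g(k + 1)

seq = lambda k: f(k)
lhs = L(M(seq))(n)                         # (L M f)(n) = q**(n+1) * f(n+1)
rhs = q * M(L(seq))(n)                     # q * (M L f)(n) = q * q**n * f(n+1)

print(sp.simplify(lhs - rhs) == 0)         # True: LM = qML
```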
It is easy to see that LM = qML and M, L generate the q-Weyl algebra. One can choose a canonical recursion A K (M, L, q) ∈ Z[q, M] L /(LM − qML) which is a knot invariant [Gar04], the non-commutative A-polynomial of K. The reason for this terminology is the potential relation with the A-polynomial A K (M, L) of K [CCG + 94]. The latter is defined as follows. Let X M = Hom(π 1 (M), SL(2, C))/C denote the moduli space of flat SL(2, C) connections on M. We have an identification
X_{∂M} ≃ (C^*)^2 / (Z/2Z),   ρ ↦ (M, L),
where {M, 1/M} (resp., {L, 1/L}) are the eigenvalues of ρ(µ) (resp., ρ(λ)) where (µ, λ) is a meridian-longitude pair on ∂M. X M and X ∂M are affine varieties and the restriction map X M −→ X ∂M is algebraic. The Zariski closure of its image lifted to (C * ) 2 , and after removing any 0-dimensional components is a one-dimensional plane curve with defining polynomial A K (M, L) [CCG + 94]. This polynomial plays an important role in the hyperbolic geometry of the knot complement. We are now ready to formulate the AJ Conjecture [Gar04]; see also [Gel02]. The AJ Conjecture was checked for the 3 1 and the 4 1 knots in [Gar04]. It is known for most 2-bridge knots [Lê06], for torus knots and for the pretzel knots of Section 4; see [LT, Tra].
From the point of view of physics, the AJ Conjecture is a consequence of the fact that quantization and the corresponding quantum field theory exists [Guk05,Dim].
The Volume and Modularity Conjectures
3.1. The Volume Conjecture. The Kashaev invariant of a knot is a sequence of complex numbers defined by [Kas97,MM01]
⟨K⟩_N = Ĵ_{K,N}(e(1/N)),
where e(α) = e^{2πiα}. The Volume Conjecture concerns the exponential growth rate of the Kashaev invariant and states that
lim N 1 N log | K N | = vol(K) 2π
where Vol(K) is the volume of the hyperbolic pieces of the knot complement S 3 \ K [Thu77]. Among hyperbolic knots, the Volume Conjecture is known only for the 4 1 knot. Detailed computations are available in [Mur04]. Refinements of the Volume Conjecture to all orders in N and generalizations were proposed by several authors [DGLZ09, GM08, GL11a, Gar08]. Although proofs are lacking, there appears to be a lot of structure in the asymptotics of the Kashaev invariant. In the next section we will discuss a modularity conjecture of Zagier and some numerical verification.
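As a concrete numerical illustration for the figure-eight knot, one can use the well-known sum ⟨4_1⟩_N = Σ_{k=0}^{N−1} |(q)_k|^2 with q = e(1/N); this formula is standard in the volume conjecture literature but is not derived in this survey, so the snippet below should be read as an independent check rather than as part of the argument.

```python
import cmath, math

def kashaev_41(N):
    """Kashaev invariant of the figure-eight knot via the standard sum
    <4_1>_N = sum_{k=0}^{N-1} |(q)_k|^2,  q = exp(2*pi*i/N)."""
    q = cmath.exp(2j * cmath.pi / N)
    total, poch = 0.0, 1.0 + 0.0j          # poch = (q)_k, starting with (q)_0 = 1
    for k in range(N):
        total += abs(poch) ** 2
        poch *= 1 - q ** (k + 1)
    return total

vol_41 = 2.029883212819307                 # hyperbolic volume of S^3 \ 4_1
for N in (100, 500, 1000):
    print(N, math.log(kashaev_41(N)) / N, vol_41 / (2 * math.pi))
# the growth rate tends to vol(4_1)/(2*pi) ~ 0.3231, slowly, because of the
# polynomial prefactor N^{3/2} in the full asymptotic expansion
```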
3.2. The Modularity Conjecture. Zagier considered the Galois invariant spreading of the Kashaev invariant on the set of complex roots of unity given by
φ_K : Q/Z → C,   φ_K(a/c) = Ĵ_{K,c}(e(a/c))
where (a, c) = 1 and c > 0. The above formula works even when a and c are not coprime due to a symmetry of the colored Jones polynomial [Hab02]. φ_K determines ⟨K⟩ and conversely is determined by ⟨K⟩ via Galois invariance.
Fix γ = ( a b ; c d ) ∈ SL(2, Z), α = a/c and h = 2πi/(X + d/c), where X → +∞ with bounded denominators. Let φ = φ_K denote the extended Kashaev invariant of a hyperbolic knot K, let F ⊂ C denote the invariant trace field of M = S^3 \ K [MR03], and let C(M) ∈ C/(4π^2 Z) denote the complex Chern-Simons invariant of M [GZ07, Neu04]. The next conjecture was formulated by Zagier.
Conjecture 3.1. [Zag10] With the above conventions, there exist ∆(α) ∈ C with ∆(α)^{2c} ∈ F(e(α)) and A_j(α) ∈ F(e(α)) such that
(3)   φ(γX) / φ(X) ∼ (2π/h)^{3/2} e^{C(M)/h} ∆(α) Σ_{j=0}^{∞} A_j(α) h^j .
When γ = ( 1 0 ; 1 1 ) and X = N − 1, and with the properly chosen orientation of M, the leading asymptotics of (3) together with the fact that ℑ(C(M)) = vol(M) gives the volume conjecture.
Computation of the non-commutative A-polynomial
As we will discuss below, the key to an effective computation of the Kashaev invariant is a recursion for the colored Jones polynomial. Proving or guessing such a recursion is at least as hard as computing the A-polynomial of the knot. The A-polynomial is already unknown for several knots with 9 crossings. For an updated table of computations see [Cul10]. The A-polynomial is known for the 1-parameter families of twist knots K_p [HS04] and pretzel knots KP_p = (−2, 3, 3 + 2p) [GM11]; in the standard diagrams of these families, an integer m inside a box indicates |m| half-twists, right-handed if m > 0 and left-handed if m < 0.
The non-commutative A-polynomial of the twist knots K_p was computed with a certificate by X. Sun and the author in [GS10] for p = −14, . . . , 15. The data is available from
http://www.math.gatech.edu/~stavros/publications/twist.knot.data
The non-commutative A-polynomial of the pretzel knots KP_p = (−2, 3, 3 + 2p) was guessed by C. Koutschan and the author in [GK12a] for p = −5, . . . , 5. The guessing method used an a priori knowledge of the monomials of the recursion, together with computation of the colored Jones polynomial using the fusion formula, and exact but modular arithmetic and rational reconstruction (a sketch of the rational-reconstruction step is given after the next recursion). The data is available from
http://www.math.gatech.edu/~stavros/publications/pretzel.data
For instance, the recursion relation for the colored Jones polynomial f(n) of the 5_2 = (−2, 3, −1) pretzel knot is given by
b(q n , q)−q 9+7n (−1+q n )(−1+q 2+n )(1+q 2+n )(−1+q 5+2n )f (n)+q 5+2n (−1+q 1+n ) 2 (1+q 1+n )(−1+ q 5+2n )(−1 + q 1+n + q 1+2n − q 2+2n − q 3+2n + q 4+2n − q 2+3n − q 5+3n − 2q 5+4n + q 6+5n )f (1 + n) − q(−1 + q 2+n ) 2 (1 + q 2+n )(−1 + q 1+2n )(−1 + 2q 2+n + q 2+2n + q 5+2n − q 4+3n + q 5+3n + q 6+3n − q 7+3n − q 7+4n + q 9+5n )f (2 + n) − (−1 + q 1+n )(1 + q 1+n )(−1 + q 3+n )(−1 + q 1+2n )f (3 + n) = 0 , where b(q n , q) = q 4+2n (1 + q 1+n )(1 + q 2+n )(−1 + q 1+2n )(−1 + q 3+2n )(−1 + q 5+2n ) .
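The "exact but modular arithmetic and rational reconstruction" step mentioned above can be isolated and illustrated: given the value of a rational coefficient modulo a large prime, one recovers the fraction itself. The following sketch implements the classical reconstruction step (Wang's algorithm); it is illustrative only and is not the code used in [GK12a].

```python
def rational_reconstruction(r, m):
    """Given r = p/s (mod m), recover (p, s) with |p|, s <= sqrt(m/2),
    if such a fraction exists (Wang's algorithm).  Illustrative only."""
    bound = int((m // 2) ** 0.5)
    r0, r1 = m, r % m
    s0, s1 = 0, 1                 # coefficients of r in the Euclid remainders
    while r1 > bound:
        quotient = r0 // r1
        r0, r1 = r1, r0 - quotient * r1
        s0, s1 = s1, s0 - quotient * s1
    if s1 == 0 or abs(s1) > bound:
        return None               # no small fraction exists
    if s1 < 0:
        r1, s1 = -r1, -s1
    return r1, s1                 # numerator, denominator

# example: recover 7/11 from its residue modulo the prime 10007
p = 10007
residue = (7 * pow(11, -1, p)) % p
print(rational_reconstruction(residue, p))   # -> (7, 11)
```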
The recursion relation for the colored Jones polynomial f (n) of the (−2, 3, 7) pretzel knot is given by
b(q n , q) − q 224+55n (−1 + q n )(−1 + q 4+n )(−1 + q 5+n )f (n) + q 218+45n (−1 + q 1+n ) 3 (−1 + q 4+n )(−1 + q 5+n )f (1+n)+q 204+36n (−1+q 2+n ) 2 (1+q 2+n +q 3+n )(−1+q 5+n )f (2+n)+(−1+q)q 180+27n (1+q)(−1+ q 1+n )(−1 + q 3+n ) 2 (−1 + q 5+n )f (3 + n) − q 149+18n (−1 + q 1+n )(−1 + q 4+n ) 2 (1 + q + q 4+n )f (4 + n) − q 104+8n (−1+q 1+n )(−1+q 2+n )(−1+q 5+n ) 3 f (5+n)+q 59 (−1+q 1+n )(−1+q 2+n )(−1+q 6+n )f (6+n) = 0 , where b(q n , q) = q 84+5n (1 − q 1+n − q 2+n + q 3+2n − q 16+3n + q 17+4n + q 18+4n − q 19+5n − q 26+5n + q 27+6n + q 28+6n + q 31+6n − q 29+7n − q 32+7n − q 33+7n − q 36+7n + q 34+8n + q 37+8n + q 38+8n − q 39+9n + q 45+9n − q 46+10n − q 47+10n + q 49+10n + q 48+11n − q 50+11n − q 51+11n − q 54+11n + q 52+12n + q 55+12n + q 56+12n − q 57+13n − q 62+13n + q 63+14n + q 64+14n − q 66+14n + q 67+14n − q 65+15n + q 67+15n − q 69+15n + q 71+15n − q 69+16n + q 70+16n − q 72+16n − q 75+17n − q 78+17n + q 76+18n + q 79+18n − q 83+19n + q 85+19n + q 84+20n − q 86+20n + q 88+20n − q 89+21n + q 91+21n − q 96+22n − q 93+23n + 2q 98+24n − q 99+25n − q 108+26n − q 107+27n + q 109+27n + q 108+28n − q 110+28n + q 112+28n − q 113+29n + q 115+29n + q 112+30n + q 115+30n − q 117+31n − q 120+31n − q 117+32n + q 118+32n − q 120+32n − q 119+33n + q 121+33n − q 123+33n + q 125+33n + q 123+34n + q 124+34n − q 126+34n + q 127+34n − q 123+35n − q 128+35n + q 124+36n + q 127+36n + q 128+36n + q 126+37n − q 128+37n − q 129+37n − q 132+37n − q 130+38n − q 131+38n + q 133+38n − q 129+39n + q 135+39n + q 130+40n + q 133+40n + q 134+40n − q 131+41n − q 134+41n − q 135+41n − q 138+41n + q 135+42n + q 136+42n + q 139+42n − q 133+43n − q 140+43n + q 137+44n + q 138+44n − q 142+45n + q 135+46n − q 139+47n − q 140+47n + q 144+48n ) .
The pretzel knots KP_p are interesting from many points of view. For every integer p, the knots in the pair (KP_p, −KP_{−p}) (where −K denotes the mirror of K)
• are geometrically similar, in particular they are scissors congruent, have equal volume, equal invariant trace fields, and their Chern-Simons invariants differ by a sixth root of unity,
• their A-polynomials are equal up to a GL(2, Z) transformation [GM11, Thm.1.4].
Yet, the colored Jones polynomials of (KP_p, −KP_{−p}) are different, and so are the Kashaev invariants and their asymptotics and even the term ∆(0) in the Modularity Conjecture 3.1. An explanation of this difference is given in [DGb].
Zagier posed a question to compare the modularity conjecture for geometrically similar pairs of knots, which was a motivation for many of the computations in Section 5.2.

5. Numerical asymptotics and the Modularity Conjecture

5.1. Numerical computation of the Kashaev invariant. To numerically verify Conjecture 3.1 we need to compute the Kashaev invariant to several hundred digits when N = 2000, for instance. In this section we discuss how to achieve this.
There are multidimensional R-matrix state sum formulas for the colored Jones polynomial J_{K,N}(q) where the number of summation points is given by a polynomial in N of degree the number of crossings of K minus 1 [GL05]. Unfortunately, this is not a practical method even for the 4_1 knot.
An alternative way is to use fusion [KL94,Cos09,GvdV12] which allows one to compute the colored Jones polynomial more efficiently at the cost that the summand is a rational function of q. For instance, the colored Jones polynomial of a 2-fusion knot can be computed in O(N 3 ) steps using [GK12a, Thm.1.1]. This method works better, but it still has limitations.
A preferred method is to guess a nontrivial recursion relation for the colored Jones polynomial (see Section 4) and, instead of using it to compute the colored Jones polynomial, differentiate it sufficiently many times and numerically compute the Kashaev invariant. In the effort to compute the Kashaev invariant of the (−2, 3, 7) pretzel knot, Zagier and the author obtained the following lemma, of theoretical and practical use.
Lemma 5.1. The Kashaev invariant ⟨K⟩_N can be numerically computed in O(N) steps.
A computer implementation of Lemma 5.1 is available.
5.2. Numerical verification of the Modularity Conjecture. Given a sequence of complex numbers (a_n) with an expected asymptotic expansion
a_n ∼ λ^n n^α (log n)^β Σ_{j=0}^{∞} c_j / n^j ,
how can one numerically compute λ, α, β and several coefficients c_j? This is a well-known numerical analysis problem [BO99]. An acceleration method was proposed in [Zag01, p.954], which is also equivalent to the Richardson transform. For a detailed discussion of the acceleration method see [GIKM12, Sec.5.2]. In favorable circumstances the coefficients c_j are algebraic numbers, and a numerical approximation may lead to a guess for their exact value.
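The idea behind the acceleration is easy to demonstrate on a synthetic sequence with β = 0 and known λ, α; the sketch below applies iterated Richardson extrapolation to the ratios a_{n+1}/a_n. The constants are made up for the illustration and have nothing to do with any knot invariant.

```python
# synthetic sequence  a_n = lam^n * n^alpha * (c0 + c1/n)
lam, alpha, c0, c1 = 1.7, 1.5, 2.0, -3.0
a = {n: lam**n * n**alpha * (c0 + c1 / n) for n in range(1, 122)}
ratios = [a[n + 1] / a[n] for n in range(1, 121)]   # -> lam*(1 + alpha/n + ...)

def richardson(seq, n0, order):
    """Iterated Richardson step: removes successive 1/n corrections from
    seq[i] ~ L + c1/n + c2/n^2 + ..., where the asymptotic variable is n = n0 + i."""
    s = list(seq)
    for k in range(1, order + 1):
        s = [((n0 + i + k) * s[i + 1] - (n0 + i) * s[i]) / k
             for i in range(len(s) - 1)]
    return s

print(ratios[-1])                    # plain ratio: lam up to O(1/n)
print(richardson(ratios, 1, 4)[-1])  # accelerated: several more digits of lam
print(lam)                           # exact value, for comparison
```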
A concrete application of the acceleration method was given in the appendix of [GvdV12] where one deals with several λ of the same magnitude as well as β = 0.
Numerical computations of the modularity conjecture for the 4 1 knot were obtained by Zagier around roots of unity of order at most 5, and extended to several other knots in [GZa, GZb]. As a sample computation, we present here the numerical data for 4 1 at α = 0, computed independently by Zagier and by the author. The values of A k in the table below are known for k = 0, . . . , 150. In addition, we present the numerical data for the 5 2 knot at α = 1/3, computed in [GZa].
φ_{4_1}(X) = 3^{−1/4} X^{3/2} exp(C X) Σ_{k=0}^{∞} A_k h^k / (k! 12^k),   h = A/X,   A = π/3^{3/2},   C = (1/π) Li…

φ_{5_2}(X/(3X + 1)) / φ_{5_2}(X) ∼ e^{C/h} (2π/h)^{3/2} ∆(1/3) Σ_{k=0}^{∞} A_k(1/3) h^k,   h = (2πi)/(X + 1/3)
F = Q(α),   α^3 − α^2 + 1 = 0,   α = 0.877 · · · − 0.744 . . . i
C = R(1 − α^2) + 2R(1 − α) − πi log(α) + π^2,   R(x) = Li_2(x) + (1/2) log x log(1 − x) − π^2/6
[1 − α^2] + 2[1 − α] ∈ B(F),   −23 = π_1^2 π_2
π_1 = 3α − 2,   π_2 = 3α + 1,   π_7 = (α^2 − 1)ζ_6 − α + 1,   π_43 = 2α^2 − α − ζ_6
∆(1/3) = e(−2/9) π_7 3 √−3 √π_1,   A_0(1/3) = π_7 π_43,   A_1(1/3) = (−952 + 321α − 873α^2 + (1348 + 557α + 26α^2)ζ_6) α^5 π_1^3
One may use the recursion relations [GK12b] for the twisted colored Jones polynomial to expand the above computations around complex roots of unity [DGa].
6. Stability

6.1. Stability of a sequence of polynomials. The Slope Conjecture deals with the highest (or the lowest, if you take the mirror image) q-exponent of the colored Jones polynomial. In this section we discuss what happens when we shift the colored Jones polynomial and place its lowest q-exponent to 0. Stability concerns the coefficients of the resulting sequence of polynomials in q. A weaker form of stability (0-stability, defined below) for the colored Jones polynomial of an alternating knot was conjectured by Dasbach and Lin, and proven independently by Armond [Arm11].
Stability was observed in some examples of alternating knots by Zagier, and conjectured by the author to hold for all knots, assuming that we restrict the sequence of colored Jones polynomials to suitable arithmetic progressions, dictated by the quasi-polynomial nature of its q-degree [Gar11b,Gar11a]. Zagier asked about modular and asymptotic properties of the limiting q-series.
A proof of stability in full for all alternating links is given in [GL11b]. Besides stability, this approach gives a generalized Nahm sum formula for the corresponding series, which in particular implies convergence in the open unit disk in the q-plane. The generalized Nahm sum formula comes with a computer implementation (using as input a planar diagram of a link), and allows the computation of several terms of the q-series as well as its asymptotics when q approaches radially from within the unit circle a complex root of unity. The Nahm sum formula is reminiscent to the cohomological Hall algebra of motivic Donaldson-Thomas invariants of Kontsevich-Soibelman [KS11], and may be related to recent work of Witten [Wit12] and Dimofte-Gaiotto-Gukov [DGG].
Let Z((q)) = { Σ_{n∈Z} a_n q^n | a_n = 0 for n ≪ 0 } denote the ring of power series in q with integer coefficients and minimum q-degree bounded below.
Definition 6.1. Fix a sequence (f n (q)) of polynomials f n (q) ∈ Z[q]. We say that (f n (q)) is 0-stable if the following limit exists
lim n f n (q) = Φ 0 (q) ∈ Z[[q]],
i.e. for every natural number m ∈ Z, there exists a natural number n(m) such that the coefficient of q m in f n (q) is constant for all n > n(m).
We say that (f_n(q)) is stable if there exist elements Φ_k(q) ∈ Z((q)) for k = 0, 1, 2, . . . such that for every k ∈ N we have
lim_n q^{−nk} ( f_n(q) − Σ_{j=0}^{k} q^{jn} Φ_j(q) ) = 0 ∈ Z((q)) .
We will denote by
F(x, q) = Σ_{k=0}^{∞} Φ_k(q) x^k ∈ Z((q))[[x]]
the corresponding series associated to the stable sequence (f n (q)).
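A simple non-knot example of 0-stability is the sequence (q)_n = (1 − q)(1 − q^2) · · · (1 − q^n), whose coefficients stabilize to those of (q)_∞. The following snippet (illustrative only, with plain polynomial arithmetic) makes the stabilization visible.

```python
def q_pochhammer_coeffs(n, order):
    """Coefficients of (q)_n = (1-q)(1-q^2)...(1-q^n), truncated at q^order."""
    coeffs = [0] * (order + 1)
    coeffs[0] = 1
    for k in range(1, n + 1):
        new = coeffs[:]                      # multiply by (1 - q^k)
        for i in range(order + 1 - k):
            new[i + k] -= coeffs[i]
        coeffs = new
    return coeffs

# the coefficient of q^m is constant once n >= m: (q)_n -> (q)_infinity in Z[[q]],
# the simplest instance of a 0-stable sequence of polynomials
for n in (5, 10, 20):
    print(n, q_pochhammer_coeffs(n, 12))
```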
Thus, a 0-stable sequence f_n(q) ∈ Z[q] gives rise to a q-series lim_n f_n(q) ∈ Z[[q]]. The q-series that come from the colored Jones polynomial are q-hypergeometric series of a special shape, i.e., they are generalized Nahm sums. The latter are introduced in the next section.

6.2. Generalized Nahm sums. In [NRT93] Nahm studied q-hypergeometric series f(q) ∈ Z[[q]] of the form
f(q) = Σ_{n ∈ N^r} q^{(1/2) n^t·A·n + b·n} / ((q)_{n_1} · · · (q)_{n_r}) ,
where A is a positive definite even integral symmetric matrix and b ∈ Z^r. Nahm sums appear in character formulas in Conformal Field Theory, and define analytic functions in the complex unit disk |q| < 1 with interesting asymptotics at complex roots of unity, and with sometimes modular behavior. Examples of Nahm sums include the famous list of seven mysterious q-series of Ramanujan that are nearly modular (in modern terms, mock modular). For a detailed discussion, see [Zag09]. Nahm sums give rise to elements of the Bloch group, which governs the leading radial asymptotics of f(q) as q approaches a complex root of unity. Nahm's Conjecture concerns the modularity of a Nahm sum f(q), and was studied extensively by Zagier, Vlasenko-Zwegers and others [VZ11, Zag07].
The limit of the colored Jones function of an alternating link leads us to consider generalized Nahm sums of the form
(4)   Φ(q) = Σ_{n ∈ C ∩ N^r} (−1)^{c·n} q^{(1/2) n^t·A·n + b·n} / ((q)_{n_1} · · · (q)_{n_r})
where C is a rational polyhedral cone in R^r, b, c ∈ Z^r and A is a (possibly indefinite) symmetric matrix. We will say that the generalized Nahm sum (4) is regular if the function n ∈ C ∩ N^r ↦ (1/2) n^t·A·n + b·n is proper and bounded below, where mindeg_q denotes the minimum degree with respect to q. Regularity ensures that the series (4) is a well-defined element of the ring Z((q)). In the remainder of the paper, the term Nahm sum will refer to a regular generalized Nahm sum.
6.3. Stability for alternating links. Let K denote an alternating link. The lowest monomial of J K,n (q) has coefficient ±1, and dividing J K,n+1 (q) by its lowest monomial gives a polynomial J + K,n (q) ∈ 1 + qZ[q]. We can now quote the main theorem of [GL11b].
Theorem 6.2. [GL11b] For every alternating link K, the sequence (J + K,n (q)) is stable and the corresponding limit F K (x, q) can be effectively computed by a planar projection D of K. Moreover, F K (0, q) = Φ K,0 (q) is given by an explicit Nahm sum computed by D.
An illustration of the corresponding q-series Φ_{K,0}(q) for the knots 3_1, 4_1 and 6_3 is given in Section 6.4.

6.4. Computation of the q-series of alternating links. Given the generalized Nahm sum for Φ_{K,0}(q), a multidimensional sum of as many variables as the number of crossings of K, one may try to identify the q-series Φ_{K,0}(q) with a known one. In joint work with Zagier, we computed the first few terms of the corresponding series (an interesting and nontrivial task in itself) and guessed the answer for knots with a small number of crossings. The guesses are presented in the following table
K c − c + σ Φ * K,0 (q) Φ K,0 (q) 3 1 = −K 1 3 0 2 h 3 h 2 4 1 = K −1 2 2 0 h 3 h 3 5 1 5 0 4 h 5 h 2 5 2 = K 2 0 5 −2 h 4 h 3 6 1 = K −2 4 2 0 h 3 h 5 6 2 4 2 2 h 3 h 4 h 3 6 3 3 3 0 h 2 3 h 2 3 7 1 7 0 6 h 7 h 2 7 2 = K 3 0 7 −2 h 6 h 3 7 3 0 7 −4 h 4 h 5 7 4 0 7 −2 (h 4 ) 2 h 3 7 5 7 0 4 h 3 h 4 h 4 7 6 5 2 2 h 3 h 4 h 2 3 7 7 3 4 0 h 3 3 h 2 3 8 1 = K −3 6 2 0 h 3 h 7 8 2 6 2 4 h 3 h 6 h 3 8 3 4 4 0 h 5 h 5 8 4 4 4 2 h 4 h 5 h 3 8 5 2 6 −4 h 3 ??? K p , p > 0 0 2p + 1 −2 h * 2p h 3 K p , p < 0 2|p| 2 0 h 3 h 2|p|+1 T (2, p), p > 0 2p + 1 0 2p h 2p+1 1
where, for a positive natural number b, h_b are the unary theta and false theta series
h_b(q) = Σ_{n ∈ Z} ε_b(n) q^{(b/2) n(n+1) − n} ,
where
ε_b(n) = (−1)^n if b is odd;   1 if b is even and n ≥ 0;   −1 if b is even and n < 0.
Observe that h_1(q) = 0, h_2(q) = 1, h_3(q) = (q)_∞.
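These observations are easy to check numerically; the short script below (illustrative only) expands h_b directly from the definition and confirms that h_2 = 1 and that h_3 agrees with the pentagonal-number expansion of (q)_∞.

```python
def h(b, order):
    """Coefficients (up to q^order) of the unary theta / false theta series
    h_b(q) = sum_{n in Z} eps_b(n) q^{(b/2) n(n+1) - n}."""
    coeffs = [0] * (order + 1)
    for n in range(-order - 2, order + 3):
        exponent = b * n * (n + 1) // 2 - n
        if 0 <= exponent <= order:
            eps = (1 if n % 2 == 0 else -1) if b % 2 else (1 if n >= 0 else -1)
            coeffs[exponent] += eps
    return coeffs

print(h(2, 15))   # [1, 0, 0, ...]                            -> h_2 = 1
print(h(3, 15))   # [1,-1,-1,0,0,1,0,1,0,0,0,0,-1,0,0,-1]     -> (q)_infinity
```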
In the above table, c + (resp. c − ) denotes the number of positive (resp., negative) crossings of an alternating knot K, and Φ * K,0 (q) = Φ −K,0 (q) denotes the q-series of the mirror −K of K, and T (2, p) denotes the (2, p) torus knot.
Concretely, the above table predicts the following identities
(q) −2 ∞ = a,b,c≥0 (−1) a q 3 2 a 2 +ab+ac+bc+ 1 2 a+b+c (q) a (q) b (q) c (q) a+b (q) a+c (q) −3 ∞ = a,b,c,d,e≥0 a+b=d+e (−1) b+d q b 2 2 + d 2 2 +bc+ac+ad+be+ a 2 +c+ e 2 (q) b+c (q) a (q) b (q) c (q) d (q) e (q) c+d (q) −4 ∞ = a,b,c,d,e,f ≥0 a+e≥b,b+f ≥a (−1) a−b+e q a 2 + 3a 2 2 + b 2 + b 2 2 +c+ac+d+ad+cd+ e 2 +2ae−2be+de+ 3e 2 2 −af +bf +f 2 (q) a (q) b (q) c (q) a+c (q) d (q) a+d (q) e (q) a−b+e (q) a−b+d+e (q) f (q) −a+b+f
corresponding to the knots 3_1, 4_1 and 6_3, respectively.
Some of the identities of the above table have been consequently proven [AD11]. In particular this settles the (mock)-modularity properties of the series Φ_{K,0}(q) for all but one knot. The q-series of the remaining knot 8_5 is given by an 8-dimensional Nahm sum
Φ_{8_5,0}(q) = (q)_∞^8 Σ_{a,b,c,d,e,f,g,h ≥ 0} S ,
where S = S(a, b, c, d, e, f, g, h) is given by
S = (−1)^{b+f} q^{2a + 3a^2 − b/2 − 2ab + 3b^2/2 + c + ac + d + ad + cd + e + ae + de + 3f/2 + 4af − 4bf + ef + 5f^2/2 + g + ag − bg + eg + fg + h + ah − bh + fh + gh} / ( (q)_a (q)_b (q)_c (q)_d (q)_e (q)_f (q)_g (q)_h (q)_{a+c} (q)_{a+d} (q)_{a+e} (q)_{a−b+f} (q)_{a−b+e+f} (q)_{a−b+f+g} (q)_{a−b+f+h} ) .
The first few terms of the series Φ 8 5 ,0 (q), which somewhat simplify when divided by (q) ∞ , are given by Φ 8 5 ,0 (q)/(q) ∞ = 1−q +q 2 −q 4 +q 5 +q 6 −q 8 +2q 10 +q 11 +q 12 −q 13 −2q 14 +2q 16 +3q 17 +2q 18 + q 19 −3q 21 −2q 22 +q 23 +4q 24 +4q 25 +5q 26 +3q 27 +q 28 −2q 29 −3q 30 −3q 31 +5q 33 +8q 34 +8q 35 +8q 36 + 6q 37 + 3q 38 − 2q 39 − 5q 40 − 6q 41 − q 42 + 2q 43 + 9q 44 + 13q 45 + 17q 46 + 16q 47 + 14q 48 + 9q 49 + 4q 50 − 3q 51 − 8q 52 − 8q 53 − 5q 54 + 3q 55 + 14q 56 + 21q 57 + 27q 58 + 32q 59 + 33q 60 + 28q 61 + 21q 62 + 11q 63 + q 64 − 9q 65 − 11q 66 − 11q 67 − 2q 68 + 9q 69 + 27q 70 + 40q 71 + 56q 72 + 60q 73 + 65q 74 + 62q 75 + 54q 76 + 39q 77 +23q 78 +4q 79 −9q 80 −16q 81 −14q 82 −3q 83 +16q 84 +40q 85 +67q 86 +92q 87 +114q 88 +129q 89 + 135q 90 +127q 91 +115q 92 +92q 93 +66q 94 +35q 95 +9q 96 −12q 97 −14q 98 −11q 99 +13q 100 +O(q) 101 .
We were unable to identify Φ 8 5 ,0 (q) with a known q-series. Nor were we able to decide whether it is a mock-modular form [Zag09]. It seems to us that 8 5 is not an exception, and that the mock-modularity of the q-series Φ 8 5 ,0 (q) is an open problem. Question 6.3. Can one decide if a generalized Nahm sum is a mock-modular form?
Modularity and Stability
Modularity and Stability are two important properties of quantum knot invariants. The Kashaev invariant ⟨K⟩ and the q-series Φ_{K,0}(q) of a knotted 3-dimensional object have some common features, namely asymptotic expansions at roots of unity approached radially (for Φ_{K,0}(q)) and on the unit circle (for ⟨K⟩), depicted in the following figure
The leading asymptotic expansions of ⟨K⟩ and Φ_{K,0}(q) are governed by elements of the Bloch group, as is the case for the Kashaev invariant and also for the radial limits of Nahm sums [VZ11]. In this section we discuss a conjectural relation, discovered accidentally by Zagier and the author in the spring of 2011, between the asymptotics of ⟨4_1⟩ and Φ_{6j,0}(q), where 6j is the q-6j symbol of the tetrahedron graph whose edges are colored with 2N [Cos09, GvdV12]. The evaluation J^+_{6j,N}(q) ∈ 1 + qZ[q] of this tetrahedron graph is given explicitly in [Cos09, GvdV12]. The sequence (J^+_{6j,N}(q)) is stable, and the corresponding series F_{6j}(x, q) ∈ Z((q))[[x]] can be written in closed form in terms of (x)_∞ = Π_{k=0}^{∞} (1 − x q^k) and (q)_n = Π_{k=1}^{n} (1 − q^k). The first few terms of φ_{6j,0}(q) are given by
φ_{6j,0}(q) = 1 − q − 2q^2 − 2q^3 − 2q^4 + q^6 + 5q^7 + 7q^8 + 11q^9 + 13q^10 + 16q^11 + 14q^12 + 14q^13 + 8q^14 − 12q^16 − 26q^17 − 46q^18 − 66q^19 − 90q^20 − 114q^21 − 135q^22 − 155q^23 − 169q^24 − 174q^25 − 165q^26 − 147q^27 − 105q^28 − 48q^29 + 37q^30 + 142q^31 + 280q^32 + 435q^33 + 627q^34 + 828q^35 + 1060q^36 + O(q^37).
The next conjecture, which combines stability and modularity of two knotted objects, has been numerically checked around complex roots of unity of order at most 3.
Conjecture 7.1. As X → +∞ with bounded denominator, we have
φ_{6j,0}(e^{−1/X}) = φ_{4_1}(X)/X^{1/2} + φ_{4_1}(−X)/(−X)^{1/2} .
Rogers-Ramanujan type identities and the head and tail of the colored jones polynomial. Cody Armond, Oliver Dasbach, arXiv:1106.3948PreprintCody Armond and Oliver Dasbach, Rogers-Ramanujan type identities and the head and tail of the colored jones polynomial, 2011, arXiv:1106.3948, Preprint.
The head and tail conjecture for alternating knots. Cody Armond, arXiv:1112.3995PreprintCody Armond, The head and tail conjecture for alternating knots, 2011, arXiv:1112.3995, Preprint.
. Dror Bar-Natan, Knotatlas. Dror Bar-Natan, Knotatlas, 2005, http://katlas.org.
On the Melvin-Morton-Rozansky conjecture. Dror Bar-Natan, Stavros Garoufalidis, Invent. Math. 1251Dror Bar-Natan and Stavros Garoufalidis, On the Melvin-Morton-Rozansky conjecture, Invent. Math. 125 (1996), no. 1, 103-133.
Advanced mathematical methods for scientists and engineers. I. M Carl, Steven A Bender, Orszag, Springer-VerlagNew YorkAsymptotic methods and perturbation theory. Reprint of the 1978 originalCarl M. Bender and Steven A. Orszag, Advanced mathematical methods for scientists and engi- neers. I, Springer-Verlag, New York, 1999, Asymptotic methods and perturbation theory, Reprint of the 1978 original.
Plane curves associated to character varieties of 3-manifolds. ] D + 94, M Cooper, H Culler, D D Gillet, P B Long, Shalen, Invent. Math. 1181+ 94] D. Cooper, M. Culler, H. Gillet, D. D. Long, and P. B. Shalen, Plane curves associated to character varieties of 3-manifolds, Invent. Math. 118 (1994), no. 1, 47-84.
Francesco Costantino, arXiv/0908.0542Integrality of Kauffman brackets of trivalent graphs. Francesco Costantino, Integrality of Kauffman brackets of trivalent graphs, arXiv/0908.0542, 2009.
Marc Culler, Tables of A-polynomials. Marc Culler, Tables of A-polynomials, 2010, http://www.math.uic.edu/~culler/Apolynomials.
On the WKB expansion of linear q-difference equations. Tudor Dimofte, Stavros Garoufalidis, In preparationTudor Dimofte and Stavros Garoufalidis, On the WKB expansion of linear q-difference equations, In preparation.
arXiv:1202.6268The quantum content of the gluing equations. Preprint, The quantum content of the gluing equations, arXiv:1202.6268, Preprint 2012.
Incompressibility criteria for spun-normal surfaces. M Nathan, Stavros Dunfield, Garoufalidis, Trans. Amer. Math. Soc. 36411Nathan M. Dunfield and Stavros Garoufalidis, Incompressibility criteria for spun-normal surfaces, Trans. Amer. Math. Soc. 364 (2012), no. 11, 6109-6137.
Tudor Dimofte, Sergei Gukov, Davide Gaiotto, arXiv:1112.51793-manifolds and 3d indices. PreprintTudor Dimofte, Sergei Gukov, and Davide Gaiotto, 3-manifolds and 3d indices, arXiv:1112.5179, Preprint 2011.
Exact results for perturbative Chern-Simons theory with complex gauge group. Tudor Dimofte, Sergei Gukov, Jonatan Lenells, Don Zagier, Commun. Number Theory Phys. 32Tudor Dimofte, Sergei Gukov, Jonatan Lenells, and Don Zagier, Exact results for perturbative Chern-Simons theory with complex gauge group, Commun. Number Theory Phys. 3 (2009), no. 2, 363-443.
Tudor Dimofte, arXiv:1102.4847Quantum Riemann surfaces in Chern-Simons theory. PreprintTudor Dimofte, Quantum Riemann surfaces in Chern-Simons theory, arXiv:1102.4847, Preprint 2011.
Slopes and colored Jones polynomials of adequate knots. David Futer, Efstratia Kalfagianni, Jessica S Purcell, Proc. Amer. Math. Soc. 1395David Futer, Efstratia Kalfagianni, and Jessica S. Purcell, Slopes and colored Jones polynomials of adequate knots, Proc. Amer. Math. Soc. 139 (2011), no. 5, 1889-1896.
On the characteristic and deformation varieties of a knot. Stavros Garoufalidis, Proceedings of the Casson Fest. the Casson FestTopol. Publ., Coventry7electronicStavros Garoufalidis, On the characteristic and deformation varieties of a knot, Proceedings of the Casson Fest, Geom. Topol. Monogr., vol. 7, Geom. Topol. Publ., Coventry, 2004, pp. 291-309 (electronic).
Chern-Simons theory, analytic continuation and arithmetic. Stavros Garoufalidis, Acta Math. Vietnam. 333Stavros Garoufalidis, Chern-Simons theory, analytic continuation and arithmetic, Acta Math. Vietnam. 33 (2008), no. 3, 335-362.
The degree of a q-holonomic sequence is a quadratic quasi-polynomial, Electron. Stavros Garoufalidis, J. Combin. 182Paper 4, 23Stavros Garoufalidis, The degree of a q-holonomic sequence is a quadratic quasi-polynomial, Elec- tron. J. Combin. 18 (2011), no. 2, Paper 4, 23.
The Jones slopes of a knot. Quantum Topol. 21, The Jones slopes of a knot, Quantum Topol. 2 (2011), no. 1, 43-69.
Knots and tropical curves, Interactions between hyperbolic geometry, quantum topology and number theory. Contemp. Math. 541Amer. Math. Soc, Knots and tropical curves, Interactions between hyperbolic geometry, quantum topology and number theory, Contemp. Math., vol. 541, Amer. Math. Soc., Providence, RI, 2011, pp. 83- 101.
On the relation between the A-polynomial and the Jones polynomial. Rȃzvan Gelca, Proc. Amer. Math. Soc. 1304electronicRȃzvan Gelca, On the relation between the A-polynomial and the Jones polynomial, Proc. Amer. Math. Soc. 130 (2002), no. 4, 1235-1241 (electronic).
Asymptotics of the instantons of Painlevé I. Stavros Garoufalidis, Alexander Its, Andrei Kapaev, Marcos Mariño, Int. Math. Res. Not. IMRN. 3Stavros Garoufalidis, Alexander Its, Andrei Kapaev, and Marcos Mariño, Asymptotics of the instantons of Painlevé I, Int. Math. Res. Not. IMRN (2012), no. 3, 561-606.
The noncommutative A-polynomial of (−2, 3, n) pretzel knots. Stavros Garoufalidis, Christoph Koutschan, Exp. Math. 213Stavros Garoufalidis and Christoph Koutschan, The noncommutative A-polynomial of (−2, 3, n) pretzel knots, Exp. Math. 21 (2012), no. 3, 241-251.
Twisting q-holonomic sequences by complex roots of unity. Stavros Garoufalidis, Christoph Koutschan, Stavros Garoufalidis and Christoph Koutschan, Twisting q-holonomic sequences by complex roots of unity, ISSAC, 2012, pp. 179-186.
The colored Jones function is q-holonomic. Stavros Garoufalidis, T Q Thang, Lê, Geom. Topol. 9electronicStavros Garoufalidis and Thang T. Q. Lê, The colored Jones function is q-holonomic, Geom. Topol. 9 (2005), 1253-1293 (electronic).
Asymptotics of the colored Jones function of a knot. Geom. Topol. 15electronic, Asymptotics of the colored Jones function of a knot, Geom. Topol. 15 (2011), 2135-2180 (electronic).
arXiv:1112.3905Nahm sums, stability and the colored Jones polynomial. Preprint, Nahm sums, stability and the colored Jones polynomial, 2011, arXiv:1112.3905, Preprint.
SL(2, C) Chern-Simons theory and the asymptotic behavior of the colored Jones polynomial. Sergei Gukov, Hitoshi Murakami, Lett. Math. Phys. 862-3Sergei Gukov and Hitoshi Murakami, SL(2, C) Chern-Simons theory and the asymptotic behavior of the colored Jones polynomial, Lett. Math. Phys. 86 (2008), no. 2-3, 79-98.
The A-polynomial of the (−2, 3, 3 + 2n) pretzel knots. Stavros Garoufalidis, Thomas W Mattman, New York J. Math. 17Stavros Garoufalidis and Thomas W. Mattman, The A-polynomial of the (−2, 3, 3 + 2n) pretzel knots, New York J. Math. 17 (2011), 269-279.
The non-commutative A-polynomial of twist knots. Stavros Garoufalidis, Xinyu Sun, J. Knot Theory Ramifications. 1912Stavros Garoufalidis and Xinyu Sun, The non-commutative A-polynomial of twist knots, J. Knot Theory Ramifications 19 (2010), no. 12, 1571-1595.
Three-dimensional quantum gravity, Chern-Simons theory, and the A-polynomial. Sergei Gukov, Comm. Math. Phys. 2553Sergei Gukov, Three-dimensional quantum gravity, Chern-Simons theory, and the A-polynomial, Comm. Math. Phys. 255 (2005), no. 3, 577-627.
Asymptotics of quantum spin networks at a fixed root of unity. Stavros Garoufalidis, Roland Van Der Veen, Math. Ann. 3524Stavros Garoufalidis and Roland van der Veen, Asymptotics of quantum spin networks at a fixed root of unity, Math. Ann. 352 (2012), no. 4, 987-1012.
[GZa] Stavros Garoufalidis and Don Zagier, Asymptotics of quantum knot invariants, Preprint 2013.
[GZb] Stavros Garoufalidis and Don Zagier, Empirical relations between q-series and Kashaev's invariant of knots, Preprint 2013.
The extended Bloch group and the Cheeger-Chern-Simons class. Sebastian Goette, Christian K Zickert, Geom. Topol. 11Sebastian Goette and Christian K. Zickert, The extended Bloch group and the Cheeger-Chern- Simons class, Geom. Topol. 11 (2007), 1623-1635.
On the quantum sl 2 invariants of knots and integral homology spheres. Kazuo Habiro, Invariants of knots and 3-manifolds. KyotoGeom. Topol. Publ., Coventry4electronicKazuo Habiro, On the quantum sl 2 invariants of knots and integral homology spheres, Invariants of knots and 3-manifolds (Kyoto, 2001), Geom. Topol. Monogr., vol. 4, Geom. Topol. Publ., Coventry, 2002, pp. 55-68 (electronic).
Theorie der Normalflächen. Wolfgang Haken, Acta Math. 105Wolfgang Haken, Theorie der Normalflächen, Acta Math. 105 (1961), 245-375.
On the boundary curves of incompressible surfaces. A Hatcher, Pacific J. Math. 992A. Hatcher, On the boundary curves of incompressible surfaces., Pacific J. Math. 99 (1982), no. 2, 373-377.
A formula for the A-polynomial of twist knots. Jim Hoste, Patrick D Shanahan, J. Knot Theory Ramifications. 132Jim Hoste and Patrick D. Shanahan, A formula for the A-polynomial of twist knots, J. Knot Theory Ramifications 13 (2004), no. 2, 193-209.
Jens Carsten Jantzen, Lectures on quantum groups. Providence, RIAmerican Mathematical Society6Jens Carsten Jantzen, Lectures on quantum groups, Graduate Studies in Mathematics, vol. 6, American Mathematical Society, Providence, RI, 1996.
V. F. R. Jones, Hecke algebra representations of braid groups and link polynomials, Ann. of Math. (2) 126 (1987), no. 2, 335-388.
R. M. Kashaev, The hyperbolic volume of knots from the quantum dilogarithm, Lett. Math. Phys. 39 (1997), no. 3, 269-275.
H Louis, Sóstenes L Kauffman, Lins, Temperley-Lieb recoupling theory and invariants of 3-manifolds. Princeton, NJPrinceton University Press134Louis H. Kauffman and Sóstenes L. Lins, Temperley-Lieb recoupling theory and invariants of 3- manifolds, Annals of Mathematics Studies, vol. 134, Princeton University Press, Princeton, NJ, 1994.
Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants. Maxim Kontsevich, Yan Soibelman, Commun. Number Theory Phys. 52Maxim Kontsevich and Yan Soibelman, Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson-Thomas invariants, Commun. Number Theory Phys. 5 (2011), no. 2, 231- 352.
The colored Jones polynomial and the A-polynomial of knots. T Q Thang, Lê, Adv. Math. 2072Thang T. Q. Lê, The colored Jones polynomial and the A-polynomial of knots, Adv. Math. 207 (2006), no. 2, 782-804.
T Q Thang, Anh T Lê, Tran, arXiv:1111.5258On the AJ conjecture for knots. PreprintThang T. Q. Lê and Anh T. Tran, On the AJ conjecture for knots, arXiv:1111.5258, Preprint 2012.
The colored Jones polynomials and the simplicial volume of a knot. Hitoshi Murakami, Jun Murakami, Acta Math. 1861Hitoshi Murakami and Jun Murakami, The colored Jones polynomials and the simplicial volume of a knot, Acta Math. 186 (2001), no. 1, 85-104.
The arithmetic of hyperbolic 3-manifolds. Colin Maclachlan, Alan W Reid, Graduate Texts in Mathematics. 219Springer-VerlagColin Maclachlan and Alan W. Reid, The arithmetic of hyperbolic 3-manifolds, Graduate Texts in Mathematics, vol. 219, Springer-Verlag, New York, 2003.
Some limits of the colored Jones polynomials of the figure-eight knot, Kyungpook Math. Hitoshi Murakami, J. 443Hitoshi Murakami, Some limits of the colored Jones polynomials of the figure-eight knot, Kyung- pook Math. J. 44 (2004), no. 3, 369-383.
Extended Bloch group and the Cheeger-Chern-Simons class. D Walter, Neumann, Geom. Topol. 8electronicWalter D. Neumann, Extended Bloch group and the Cheeger-Chern-Simons class, Geom. Topol. 8 (2004), 413-474 (electronic).
Dilogarithm identities in conformal field theory. W Nahm, A Recknagel, M Terhoeven, Modern Phys. Lett. A. 819W. Nahm, A. Recknagel, and M. Terhoeven, Dilogarithm identities in conformal field theory, Modern Phys. Lett. A 8 (1993), no. 19, 1835-1847.
. Marko Petkovšek, Herbert S Wilf, Doron Zeilberger, A = B Peters Ltd, M A Wellesley, Donald E. KnuthWith a separately available computer diskMarko Petkovšek, Herbert S. Wilf, and Doron Zeilberger, A = B, A K Peters Ltd., Wellesley, MA, 1996, With a foreword by Donald E. Knuth, With a separately available computer disk.
Knots and links. Dale Rolfsen, Mathematics Lecture Series. 7Publish or Perish IncCorrected reprint of the 1976 originalDale Rolfsen, Knots and links, Mathematics Lecture Series, vol. 7, Publish or Perish Inc., Houston, TX, 1990, Corrected reprint of the 1976 original.
William Thurston, The geometry and topology of 3-manifolds. Berlin; PrincetonSpringer-VerlagUniversitextWilliam Thurston, The geometry and topology of 3-manifolds, Universitext, Springer-Verlag, Berlin, 1977, Lecture notes, Princeton.
Anh T Tran, arXiv:1111.5065Proof of a stronger version of the AJ conjecture for torus knots. PreprintAnh T. Tran, Proof of a stronger version of the AJ conjecture for torus knots, arXiv:1111.5065, Preprint 2012.
The Yang-Baxter equation and invariants of links. V G Turaev, Invent. Math. 923V. G. Turaev, The Yang-Baxter equation and invariants of links, Invent. Math. 92 (1988), no. 3, 527-553.
Quantum invariants of knots and 3-manifolds. de Gruyter Studies in Mathematics. 18Walter de Gruyter & Co, Quantum invariants of knots and 3-manifolds, de Gruyter Studies in Mathematics, vol. 18, Walter de Gruyter & Co., Berlin, 1994.
Nahm's conjecture: asymptotic computations and counterexamples. Masha Vlasenko, Sander Zwegers, Commun. Number Theory Phys. 53Masha Vlasenko and Sander Zwegers, Nahm's conjecture: asymptotic computations and coun- terexamples, Commun. Number Theory Phys. 5 (2011), no. 3, 617-642.
Edward Witten, Quantum field theory and the Jones polynomial. 121Edward Witten, Quantum field theory and the Jones polynomial, Comm. Math. Phys. 121 (1989), no. 3, 351-399.
. Fivebranes , Quantum Topol, , Fivebranes and knots, Quantum Topol. 3 (2012), no. 1, 1-137.
An algorithmic proof theory for hypergeometric (ordinary and "q") multisum/integral identities. S Herbert, Doron Wilf, Zeilberger, Invent. Math. 1083Herbert S. Wilf and Doron Zeilberger, An algorithmic proof theory for hypergeometric (ordinary and "q") multisum/integral identities, Invent. Math. 108 (1992), no. 3, 575-633.
Vassiliev invariants and a strange identity related to the Dedekind eta-function. Don Zagier, Topology. 405Don Zagier, Vassiliev invariants and a strange identity related to the Dedekind eta-function, Topology 40 (2001), no. 5, 945-960.
The dilogarithm function. Frontiers in number theory, physics, and geometry. BerlinSpringerII, The dilogarithm function, Frontiers in number theory, physics, and geometry. II, Springer, Berlin, 2007, pp. 3-65.
Ramanujan's mock theta functions and their applications (after Zwegers and Ono-Bringmann). Exp. No. 986, vii-viii. Astérisque, Ramanujan's mock theta functions and their applications (after Zwegers and Ono- Bringmann), Astérisque (2009), no. 326, Exp. No. 986, vii-viii, 143-164 (2010), Séminaire Bour- baki. Vol. 2007/2008.
Quantum modular forms, Quanta of maths. Clay Math. Proc. 11Amer. Math. Soc, Quantum modular forms, Quanta of maths, Clay Math. Proc., vol. 11, Amer. Math. Soc., Providence, RI, 2010, pp. 659-675.
A holonomic systems approach to special functions identities. Doron Zeilberger, J. Comput. Appl. Math. 323Doron Zeilberger, A holonomic systems approach to special functions identities, J. Comput. Appl. Math. 32 (1990), no. 3, 321-368.
Adaptive Cost Coefficient Identification for Planning Optimal Operation in Mobile Robot based Internal Transportation
Pragna Das, Member, IEEE, and Lluís Ribas-Xirgo (arXiv:1711.05319)
Abstract-Decisions in mobile robot based logistic systems can be improved based on knowledge of real-time state of individual parts and environmental factors. In case of battery operated mobile robots, the cost of performance depends not only on physical state but also on state of charge of batteries. The knowledge about these factors can be obtained through cost coefficients by individual robots. Our work focuses on identifying these cost coefficients in a mobile robot used in internal transportation, which can be fed to any standard planning algorithm (like Dijkstra) to optimize total cost of operation. Travel time is one such type of cost coefficients. With suitable predictions of these travel times the cost involved to traverse from one node to another can be known. Suitable state-space model is formulated and Kalman filtering is used to estimate these travel time. Experimental validation of the efficacy of these travel times has been conducted by using them as weights for edges in standard route planning algorithm. Results show that when travel times is used as weights to compute path, the total traversing cost of paths has been reduced by 15% on average in comparison to that of paths obtained by heuristics costs.
Index Terms-Mobile robot, autonomous systems, planning and co-ordination, cost parameter, parameter estimation, cost efficiency, Kalman filtering
I. INTRODUCTION
M OBILE ROBOT (MR) based systems used for internal logistics in factories demand cost efficient decisions on planning and co-ordination. Decisions on planning becomes better when they are based on information about the MR's batteries and environmental factors, obtained during operation of the system [3], [8], [14]. This idea is explained in the following example. Figure 1 illustrates a scaled down automated internal transportation system, typically used for logistics in factories and executed by MRs. In this example, all MRs can execute only one task at a time. Let, at t i , the path computed for A1 to carry some material to P 1 is marked by the dotted line. Again, at time t j (j > i), A1 needs to carry same material to P 1. But now, the battery capability of A1 has decreased due to execution of previous tasks and the condition of the given path has deteriorated (marked by dotted rectangle). Hence, equal amount of time and energy as previous would not be sufficient to reach P 1 at t j by A1. At this juncture, decision on routes can be improved considering [14] like capability, or fitness et cetera of completing the task. This capability can be directly measured by the time(s) needed to perform task(s) by individual robots [17] in the system. In the above example, time required to reach one node from another node can be measured to denote the capability or fitness of the MR. Hence, these quantities are formalized as cost coefficient in order to incorporate the different conditions of the environmental, physical and mechanical parts of any MR into the control decision steps. In Figure 1, the real cost, considering the floor condition, required to traverse from one node to another can lead to a better optimal path to reach P 1. This cost can be known directly by estimating the time to go from one node to another, i.e.-time to travel between two nodes through the connecting edges. The travel time is calculated considering the difference between the departure from one node and reaching the next node and thus travel time is not dependent on the shape of the edge, rather it depends on the time taken to traverse the edge. These travel times which includes the dynamically changing factors, when considered as cost of traversing the edges, can produce a different path than previous, for example the path marked by solid line can be a better decision to traverse to P 1.
Aforementioned cost coefficients like travel time of edges can be obtained irrespective of the kind of task the robot needs to perform. In case of autonomous logistics, travel time between nodes can be thought of one such cost factor which determine the utilities based not only on internal factors of each robot but also on environmental factors, while the the task is traversing from one port to another. On one hand, they arise locally at each MR due to action of actuators, The plot of progressive mean of travel time of (b) shows that the values increase first, then steadily decrease and then increase gradually till complete discharge. But the longer increase of values of travel time in (c) can be attributed to the rough floor, because at equal battery capacity in both cases, more energy is required to traverse in rough surface. Plot of the same arc travel time in different conditions of floor demonstrate that travel time can reflect not only state of charge of batteries [9] but also environmental conditions.
The efficacy of the travel times in planning is evaluated by estimating them for each required edge while making decisions to compute routes. The travel time is considered as weight of connecting edges to facilitate standard planning algorithm like Dijkstra to compute less cost consuming path. Thus, Dijkstra's algorithm is modified by using travel time as cost of edge, instead of heuristic cost based on distance for edges. The total travelling costs of two categories of paths obtained by heuristically gathered cost and real travel time, irrespective of the route planning method, are calculated. Average total costs of path, obtained using static estimation of travel time ( variation of edge travel cost over time is not considered) is roughly 5% less that of paths, obtained by heuristics costs. (Section V). However, travel times of edges (edge costs) vary along time and require to be predicted accordingly during path planning. A good estimation method to accurately predict travel times requires their histories, which can be collected progressively during MR operation. The estimation process start with mean of data obtained from legacy and real observations are obtained during the operation. Thus, the filtering method cannot generate the best estimates at initial few iterations. The estimates get improved over time. Real travel times are obtained by these estimations which can produced different paths than that of other costs like theoretical, heuristic and experimental. In fact, estimating traversal time by Kalman filtering shows that heuristic edge costs can underestimate total costs and, thus, can lead to nonminimal paths.
The contribution of this paper is twofold. Firstly, travel time of edges is identified as a suitable cost coefficient considering an analogy to real, automated and fully functional plant. Secondly, these identified travel times are estimated both statically and dynamically. Further to this, they are utilised in a planning decisions to culminate into cost efficient optimal results for an MR.
The next section highlights the background and previous state of the art (Section II). Section III formulates the problem in the light of an internal logistic system with path traversal as a task. Section IV explains the prototype platform and other details for the system used to conduct experiments. Two experiments and their results are elaborated in Sections V and VI with Algorithm 1 elaborating on the proposed approach which incorporates modification over Dijkstra's algorithm. Section VII concludes with discussions and future directions of investigation.
II. BACKGROUND
So far, path planning has been solved in two distinctly different approaches for autonomous robots. Sampling methods perform reasonably well in solving intricate path planning problems in static and dynamic environment for a single robot [12]. The vehicular dynamics are considered as state in these approaches and the minimum cost path is obtained by spanning the search tree based on the distance between the current state and goal state. Although Suh and Oh in [23] and Achtelik et. al in [1] have used Gaussian process as the cost of the path to incorporate environmental parameters, the search mandates to conceive the vehicular dynamics of the robot. Also, sampling based methods are blocked into local minima and uncertainties in the environment hinder successful results [12]. Further, heuristic approaches like Artificial Neural Network (ANN) [10], Genetic Algorithm (GA) [2], Particle Swarm optimisation (PSO) [24], Ant Colony Optimisation (ACO) [6], et cetera can adapt to uncertainties and changing environment. But, they are computationally expensive which is a major concern for robotic control units equipped with limited resources and real-time constraints [15]. The vehicular mechanical factors and environmental factors are incorporated in travel times in this work to mitigate the identification of the vehicular dynamics. The path is computed in higher level of robotic control and paths are broken down to simple vehicular commands for movements and communicated to the lower-levels of control. Also, simple, deterministic and computationally inexpensive Dijsktra's algorithm is deployed and travel times are incorporated with it to decide paths of minimum cost.
On the other hand, motion planning and task planning, serves as specific coordination problems in MRS. Cost coefficients are usually computed heuristically before hand (offline) using Markov Decision Processes (MDPs). However, motion planning and task planning, as a specific case of coordination problem, are typically NP-hard and are addressed to find tractable and good solutions [11] and are mostly treated as individual specific problems in field of MRS [5], [13], [22].
In this work, path planning has been considered as an example of planning, where travel times are utilized as costs to obtain optimal path in a single robot.
III. PROBLEM FORMULATION
In this work, the focus is on one MR. Also, traversing a path is considered as a task (Section I).
A path P for a robot is usually defined as,
(1)   P = ((n_a, n_b), (n_b, n_c), (n_c, n_d), (n_d, n_e), . . .)
where n p is any node. P can be also expressed in terms of connecting edges as
(2)   P = (a_{a,b}, a_{b,c}, a_{c,d}, a_{d,e}, . . .)
where a p,q is any edge. As explained in Section I, each edge in the floor map is associated with some cost in terms of energy exhaustion and others. Hence, traversing a defined path incurs several travel costs for all edges in the path. Thus time to traverse an edge by a MR can be conceptualized as its cost coefficients. Let X p,q (e, f ) denote edge cost from n p to n q , where e denotes dependency for state of charge of batteries and f denotes dependency for frictional force of the floor. Now, the total cost of traversing P can be written in a form P c as,
(3)   P_c = (X_{a,b}(e, f), X_{b,c}(e, f), X_{c,d}(e, f), . . .)
In equation 3, it is shown that a total path cost is dependent on all edge costs and each edge cost is denoted by its travel time. Hence, X(k) is estimated with respect to increase in k for any edge and used as weight of edge to compute path. The estimated value of X(k+1) depends only on X(k) and the observation of X at (k+1). These experiments and results are explained in Section V. Observations of all possible X(k) for all possible k needs to be made for this above estimation for a single MR.
However, this is not only cumbersome but also impractical to gather such huge amount of observation. This estimation is static as X is estimated without considering its variation with the total elapse of time from start of system. The static estimation approach is progressed to a different model. Observations of all possible X(k) for all possible k is not needed in the latter. A window of previous values of X is decided to form a state vector. This state vector is estimated on every k to find the estimated value of X(k+1). Thus, the current value of X is estimated depending on the previous X's i.e.-travel times of edges which are already being found to form the path, along with the variation of exploration of X due to elapse of time. Thus, X values are dynamically estimated considering its variation over elapse of time. Moreover, the model is allowed to gather the possible values of X itself from the beginning of first call of path planning and use these values to estimate current value. This experiment is elaborated in Section VI.
IV. PROTOTYPED INTERNAL TRANSPORTATION SYSTEM
A prototype scaled down internal transportation system is developed with all essential constituting parts like MRs, tasks, controller architecture and the environment adhering to minute details. The floor is described by means of a topology map G = {∨, ε}, where each port and bifurcation point corresponds to some node n r ∈ ∨ and each link between two nodes forms an edge a e,f ∈ ε. Part (a) of Figure 3 depicts a portion of the whole prototype, where, notation like n 26 designates a node and a 26,27 a edge. Topology maps are generated taking reference from the grid map generated by results of Self Localisation and Mapping (SLAM) in [4] based on a simple assumption, that each free cell in the grid map corresponds to a node in the graph. A selected representative portion from each of the three sections of of Coca-Cola Iberian Partners in Bilbao, Spain are extracted to form three topological maps. They are provided in Figure 4. Part (a) of Figure 4 illustrates Map 1 which is a representative of winding racks in the warehouse facility, Part (b) shows Map 2 which represents randomly placed racks and Part (c) shows Map 3 which represents racks organized in a hub. The scaled robots are built controller, ultrasound sensor and a camera, as illustrated in Part (b) of Figure 3. The DC servo motors drive the wheels of the MR and Li-ion batteries energize them. Each MR has its individual controller in decentralized architecture [19], [20]. Travel times for three different length of arcs in all three maps and four different conditions of surface are recorded till complete exhaustion of batteries. This generates observation data for all possible X for all ks. This cumbersome process of acquiring the observations for generating on-line estimates is mitigated by a non-linear functional model of X p,q incorporating its evolution over passage of time and also allowing to gather information about X p,q for different arcs in the map gradually with time.
V. EXPERIMENT I: USING STATIC ESTIMATES OF TRAVEL
TIMES IN ROUTE PLANNING
A. Procedure
The state-space model provided in equations 4 and 5 is used to estimate travel times, considering them as cost of edges. From now X p,q will be written as X for simplicity,
(4)   X(k) = X(k − 1) + ω(k)
(5)   Y(k) = X(k) + η(k)
The state vector in equation 4 is a single variable X depending on k, k being the number of edges already found in the path (Section III). Hence, X is estimated over and over again for different connecting edges of every new exploring node. Y (k) in equation 5 is the observation variable for X. This model involves two error terms ω(k) and η(k) which are independent and normally distributed. According to equations 4 and 5, X(k) depends only on the travel time of edge between current node and its predecessor i.e.-X(k-1), though in reality, it depends on Xs for all the previous edges in the path and its own variation over the time. Thus, this process of estimation is static. (Section III). Equations 6 and 7 are obtained after applying Kalman Filtering method [21] on equations 4 and 5.
(6)   X̂⁻(k) = X̂(k − 1),   P⁻(k) = P(k − 1) + σ²_ω
(7)   K(k) = P⁻(k) / (P⁻(k) + σ²_η),   P(k) = P⁻(k) − P⁻(k)² / (P⁻(k) + σ²_η),
      X̂(k) = X̂⁻(k) + [P⁻(k) / (P⁻(k) + σ²_η)] · ω̃(k),   where ω̃(k) = Y(k) − X̂⁻(k)
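A minimal scalar implementation of the update (6)-(7) looks as follows; the noise variances and the travel-time observations are made-up values for illustration only, not data recorded on the prototype.

```python
def kalman_update(x_prev, p_prev, y, var_process, var_obs):
    """One step of the scalar Kalman filter of equations (4)-(7):
    random-walk travel-time state with a noisy observation y."""
    # prediction
    x_prior = x_prev
    p_prior = p_prev + var_process
    # correction
    gain = p_prior / (p_prior + var_obs)
    x_post = x_prior + gain * (y - x_prior)
    p_post = (1.0 - gain) * p_prior
    return x_post, p_post

# illustrative run with made-up travel-time observations (seconds)
observations = [12.1, 12.4, 12.2, 13.0, 13.4, 13.1]
x, p = observations[0], 1.0            # crude initialisation in the spirit of (8), (9)
for y in observations[1:]:
    x, p = kalman_update(x, p, y, var_process=0.05, var_obs=0.3)
    print(round(x, 2), round(p, 3))
```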
X − (k) produces the apriori value of X and P − produces the associated covariance, σ 2 ω being the covariance of process noise ω(k).X(k) provides the predicted estimate of X(k), aŝ X − (k) is corrected in equation 7 with the help of Kalman Gain K(k). P − (k) provides the associated covariance matrix, σ 2 η being the covariance of the observation noise η(k). For example, a sample route computation is illustrated in Figure 5. Let n a be source and n w destination at P 16. So, path computation using Dijkstra's algorithm starts at n a with its neighbors n b , n c and n d . So, X a,b , X a,c , X a,d are required to be estimated at k, when k is 1 as this will be first edge being traversed.
(8)   X̂(0) = E[X(0)]
(9)   P(0) = E[(X(0) − E[X(0)])(X(0) − E[X(0)])^T]
We use equation 6 to obtain X̂⁻(1) for X a,b , X a,c , X a,d separately, depending on X(0) given by equation 8. Similarly, we get a separate P⁻(1) using equation 9. Next, we obtain the estimate X̂(1) and P(1) for X a,b , X a,c , X a,d using equation 7.
Comparison of the estimated values of X a,b , X a,c , X a,d provides the least cost of traversing from n a to any of its neighbors. Let the least cost edge be a a,c . Then n a becomes the predecessor of n c , i.e. to reach n c , the edge should come from n a . When n c is explored, the value of k is 2, as n c has one predecessor. The next least cost edge from n c in the path is required to be known, so X c,e , X c,f , X c,g need to be estimated. The observation Y(k) of X at the current k is required to estimate X; therefore observation values for the travel costs of all possible X for all possible k were collected. This process is explained in Algorithm 1. This static experiment is conducted to verify that weights of edges can be estimated online during the exploration of Dijkstra's algorithm using a state-space model. It also verifies that the estimated values of X are correct and realistic, as the values can be compared to real observations.
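A compact sketch of this static estimation step (equations 6 and 7), as it would be invoked as the edge-cost oracle during the Dijkstra expansion; the noise variances and sample values below are illustrative assumptions, not figures from the experiments:

def kf_update(x_prev, p_prev, y_obs, sigma2_w=0.01, sigma2_eta=0.05):
    # prediction (eq. 6): the prior is simply the previous estimate
    x_prior = x_prev
    p_prior = p_prev + sigma2_w
    # correction (eq. 7): Kalman gain, corrected estimate and covariance
    gain = p_prior / (p_prior + sigma2_eta)
    x_hat = x_prior + gain * (y_obs - x_prior)
    p_hat = (1.0 - gain) * p_prior
    return x_hat, p_hat

# Within the expansion of a node (cf. find_cost in Algorithm 1), the estimated edge
# weight is the corrected estimate returned for the recorded observation Y(k):
w_estimate, p_estimate = kf_update(x_prev=3.0, p_prev=1.0, y_obs=3.4)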
B. Results
Paths are computed repeatedly 20, 40, 60 and 80 times in each topological graph (Figure 4), using both the original Dijkstra's algorithm with Euclidean-distance-based heuristic edge weights and the modified one (Algorithm 1). The choices of source and destination are taken from a fixed list of sources and destinations for each call of route computation. Total path costs obtained using heuristic weights are compared with those of paths obtained using Algorithm 1. The vertical bars Eucl and SEC in Figure 6 represent the average total path costs for heuristic-cost-based routes and static-estimate-based routes, respectively. The bar Eucl shows that the average total path cost never changes with an increasing number of repetitive calls, as heuristic weights do not change over time and do not reflect the true cost of traversal.
The bar SEC shows that the average total path cost obtained by Algorithm 1 is 5% less in Map 2 and Map 3 and 2% less in Map 1 than that of heuristic-cost-based Dijkstra's algorithm. The average total path cost increases with the number of repetitions, as shown by the bar SEC, since the duration of operation grows with the number of repetitions. This happens due to the dependency of the current edge cost on the previous edge cost (equations 4 and 5). However, this variation does not truly reflect the variation of travel time due to time-varying factors. The bi-linear model [18], provided in the equation below, is used to model the change of travel costs depending on all the previous travel costs. X is expressed as a function of its past history over k, considering the progressive change ξ with respect to k. After a path computation starts, the real travel times of edges are recorded when the MR actually traverses them. These travel times are used as the observation values for the next call of path planning. Thus, observation values of the travel time of each edge accumulate at run-time.
X(k) + a 1 X(k − 1) + · · · + a j X(k − j) = ξ(k) + b 1 ξ(k − 1) + · · · + b l ξ(k − l) + Σ_r Σ_z c rz ξ(k − r) X(k − z)
The double summation over X and ξ in the above equation provides the nonlinear variation of X due to the state of the batteries and changes in the environment. The state-space form of the bi-linear model is given in equations 10 and 11. In equation 10, the state vector s(k) is of the form (1, ξ(k − l + 1), . . . , ξ(k), X(k − j + 1), . . . , X(k))^T. Here, j and l denote the numbers of previously estimated values of X and of previous innovations of X, respectively. The term regression_no denotes the common value of j and l and is chosen as a design parameter. The regression_no is increased from 2 to 9 and the effect on the total edge travel cost of paths is demonstrated in Section VI-B.
s(k) = F(s(k − 1)) s(k − 1) + V ξ(k) + G ω(k − 1)    (10)
Y(k) = H s(k − 1) + ξ(k) + η(k)    (11)
The state transition matrix F in the equation 10 has the form of
F =
[ 1       0        0        · · ·    µ    ]
[ ψ l     ψ l−1    · · ·    ψ 1           ]
[ · · ·                                   ]
[ −φ j    −φ j−1   · · ·    −φ 1          ]
The number of rows of F is given by (2*regression no + 1). The ψ terms in F are denoted as in equation 12
ψ l = b l + Σ_{i=1}^{l} c li X(k − i)    (12)
All the φ terms in F are constants. The term µ is the average value of X till k. Also, the matrix V in 10 is denoted as
V = (0, 0, 0, . . . , 1, . . . , 0, 0, . . . , 1)^T
The number of rows of V is again given by (2*regression no + 1). The matrix H in 11 is denoted as
H = (0, 0, 0, . . . , 0, . . . , 0, 0, . . . , 1)
Fig. 6: Comparison cost
Kalman filtering is applied on the state-space model (equations 10 and 11), resulting in equations 13 and 15, to estimate s repeatedly and obtain X for the connecting edges at each node when computing the path using Dijkstra's algorithm.
ŝ⁻(k) = F(ŝ(k − 1)) ŝ(k − 1) + V ξ(k) + G ω(k − 1)    (13)
P⁻(k) = F(s(k)) P(k − 1) F^T(s(k − 1)) + Q(k − 1)    (14)
In equation 13, ŝ⁻(k) provides the a priori estimate of s, and P⁻(k) provides the associated covariance matrix, where Q(k−1) is the covariance of the process noise ω(k−1).
K(k) = P⁻(k) H^T [H P⁻(k) H^T + R(k)]⁻¹    (15)
ŝ(k) = ŝ⁻(k) + K(k)[Y(k) − H ŝ⁻(k)]
P(k) = [I − K(k) H] P⁻(k)
In equation 15, K(k) is the Kalman gain and R(k) is the covariance of the observation noise η(k). ŝ(k) provides the estimated state vector s at k.
ŝ(0) = E[s(0)]    (16)
P(0) = E[(s(0) − E[s(0)])(s(0) − E[s(0)])^T]    (17)
In Figure 5, the path computation starts at n a . Let the values of j and l be equal to 2. At the start, k is 1. The vector s cannot yet be formed, as a minimum of 2 previous travel costs are needed, so exploration proceeds with the average travel cost for the edges. When n c needs to be explored, the value of k becomes 2, as one travel cost is known, connecting n c to its predecessor n a . The vector s can now be formed since X(1) is known. Again, n a is the source, so X(0) is 0. ξ is assumed to be N(0.1, 0.1). At k = 2, s(1) takes the form (1, ξ(0), ξ(1), X(0), X(1))^T. Equations 13 and 15 are used to estimate s(2) separately for all edges arising out of n c , to obtain X for each edge. From equations 16 and 17, s(0) and P(0) can be obtained. Let k = 4 at n g . Then X(3) is the travel cost from n e (predecessor of n g ) to n g , X(2) is the travel cost from n c (predecessor of n e ) to n e , and X(1) is the travel cost from n a (predecessor of n c ) to n c . Thus, s(3) = (1, ξ(2), ξ(3), X(2), X(3))^T and s(4) = (1, ξ(3), ξ(4), X(3), X(4))^T need to be computed. This approach differs from Algorithm 1 in the way X is estimated.
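The assembly of the state vector s(k) in this dynamic model can be sketched as follows; the histories and numerical values are illustrative placeholders, not measured data:

def build_state(x_hist, xi_hist, regression_no):
    """Assemble s(k) = (1, xi(k-l+1), ..., xi(k), X(k-j+1), ..., X(k)) with j = l = regression_no."""
    j = l = regression_no
    if len(x_hist) < j or len(xi_hist) < l:
        raise ValueError("not enough history to form the state vector yet")
    return [1.0] + list(xi_hist[-l:]) + list(x_hist[-j:])

# Mirroring the walk-through above (j = l = 2): with X(0) = 0 for the source, one
# observed travel cost X(1), and two sampled innovations, the state vector is
s_vec = build_state(x_hist=[0.0, 3.1], xi_hist=[0.05, 0.12], regression_no=2)
# s_vec == [1.0, 0.05, 0.12, 0.0, 3.1], i.e. (1, xi(0), xi(1), X(0), X(1))^T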
B. Results
The process of path computation is exactly the same as in the previous experiment. The only difference is in the estimation procedure of X, which is based on the bi-linear state-space model. The coefficients b and c are drawn from normal distributions. Along with the repetitions of path computations, the value of φ is varied from −0.4 to 0.4 and the mean and covariance of b and c from −0.2 to 0.2. Negative values of φ produced excessively high estimates, while values greater than 0.2 produced negative estimates. Similarly, mean and covariance values less than 0.1 produce high estimates and values greater than 0.1 produce negative estimates. Thus, 0.2 is found to be a suitable value of φ and N(0.1, 0.1) suits both b and c. The regression_no is also varied from 2 to 9 for each of the 20, 40, 60 and 80 repetitive computations, and the average total path costs obtained in each case are plotted in Figure 6. The vertical bars marked Reg 2 to Reg 9 represent the average total path costs for dynamic-estimate-based routes, which are 15% less on average than the heuristic Euclidean cost for all three maps in each set of repetitions. This difference increases with regression_no, though the rate of increase is low, as the change of X itself is not widely spread, with a standard deviation of 0.219 on average. The average total path cost increases with the number of repetitions, as the edge travel cost increases with the elapse of time. The observation Y(k) developed during run-time is considered as signal, and the values of ω are modified to increase the Signal-to-Noise Ratio (SNR) from 10 dB to 50 dB along with the repetitions of path planning. The vertical bars marked 10dB, 25dB and 50dB in Figure 6 plot the average path costs obtained by changing the SNR for each regression_no, which shows that the average travel cost decreases as the SNR increases.
C. Path comparison
Part (a) of Figure 7 shows the paths PathA, PathB and PathC, obtained using heuristic edge costs, statically estimated and dynamically estimated edge travel costs respectively, for the same pair of source and destination nodes in Map 2, including only the variation induced by discharge of batteries. Here, P cA , P cB and P cC stand for the general P c vector explained in Section III for PathA, PathB and PathC respectively. P cA , P cB and P cC have many common elements, but also differ in several elements, so the total travel costs of these 3 paths are different. After obtaining the total travel costs of PathA, PathB and PathC, it can be stated that
P cB < P cA by 5% and P cC < P cA by 15%.
This also establishes the claim that heuristic-based path planning can underestimate real edge traversal costs and lead to expensive paths. PathA and PathC in (b) and (c) of Figure 7 are obtained in Map 1 using heuristic edge weights and dynamically estimated edge travel costs respectively, when the floor condition is changed in the dotted-line zone to moderately rough and in the solid-line zone to lightly rough after 20 calls for route computation. PathA in both (b) and (c) contains edges in both rough zones of the floor, while PathC in (b) clearly avoids the zone with moderate roughness, though it keeps a few edges in the lightly rough zone. This happens because Dijkstra's algorithm finds that the cost incurred in traversing the lightly rough zone is less than that of the additional edges required to avoid the zone. This shows that modifying Dijkstra's algorithm with dynamically estimated travel costs does not disrupt the computational robustness of the algorithm. Moreover, when the lightly rough zone is made heavily rough, PathC deviates in another direction, adding more nodes. Thus, once again, the estimated travel times of edges help Dijkstra's algorithm to find a cost-effective path.
D. Real cost saving for paths
In (b) of Figure 7, there are in total 12 edges from the 2 rough zones in PathA. The path cost of PathA obtained using heuristic weights is not the correct one, as the travel cost of each of these 12 edges is higher than assumed. Let a variable δ account for the additional cost on each such edge. The path cost of PathA is obtained as 98.210 from the results, but in reality the path cost of PathA should be (98.210 + 12δ). The value of δ can never be zero, as changes in the environment and batteries will always be present. When more zones have changed floor conditions, more edges will have increased cost, so the coefficient of δ will increase and so will the true travel cost of the paths. Thus, the difference between the travel costs of paths obtained by heuristic cost and by estimated travel time will always increase with increasing hostility of the environment.
VII. DISCUSSION AND CONCLUSION
The travel times of edges are identified as one of the cost coefficients in internal automated logistics. A formulation is devised to estimate travel times online during path computation, considering their time-varying components. Moreover, suitable observations of travel time are recorded in scenarios analogous to a real factory, on a scaled platform developed in the laboratory. They are instrumental as inputs to the estimation algorithms for estimating travel time. Paths are found using Dijkstra's algorithm based both on heuristic edge weights and on estimated travel times of edges as weights. Results show that paths computed using travel time as edge weights have a lower total path cost than those obtained with heuristic weights. Environment-dependent costs are modeled by Gaussian process regression from finite measured data in [23], but that approach does not include time-varying changes in batteries and environment. On the other hand, sampling-based heuristic path planning [7], [16] requires exploring a significant portion of the graph to find a suitable path, which is computationally expensive. In this work, by contrast, the cost of traversing every edge is estimated, which makes it possible to apply deterministic path planning algorithms such as Dijkstra's algorithm, the Bellman-Ford algorithm, et cetera. The approach used in the single-task case in this work can be extended to multi-task scenarios for an MR, where cost coefficients for different tasks have to be found; this is a direction for future work. During the run-time of an MRS, every estimated value of travel time carries context depending on various environmental and inherent factors. The travel time of one MR can provide contextual information to other MRs in a multi-robot system (MRS) and contribute to estimating travel time for them. This motivates further investigation towards implementing collaborative or collective intelligence in MRS.
Fig. 1: An example utility
other mechanical factors, but on the other hand they are significantly influenced by environmental factors like battery capacity (in the case of battery-powered MRs), condition of the floor, condition of the material to be transported, performance and behavior of other AGVs, et cetera, as all or most of these factors determine the state of the robot at every instant of time. The influence of factors like floor friction forces, slope and mechanical parts can be corrected by local control on the individual MR (lower levels), but factors like traffic conditions, condition of the material and behavior of other MRs are beyond the scope of control at the lower levels. Hence, considering cost coefficients at the lower levels of actuation and control cannot lead to better control decisions, so these parameters are investigated at a higher level in order to utilize them efficiently. These costs are time-dependent and have sources of error from battery exhaustion, the surface condition of the shop floor, wear and tear of tyres, et cetera; hence their values change over time. For example, Part (b) of Figure ?? plots the progressive mean of observed values of travel time for an mth edge, first only with the change of state of charge of the batteries, and Part (c) with both the change of state of charge of the batteries and the floor condition changing from rough at the beginning to smooth. Part (a) of Figure ?? plots the cell voltage of Li-ion batteries over time.
Fig. 3: Scaled-down prototype platform
Thus, X denotes the general travel time of any edge. The travel time X of any edge also depends on all the previous edges the robot has already traversed, the reason being the discharge of the batteries and possible changes of the environment. Thus, the travel time X becomes a function of k, written X(k), where k is the number of times an MR has performed the task of traversing an edge.
Fig. 4: Three representative topological maps
Fig. 5: Sample run of route computation
VI. EXPERIMENT II: USING DYNAMIC ESTIMATES OF TRAVEL TIMES IN ROUTE PLANNING
A. Procedure
Algorithm 1
1: function INITIALISE_SINGLE_SOURCE(∨, ε, s)
2:   for each x i ∈ ∨ do
3:     π[x i ] = infinity
4:     d[x i ] = NIL
5:   end for
6: end function
7: function FIND_PREV(u, s)  input: u, current node; s, source node. Returns: prev∨, predecessor of u; noPred, number of predecessors till s
8:   prev∨ = predecessor of u
9:   noPred = count of predecessors till s
10: end function
11: function KF(pW, k, Y(k))  input: pW, value of travel time at k−1; k, instance for estimation; Y, observation variable. Returns: X̂(k), travel cost from u to v
12:   apply KF on the state-space model to obtain X̂(k)
13: end function
14: function FIND_COST(u, v, k, pW, Y(k))  input: u, current node; v, neighbor node; k, instance of estimation; pW, cost between prev_u and u; Y(k), observation of travel time between u and v. Returns: w, estimated travel time (cost) from u to v
15:   w = KF(pW, k, Y(k))
16: end function
Fig. 7: Paths in different conditions
17: function RELAX(u, v, w)  inputs: u, current node; v, neighbor node; w, estimated travel time (cost) from u to v. Returns: d[v], attribute for each node; π[v], predecessor of each node
18:   if d[v] > d[u] + w(u, v) then
19:     d[v] = d[u] + w(u, v)
20:     π[v] = u
21:   end if
22: end function
23: function MAIN(∨, ε, Y, s)  inputs: ∨, list of nodes; ε, list of edges; s, source node; Y, observation matrix. Returns: π[v], predecessor of each node; w, edge weight matrix
24:   P := NIL
25:   Q := queue(∨)
26:   k := 0
27:   pε[s] = 0
28:   w[pε[s], s] = 0
29:   initialise_single_source(∨, ε, s)
30:   while Q != 0 do
        u := Extract_min_priority_queue(Q)
        p∨[u], npred = find_prev(u, s)
        k = npred + 1
        pW = w[pε[u], u]
        P := P ∪ u
        for each v ∈ Adj[u] do
31:       w[u, v] = find_cost(u, v, k, pW, Y(k))
32:       relax(u, v, w)
33:     end for
34:   end while
35: end function
Path planning for motion dependent state estimation on micro aerial vehicles. W Markus, Steven Achtelik, Maria Weiss, Roland Chli, Siegwart, Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEEMarkus W Achtelik, Steven Weiss, Maria Chli, and Roland Siegwart. Path planning for motion dependent state estimation on micro aerial vehicles. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 3926-3932. IEEE, 2013.
Global path planning for mobile robots in large-scale grid environments using genetic algorithms. Anis Maram Alajlan, Imen Koubaa, Hachemi Chaari, Adel Bennaceur, Ammar, Individual and Collective Behaviors in Robotics (ICBR), 2013 International Conference on. IEEEMaram Alajlan, Anis Koubaa, Imen Chaari, Hachemi Bennaceur, and Adel Ammar. Global path planning for mobile robots in large-scale grid environments using genetic algorithms. In Individual and Collective Behaviors in Robotics (ICBR), 2013 International Conference on, pages 1-8. IEEE, 2013.
Coalition formation games for dynamic multirobot tasks. Haluk Bayram, Bozma, The International Journal of Robotics Research. 355Haluk Bayram and H Işıl Bozma. Coalition formation games for dy- namic multirobot tasks. The International Journal of Robotics Research, 35(5):514-527, 2016.
Graph slam based mapping for agv localization in large-scale warehouses. Patric Beinschob, Christoph Reinke, Intelligent Computer Communication and Processing. IEEE2015 IEEE International Conference onPatric Beinschob and Christoph Reinke. Graph slam based mapping for agv localization in large-scale warehouses. In Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Con- ference on, pages 245-248. IEEE, 2015.
Multiagent path planning with multiple tasks and distance constraints. Subhrajit Bhattacharya, Maxim Likhachev, Vijay Kumar, Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEESubhrajit Bhattacharya, Maxim Likhachev, and Vijay Kumar. Multi- agent path planning with multiple tasks and distance constraints. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages 953-959. IEEE, 2010.
A fast twostage aco algorithm for robotic path planning. Xiong Chen, Yingying Kong, Xiang Fang, Qidi Wu, Neural Computing and Applications. 222Xiong Chen, Yingying Kong, Xiang Fang, and Qidi Wu. A fast two- stage aco algorithm for robotic path planning. Neural Computing and Applications, 22(2):313-319, 2013.
A lattice-based approach to multi-robot motion planning for non-holonomic vehicles. Marcello Cirillo, Tansel Uras, Sven Koenig, 2014 IEEE/RSJ International Conference on. IEEEIntelligent Robots and SystemsMarcello Cirillo, Tansel Uras, and Sven Koenig. A lattice-based approach to multi-robot motion planning for non-holonomic vehicles. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ Inter- national Conference on, pages 232-239. IEEE, 2014.
Implicit adaptive multi-robot coordination in dynamic environments. Mitchell Colby, Jen Jen Chung, Kagan Tumer, Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEEMitchell Colby, Jen Jen Chung, and Kagan Tumer. Implicit adaptive multi-robot coordination in dynamic environments. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 5168-5173. IEEE, 2015.
Predicting battery level analysing the behaviour of mobile robot. Pragna Das, Lluís Ribas-Xirgo, XVII Workshop of Physical Agents Book of Proceedings. Pragna Das and Lluís Ribas-Xirgo. Predicting battery level analysing the behaviour of mobile robot. In XVII Workshop of Physical Agents Book of Proceedings, pages 91-98, 2016.
A generalized neural network approach to mobile robot navigation and obstacle avoidance. Dan Hamid Dezfoulian, Imran Shafiq Wu, Ahmad, In Intelligent Autonomous Systems. 12SpringerS Hamid Dezfoulian, Dan Wu, and Imran Shafiq Ahmad. A generalized neural network approach to mobile robot navigation and obstacle avoid- ance. In Intelligent Autonomous Systems 12, pages 25-42. Springer, 2013.
Market-based multirobot coordination: A survey and analysis. M B Dias, R Zlot, N Kalra, A Stentz, Proceedings of the IEEE. 947M. B. Dias, R. Zlot, N. Kalra, and A. Stentz. Market-based multi- robot coordination: A survey and analysis. Proceedings of the IEEE, 94(7):1257-1270, July 2006.
Sampling-based robot motion planning: A review. Mohamed Elbanhawi, Milan Simic, IEEE Access. 2Mohamed Elbanhawi and Milan Simic. Sampling-based robot motion planning: A review. IEEE Access, 2:56-77, 2014.
Distributed online dynamic task assignment for multi-robot patrolling. Alessandro Farinelli, Luca Iocchi, Daniele Nardi, Autonomous Robots. Alessandro Farinelli, Luca Iocchi, and Daniele Nardi. Distributed on- line dynamic task assignment for multi-robot patrolling. Autonomous Robots, pages 1-25, 2016.
Are (explicit) multi-robot coordination and multi-agent coordination really so different. B Gerkey, J Maja, Mataric, Proceedings of the AAAI spring symposium on bridging the multi-agent and multi-robotic research gap. the AAAI spring symposium on bridging the multi-agent and multi-robotic research gapB Gerkey and Maja J Mataric. Are (explicit) multi-robot coordination and multi-agent coordination really so different. In Proceedings of the AAAI spring symposium on bridging the multi-agent and multi-robotic research gap, pages 1-3, 2004.
Heuristic approaches in robot path planning: A survey. Thi Thoa Mac, Cosmin Copot, Robin De Duc Trung Tran, Keyser, Robotics and Autonomous Systems. 86Thi Thoa Mac, Cosmin Copot, Duc Trung Tran, and Robin De Keyser. Heuristic approaches in robot path planning: A survey. Robotics and Autonomous Systems, 86:13-28, 2016.
Optimal path planning in cooperative heterogeneous multi-robot delivery systems. Neil Mathew, L Stephen, Steven L Smith, Waslander, Algorithmic Foundations of Robotics XI. SpringerNeil Mathew, Stephen L Smith, and Steven L Waslander. Optimal path planning in cooperative heterogeneous multi-robot delivery systems. In Algorithmic Foundations of Robotics XI, pages 407-423. Springer, 2015.
Model based on-line energy prediction system for semi-autonomous mobile robots. R Parasuraman, K Kershaw, P Pagala, M Ferre, 5th International Conference on Intelligent Systems, Modelling and Simulation. R. Parasuraman, K. Kershaw, P. Pagala, and M. Ferre. Model based on-line energy prediction system for semi-autonomous mobile robots. In 2014 5th International Conference on Intelligent Systems, Modelling and Simulation, pages 411-416, 2014.
Current developments in time series modelling. Mb Priestley, Journal of Econometrics. 371MB Priestley. Current developments in time series modelling. Journal of Econometrics, 37(1):67-86, 1988.
An agent-based model of autonomous automated-guided vehicles for internal transportation in automated laboratories. Lluís Ribas, - Xirgo, Ismael F Chaile, ICAART (1). Lluís Ribas-Xirgo and Ismael F Chaile. An agent-based model of autonomous automated-guided vehicles for internal transportation in automated laboratories. In ICAART (1), pages 262-268, 2013.
An approach to a formalized design flow for embedded control systems of micro-robots. Lluís Ribas-Xirgo, Joaquín Saiz-Alcaine, Jonatan Trullàs-Ledesma, A Josep Velasco-González, Industrial Electronics, 2009. IECON'09. 35th Annual Conference of IEEE. IEEELluís Ribas-Xirgo, Joaquín Saiz-Alcaine, Jonatan Trullàs-Ledesma, and A Josep Velasco-González. An approach to a formalized design flow for embedded control systems of micro-robots. In Industrial Electronics, 2009. IECON'09. 35th Annual Conference of IEEE, pages 2187-2192. IEEE, 2009.
Kalman and extended kalman filters: Concept, derivation and properties. Maria Isabel Ribeiro, Maria Isabel Ribeiro. Kalman and extended kalman filters: Concept, derivation and properties, 2004.
Implan: scalable incremental motion planning for multi-robot systems. Indranil Saha, Rattanachai Ramaithitima, Vijay Kumar, George J Pappas, Sanjit A Seshia, Cyber-Physical Systems (ICCPS). Indranil Saha, Rattanachai Ramaithitima, Vijay Kumar, George J Pappas, and Sanjit A Seshia. Implan: scalable incremental motion planning for multi-robot systems. In Cyber-Physical Systems (ICCPS), 2016 ACM/IEEE 7th International Conference on, pages 1-10. IEEE, 2016.
A cost-aware path planning algorithm for mobile robots. Junghun Suh, Songhwai Oh, Intelligent Robots and Systems (IROS). Junghun Suh and Songhwai Oh. A cost-aware path planning algorithm for mobile robots. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 4724-4729. IEEE, 2012.
Robot path planning in uncertain environment using multi-objective particle swarm optimization. Yong Zhang, Dun-Wei Gong, Jian-Hua Zhang, Neurocomputing. 103Yong Zhang, Dun-wei Gong, and Jian-hua Zhang. Robot path planning in uncertain environment using multi-objective particle swarm optimiza- tion. Neurocomputing, 103:172-185, 2013.
| []
|
[
"Toda systems for Takiff algebras",
"Toda systems for Takiff algebras"
]
| [
"Michael Lau [email protected] \nDépartement de mathématiques et de statistique\nUniversité Laval\nG1V0A6QuébecQCCanada\n"
]
| [
"Département de mathématiques et de statistique\nUniversité Laval\nG1V0A6QuébecQCCanada"
]
| []
| We study completely integrable systems attached to Takiff algebras g N , extending open Toda systems of split simple Lie algebras g. With respect to Darboux coordinates on coadjoint orbits O, the potentials of the hamiltonians are products of polynomial and exponential functions. General solutions for equations of motion for g N are obtained using differential operators called jet transformations. These results are applied to a 3-body problem based on sl(2), and to an extension of soliton solutions for A ∞ to associated Takiff algebras. The new classical integrable systems are then lifted to families of commuting operators in an enveloping algebra, solving a Vinberg problem and quantizing the Poisson algebra of functions on O. | null | [
"https://arxiv.org/pdf/2207.06348v1.pdf"
]
| 250,493,150 | 2207.06348 | fa793a7df2196a91db579bc7b7e4b7da735fdfa0 |
Toda systems for Takiff algebras
13 Jul 2022
Michael Lau [email protected]
Département de mathématiques et de statistique
Université Laval
G1V0A6QuébecQCCanada
Toda systems for Takiff algebras
13 Jul 2022. arXiv:2207.06348v1 [math-ph]. Keywords: Toda systems, Takiff algebras, truncated current Lie algebras, Vinberg problem, jet transformations, classical integrable systems. MSC2020: 17B80 (primary); 37J35, 37K10, 17B35 (secondary)
We study completely integrable systems attached to Takiff algebras g N , extending open Toda systems of split simple Lie algebras g. With respect to Darboux coordinates on coadjoint orbits O, the potentials of the hamiltonians are products of polynomial and exponential functions. General solutions for equations of motion for g N are obtained using differential operators called jet transformations. These results are applied to a 3-body problem based on sl(2), and to an extension of soliton solutions for A ∞ to associated Takiff algebras. The new classical integrable systems are then lifted to families of commuting operators in an enveloping algebra, solving a Vinberg problem and quantizing the Poisson algebra of functions on O.
Introduction
Until early computer models suggested otherwise [FPU], it was widely believed that introducing nonlinearity into particle interactions would result in equipartition of energy among vibrational modes in a lattice. The hunt for exactly solvable nonlinear lattice models to test this hypothesis led to the discovery of (type A) Toda systems [Tod]. Absence of thermalization was subsequently understood as a consequence of soliton solutions, related to those of the Boussinesq and Korteweg-de Vries equations through approximations of continuum limits of the Toda equations of motion [TW]. Conserved quantitites and Liouville integrability were obtained by Flaschka and Hénon [Fla, Hen], and later extended to open Toda systems of all simple types by Kostant [Kos].
In this paper, we introduce and study a new family of completely integrable systems attached to Takiff algebras g N = g ⊗ K K[u]/ u N +1 . Takiff algebras are finite dimensional quotients of current Lie algebras g ⊗ K K[u] for split simple Lie algebras g over the real or complex field K. For example, g N is the simple Lie algebra g when N = 0, and when N = 1, g N is an extension of g by its adjoint representation. These Lie algebras are neither semisimple nor reductive when N > 0, but nonetheless have nondegenerate symmetric invariant bilinear forms. This easy but crucial observation identifies them with their duals, and lets us define integrable systems on coadjoint orbits using a Lax approach.
Takiff algebras have appeared many times in recent mathematical physics literature, for example, in [LT], where they are used to construct a finite dimensional Lie group governing the Bloch-Iserles equation, in [BR], where their non-semisimplicity is exploited to find indecomposable representations that are not irreducible, in the context of logarithmic conformal field theory, in [CO], where they are used to study coupled Hirota bilinear equations, and in [MY], where they appear as Lie algebras of jet groups, later used to study associated varieties of affine W -algebras [Ara].
We deform the Lie bracket of g N using a classical r-matrix satisfying the modified Yang-Baxter equation, and consider distinguished coadjoint orbits O of the corresponding Lie groups. The orbits O are symplectic manifolds parametrized by Lax matrices. In contrast with the split simple Lie algebra case, these matrices no longer have enough eigenvalues to generate an integrable system. To resolve this difficulty, we enlarge the supply of functions by replacing ordinary traces of matrices with traces along superdiagonals in a faithful representation. This leads to a classification of invariant bilinear forms on g N , and lets us construct a maximal independent family I N (g) of Poissoncommuting functions on O, generalizing the Toda systems for finite dimensional split simple Lie algebras (Theorem 3.8).
Equations of motion (4.11) and (4.12) are then explicitly derived in terms of position and momentum coordinates on coadjoint orbits, using exponential generating functions defined in Proposition 4.3. We concentrate on evolution equations governed by hamiltonians with the usual quadratic expression for kinetic energy, as the most physically relevant members of new integrable hierarchies. The corresponding potentials are products of polynomial and exponential functions and have not, to the best of our knowledge, previously appeared in the literature. Standard techniques based on factorisation in Lie groups are then used to exactly solve a minimal rank example.
In Section 4.3, we introduce differential operators D n called jet transformations. Applied to any solution of the Toda equations of motion for a split simple Lie algebra g, the jet transformations give solutions to the equations attached to the Takiff algebras g N . Since general solutions of classical Toda equations are known for all split simple Lie algebras g by [Kos], jet transformations provide general solutions for all Takiff algebras g N . We illustrate with the general solution of a 3-body problem based on sl(2) and an extension of soliton solutions for the A ∞ -lattice to Takiff algebras.
The final section considers the Vinberg problem of identifying commutative subalgebras of enveloping algebras, corresponding to the new integrable systems I N (g) studied in this paper. This is done with a Harish-Chandra projection and r-deformation of an analogous construction [Mol] of generators for centres of enveloping algebras of Takiff algebras. Conserved quantities in I N (g) then appear as principal symbols of images of the generators. Theorem 5.4 can thus be seen as a quantization of I N (g) in an enveloping algebra whose associated graded algebra is the ring of functions on O.
Takiff algebras
Let g be a nonzero finite-dimensional Lie algebra over a field K of characteristic zero. Without loss of generality, we fix a faithful representation ρ : g → gl(M ) of minimal dimension M , and identify g with its image in the general linear algebra gl(M ). For each nonnegative integer N , let
g N = g ⊗ K K[u]/ u N +1
be the Takiff algebra, or truncated current algebra, of degree N , where u N +1 is the principal ideal generated by u N +1 in the polynomial ring K[u]. We write x(i) for the image x ⊗ (u i + u N +1 ) of x ⊗ u i in g N . The Lie bracket on g N is given by linear extension of the bracket on g:
[x(i), y(j)] = [x, y](i + j),
keeping in mind that x(i) = 0 for i > N . Note that g N generalizes g = g 0 and is a quotient of the current algebra g ⊗ K K [u].
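For instance, specializing the bracket above to the smallest nontrivial case N = 1 (a worked example added here for concreteness):
\[
[x(0),y(0)]=[x,y](0),\qquad [x(0),y(1)]=[x,y](1),\qquad [x(1),y(1)]=[x,y](2)=0,
\]
so the level-one part g(1) is an abelian ideal of g 1 and g 1 is an extension of g by its adjoint representation.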
The Lie algebra g N embeds in gl((N + 1)M ) as a collection of matrices of diagonally repeating M × M blocks:
x 0 + x 1 (1) + · · · + x N (N ) ֒→
[ x 0    x 1    · · ·    x N     ]
[ 0      x 0    · · ·    x N−1   ]
[ · · ·                          ]
[ 0      0      · · ·    x 0     ] ,
for all x 0 , . . . , x N ∈ g ⊆ gl(M ). By convention, the superdiagonals of such matrices in gl((N + 1)M ) will be numbered from 0 to (N + 1)M − 1, with the principal diagonal as the 0th superdiagonal. For A ∈ gl((N + 1)M ), the trace along the ℓM th superdiagonal will be denoted by tr ℓ (A), so that
tr ℓ (x(i)) = (N + 1 − ℓ)δ iℓ tr(x),
for all x ∈ g and 0 ≤ i ≤ N . Note that tr 0 is the ordinary trace of a matrix in gl((N + 1)M ). When N > 0, the Lie algebra g N is never reductive, but nonetheless has a symmetric invariant bilinear form (−|−) N which is nondegenerate whenever g is semisimple:
(x(i)|y(j)) N = δ i+j,N tr(xy) = tr N (x(i)y(j)), for all x, y ∈ g and i, j = 0, . . . , N, where xy and x(i)y(j) are the associative products of M × M and (N + 1)M × (N + 1)M matrices, respectively. This observation was the starting point for our study of Takiff algebras, since it lets us identify them with their duals and explore Toda systems in a non-reductive setting. We write ν : g N → g * N for the induced isomorphism, where ν(x)(y) = (x|y) N for all x, y ∈ g N . More generally:

Lemma 2.1 [CO, Theorem 2.2] If (−, −) g is a nondegenerate symmetric invariant bilinear form on g and c = (c 0 , . . . , c N ) ∈ K N+1 , then (x(i), y(j)) c = c i+j (x, y) g , for all x, y ∈ g and 0 ≤ i, j ≤ N, defines a symmetric invariant bilinear form on g N .

Lemma 2.2 If g is simple and K is algebraically closed, then for any symmetric invariant bilinear form (−, −) : g N × g N → K, there exist c 0 , . . . , c N ∈ K so that (x(i), y(j)) = c i+j tr(xy) for all x, y ∈ g and 0 ≤ i, j ≤ N . For ℓ = 0, . . . , N , the forms ( | ) ℓ defined by
(x(i)|y(j)) ℓ = δ i+j,ℓ tr(xy) = (1/(N + 1 − ℓ)) tr ℓ (x(i)y(j))
are thus a basis for the vector space of all symmetric invariant bilinear forms on g N .
Proof Let 0 ≤ i, j ≤ N and define a bilinear map λ i : g × g → K by
λ i (x, y) = (x(0), y(i)),
for all x, y ∈ g. For all x, y, z ∈ g,
λ i (x, [y, z]) = (x(0), [y(0), z(i)]) = (y(0), [z(i), x(0)]) = (y(0), [z(0), x(i)]) = ([y(0), z(0)], x(i)) = λ i ([y, z], x),
and since [g, g] = g, this shows that λ i is symmetric. A similar easy calculation shows that λ i is also invariant. Finally, g is simple and K is algebraically closed, so every symmetric invariant bilinear form on g is a multiple of the trace form (itself a normalization of Killing form), and there exist c 0 , . . . , c N ∈ K such that λ i (x, y) = c i tr(xy) for all x, y ∈ g. This completes the proof since [g, g] = g and
([x, y](i), z(j)) = (x(0), [y(i), z(j)]) = λ i+j (x, [y, z]) = λ i+j ([x, y], z) = c i+j tr([x, y], z).
✷
The fact that the forms ( | ) ℓ are clearly symmetric will be used later in the proof of integrability in Section 3.3.
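As a quick numerical illustration of the block embedding and the traces tr ℓ (our own sketch for g = sl(2), M = 2, N = 1, using assumed NumPy code rather than anything from the paper):

import numpy as np

h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
Z = np.zeros((2, 2))

def embed(x0, x1):
    """Image of x0 + x1(1) in gl((N+1)M) = gl(4) as diagonally repeating blocks."""
    return np.block([[x0, x1], [Z, x0]])

def tr_block(A, ell, M=2, N=1):
    """Trace along the (ell*M)-th superdiagonal of A."""
    return sum(A[i, i + ell * M] for i in range((N + 1 - ell) * M))

E0, F1 = embed(e, Z), embed(Z, f)          # e(0) and f(1)
print(tr_block(E0 @ F1, 1))                # (e(0)|f(1))_1 = tr(e f) = 1, matching delta_{0+1,1} tr(ef)
print(tr_block(E0 @ F1, 0))                # ordinary trace tr_0 of the same product vanishes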
Takiff Toda systems and integrability
In this section, g will be a finite dimensional split simple Lie algebra with Cartan subalgebra h, root system Φ, and root spaces g α = {x ∈ g : [h, x] = α(h)x for all h ∈ h}, over the field K of real or complex numbers. We will give our proofs in the real context, as this is the usual environment for applications; analogous proofs hold in the complex setting. Fix a base ∆ = {α 1 , . . . , α s } of simple roots, and denote the corresponding sets of positive and negative roots by Φ + and Φ − , respectively. As in the previous section, we view g as a matrix Lie algebra by fixing a faithful representation of minimal dimension M . Let {h 1 , . . . , h s } be an orthonormal basis for h, with respect to the trace form on g. Choose nonzero root vectors e α ∈ g α and f α ∈ g −α for each positive root α, normalized so that tr(e α f α ) = 1.
Then [e α , f α ] = h α , where h α = Σ i α(h i )h i .
All Lie algebras, Lie groups, bilinear forms, tensor products, and spans will be taken over K. As usual, the integers will be denoted by Z, and the nonnegative integers by Z + .
Lie bialgebra structure
For each X ∈ g N ⊆ gl((N + 1)M ) and Y = Σ i a i ⊗ b i ∈ g N ⊗ g N , we use the physics notation X 1 = X ⊗ 1, X 2 = 1 ⊗ X ∈ gl((N + 1)M ) ⊗ gl((N + 1)M ), and Y 12 = Y , Y 21 = Σ i b i ⊗ a i ∈ g N ⊗ g N .
When the meaning is clear from the context, we will abuse notation and also write
Y 12 = Y ⊗ 1, Y 13 = Σ i a i ⊗ 1 ⊗ b i , and Y 23 = 1 ⊗ Y,
as elements of gl((N + 1)M ) ⊗3 . The block partial trace operators Tr ℓ,j are defined by taking the block trace tr ℓ on the jth tensor component:
Tr ℓ,1 (Y ) = Σ i tr ℓ (a i ) b i ,    Tr ℓ,2 (Y ) = Σ i tr ℓ (b i ) a i .
In a generalisation of the usual Toda r-matrix, we define
r 12 = (1/2) Σ_{α∈Φ+} Σ_{i=0}^{N} ( e α (i) ⊗ f α (N − i) − f α (N − i) ⊗ e α (i) ) ∈ g N ⊗ g N .    (3.1)
The 2-tensor r 12 defines a Rota-Baxter operator R : g N → g N by R(X) = Tr N,2 (r 12 X 2 ). Calculating directly,
R(h i (j)) = 0, R(e α (j)) = 1 2 e α (j), R(f α (j)) = − 1 2 f α (j),
for all α ∈ Φ + , 1 ≤ i ≤ s, and 0 ≤ j ≤ N . The resulting R-deformed bracket
[X, Y ] R = [RX, Y ] + [X, RY ]
satisfies the modified Yang-Baxter equation
[RX, RY ] − R([X, Y ] R ) = − 1 4 [X, Y ]
for all X, Y ∈ g N . It then follows that g N is a Lie bialgebra, equipped with brackets [−, −] and [−, −] R . See [BBT] or [CP] for more details on Lie bialgebras and the modified Yang-Baxter equation. We will write g N for the Lie algebra (g N , [−, −]) and
g R N for (g N , [−, −] R ). Explicitly, [h i (k), h j (ℓ)] R = 0 [h i (k), e α (ℓ)] R = 1 2 α(h i )e α (k + ℓ) [h i (k), f α (ℓ)] R = 1 2 α(h i )f α (k + ℓ) [e α (k), e β (ℓ)] R = [e α , e β ](k + ℓ) [e α (k), f β (ℓ)] R = 0 [f α (k), f β (ℓ)] R = −[f α , f β ](k + ℓ),
for all 1 ≤ i, j ≤ s, 0 ≤ k, ℓ ≤ N, and α, β ∈ Φ + . The Lie algebra g R N is thus solvable and has a weight space decomposition with respect to the adjoint action of h R = Span{h i (0) : 1 ≤ i ≤ s}:
(g R N ) α = Span{e α (i), f α (i) : 0 ≤ i ≤ N } for all α ∈ Φ + .
Poisson structure
The symmetric algebra S(g N ) of regular functions on the algebraic dual g * N of g N has natural Poisson brackets {−, −} and {−, −} R induced from the bialgebra structure of g N , defined by the Leibniz rule and relations
{x, y} = [x, y],    {x, y} R = [x, y] R ,    for all x, y ∈ g N .
(3.2)
Let G N and G R N be connected Lie groups with Lie algebras g N and g R N , respectively. To obtain a symplectic structure and extend known Toda systems, we will consider functions on certain G R N -coadjoint orbits
O λ = (Ad * R G R N )λ ⊂ g * N .
Such orbits will be identified with their preimages
O x λ = (Ad * R G R N )x λ = ν −1 (O λ ) ⊂ g N under the linear isomorphism ν : g N −→ g * N x −→ (x|−) N ,
where x λ = ν −1 (λ). We will refer to both (g * N , Ad * R ) and (g N , Ad * R ) as the coadjoint representation of G R N , where the coadjoint action on g N is given by (Ad * R g)x = ν −1 ((Ad * R g)ν(x)), for all g ∈ G R N and x ∈ g N . The definition of the coadjoint action ad * R of g R N on the vector space g N is made analogously. In contrast with the usual Poisson set-up, the form (−|−) N is not G R N -invariant, so the coadjoint action of g R N is not equivalent to its adjoint action under the linear isomorphism ν. Rather, for x, y, z ∈ g N and µ = ν(y) ∈ g * N , we have
(ad * R (x)µ)z = (y|[Rz, x]) N + (y|[z, Rx]) N = ([x, y]|Rz) N + ([Rx, y]|z) N .
As the 2-tensor r 12 is skew-symmetric, the operator R is skew-adjoint, and
([x, y]|Rz) N = (−R[x, y]|z) N .
The induced coadjoint action on
g N = ν −1 (g * N ) is thus (ad * R x)y = −R[x, y] + [Rx, y],
for all x, y ∈ g N . Explicitly, Proof (i) Indeed, when α ∈ Φ + and β ∈ ∆ in (3.3), the condition β − α ∈ Φ + is never satisfied, so V is stable under ad * R (g N ). It is thus a submodule of the coadjoint representation for both g R N and G R N . (ii) By Part (i), O ⊆ V . The fact that O is open follows from the inverse function theorem. In particular, the exponential is a local diffeomorphism between an open neighbourhood of 0 ∈ g R N and a neighbourhood U of the identity element 1 ∈ G R N . The coadjoint action of each g ∈ U is thus the exponential of the action of the corresponding element of g R N . The differential of
ad * R (g N )h i (ℓ) = 0 ad * R (h i (k))e β (ℓ) = − 1 2 β(h i )e β (k + ℓ) ad * R (h i (k))f β (ℓ) = − 1 2 β(h i )f β (k + ℓ) ad * R (e α (k))f β (ℓ) = 1 2 h α (k + ℓ) if α = β [e α , f β ](k + ℓ) if β − α ∈ Φ + 0 otherwise ad * R (f α (k))e β (ℓ) = 1 2 h α (k + ℓ) if α = β [e β , f α ](k + ℓ) if β − α ∈ Φ + 0 otherwise ad * R (e α (k))e β (ℓ) = ad * R (f α (k))f β (ℓ) = 0, (3.3) for all α, β ∈ Φ + , 1 ≤ i ≤ s, and 0 ≤ k, ℓ ≤ N. Proposition 3.4 Let V = Span{h i (k), e α (k) + f α (k) : α ∈ ∆, 1 ≤ i ≤ s, 0 ≤ k ≤ N } and x ′ = α∈∆ (e α (0) + f α (0)). (i) The space V is a G R N -submodule of the coadjoint representation. (ii) The coadjoint orbit O = Ad * R (G R N )x ′ isφ : G R N −→ V g −→ Ad * R (g)x ′ at 1 ∈ G R N is then the linear map ρ : g R N −→ V y −→ ad * R (y)x ′ , whose kernel is the annihilator g x ′ N of x ′ . By Formulas (3.3), we see that g x ′ N = Span{e α (k) − f α (k), e β (k), f β (k) : α ∈ ∆, β ∈ Φ \ ∆, 0 ≤ k ≤ N }, a vector space of dimension (N + 1) card(∆) + 2card(Φ \ ∆) = (N + 1)(dim g − 2s). The image of ρ is therefore of dimension dim(g N /g x ′ N ) = dim g N − dim g x ′ N = 2s(N + 1) = dim V,
and ρ is surjective. It now follows from the inverse function theorem that φ is an open map when restricted to a sufficiently small open neighbourhood U ′ of 1. The subset φ(
U ′ ) = Ad * R (U ′ )x ′ is thus open. The group G R N acts by diffeomorphisms on V , so Ad * R (g)φ(U ′ ) ⊂ V is open for all g ∈ G R N . Therefore, O x ′ = g∈G R N ad * R (g)φ(U ′ ) ⊂ V is also open. ✷ The Poisson structure {−, −} R on g * N induces a symplectic structure on O and thus on V , via the Kostant-Kirillov-Souriau 2-form ω λ (ad * R (y)x ′ , ad * R (z)x ′ ) = λ[y, z],
for λ = ν(x ′ ) and any y, z ∈ g R N . As usual, this form is well defined, nondegenerate, and closed.
Integrable systems
Recall that a (Liouville) integrable system is a symplectic manifold (M, ω) of dimension 2n, together with n independent (smooth or holomorphic) functions f 1 , . . . , f n that commute with each other and with a hamiltonian function H on M , relative to the Poisson bracket {f, g} = ω(ξ f , ξ g ), where df = ω(ξ f , −). Independent means that df 1 ∧ · · · ∧ df n = 0 on a dense open subset of M . Note that independence and algebraic independence are equivalent in contexts where f 1 , . . . , f n are polynomial functions.
Integrability for Toda systems associated to finite dimensional split simple Lie algebras was proved by Kostant [Kos], following earlier work by Flaschka and Hénon [Fla, Hen] in type A. Such integrable systems have a Lax presentation, where the Lax matrix L parametrizes (the dual of) a G R 0 -coadjoint orbit. Entries in L are thus dynamic (time-dependent) coefficient functions on the orbit. For finite dimensional simple Lie algebras, the eigenvalues of the Lax matrix, or equivalently, the traces of powers of L, supply the necessary conserved quantities f 1 , . . . , f n relative to the Toda hamiltonian H = 1 2 tr(L 2 ). In contrast with the split simple Lie algebra setting, Lax matrices for Takiff algebras no longer have enough eigenvalues to generate an integrable system. For example, when
g = sl(2), the natural Lax matrix parametrizing the G R N -coadjoint orbit O = O x ′ of Proposition 3.4 is L = x 0 x 1 · · · x N 0 x 0 · · · x N −1 . . . . . . . . . . . . 0 0 · · · x 0 , where x k is the 2 × 2 matrix 1 √ 2 y(k) b(k) b(k) − 1 √ 2 y(k) ∈ g, and y(k), b(k) : O → K are coefficient functions of basis elements h 1 (k) = 1 √ 2 h α (k) and e α (k) + f α (k), respectively, in the expressions N i=0 y(k)h 1 (k) + b(k)(e α (k) + f α (k)) ∈ O, where ∆ = {α}. Then tr(L n ) = 1 2 y(0) 2 + b(0) 2 n/2 if n is even 0 if n is odd. (3.5)
The differentials of the functions tr(L n ) are thus linearly dependent everywhere on the 2(N + 1)-dimensional symplectic manifold O.
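This degeneracy can be checked numerically in the smallest case; the following sketch (our own, for g = sl(2) and N = 1 with arbitrary sample values) shows two Lax matrices with the same level-zero data that ordinary traces fail to separate, while the block trace tr_1 introduced below does:

import numpy as np

def lax(y0, b0, y1, b1):
    x = lambda y, b: np.array([[y / np.sqrt(2.), b], [b, -y / np.sqrt(2.)]])
    Z = np.zeros((2, 2))
    return np.block([[x(y0, b0), x(y1, b1)], [Z, x(y0, b0)]])

def tr_block(A, ell, M=2, N=1):
    return sum(A[i, i + ell * M] for i in range((N + 1 - ell) * M))

L1 = lax(1., 2., 3., 4.)
L2 = lax(1., 2., -5., 7.)                           # same level-0 data, different level-1 data
print(np.trace(L1 @ L1), np.trace(L2 @ L2))         # equal: ordinary traces are degenerate
print(tr_block(L1 @ L1, 1), tr_block(L2 @ L2, 1))   # different: tr_1(L^2) separates them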
The solution to the paucity of independent functions in (3.5) is to enlarge the definition of the trace to include the block traces tr ℓ . With this in mind, for any finite dimensional split simple Lie algebra g, let
L = x 0 x 1 · · · x N 0 x 0 · · · x N −1 . . . . . . . . . . . . 0 0 · · · x 0 , where x k is the submatrix x k = s i=1 y i (k)h i + α∈∆ b α (k)(e α + f α ) ∈ g ⊂ gl(M ), and y i (k), b α (k) are the corresponding coefficient functions on O = O x ′ .
Using the definition of the R-bracket (3.2) and calculating directly on O, we see that
{y i (k), y j (ℓ)} R = {b α (k), b β (ℓ)} R = 0 (3.6) {y i (k), b α (ℓ)} R = 1 2 α(h i )b α (k + ℓ − N ), (3.7) for all 1 ≤ i, j ≤ s, 0 ≤ k, ℓ ≤ N , and α, β ∈ ∆. Theorem 3.8 Let f kℓ : O → K be the functions f kℓ (L) = tr k (L ℓ ) for all L ∈ O, 0 ≤ k ≤ N , and ℓ ∈ Z + .
(i) The functions f kℓ commute with respect to the Poisson bracket {−, −} R , for 0 ≤ k ≤ N and ℓ ∈ Z + .
(ii) If g is of type D n for n ≥ 4, let E(g) = {1, 3, 5, . . . , 2n − 1}. Otherwise, let E(g) be the set of exponents 1 of g. Then the functions f kℓ are independent for 0 ≤ k ≤ N and ℓ − 1 ∈ E(g).
(iii) That is, I N (g) = {f kℓ : 0 ≤ k ≤ N, ℓ − 1 ∈ E(g)} is an integrable system on the orbit O, generalizing the Toda systems for finite dimensional simple Lie algebras, with hamiltonian H = (1/2) tr N (L 2 ).
Proof (i) A straightforward calculation shows that the r-matrix (3.1) controls the Poisson bracket between elements in the Lax matrix:
[r 12 , L 1 + L 2 ] = {L 1 , L 2 } R , where L 1 = L ⊗ 1, L 2 = 1 ⊗ L, {L 1 , L 2 } R = i,j,k,ℓ {a ij , a kℓ } R E ij ⊗ E kℓ , and L = i,j a ij E ij ∈ O,
viewed as an element of gl((N + 1)M ). Expanding with the Leibniz 1 The exponents of g are the positive integers ℓ for which the number of roots of height ℓ is strictly greater than the number of roots of height ℓ + 1. The Poisson centre S(g) g is a polynomial algebra in algebraically independent homogeneoous generators of degree m1 + 1, . . . , ms + 1, where m1, . . . , ms are the exponents of g. See [Bou,chapitre VIII §8.3] for details.
rule, we have
{L m 1 , L n 2 } R = m−1 i=0 n−1 j=0 L m−i−1 1 L n−j−1 2 {L 1 , L 2 } R L i 1 L j 2 = i j L m−i−1 1 L n−j−1 2 [r 12 , L 1 + L 2 ]L i 1 L j 2 = i j [L m−i−1 1 L n−j−1 2 r 12 L i 1 L j 2 , L 1 + L 2 ],
using the fact that L 1 and L 2 commute. The bilinear forms (−|−) k and (−|−) ℓ introduced in Section 2 are symmetric, so
tr k Tr ℓ,1 [X ⊗ Y, Z ⊗ W ] = tr k Tr ℓ,1 (XZ ⊗ Y W − ZX ⊗ W Y ) = tr k ((X|Z) ℓ (Y W − W Y )) = (X|Z) ℓ ((Y |W ) k − (W |Y ) k ) = 0, for all X, Y, Z, W ∈ gl((N + 1)M ). In particular, {f km (L), f ℓn (L)} R = {tr k (L m ), tr ℓ (L n )} R = tr k Tr ℓ,1 {L m 1 , L n 2 } R = i j tr k Tr ℓ,1 [L m−i−1 1 L n−j−1 2 r 12 L i 1 L j 2 , L 1 + L 2 ] = 0,
for all L ∈ O, 0 ≤ k, ℓ ≤ N , and m, n ∈ Z + .
(ii) The set {tr(X ℓ+1 ) : ℓ is an exponent of g} is an algebraically independent set of generators for the Poisson centre S(g) g outside of type D n , n ≥ 4, cf. [Mol,Cor 2.4]. The Poisson centre in type D n is generated by the trace functions tr(X ℓ+1 ) for ℓ = 1, 3, . . . , 2n − 3, together with the Pfaffian Pf(n). This set of polynomials is again algebraically independent, as is the set of trace functions {tr(X ℓ+1 ) : ℓ = 1, 3, . . . , 2n − 1}, which generates the invariant subalgebra under the full orthogonal group:
K[tr(X ℓ+1 ) : ℓ = 1, 3, . . . , 2n − 1] = K[Pf(n) 2 , tr(X ℓ+1 ) : ℓ = 1, 3, . . . , 2n − 3] = S(so(2n)) O(2n) .
The traditional proofs of algebraic independence use algebraic independence of the trace functions restricted to the Cartan subalgebra h. Since h ⊂ V = Span O, the polynomials tr(X ℓ+1 ) are also algebraically independent as functions on O = Ad * G R N ·x ′ , for ℓ ∈ E(g). In particular, this proves (ii) when N = 0. Now assume that the differentials of the functions f kℓ for 0 ≤ k ≤ m and ℓ−1 ∈ E(g) are linearly independent almost everywhere on O, for all nonnegative integers m strictly less than N . Suppose there is a nontrivial dependence relation k,ℓ p kℓ df kℓ = 0 (3.9) at every point in a nonempty Euclidean open neighbourhood U ⊆ V , where p kℓ is a (not necessarily polynomial) function on U . If p N ℓ vanishes on U for all ℓ, then {df kℓ : 0 ≤ k ≤ N − 1, ℓ − 1 ∈ E(g)} is linearly dependent on U , contradicting the induction hypothesis. Without loss of generality, we can thus assume that the functions p N ℓ are not all identically zero on U . Note that df kℓ (X) = ℓ tr k (X ℓ−1 dX), (3.10)
for each X = X 0 + X 1 u + · · · + X N u N ∈ V , and tr k (X ℓ−1 dX) can be expressed as a linear combination of terms of the form
tr(X σ 0 0 · · · X σ N N dX j ), (3.11) where N i=0 σ i = ℓ − 1 and N i=0 iσ i = k − j.
The matrix X N ∈ g is independent of the matrices X 0 , X 1 , . . . , X N −1 determining df kℓ for k < N , so X N occurs in the expressions (3.11) only when k = N . In this case, (3.11) simplifies to tr(X ℓ−1 0 dX N ), and dX N is clearly linearly independent from dX 0 , . . . , dX N −1 . By (3.9) and (3.10), this means that the smaller expression
ℓ−1∈E(g) ℓ p N ℓ (X)tr(X ℓ−1 0 dX N ) = 0, (3.12)
for all X ∈ U . In terms of (i, j)-coordinates x ij of the matrix X N ∈ g ⊆ gl(M ), the coefficients of the differentials dx ij are all zero in expression (3.12). But then we also have
ℓ−1∈E(g) p N ℓ (X)df 0ℓ = ℓ−1∈E(g) ℓ p N ℓ (X) tr(X ℓ−1 0 dX 0 ) = 0
on U , since the component X 0 of X is the only component contributing to the trace tr 0 (X ℓ−1 dX). This contradicts the fact that {df 0ℓ : ℓ − 1 ∈ E(g)} is linearly independent almost everywhere on V , so (ii) holds by induction.
(iii) This is an immediate consequence of (i) and (ii). ✷
Equations of motion and their solution
In this section, we derive equations of motion associated to the hamiltonian systems of Section 3, in terms of position and momentum coordinates q i (n), p i (n) on coadjoint orbits. The equations can be solved by combining standard Lie group factorisation methods with new transformations called jet transformations. These techniques are illustrated with explicit examples, including extensions of known soliton solutions for the rank 1 infinite lattice. As the material concerns time evolution of hamiltonian systems, we work over the field R of real numbers.
Darboux coordinates and equations of motion
In the notation of Section 3, the Hamiltonian H = 1 2 tr N (L 2 ) is given by
H = 1 2 N ℓ=0 s i=1 y i (ℓ)y i (N − ℓ) + N ℓ=0 α∈∆ b α (ℓ)b α (N − ℓ), (4.1) where L is the Lax matrix L = N ℓ=0 s i=1 y i (ℓ)h i (ℓ) + α∈∆ b α (ℓ)(e α (ℓ) + f α (ℓ)) (4.2) parametrizing the G R N -coadjoint orbit O = O x ′ .
We now introduce a system of (global) position and momentum coordinates q i (n), p i (n) on O in terms of the coefficient functions y i (ℓ), b α (ℓ).
b α (ℓ)v ℓ = exp 1 2 N n=0 s i=1 α(h i )q i (n)v n ∈ F(O) ⊗ R R[v]/ v N +1 ,
for α ∈ ∆, uniquely determine a family of functions q i (n) ∈ F(O).
Proof Let α be a simple root, and define Q n = Q(α, n)
= s i=1 α(h i )q i (n). Writing N n=0 c n v n ℓ for the coefficient c ℓ of v ℓ in any formal series N n=0 c n v n ∈ F(O) ⊗ R R[v]/ v N +1 , we see that b α (ℓ) = exp N n=0 Q n 2 v n ℓ = N n=0 exp( Q n 2 v n ) ℓ =e Q 0 /2 σ⊢ℓ Q σ 1 1 Q σ 2 2 · · · Q σ N N 2 |σ| σ 1 !σ 2 ! · · · σ N ! = e Q 0 /2 σ⊢ℓ Q σ 1 1 Q σ 2 2 · · · Q σ ℓ ℓ 2 |σ| σ 1 !σ 2 ! · · · σ ℓ ! ,(4.4)
where |σ| = σ 1 + · · · + σ N , and the final two sums are taken over all partitions σ = (σ 1 , . . . ,
σ N ) ∈ Z N + with N i=1 iσ i = ℓ. In particular, b α (0) = e Q 0 /2 = exp 1 2 s i=1 α(h i )q i (0) ,
and for the base ∆ of simple roots α 1 , . . . , α s , we have
α 1 (h 1 ) α 1 (h 2 ) · · · α 1 (h s ) α 2 (h 1 ) α 2 (h 2 ) · · · α 2 (h s ) . . . . . . . . . . . . α s (h 1 ) α s (h 2 ) · · · α s (h s ) q 1 (0) q 2 (0) . . . q s (0) = 2 log b α 1 (0) 2 log b α 2 (0) . . . 2 log b αs (0) .
Note that the coefficient functions b α i (0) are positive everywhere on O, as can be seen by exponentiating Formulas (3.3) for the R-coadjoint action. The matrix (α i (h j )) ij is nonsingular, since {h 1 , . . . , h s } is a basis of h and {α 1 , . . . , α s } is a basis of h * . The functions q i (0) are thus uniquely determined. We now proceed by induction, assuming that q i (k) ∈ F(O) is uniquely determined for all k < n. By (4.4),
b α (n) = e Q 0 /2 σ⊢n Q σ 1 1 Q σ 2 2 · · · Q σn n 2 |σ| σ 1 !σ 2 ! · · · σ n ! ,
where Q 0 , . . . , Q n−1 depend only on α and q i (k) for 1 ≤ i ≤ s and 0 ≤ k < n. The only summand depending on q i (n) is thus the term corresponding to the partition with σ 1 = σ 2 = · · · = σ n−1 = 0 and σ n = 1. That is,
b α (n) = c α + d α Q n ,
for some functions c α , d α depending only on α and q i (k) for 1 ≤ i ≤ s and 0 ≤ k < n. Note that d α = 1 2 e Q 0 /2 is never zero. By varying the simple root α, we now have another system of s linear equations
s i=1 α j (h i )q i (n) = b α j (n) − c α j d α j
in s unknowns q 1 (n), . . . , q s (n), with a unique solution since the matrix (α j (h i )) ij is nonsingular. ✷ Proposition 4.5 Let p i (n) = y i (n), and define q i (n) as in Proposition 4.3. The set
S = {p i (n), q i (n) : 1 ≤ i ≤ s, 0 ≤ n ≤ N } is then a system of independent generators of the Poisson algebra (F(O x ′ ), {−, −} R ) with brackets {p i (m), p j (n)} R = {q i (m), q j (n)} R = 0 (4.6) {p i (m), q j (n)} R = δ ij δ m+n,N ,(4.7)
for all 1 ≤ i, j ≤ s and 0 ≤ m, n ≤ N .
Proof By Proposition 3.4, the coadjoint orbit O = O x ′ is an open subset of a 2s(N +1)dimensional vector space V . The algebra F(O) is thus generated by the 2s(N + 1) coordinate functions y i (n), b α (n) on O. Since these coordinate functions are also in the subalgebra generated by S, the set S clearly generates F(O). Its elements are necessarily independent (i.e. have linearly independent differentials), since the number of elements in S is exactly the dimension of V . The Poisson brackets of members of any generating set of F(O) obviously determine the Poisson brackets on any other generating set, so to complete the proof, it is enough to check that, if the members of S satisfy (4.6) and (4.7), then they satisfy the following relations imposed by Equations (3.6) and (3.7):
{p i (m), p j (n)} R = 0, (4.8) exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ m , exp 1 2 N ℓ=0 s k=1 β(h k )q k (ℓ)v ℓ n R = 0, (4.9) p i (m), exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ n R = 1 2 α(h i ) exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ n+m−N .
(4.10)
The relations (4.8) and (4.9) are obvious from (4.6). To see that (4.10) also holds, note that if S satisfies (4.6) and (4.7), then {p i (m), −} R acts by ∂ ∂q i (N −m) on functions of the position variables q j (ℓ), so
p i (m), exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ n R = ∂ ∂q i (N − m) exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ n = 1 2 α(h i ) exp 1 2 N ℓ=0 s k=1 α(h k )q k (ℓ)v ℓ n+m−N . ✷
The expression (4.1) for the Hamiltonian can be rewritten as
H = 1 2 s i=1 (y i (v) 2 ) N + α∈∆ (b α (v) 2 ) N ,
where y i (v) and b α (v) are the generating functions
y i (v) = N n=0 y i (n)v n and b α (v) = N n=0 b α (n)v n , respectively, in F(O) ⊗ R[v]/ v N +1 .
In terms of the Darboux coordinates {p i (n), q i (n)} introduced in Propositions 4.3 and 4.5, the Hamiltonian becomes:
H = 1 2 s i=1 N j=0 p i (j)p i (N − j) + α∈∆ exp N n=0 s i=1 α(h i )q i (n)v n N = 1 2 s i=1 N j=0 p i (j)p i (N − j) + α∈∆ e Q(α,0) σ⊢N Q σ α σ! ,
where we use multi-index notation Q_α^σ = Q(α,1)^{σ_1} · · · Q(α,N)^{σ_N} and σ! = σ_1! · · · σ_N! for partitions σ = (σ_1, . . . , σ_N) ∈ Z_+^N of the form Σ_{i=1}^{N} i σ_i = N, with Q(α, n) defined as Σ_{i=1}^{s} α(h_i) q_i(n). Writing •q_i(n) and •p_i(n) for the time derivatives dq_i(n)/dt and dp_i(n)/dt, the Hamiltonian system (O, H) has equations of motion
•q_i(n) = {H, q_i(n)}_R = { (1/2) Σ_{ℓ=0}^{N} Σ_{k=1}^{s} p_k(ℓ) p_k(N − ℓ), q_i(n) }_R = p_i(n)   (4.11)
•p_i(n) = {H, p_i(n)}_R = ( { Σ_{α∈∆} b_α(v)^2 , y_i(n) }_R )_N = −Σ_{α∈∆} α(h_i) (b_α(v)^2)_n = −Σ_{α∈∆} α(h_i) ( exp( Σ_{k=0}^{N} Q(α, k) v^k ) )_n = −Σ_{α∈∆} α(h_i) e^{Q(α,0)} Σ_{σ⊢n} Q_α^σ / σ! = −Σ_{α∈∆} α(h_i) exp( Σ_{j=1}^{s} α(h_j) q_j(0) ) Σ_{σ⊢n} Π_{ℓ=1}^{n} (1/σ_ℓ!) ( Σ_{k=1}^{s} α(h_k) q_k(ℓ) )^{σ_ℓ}   (4.12)
for 1 ≤ i ≤ s and 0 ≤ n ≤ N . For example, if N = 0, we recover the classical open Toda evolution equations introduced by Toda in Type A, and their extensions to other root systems [Tod, Bog, Kos]:
• q i (0) = p i (0) (4.13) • p i (0) = − α∈∆ α(h i ) exp s j=1 α(h j )q j (0) .
(4.14)
When N = 2, we have a system of 3s particles in a 6s-dimensional phase space:
• q i (n) = p i (n), for n = 0, 1, 2
• p i (0) = − α∈∆ α(h i ) exp s j=1 α(h j )q j (0) • p i (1) = − α∈∆ α(h i ) exp s j=1 α(h j )q j (0) s k=1 α(h k )q k (1) • p i (2) = − α∈∆ α(h i ) exp s j=1 α(h j )q j (0) 1 2 s k=1 α(h k )q k (1) 2 + s i=1 α(h i )q i (2) .
The equations of motion (4.11) and (4.12) remain well defined in more general settings, even those without access to Liouville integrable systems and hamiltonian functions. All that is needed is an appropriate choice of root system and Cartan elements h j to determine the expression in the last line of (4.12). For example, Toda [Tod] studied equations associated to an infinite lattice:
• q i = p i • p i = −e q i −q i+1 + e q i−1 −q i , for i ∈ Z.
(4.15)
These equations correspond to the locally finite Lie algebra gl(∞) of doubly infinite matrices with only finitely many nonzero entries, where {h j : j ∈ Z} is an orthonormal basis of the diagonal matrices in gl(∞), and ∆ = {ǫ i − ǫ i+1 : i ∈ Z} is a set of simple roots with ǫ i (h j ) = δ ij . Inserting this data into (4.12) gives equations for Takiff algebras of type A ∞ , whose solution will be discussed in Example 4.26:
• q i (n) = p i (n) • p i (n) = − exp q i (0) − q i+1 (0) σ⊢n n ℓ=1 1 σ ℓ ! q i (ℓ) − q i+1 (ℓ) σ ℓ + exp q i−1 (0) − q i (0) σ⊢n n ℓ=1 1 σ ℓ ! q i−1 (ℓ) − q i (ℓ) σ ℓ .
(4.16)
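Equations (4.15) and their Takiff extensions (4.16) are straightforward to experiment with numerically. The following minimal sketch (Python with numpy; not part of the original text) integrates the finite open chain obtained from (4.15) by dropping the missing neighbours at the two ends (one of the finite versions alluded to in Example 4.26 below), and prints the energy H = (1/2) Σ p_i^2 + Σ e^{q_i − q_{i+1}} before and after integration to illustrate conservation up to integration error.

```python
import numpy as np

def rhs(state, M):
    """Open finite Toda chain: dq_i/dt = p_i, dp_i/dt = -e^{q_i-q_{i+1}} + e^{q_{i-1}-q_i},
    with the exponentials involving missing neighbours at the ends simply omitted."""
    q, p = state[:M], state[M:]
    dq = p.copy()
    dp = np.zeros(M)
    for i in range(M):
        if i + 1 < M:
            dp[i] -= np.exp(q[i] - q[i + 1])
        if i > 0:
            dp[i] += np.exp(q[i - 1] - q[i])
    return np.concatenate([dq, dp])

def rk4(state, dt, steps, M):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(state, M)
        k2 = rhs(state + 0.5 * dt * k1, M)
        k3 = rhs(state + 0.5 * dt * k2, M)
        k4 = rhs(state + dt * k3, M)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

def energy(state, M):
    q, p = state[:M], state[M:]
    return 0.5 * np.sum(p ** 2) + np.sum(np.exp(q[:-1] - q[1:]))

M = 4
state0 = np.concatenate([np.array([0.0, 0.5, -0.3, 0.2]), np.zeros(M)])
state1 = rk4(state0, dt=1e-3, steps=5000, M=M)
print(energy(state0, M), energy(state1, M))
```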
Exact solutions by factorisation
The equations of motion (4.11) and (4.12) are equivalent to a Lax presentation
•L = [M, L],   (4.17)
where L = L(t) is the Lax matrix (4.2) and M = (1/2) Σ_{j=0}^{N} Σ_{α∈∆} b_α(j) (e_α(j) − f_α(j)).
This can be verified directly by changing the matrix entries in (4.17) to the Darboux coordinates introduced in Proposition 4.5 and comparing matrix entries on the two sides of Equation (4.17). The Lax framework lets us exactly solve the Hamiltonian system using factorisation in Lie groups associated to truncated current algebras. The strategy consists of expressing the exponential exp(−t dH(L(0)) = exp(−t L(0)) as a product of certain integral curves θ − (t) −1 and θ + (t) on Lie subgroups of G N , where L(0) is the Lax matrix at time t = 0 and dH is the differential of the Hamiltonian function H ∈ F(O). The adjoint action of θ + (t) then gives solutions to the equations of motion by comparing entries of the matrices in the expression
L(t) = θ_+(t) L(0) θ_+(t)^{-1}.
We illustrate this method with exact solutions for the extremal case (minimal root lattice) when g = sl_2(R), deriving solutions in terms of Darboux coordinates for the 3-body problem associated with N = 2, for certain initial conditions. To simplify notation, we write q_n for the (rescaled) position variable (1/√2) q_1(n) for n = 0, 1, . . . , N. Similarly, p_n will be the momentum (1/√2) p_1(n), where all masses are taken to be 1. When we wish to emphasize evolution in terms of time t, we write q_n(t) and p_n(t) for q_n and p_n, respectively. Note that in this section, q_n(0) and p_n(0) will denote the initial position and initial velocity of the nth particle.
In the case of g = sl 2 (R), the Lax matrix L = L(t) may be viewed as the 2 × 2 matrix
L(t) = y(v, t) b(v, t) b(v, t) −y(v, t) , (4.18) where y(v, t) = N n=0 p n (t)v n and b(v, t) = exp N n=0 q n (t)v n , in which the formal variable v is identified with its image in R[v]/ v N +1 .
Diagonalizing the matrix −t L(0), we see that
P −ty −tb −tb ty P −1 = −tβ 0 0 tβ , where y = y(v, 0), b = b(v, 0), β is a square root of y 2 + b 2 , P = y + β b y − β b and P −1 = 1 2bβ b −b β − y y + β ,
noting that β, 1 β , and 1 2bβ always exist by Lemma A.1, since the coefficients of v 0 in the expressions for y 2 + b 2 and b are always positive, and the coefficients of v 0 in β and 2bβ are thus always nonzero. Without loss of generality, we choose β so that its constant term is the positive square root p 0 (0) 2 + e 2q 0 (0) . Therefore,
exp(−t L(0)) = exp P −1 −tβ 0 0 tβ P = P −1 e −tβ 0 0 e tβ P = 1 β β cosh tβ − y sinh tβ −b sinh tβ −b sinh tβ β cosh tβ + y sinh tβ ,
where the hyperbolic trigonometric functions are defined as usual, with all computations taking place in R [v]/ v N +1 :
sinh tβ = 1 2 (e tβ − e −tβ )
cosh tβ = 1 2 (e tβ + e −tβ ).
Let A = cosh tβ − y β sinh tβ, B = −b β sinh tβ, and C = cosh tβ + y β sinh tβ. Then √ A and
1 √ A exist by Lemma A.1, so exp(−t L(0)) = A B B C = θ − (t) −1 θ + (t), where θ + (t) = √ A B/ √ A 0 1/ √ A and θ − (t) = 1/ √ A 0 −B/ √ A √ A .
The curves θ + (t) and θ − (t) belong to the Lie subgroups corresponding to the standard and opposite Borels of G N = SL 2 (R[u]/ u N +1 ), respectively. Their Cartan components
√ A 0 0 1/ √ A and 1/ √ A 0 0 √ A
are inverse to one another. By general principles of the Lax formalism, conjugation of L(0) by θ + (t) now gives a Lax expression for solutions to the equations of motion
L(t) = θ + (t)L(0)θ + (t) −1 .
Calculating directly, we see that
y(v, t) b(v, t) b(v, t) −y(v, t) = 1 A Ay + Bb b b −Ay − Bb , so y(v, t) = y − b 2 sinh tβ β cosh tβ − y sinh tβ (4.19) b(v, t) = bβ β cosh tβ − y sinh tβ .
(4.20)
Equations (4.19) and (4.20) give solutions to the equations of motion (4.11) and (4.12) when g = sl 2 (R) and allow explicit calculation of the Darboux coordinates q n (t), p n (t) with respect to any family of initial conditions {q n (0), p n (0)} 0≤n≤N . This calculation is straightforward, but rather onerous for all but the smallest values of n.
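All of the operations used above take place in the truncated polynomial ring R[v]/⟨v^{N+1}⟩, so the factorised solution is easy to evaluate on a computer. The following sketch (Python with numpy; not part of the original text) implements truncated multiplication, inversion, exponential, logarithm and square root (the latter exists by Lemma A.1), and then evaluates (4.19) and (4.20); the final assertion compares the output, at zero initial velocity, with the closed form (4.22) derived below.

```python
import numpy as np

N = 2  # truncation order: all computations take place in R[v]/(v^(N+1))

def tmul(a, b):
    """Product of two truncated series given by coefficient arrays of length N+1."""
    c = np.zeros(N + 1)
    for i in range(N + 1):
        c[i] = sum(a[j] * b[i - j] for j in range(i + 1))
    return c

def tinv(a):
    """Inverse of a truncated series with nonzero constant term."""
    c = np.zeros(N + 1)
    c[0] = 1.0 / a[0]
    for n in range(1, N + 1):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1)) / a[0]
    return c

def texp(a):
    """Exponential: split off the constant term, the remaining part is nilpotent."""
    nil = np.array(a, dtype=float); nil[0] = 0.0
    out = np.zeros(N + 1); out[0] = 1.0
    term = out.copy()
    for k in range(1, N + 1):
        term = tmul(term, nil) / k
        out = out + term
    return np.exp(a[0]) * out

def tlog(a):
    """Logarithm of a truncated series with positive constant term."""
    u = np.array(a, dtype=float) / a[0]; u[0] = 0.0
    out = np.zeros(N + 1); out[0] = np.log(a[0])
    term = np.zeros(N + 1); term[0] = 1.0
    for k in range(1, N + 1):
        term = tmul(term, u)
        out = out + ((-1) ** (k + 1)) * term / k
    return out

def tsqrt(a):
    """Square root with positive constant term (cf. Lemma A.1)."""
    c = np.zeros(N + 1)
    c[0] = np.sqrt(a[0])
    for n in range(1, N + 1):
        c[n] = (a[n] - sum(c[k] * c[n - k] for k in range(1, n))) / (2 * c[0])
    return c

def lax_flow(q_init, p_init, t):
    """Coefficients of q(v,t) and y(v,t) for the sl_2 system, via (4.19) and (4.20)."""
    y = np.array(p_init, dtype=float)
    b = texp(np.array(q_init, dtype=float))
    beta = tsqrt(tmul(y, y) + tmul(b, b))
    ep, em = texp(t * beta), texp(-t * beta)
    ch, sh = (ep + em) / 2, (ep - em) / 2
    den = tinv(tmul(beta, ch) - tmul(y, sh))
    y_t = y - tmul(tmul(tmul(b, b), sh), den)
    b_t = tmul(tmul(b, beta), den)
    return tlog(b_t), y_t

# zero initial velocities: compare with the closed form (4.22)
q0, t = [0.3, -0.2, 0.5], 1.7
qt, pt = lax_flow(q0, [0.0, 0.0, 0.0], t)
a = np.exp(q0[0]); th, se = np.tanh(t * a), 1.0 / np.cosh(t * a)
expected = [q0[0] - np.log(np.cosh(t * a)),
            q0[1] - t * q0[1] * a * th,
            q0[2] - 0.5 * t * (q0[1] ** 2 + 2 * q0[2]) * a * th
                  - 0.5 * t ** 2 * q0[1] ** 2 * a ** 2 * se ** 2]
assert np.allclose(qt, expected)
```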
We illustrate the process for the 3-body problem associated with g = sl_2(R) and N = 2, for initial velocities p_0(0) = p_1(0) = p_2(0) = 0 and arbitrary initial positions q_0(0), q_1(0), and q_2(0). In this case, the Hamiltonian is H = (p_1^2 + 2 p_0 p_2) + 2 (q_1^2 + q_2) e^{2 q_0}, giving equations of motion
•q_n = p_n for n = 0, 1, 2,   •p_0 = −e^{2 q_0},   •p_1 = −2 q_1 e^{2 q_0},   •p_2 = −2 (q_1^2 + q_2) e^{2 q_0},   (4.21)
keeping in mind that {q_m, q_n}_R = {p_m, p_n}_R = 0 and {p_m, q_n}_R = (1/2) δ_{m+n,2}. Under our initial conditions, Equations (4.19) and (4.20) simplify to
y(v, t) = −b tanh tb,   b(v, t) = b sech tb.
Expanding in Darboux coordinates,
b(v, t) = exp( Σ_{n=0}^{2} q_n(t) v^n ) = e^{q_0(t)} + q_1(t) e^{q_0(t)} v + (1/2)(q_1(t)^2 + 2 q_2(t)) e^{q_0(t)} v^2
and
sech tb = sech(t e^{q_0(0)}) − t q_1(0) e^{q_0(0)} sech(t e^{q_0(0)}) tanh(t e^{q_0(0)}) v + [ (1/2) t^2 q_1(0)^2 e^{2 q_0(0)} (2 tanh^2(t e^{q_0(0)}) − 1) − (1/2) t (q_1(0)^2 + 2 q_2(0)) e^{q_0(0)} tanh(t e^{q_0(0)}) ] sech(t e^{q_0(0)}) v^2,
from which it easily follows that
q_0(t) = q_0(0) − log(cosh(t e^{q_0(0)}))
q_1(t) = q_1(0) − t q_1(0) e^{q_0(0)} tanh(t e^{q_0(0)})
q_2(t) = q_2(0) − (t/2)(q_1(0)^2 + 2 q_2(0)) e^{q_0(0)} tanh(t e^{q_0(0)}) − (t^2/2) q_1(0)^2 e^{2 q_0(0)} sech^2(t e^{q_0(0)})   (4.22)
is a (global) solution to the equations of motion (4.21).
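As a cross-check (not part of the original text), the closed form (4.22) can be verified against the equations of motion (4.21) with a few lines of sympy; the residuals are tested at a numerical point, which avoids relying on symbolic simplification.

```python
import sympy as sp

t, x0, x1, x2 = sp.symbols('t x0 x1 x2', real=True)
a = sp.exp(x0)  # x_n stands for q_n(0)

q0 = x0 - sp.log(sp.cosh(t * a))
q1 = x1 - t * x1 * a * sp.tanh(t * a)
q2 = x2 - sp.Rational(1, 2) * t * (x1 ** 2 + 2 * x2) * a * sp.tanh(t * a) \
        - sp.Rational(1, 2) * t ** 2 * x1 ** 2 * a ** 2 * sp.sech(t * a) ** 2

residuals = [sp.diff(q0, t, 2) + sp.exp(2 * q0),
             sp.diff(q1, t, 2) + 2 * q1 * sp.exp(2 * q0),
             sp.diff(q2, t, 2) + 2 * (q1 ** 2 + q2) * sp.exp(2 * q0)]

point = {t: 0.73, x0: 0.2, x1: -0.4, x2: 0.9}
assert all(abs(sp.N(r.subs(point))) < 1e-9 for r in residuals)
```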
Jet transformations and general solutions
The standard Lie group factorisation methods described in Section 4.2 are elegant but computationally unwieldy in the setting of Takiff algebras g N , without imposing particular initial conditions and restricting to small values of N , as illustrated in the example above. In this section, we introduce differential operators called jet transformations to generate solutions for the equations of motion attached to Takiff algebras for arbitrary N , whenever there are known solutions available for g. As solutions for g can always be obtained [Kos], jet transformations provide solutions for all Takiff algebras g N .
Specializing to type A 1 , we obtain general solutions for N = 2, recovering (4.22) after imposing the same initial conditions. Applied to the rank 1 infinite lattice Toda model, jet transformations extend KdV soliton solutions to the larger Takiff systems.
Let g be a split simple, affine, or locally finite Lie algebra, with root system Φ and base ∆ of simple roots α i , i ∈ I. Let x in , z in ∈ R for i ∈ I and n > 0, and let C = C ∞ (t, x i0 , z i0 : i ∈ I) be the associative R-algebra of smooth functions in independent real variables t, x i0 , and z i0 , for i ∈ I. Let δ : C N → C N be the derivation
δ = Σ_{n=1}^{N} Σ_{i∈I} ( x_{in} ∂/∂x_{i0} + z_{in} ∂/∂z_{i0} ) ⊗ v^n,   where C_N = C ⊗_R R[v]/⟨v^{N+1}⟩
is the associative algebra of truncated currents, for some N ≥ 0. When there is no confusion, we will regard C = C ⊗ R R as a subalgebra of C N and suppress the tensor symbol from elements and transformations of C N . For n = 0, . . . , N , the linear transformation D n = (exp δ) n : C → C will be called the jet transformation of order n. Explicitly,
D n = σ⊢n n ℓ=1 1 σ ℓ ! i∈I x iℓ ∂ ∂x i0 + z iℓ ∂ ∂z i0 σ ℓ .
(4.23)
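The operators D_n are easy to realise symbolically: apply the derivation δ repeatedly and read off the v^n-coefficient. Below is a small sympy sketch (not part of the original text, written for a single index i ∈ I); the final check compares D_2(e^{x_0}) with the value predicted directly by (4.23).

```python
import sympy as sp

def jet_transform(f, n, x, z):
    """D_n(f): the coefficient of v**n in (exp delta)(f), where
    delta = sum_{k>=1} (x[k] d/dx[0] + z[k] d/dz[0]) v**k."""
    v = sp.Symbol('v')
    K = len(x) - 1
    def delta(g):
        return sp.expand(sum(v ** k * (x[k] * sp.diff(g, x[0]) + z[k] * sp.diff(g, z[0]))
                             for k in range(1, K + 1)))
    total, term = f, f
    for k in range(1, n + 1):   # delta raises the v-degree, so the series stops at order n
        term = delta(term) / k
        total = total + term
    return sp.expand(total).coeff(v, n)

x = sp.symbols('x0 x1 x2')
z = sp.symbols('z0 z1 z2')
# by (4.23), D_2(e^{x_0}) = (x_2 + x_1**2/2) e^{x_0} when no z-dependence is present
val = jet_transform(sp.exp(x[0]), 2, x, z)
assert sp.simplify(val - (x[2] + x[1] ** 2 / 2) * sp.exp(x[0])) == 0
```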
Theorem 4.24 Let q j (0)(t), p j (0)(t) ∈ C be a solution of the equations of motion (4.13) and (4.14) of the Toda system for g, satisfying initial conditions q j (0)(0) = x j0 and p j (0)(0) = z j0 for all j ∈ I. Then q j (n)(t) = D n (q j (0)(t)) p j (n)(t) = D n (p j (0)(t)), for n = 0, . . . , N and j ∈ I, is a solution of the equations of motion (4.11) and (4.12) for g N , for all N ∈ Z + . These solutions satisfy the initial conditions q j (n)(0) = x jn p j (n)(0) = z jn for all n = 0, . . . , N and j ∈ I.
Proof Let q j (n) = q j (n)(t) = D n (q j (0)(t)) and p j (n) = p j (n)(t) = D n (p j (0)(t)) for all n = 0, . . . , N and j ∈ I. As t and q i (0)(0) are independent for all i, the derivation δ commutes with ∂ ∂t ⊗ 1 on C N , so ∂ ∂t commutes with D n = (exp δ) n on C. In particular,
• q j (n) = ∂ ∂t D n (q j (0)) = D n ( • q j (0)) = D n (p j (0)) = p j (n) since • q j (0) = p j (0), by (4.13). Moreover, N n=0 •• q j (n)v n = N n=0 ∂ 2 ∂t 2 D n (q j (0))v n = N n=0 D n ( •• q j (0))v n , and •• q j (0) = − α∈∆ α(h j ) exp Q(α, 0),
by (4.13) and (4.14), where Q(α, n) = i∈I α(h i )q i (n) for all α ∈ ∆ and n = 0, . . . , N . But then
N n=0 •• q j (n)v n = − N n=0 D n α∈∆ α(h j ) exp Q(α, 0) v n = − α∈∆ α(h j ) N n=0 D n exp Q(α, 0) v n = − α∈∆ α(h j )(exp δ)(exp Q(α, 0)).
Since δ is a derivation, its exponential
exp δ = N n=0 D n v n : C N → C N
is an automorphism, so exp δ commutes with the exponential map. That is,
(exp δ)(exp Q(α, 0)) = exp (exp δ)Q(α, 0) = exp N n=0 D n Q(α, 0)v n = exp N n=0 Q(α, n)v n , and N n=0 •• q j (n)v n = − α∈∆ α(h j ) exp N k=0 Q(α, k)v k .
Comparing coefficients of v n now shows that q j (n) satisfies the g N equations of motion:
• q j (n) = p j (n) • p j (n) = − α∈∆ α(h j ) exp N k=0 Q(α, k)v k n = − α∈∆ α(h j ) exp s i=1 α(h i )q i (0) σ⊢n n ℓ=1 1 σ ℓ ! s k=1 α(h k )q k (ℓ) σ ℓ ,
for n = 0, . . . , N , j ∈ I, and all N > 0. To see that q j (n) and p j (n) satisfy the specified initial conditions, note that x jk and z jk are independent of t, so the evaluation map ev : C → C, evaluating the variable t at the value t = 0, commutes with D n . That is,
q j (n)(0) = ev q j (n)(t) = ev D n (q j (0)(t)) = D n ev q j (0)(t) = D n (x j0 ).
Each differential operator of order larger than 1 appearing in the expression (4.23) for D n acts as zero on x j0 , so
q j (n)(0) = D n (x j0 ) = i∈I x in ∂ ∂x i0 + z in ∂ ∂z i0 (x j0 ) = x jn .
By exactly the same arguments, p_j(n)(0) = z_jn. ✷
Example 4.25 For g = sl_2(R), we take the normalisation q_n(t) = (1/√2) q_1(n)(t) and p_n(t) = (1/√2) p_1(n)(t). When N = 0, factorisation no longer requires inversions in the ring of formal series, and Equation (4.20) becomes
e^{q_0(t)} = e^{q_0(0)} / ( cosh tβ − (z_0/β) sinh tβ ),
so q_0(t) = x_0 − log( cosh tβ − (z_0/β) sinh tβ ),
where x_n = q_n(0) and z_n = p_n(0) for all n, and β = √(z_0^2 + e^{2x_0}). Explicit general solutions for all N are then readily available using jet transformations and Theorem 4.24. For example,
q 1 (t) = D 1 (q 0 (t)) = x 1 ∂ ∂x 0 + z 1 ∂ ∂z 0 q 0 (t) = x 1 + (z 1 − x 1 z 0 )e 2x 0 sinh tβ β 3 cosh tβ − β 2 z 0 sinh tβ − t (z 0 z 1 + x 1 e 2x 0 )(β sinh tβ − z 0 cosh tβ) β 2 cosh tβ − βz 0 sinh tβ .
When initial velocity is zero, that is, z n = 0 for all n, this reduces to
q 0 (t) = x 0 − log cosh(te x 0 ) q 1 (t) = x 1 − tx 1 e x 0 tanh(te x 0 ).
In this setting, the jet transformation
D 2 = 1 2 x 1 ∂ ∂x 0 + z 1 ∂ ∂z 0 2 + x 2 ∂ ∂x 0 + z 2 ∂ ∂z 0 simplifies to 1 2 x 2 1 ∂ 2 ∂x 2 0 + x 2 ∂ ∂x 0 , so q 2 (t) = D 2 (q 0 (t)) = x 2 − 1 2 te x 0 2x 2 − x 2 1 tanh(te x 0 ) − 1 2 t 2 x 2 1 e 2x 0 sech 2 (te x 0 ),
and we recover the formulas in (4.22).
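These specialisations can also be reproduced mechanically. The sketch below (sympy; not part of the original text) applies D_1 and, at zero initial velocity, D_2 to the N = 0 solution q_0(t) and compares the result with the formulas of (4.22) written in the initial data x_n; numerical spot checks are used in place of symbolic simplification.

```python
import sympy as sp

t, x0, x1, x2, z0, z1 = sp.symbols('t x0 x1 x2 z0 z1', real=True)
beta = sp.sqrt(z0 ** 2 + sp.exp(2 * x0))
q0_gen = x0 - sp.log(sp.cosh(t * beta) - (z0 / beta) * sp.sinh(t * beta))

# D_1 = x_1 d/dx_0 + z_1 d/dz_0, then specialise to zero initial velocity
q1 = (x1 * sp.diff(q0_gen, x0) + z1 * sp.diff(q0_gen, z0)).subs({z0: 0, z1: 0})
target1 = x1 - t * x1 * sp.exp(x0) * sp.tanh(t * sp.exp(x0))

# at zero initial velocity, D_2 reduces to (x_1**2/2) d^2/dx_0^2 + x_2 d/dx_0
q0_zero = q0_gen.subs({z0: 0})
q2 = sp.Rational(1, 2) * x1 ** 2 * sp.diff(q0_zero, x0, 2) + x2 * sp.diff(q0_zero, x0)
target2 = x2 - sp.Rational(1, 2) * t * (x1 ** 2 + 2 * x2) * sp.exp(x0) * sp.tanh(t * sp.exp(x0)) \
             - sp.Rational(1, 2) * t ** 2 * x1 ** 2 * sp.exp(2 * x0) * sp.sech(t * sp.exp(x0)) ** 2

point = {t: 0.9, x0: 0.3, x1: -1.2, x2: 0.7}
assert abs(sp.N((q1 - target1).subs(point))) < 1e-9
assert abs(sp.N((q2 - target2).subs(point))) < 1e-9
```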
Example 4.26 Recall the equations of motion (4.15) associated with the infinite lattice:
•• q i = −e q i −q i+1 + e q i−1 −q i (4.27)
for all i ∈ Z. Finite and affine versions of these equations in type A can be obtained from (4.27), up to a change in basis, by imposing appropriate boundary conditions. For these equations, Toda found soliton solutions,
e −r j = 1 + γ 2 0 sech 2 (κ 0 j ± γ 0 t),(4.28)
where r j = q j+1 − q j , γ 0 = sinh κ 0 , and κ 0 ∈ R, which plays the role of the spring constant in an exponential potential. He then used a continuum limit to relate (4.28) to known solutions of the Korteweg-de Vries equation [TW]. In terms of initial conditions, κ 0 = 1 2 cosh −1 (2e x 00 −x 10 − 1), where q j (0) = x j0 . The Takiff version of these equations of motion (4.16) can be solved by applying jet transformations
D n = exp N k=1 i∈Z x ik ∂ ∂x i0 + z ik ∂ ∂z i0
v k n to (4.28). The action of D n on r j = r j (0)(t) is well defined, since the only x i0 and z i0 that appear in the right-hand side of (4.28) are x 00 and x 10 . That is, r j (n) = q j+1 (n) − q j (n) = D n (q j+1 (0) − q j (0)) = D n (r j (0)).
For example, D 1 (e −r j (0) ) = −r j (1)e −r j (0) , so we have solutions
e^{−r_j(0)} = 1 + γ_0^2 sech^2(κ_0 j ± γ_0 t),
−r_j(1) e^{−r_j(0)} = D_1( 1 + γ_0^2 sech^2(κ_0 j ± γ_0 t) ) = 2 γ_0 γ_1 sech^2(κ_0 j ± γ_0 t) − 2 γ_0^2 (κ_1 j ± γ_1 t) sech^2(κ_0 j ± γ_0 t) tanh(κ_0 j ± γ_0 t),
where κ_1 = D_1 κ_0 = (x_{01} − x_{11}) / ( 2 √(1 − e^{x_{10} − x_{00}}) ) and γ_1 = D_1 γ_0 = κ_1 cosh κ_0 = (x_{01} − x_{11}) e^{x_{00} − x_{10}} / ( 2 √(e^{x_{00} − x_{10}} − 1) ).
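The derivatives entering κ_1 and γ_1 are easy to check symbolically (sketch below, not part of the original text; x00, x10, x01, x11 stand for x_{00}, x_{10}, x_{01}, x_{11}).

```python
import sympy as sp

x00, x10, x01, x11 = sp.symbols('x00 x10 x01 x11', real=True)

kappa0 = sp.Rational(1, 2) * sp.acosh(2 * sp.exp(x00 - x10) - 1)
gamma0 = sp.sinh(kappa0)
D1 = lambda f: x01 * sp.diff(f, x00) + x11 * sp.diff(f, x10)

kappa1 = (x01 - x11) / (2 * sp.sqrt(1 - sp.exp(x10 - x00)))
gamma1 = (x01 - x11) * sp.exp(x00 - x10) / (2 * sp.sqrt(sp.exp(x00 - x10) - 1))

point = {x00: 1.3, x10: 0.4, x01: 0.7, x11: -0.2}   # x00 > x10 keeps kappa0 real
assert abs(sp.N((D1(kappa0) - kappa1).subs(point))) < 1e-9
assert abs(sp.N((D1(gamma0) - gamma1).subs(point))) < 1e-9
```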
Vinberg quantization
For any Lie algebra m, let
0 = U −1 (m) ⊂ U 0 (m) ⊂ U 1 (m) ⊂ · · ·
be the PBW filtration of its enveloping algebra U (m). The associated graded algebra
gr U (m) = ∞ n=0 U n (m)/U n−1 (m)
is then isomorphic to the symmetric algebra S(m). For finite dimensional simple Lie algebras g, Vinberg [Vin] asked whether there are commutative subalgebras A µ ⊂ U (g) which quantize the (Poisson-commutative) Mischenko-Fomenko subalgebras B µ ⊂ S(g) in the sense that gr A µ = B µ . This is sometimes called Vinberg's Problem.
The analogous problem in our context is to quantize the Poisson-commutative subalgebras generated by the integrable systems introduced in Section 3. As a corollary, this will give a second proof of commutativity for the classical integrable systems described in Section 3. Recall that the integrals of motion f_kℓ = tr_k(L^ℓ) are restrictions of functions in S(g^R_N) to the coadjoint orbit O = O_{x′}. As Poisson algebras, S(g^R_N)|_O = S(c^R_N), where c^R_N is the Lie algebra
c^R_N = Span{ h_i(n), e_α(n) : 1 ≤ i ≤ s, α ∈ ∆, 0 ≤ n ≤ N },   (5.1)
with bracket relations
[h_i(m), h_j(n)]_R = [e_α(m), e_β(n)]_R = 0,   [h_i(m), e_α(n)]_R = (1/2) α(h_i) e_α(m + n),   (5.2)
for 1 ≤ i, j ≤ s, α, β ∈ ∆, and 0 ≤ m, n ≤ N, where e_α(m + n) is defined to be 0 for m + n > N. Viewing the conserved quantities f_kℓ as elements of the symmetric algebra S(c^R_N) = ⊕_{n=0}^{∞} U_n(c^R_N)/U_{n−1}(c^R_N), Vinberg's Problem is to construct commuting elements u_kℓ ∈ U_k(c^R_N) so that their images
u_kℓ + U_{ℓ−1}(c^R_N) ∈ U_ℓ(c^R_N)/U_{ℓ−1}(c^R_N) ⊂ S(c^R_N)
are precisely the functions f kℓ . As the quantization of H = 1 2 tr N (L 2 ), the element 1 2 u N 2 can then be regarded as a quantum Hamiltonian, with {u kℓ : 0 ≤ k ≤ N, ℓ − 1 ∈ E(g)} as its set of quantum first integrals of motion.
As in Section 1, we fix a faithful representation ρ : g → gl(M ) of minimal dimension M , and recall that (h i |h j ) = δ ij , (e α |e β ) = (f α |f β ) = 0, and (e α |f β ) = δ αβ , with respect to trace form (x|y) = tr(ρ(x)ρ(y)). The following proposition is a reformulation of a recent result of Molev [Mol,Corollary 2.3].
Proposition 5.3 Let
F (v) = N n=0 s i=1 h i (N − n) ⊗ ρ(h i ) + α∈Φ + e α (N − n) ⊗ ρ(f α ) + f α (N − n) ⊗ ρ(e α ) v n ,
viewed as an M ×M matrix with entries in U (g N )⊗K [v]. Then the coefficient (tr F (v) ℓ ) k of v k in tr(F (v) ℓ ) is in the centre of U (g N ) for all 0 ≤ k ≤ N and ℓ − 1 ∈ E(g). ✷
Consider the projection π : g N → b N with respect to the vector space decomposition
g N = k N ⊕ b N , where k N = Span{e α (n) − f α (n) : α ∈ Φ + , 0 ≤ n ≤ N } b N = Span{h i (n), e α (n) : 1 ≤ i ≤ s, α ∈ Φ + , 0 ≤ n ≤ N }.
The map π induces a projectionπ : U (g N ) = k N U (g N ) ⊕ U (b N ) → U (b N ), which is easily seen to be an associative algebra homomorphism when restricted to the centre Z(g N ) of U (g N ). Indeed, for any z, w ∈ Z(g N ), write z = z k + z b and w = w k + w b , where z k , w k ∈ k N U (g N ) and z b , w b ∈ U (b N ). Theñ π(zw) =π(wz) =π(w k z) +π(w b z)
=π(zw b ) =π(z k w b ) +π(z b w b ) =π(z b w b ) = z b w b =π(z)π(w).
Let n N = Span{e α (n) : α ∈ Φ + , 0 ≤ n ≤ N }. Then [n N , n N ] is an ideal of b N , and we let c N be the Lie algebra b N /[n N , n N ]. The quotient map φ : b N → c N and rescaling map r : c N −→ c R N are Lie algebra homomorphisms, where c R N is the Lie algebra defined in (5.1) and r : h i (n) + [n N , n N ] −→ 2h i (n) e α (n) + [n N , n N ] −→ 2e α (n), for 1 ≤ i ≤ s, α ∈ ∆, and 0 ≤ n ≤ N.
They induce enveloping algebra homomorphismsφ : U (b N ) → U (c N ) andr : U (c N ) → U (c R N ), so the compositionr •φ •π : Z(g N ) → U (c R N ) is a homomorphism of associative algebras. In particular, its image is commutative.
Theorem 5.4 Let G(v) = N n=0 s i=1 h i (N − n) ⊗ ρ(h i ) + α∈∆ e α (N − n) ⊗ ρ(e α + f α ) v n .
Then
{u kℓ : 0 ≤ k ≤ N, ℓ − 1 ∈ E(g)} is a maximal set of commuting algebraically independent elements of U (c R N ), where u kℓ = (tr G(v) ℓ ) k . Moreover, u kℓ +U ℓ−1 (c R N ) = f kℓ , as elements of U ℓ (c R N )/U ℓ−1 (c R N ) in S(c R N ) = S(g R N )|O.
Proof That {u_kℓ : 0 ≤ k ≤ N, ℓ − 1 ∈ E(g)} is a set of commuting elements follows from the discussion above since Z(g_N) is manifestly commutative, and u_kℓ = r̃ ∘ φ̃ ∘ π̃ ( (1/2^ℓ) (tr F(v)^ℓ)_k ) is in the image of the algebra homomorphism r̃ ∘ φ̃ ∘ π̃ : Z(g_N) → U(c^R_N). As functions on O,
h_i(N − n) : L ↦ (h_i(N − n) | L)_N = y_i(n),   e_α(N − n) : L ↦ (e_α(N − n) | L)_N = b_α(n),
that is, h_i(N − n) and e_α(N − n) coincide with the coordinate functions y_i(n) and b_α(n), respectively, as functions on O. Therefore,
u_kℓ + U_{ℓ−1}(c^R_N) = tr[ ( Σ_{n=0}^{N} ( Σ_{i=1}^{s} y_i(n) v^n ⊗ ρ(h_i) + Σ_{α∈∆} b_α(n) v^n ⊗ ρ(e_α + f_α) ) )^ℓ ]_k = tr_k[ ( Σ_{n=0}^{N} ( Σ_{i=1}^{s} y_i(n) h_i(n) + Σ_{α∈∆} b_α(n)(e_α(n) + f_α(n)) ) )^ℓ ] = f_kℓ.
Algebraic independence then follows from independence of the polynomials f kℓ in Theorem 3.8(ii), for 0 ≤ k ≤ N and ℓ − 1 ∈ E(g). The maximal number of such independent elements is half the dimension of the symplectic manifold O, that is, 1 2 (2(N + 1)s) = s(N + 1), by Proposition 3.4. By Chevalley's Restriction Theorem [Bou, chapitre VIII, §8.3, théorème 1], the number of exponents of g is its rank s, so {u kℓ : 0 ≤ k ≤ N, ℓ − 1 ∈ E(g)} is maximal. ✷ As a corollary, we obtain a second proof of commutativity for the classical integrable systems I N (g) of Theorem 3.8.
Corollary 5.5 The functions f kℓ = tr k (L ℓ ) are mutually commutative, for k = 0, . . . , N and ℓ − 1 ∈ E(g).
Proof This follows from commutativity of the elements u kℓ ∈ U (c R N ) and the fact that f kℓ = u kℓ + U ℓ (c R N ) as elements of the graded component S ℓ (c R N ) = U ℓ (c R N )/U ℓ−1 (c R N ) of S(c R N ) = ∞ j=0 S j (c R N ), together with the following well known result for any Lie algebra m (see [Dix,Remarques 2.8.7], for example):
For all f ∈ S_m(m) and g ∈ S_n(m), let f̃ ∈ U_m(m) and g̃ ∈ U_n(m) be such that f and g are the images of f̃ and g̃ in S_m(m) = U_m(m)/U_{m−1}(m) and S_n(m) = U_n(m)/U_{n−1}(m), respectively. Then [f̃, g̃] = f̃ g̃ − g̃ f̃ ∈ U_{m+n−1}(m), and {f, g} = [f̃, g̃] + U_{m+n−2}(m), as elements of S_{m+n−1}(m). ✷
defines a symmetric invariant bilinear form on g N . The form (−, −) c is nondegenerate if and only if c N = 0. ✷ Such forms (−, −) c are often the only possibility:
an open subset of the linear space V . In particular, the Poisson algebra of polynomial functions on O can thus be identified with the symmetric algebra (S(V ), {−, −} R ).
Proposition 4.3 Let F(O) be the algebra of (real-valued) smooth functions on O. The equations Σ_{ℓ=0}^{N}
Acknowledgements. The author is grateful to Carlos Tomei, Rukmini Dey, and Xiao He for many enjoyable discussions. He also thanks IMPA (Rio de Janeiro) and the University of Alberta, where parts of this project were completed.A AppendixThe following easy lemma was used in Section 4.2. As usual, we write a(u) i or simply a i for the coefficient of u i in the expression a(u) = a 0 + a 1 u + · · · + a N u N + u N +1 of any element a(u) in the algebra K[u]/ u N +1 of truncated currents over an arbitrary field K of characteristic zero.is not invertible. If a 0 = 0, then we induct downwards on the minimal positive degree m = min{i > 0 : a i = 0} of a(u). By convention, we define m = N + 1 if a(u) = a 0 . If m > N , the result is clear. Otherwise, a(u) 1 − am a 0 u m has larger minimal positive degree, and the result holds by induction.Conversely, if a k has a square root s ∈ K, we induct on N . When N = 0, the result is obvious. By the induction hypothesis, for any N − 1 ≥ 0, there exists r(u) ∈ K[u]such that r(u) 2 + u N = a(u) + u N . Let n = min{i ≥ 0 : r i = 0}. If n = N/2, then r(u) 2 + u N +1 = r 2 n u N + u N +1 , so a i = 0 for all i < N and we can take b(u) = su n + u N +1 . Otherwise, let c = a N − r(u) 2 N 2rn , and set b(u) = r(u) + cu N −n + u N +1 . Then b(u) 2 = a(u) in K[u]/ u N +1 . ✷
References
T. Arakawa, Representation theory of W-algebras and Higgs branch conjecture, in: Proceedings of the International Congress of Mathematicians, Rio de Janeiro 2018, vol. II, World Sci. Publ., Hackensack, NJ, 2018, pp. 1263-1281.
O. Babelon, D. Bernard, and M. Talon, Introduction to Classical Integrable Systems, Cambridge, 2003.
A. Babichenko and D. Ridout, Takiff superalgebras and conformal field theory, J. Phys. A 46 (2013), 125204, 26 pp.
O.I. Bogoyavlensky, On perturbations of the periodic Toda lattice, Commun. Math. Phys. 51 (1976), 201-209.
N. Bourbaki, Éléments de mathématique: Groupes et algèbres de Lie, Chapitres 7 et 8, Hermann, Paris, 1975.
P. Casati and G. Ortenzi, New integrable hierarchies from vertex operator representations of polynomial Lie algebras, J. Geom. Phys. 56 (2006), 418-449.
V. Chari and A. Pressley, A Guide to Quantum Groups, Cambridge, 1994.
J. Dixmier, Algèbres enveloppantes, Éditions Gauthier-Villars, Paris, 1974.
E. Fermi, J.R. Pasta, and S.M. Ulam, Studies of nonlinear problems, in: Enrico Fermi Collected Papers, Vol. II, Univ. Chicago Press, Chicago, 1965, pp. 977-988.
H. Flaschka, The Toda lattice. I. Existence of integrals, Phys. Rev. B 9 (1974), 1924-1925.
M. Hénon, Integrals of the Toda lattice, Phys. Rev. B 9 (1974), 1921-1923.
B. Kostant, The solution to a generalized Toda lattice and representation theory, Adv. Math. 34 (1979), 195-338.
L.-C. Li and C. Tomei, The complete integrability of a Lie-Poisson system proposed by Bloch and Iserles, Int. Math. Res. Not. (2006), Art. ID 64949, 19 pp.
A.I. Molev, Casimir elements and Sugawara operators for Takiff algebras, J. Math. Phys. 62 (2021), no. 1, paper no. 011701.
A. Moreau and R. Yu, Jet schemes of the closure of nilpotent orbits, Pacific J. Math. 281 (2016), 137-183.
M. Toda, Wave propagation in anharmonic lattices, J. Phys. Soc. Japan 23 (1967), 501-506.
M. Toda and M. Wadati, A soliton and two solitons in an exponential lattice and related equations, J. Phys. Soc. Japan 34 (1973), 18-25.
E.B. Vinberg, On certain commutative subalgebras of a universal enveloping algebra (Russian), Izv. Akad. Nauk SSSR Ser. Mat. 54 (1990), 3-25; translation in Math. USSR-Izv. 36 (1991), 1-22.
A NOTE ON TORIC DEGENERATION OF A BOTT-SAMELSON-DEMAZURE-HANSEN VARIETY

B. Narasimha Chary

16 Oct 2017

Keywords: Bott-Samelson-Demazure-Hansen varieties, canonical line bundle, tangent bundle and toric varieties

Abstract. In this paper we study the geometry of toric degeneration of a Bott-Samelson-Demazure-Hansen (BSDH) variety, which was algebraically constructed in [Pas10] and [PK16]. We give some applications to BSDH varieties. Precisely, we classify Fano, weak Fano and log Fano BSDH varieties and their toric limits in Kac-Moody setting. We prove some vanishing theorems for the cohomology of tangent bundle (and line bundles) on BSDH varieties. We also recover the results in [PK16], by toric methods.
Introduction
Bott-Samelson-Demazure-Hansen (for short, BSDH) varieties are natural desingularizations of Schubert varieties in the flag varieties. These were algebraically constructed by M. Demazure and H.C. Hansen independently by adapting a differential geometric approach from the paper of Bott and Samelson (see [BS58], [Dem74] and [Han73]). Briefly, the BSDH varieties are iterated projective line bundles, given by factoring the Schubert variety using Bruhat decomposition. These varieties depend on the given expression of the Weyl group element corresponding to the Schubert variety (see for instance [CKP15, Page 32]). We also see in this paper some properties of these varieties which depend on the given expression.
In [GK94], M. Grossberg and Y. Karshon constructed toric degenerations of BSDH varieties by complex geometric methods. In [Pas10] B. Pasquier and in [PK16] A.J. Parameswaran and P. Karuppuchamy constructed these toric degenerations algebraically. B. Pasquier used these degenerations to study the cohomology of line bundles on BSDH varieties (see [Pas10]). In [PK16], the authors studied the limiting toric variety for a simple simply connected algebraic group by geometric methods. In this paper we study the limiting toric variety of a BSDH variety in more detail by methods of toric geometry and we prove some applications to BSDH varieties. We also recover the results in [PK16] and extend them to the Kac-Moody setting. The key idea for many results in this article is that the toric limit is a 'Bott tower'. These are studied in [Cha] and some of their properties can be transferred to BSDH varieties by using the semi-continuity theorem.
The author is supported by AGIR Pole MSTIC project run by the University of Grenoble Alpes, France.
1 Let G be a Kac-Moody group over the field of complex numbers (for the definition see [Kum12]). Let B be a Borel subgroup containing a fixed maximal torus T . Let W be the Weyl group corresponding to the pair (G, B, T ) and let w ∈ W . Letw := s β 1 · · · s βn be an expression (possibly non-reduced) of w in simple reflections and let Z(w) be the BSDH variety corresponding tow (see Section 2). Let Yw be the toric limit of Z(w) constructed as in [Pas10] and [PK16] (see Section 3). We see that Yw is a Bott tower, the iterated P 1 -bundle over a point {pt} where each P 1 -bundle is the projectivization of a rank 2 decomposable vector bundle (see Corollary 4.4). We prove that the ample cone Amp(Yw) of Yw can be identified with a subcone of the ample cone Amp(Z(w)) of Z(w) (see Corollary 5.1).
Recall that a smooth projective variety X is called Fano (respectively, weak Fano) if its anti-canonical divisor −K X is ample (respectively, nef and big). Following [AS14], we say that a pair (X, D) of a normal projective variety X and an effective Q-divisor D is log Fano if it is Kawamata log terminal and −(K X + D) is ample.
When G is a simple algebraic group and the expressionw is reduced, Fanoness and weak Fanoness of the BSDH variety Z(w) are considered in [Cha17]. Here we have the following results in Kac-Moody setting. Letw = s β 1 · · · s β i · · · s β j · · · s βr be an expression (remember that β k 's are simple roots). Let β ij := β j ,β i , whereβ i is the co-root of β i . Now we define some conditions on the expressionw (see Section 6 for examples and also see [Cha, Section 1]). Define for 1 ≤ i ≤ r,
η + i := {r ≥ j > i : β ij > 0} and η − i := {r ≥ j > i : β ij < 0}. If |η + i | = 1 (respectively, |η + i | = 2), then let η + i = {m} (respectively, η + i = {m 1 , m 2 }). If |η − i | = 1 (respectively, |η − i | = 2), then set η − i = {l} (respectively, η − i = {l 1 , l 2 }). • N 1 i is the condition that (i) |η + i | = 0, |η − i | ≤ 1, and if |η − i | = 1 then β il = −1; or (ii) |η − i | = 0, |η + i | ≤ 1
, and if |η + i | = 1 then β im = 1 and β mk = 0 for all k > m. • N 2 i is the condition that Case 1: Assume that |η + i | = 0. Then |η − i | ≤ 2, and if |η − i | = 1(respectively, |η − i | = 2) then β li = −1 or −2 (respectively, β il 1 = −1 = β il 2 ). Case 2: If |η − i | = 1 = |η + i | and l < m, then β il = −1, β im = 1 and β mk = 0 for all k > m.
Case 3: Assume that |η + i | = 1. Then β im = 1 and either it satisfies (i) Case 2; or (ii) |η − i | = 0 and β mk = 0 for all k > m; or (iii) there exists a unique r ≥ s > m such that β ms − β is = 1 and β mk − β ik = 0 for all k > s; or β ms − β is = −1 and β is − β ms − β sk = 0 for all k > s. Definition 1.1. We say the expressionw satisfies condition I (respectively, condition II) if N 1 i (respectively, N 2 i ) holds for all 1 ≤ i ≤ r. Note that N 1 i =⇒ N 2 i for all 1 ≤ i ≤ r. Theorem (See Lemma 6.1 and Theorem 6.3).
(1) Ifw satisfies I, then Yw and Z(w) are Fano.
(2) Ifw satisfies II, then Yw and Z(w) are weak Fano.
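Condition I is a purely combinatorial property of the word and of the generalized Cartan matrix, so it can be tested mechanically. The following is a small sketch (Python; not part of the original text) that transcribes N^1_i literally; words are given by 0-based indices of simple roots, and the convention A[a][b] = ⟨α_b, α̌_a⟩ matches the definition of β_ij above. Condition II could be encoded in the same way. The B_2 data in the example reproduces the computation in Section 6.

```python
def beta_matrix(word, A):
    """beta[i][j] = <beta_j, beta_i^vee> for the simple reflections in the expression."""
    r = len(word)
    return [[A[word[i]][word[j]] for j in range(r)] for i in range(r)]

def satisfies_condition_I(word, A):
    r, B = len(word), beta_matrix(word, A)
    for i in range(r):
        eta_plus = [j for j in range(i + 1, r) if B[i][j] > 0]
        eta_minus = [j for j in range(i + 1, r) if B[i][j] < 0]
        # N^1_i, alternative (i)
        ok_i = (not eta_plus and len(eta_minus) <= 1
                and all(B[i][l] == -1 for l in eta_minus))
        # N^1_i, alternative (ii)
        ok_ii = (not eta_minus and len(eta_plus) <= 1
                 and all(B[i][m] == 1 and all(B[m][k] == 0 for k in range(m + 1, r))
                         for m in eta_plus))
        if not (ok_i or ok_ii):
            return False
    return True

# type B_2: index 0 <-> alpha_1, index 1 <-> alpha_2, with <alpha_1, alpha_2^vee> = -2
# and <alpha_2, alpha_1^vee> = -1 (so A[1][0] = -2 and A[0][1] = -1)
A_B2 = [[2, -1], [-2, 2]]
print(satisfies_condition_I([1, 0], A_B2))   # s_{alpha_2} s_{alpha_1}: False
print(satisfies_condition_I([0, 1], A_B2))   # s_{alpha_1} s_{alpha_2}: True
```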
In [CKP15] and [CK17], we have obtained some vanishing results for the cohomology of tangent bundle of Z(w), when G is finite dimensional andw is reduced (see [CKP15, Section 3] and [CK17, Theorem 8.1] ). The casew is non-reduced is considered in [CKP]. Here we get some vanishing results in Kac-Moody setting. Let T Z(w) denote the tangent bundle of Z(w).
Corollary (see Corollary 6.6). Ifw satisfies I, then H i (Z(w), T Z(w) ) = 0 for all i ≥ 1. In particular, Z(w) is locally rigid.
In [AS14], D. Anderson and A. Stapledon studied the log Fanoness of Schubert varieties, and in [And14], log Fanoness of BSDH varieties is studied for chosen divisors. Let D be a divisor in Z(w) with support in the boundary of Z(w). For 1 ≤ i ≤ r, we define some constants f i which again depend on the given expressionw (for more details see Section 6).
Corollary (see Corollary 6.7). The pair
(Z(w), D) is log Fano if f i > 0 for all 1 ≤ i ≤ r.
The article is organized as follows: In section 2, we recall the construction of BSDH varieties. In Section 3, we give the algebraic construction of toric degeneration of a BSDH variety. In Section 4, we describe the limiting toric variety as an iterated P 1 -bundle. In Section 5, we see some vanishing results of cohomology of line bundles on BSDH varieties. Section 6 contains the results on Fano, weak Fano and log Fano properties of BSDH varieties and their toric limits. We also study the vanishing results on cohomology of tangent bundle on BSDH varieties. In Section 7, we recover the results in [PK16] by toric methods.
Preliminaries
In this section we recall the construction of Bott-Samelson-Demazure-Hansen varieties (see [BK07] and [Kum12]) and we recall some definitions in toric geometry which are used in this article (for more details on toric varieties see [CLS11] and also [Ful93]). We work over the field of complex numbers throughout.
2.1. BSDH varieties. Let A = (a ij ) 1≤i,j≤n be a generalized Cartan matrix. Let G be the Kac-Moody group associated to A (see [Kum12,Chapter IV]). Fix a maximal torus T and a Borel subgroup B containing T . Let S := {α 1 , . . . , α n } be the set of all simple roots of (G, B, T ). We denote s α i the simple reflection corresponding to α i . Note that the Weyl group W of G is generated by
{s α i : 1 ≤ i ≤ n}.
Let w ∈ W , an expressionw of w is a sequence (s β 1 , . . . , s βr ) of simple reflections s β 1 , . . . , s βr such that w = s β 1 · · · s βr . An expressionw of w is said to be reduced if the number r of simple reflections is minimal. In such case we call r the length of w. By abuse of notation, we also denote the expressionw byw = s β 1 · · · s βr . For α ∈ S, we denote P α , the minimal parabolic subgroup of G generated by B and a representative of s α .
Definition 2.1. Let w ∈ W andw := s β 1 · · · s βr be an expression (not necessarily reduced) of w. The Bott-Samelson-Demazure-Hansen (for short, BSDH) variety corresponding tõ w is Z(w) := P β 1 × · · · × P βr /B r , where the action of B r on P β 1 × · · · × P βr is defined by
(p 1 , . . . , p r ) · (b 1 , . . . , b r ) = (p 1 b 1 , b −1 1 p 2 b 2 , . . . , b −1 r−1 p r b r ) for all p i ∈ P β i , b i ∈ B.
These are smooth projective varieties of dimension r. There is a natural morphism φw :
Z(w) −→ G/B defined by [(p 1 , . . . , p r )] → p 1 · · · p r B.
Ifw is reduced, the BSDH variety Z(w) is a natural desingularization of the Schubert variety, the B-orbit closure of wB/B in G/B (see [Dem74], [Han73] and [Kum12, Chapter VIII]). We can also construct the BSDH variety as an iterated P 1 -bundles. Letw ′ := s β 1 · · · s β r−1 . Let f : G/B −→ G/P βr be the map given by gB → gP βr and let p : Z(w ′ ) −→ G/P βr be the map given by [(p 1 , . . . , p r−1 )] → p 1 · · · p r−1 P βr . Then we have the following cartesian diagram (see [BK07, Page 66] and [Kum12, Chapter VII]):
Z(w) = Z(w′) ×_{G/P_{βr}} G/B, with the cartesian square formed by φw : Z(w) → G/B on top, fw : Z(w) → Z(w′) on the left, f : G/B → G/P_{βr} on the right, and p : Z(w′) → G/P_{βr} on the bottom.
Note that fw is a P 1 -fibration and the relative tangent bundle T fw of fw is φ * w (L βr ), where L βr is the homogeneous line bundle on G/B corresponding to β r . Using the cohomology of the relative tangent bundle T fw we studied the cohomology of the tangent bundle of Z(w), when G is finite dimensional andw is a reduced expression (see [CKP15] and [CK17]). The fibration fw comes with a natural section σw : Z(w ′ ) → Z(w) induced by the projection P β 1 × · · · × P βr → P β 1 × · · · × P β r−1 . For the toric limits we get two natural sections, as will be explained in Section 3. For all i ∈ {1, . . . , r}, we denote Z i , the divisor in Z(w) defined by
{[(p 1 , . . . , p r )] ∈ Z(w) : p i ∈ B}.
In [LT04], N. Lauritzen and J.F. Thomsen proved that Z ′ i s forms a basis of the Picard group of Z(w) and they also proved that ifw is a reduced expression these form a basis of the monoid of effective divisors (see [LT04,Proposition 3.5]). Recently, the effective divisors of Z(w) forw non-reduced case have been considered in [And14].
The BSDH variety can be described also as an iterated projective line bundle, where each projective bundle is the projectivization of certain rank 2 vector bundle (not necessarily decomposable). In Section 4, we see the toric degeneration of a BSDH variety (constructed in Section 3) is a Bott tower, the iterated P 1 -bundle over a point {pt}, where each P 1 -bundle is the projectivization of a rank 2 decomposable vector bundle.
Toric varieties.
Definition 2.2. A normal variety X is called a toric variety (of dimension n) if it contains an n-dimensional torus T (i.e. T = (C * ) n ) as a Zariski open subset such that the action of the torus on itself by multiplication extends to an action of the torus on X.
Toric varieties are completely described by the combinatorics of the corresponding fans. We denote the fan corresponding to a toric variety by Σ and the collection of cones of dimension s in Σ by Σ(s) for 1 ≤ s ≤ n. For each cone σ ∈ Σ, we denote V (σ), the orbit closure of the orbit corresponding to cone σ. For each σ ∈ Σ, σ(1) := σ ∩ Σ(1). For each ρ ∈ Σ(1), we can associate a divisor in X, we denote it by D ρ (see [CLS11, Chapter 4] for more details). We recall the following:
Definition 2.3.
(1) We say P ⊂ Σ(1) is a primitive collection if P is not contained in σ(1) for some σ ∈ Σ but any proper subset is. Note that if Σ is simplicial, primitive collection means that P does not generate a cone in Σ but every proper subset does.
(2) Let P = {ρ 1 , . . . , ρ k } be a primitive collection in a complete simplicial fan Σ.
Recall u ρ be the primitive vector of the ray ρ ∈ Σ.
Then k i=1 u ρ i is in the relative interior of a cone γ P in Σ with a unique expression k i=1 u ρ i − ( ρ∈γ P (1) c ρ u ρ ) = 0. (2.1)
where c ρ ∈ Q >0 . Then we call (2.1) the primitive relation of X corresponding to P.
(3) For a primitive relation P , we can associate an element r(P ) in N 1 (X), where N 1 (X) is the real vector space of numerical classes of one-cycles in X (see [CLS11, Page 305]).
Toric degeneration of a BSDH variety
In [GK94], toric degenerations of BSDH varieties were constructed by complex geometric methods. In [Pas10] and [PK16] they have given an algebraic construction for toric degeneration of a BSDH variety. We recall the algebraic construction here.
Note that the simple roots are linearly independent elements in the character group of G. Let N be the lattice of one-parameter subgroups of T . We can choose a positive integer q and an injective morphism λ : G m −→ T (i.e. λ ∈ N and λ is injective) such that for all 1 ≤ i ≤ n and u ∈ G m , α i (λ(u)) = u q (see [Pas10, Page 2836]). When G is finite dimensional, for each one-parameter subgroup λ ∈ N, define
P (λ) := {g ∈ G : lim u→0 λ(u)gλ(u) −1 exists in G}.
The set P (λ) is a parabolic subgroup and the unipotent radical R u (P (λ)) of P (λ) is given by
R u (P (λ)) = {g ∈ G : lim u→0 λ(u)gλ(u) −1 is identity in G}.
Any parabolic subgroup of G is of this form (see [Spr10,Proposition 8.4.5]). Choose a one-parameter subgroup λ ∈ N such that the corresponding parabolic subgroup is B. Let us define an endomorphism of G for all u ∈ G m bỹ
Ψ u : G → G, g → λ(u)gλ(u) −1 .
Let B be the set of all endomorphisms of B. Now define a morphism
Ψ : G m → B by u →Ψ u | B .
This map can be extended to 0 and for all x ∈ U, Ψ u | B (x) goes to identity when u goes to zero. Let A 1 := SpecC[t] be the affine line over C. We denote for all u ∈ A 1 , Ψ u the image of u in B. Note that Ψ u is the identity on T and Ψ 0 is the projection from B to T . Letw = s β 1 · · · s βr be an expression.
Definition 3.1.
(i) Let X be the variety defined by
X := A 1 × P β 1 × · · · × P βr /B r ,
where the action of B r on A 1 × P β 1 × · · · × P βr is given by
(u, p 1 , . . . , p r ) · (b 1 , . . . , b r ) = (u, p 1 b 1 , Ψ u (b 1 ) −1 p 2 b 2 , . . . , Ψ u (b r−1 ) −1 p r b r ). (ii) For all i ∈ {1, . . . , r}, we denote Z i the divisor in X defined by {(u, p 1 , . . . , p r ) ∈ Z : p i ∈ B}.
Note that X and the Z_i's are integral. Let π : X → A^1 be the projection onto the first factor. Then we have the following theorem (see [Pas10] and [PK16]). Theorem 3.2. (1) π : X → A^1 is a smooth projective morphism.
(2) For all u ∈ A 1 \ {0}, the fiber π −1 (u) is isomorphic to the BSDH variety Z(w) such that π −1 (u) ∩ Z i corresponds to the divisor Z i in Z(w).
(3) π −1 (0) is a smooth projective toric variety.
We denote X u := π −1 (u) for u ∈ A 1 and the limiting toric variety X 0 = π −1 (0) by Yw.
Connection to Bott towers
In this section we describe the toric limit Yw as an iterated P 1 -bundle. We also recall some results on Bott towers from [Cha]. Let {e + 1 , . . . , e + r } be the standard basis of the lattice Z r . Define for all i ∈ {1, . . . , r}, (1) The fan Σ of the smooth toric variety Yw consists of the cones generated by subsets of {e + 1 , . . . , e + r , e − 1 , . . . , e − r } and containing no subset of the form
e − i := −e + i − j>i β ij e + j ,(4.{e + i , e − i }. (2) For all i ∈ {1, . . . , r}, Z 0 i is the irreducible (C * ) r -stable divisor in
Yw corresponding to the one-dimensional cone of Σ generated by e + i and these form a basis of the divisor class group of Yw.
Note that the maximal cones of Σ are generated by {e ǫ i : 1 ≤ i ≤ r, ǫ ∈ {+, −}} . We denote the divisor corresponding to the one-dimensional cone ρ ǫ i generated by e ǫ i by D ρ ǫ i for ǫ ∈ {+, −}. Letw ′ := s β 1 · · · s β r−1 . Then we get a toric morphism f r : Yw → Yw′ induced by the lattice map f r : Z r → Z r−1 , the projection onto the first r − 1 coordinates. We prove,
Lemma 4.2.
(1) f r : Yw → Yw′ is a toric P 1 -fibration with two disjoint toric sections.
(
2) Yw ≃ P(O Yw′ ⊕ L ) for some unique line bundle L on Yw′.
Proof. Let Σ ′ be the fan corresponding to the toric variety Yw′. From the above proposition, we can see that Σ has a splitting by Σ ′ and {e + r , 0, e − r }. Then by [CLS11, Theorem 3.3.19], f r : Yw → Yw′ is a locally trivial fibration with the fan Σ F of the fiber being {e + r , 0, e − r }. Since Σ F is the fan of the projective line P 1 , we conclude f r is a toric P 1 -fibration. As toric sections of the toric fibration correspond to the maximal cones in Σ F , we get two disjoint toric sections for f r . This proves (1).
Proof of (2): Since f r : Yw → Yw′ is P 1 -fibration with a section, we see Yw is a projective bundle P(E ) over Yw′ corresponding to a rank 2 vector bundle E on Yw′ (see for example [Har77, Chapter V, Proposition 2.2, page 370]).
Recall that the sections of projective bundle Yw = P(E ) correspond to the quotient line bundles of E (see [Har77,Proposition 7.12]). Since Yw = P(E ) is projective line bundle on Yw′ with two disjoint sections, we see E is decomposable as a direct sum of line bundles on Yw′.
Y r πr −→ Y r−1 π r−1 −→ · · · π 2 −→ Y 1 = P 1 π 1 −→ Y 0 = {pt}, where Y i = P(O Y i−1 ⊕ L i−1 )
for a line bundle L i−1 over Y i−1 for all 1 ≤ i ≤ r and P(−) denotes the projectivization (see for more detalis [Civ05] and also [Cha, Section 2]).
Then by definition of Bott tower and by Lemma 4.2(2) we get:
Corollary 4.4. The toric limit Yw is a Bott tower.
We have the following situation: where β ij 's are integers as defined before. Let
P 1 P 1 Z(w) Yw Z(w ′ ) Yw′P i := {ρ + i , ρ − i } for 1 ≤ i ≤ r.
Then by [Cha,Lemma 4.3], {P i : 1 ≤ i ≤ r} is the set of all primitive collections of Yw. For each 1 ≤ i ≤ r, we denote the cone in the definition of primitive relation (see Section 2) corresponding to P i by γ P i . Let D = ρ∈Σ(1) a ρ D ρ be a toric divisor in Yw with a ρ ∈ Z and for 1 ≤ i ≤ r, define
d i := (a ρ + i + a ρ − i − γ j ∈γ P i (1) c j a γ j ).
Then we recall the following from [Cha, Lemma 5.1]:
Lemma 4.5.
(1) D is ample if and only if d i > 0 for all 1 ≤ i ≤ r.
(2) D is numerically effective (nef) if and only if d i ≥ 0 for all 1 ≤ i ≤ r.
Also note that the conditions I and II onw are same as the conditions on Mw as in [Cha].
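Lemma 4.5 reduces ampleness and nefness on Yw to finitely many integer inequalities, which are easy to evaluate in examples. The sketch below (Python with numpy; not part of the original text) builds the ray generators e^+_i, e^-_i from a word and a generalized Cartan matrix (convention A[a][b] = ⟨α_b, α̌_a⟩), computes the primitive relation of each P_i = {ρ^+_i, ρ^-_i} by repeatedly trading e^+_j for −e^-_j − Σ_{k>j} β_jk e^+_k whenever a negative coefficient appears, and then evaluates the quantities d_i. The example is the word s_{α_1} s_{α_2} s_{α_1} in SL(4).

```python
import numpy as np

def beta_matrix(word, A):
    r = len(word)
    return np.array([[A[word[i]][word[j]] for j in range(r)] for i in range(r)])

def rays(word, A):
    """Ray generators e_i^+ (standard basis) and e_i^- = -e_i^+ - sum_{j>i} beta_ij e_j^+."""
    B = beta_matrix(word, A)
    r = len(word)
    e_plus = np.eye(r, dtype=int)
    e_minus = np.array([-e_plus[i] - sum(B[i, j] * e_plus[j] for j in range(i + 1, r))
                        for i in range(r)])
    return e_plus, e_minus, B

def primitive_relation(i, B):
    """Nonnegative coefficients (c_plus, c_minus) with
    e_i^+ + e_i^- = sum_j c_plus[j] e_j^+ + sum_j c_minus[j] e_j^-,
    where the rays with nonzero coefficient span a cone of the fan."""
    r = len(B)
    c_plus, c_minus = np.zeros(r), np.zeros(r)
    for j in range(i + 1, r):
        c_plus[j] = -B[i][j]
    for j in range(r):                      # cascade: remove negative coefficients
        if c_plus[j] < 0:
            amount = -c_plus[j]
            c_plus[j] = 0.0
            c_minus[j] += amount
            for k in range(j + 1, r):
                c_plus[k] += amount * B[j][k]
    return c_plus, c_minus

def d_vector(a_plus, a_minus, B):
    """The numbers d_i of Lemma 4.5 for D = sum_i a_plus[i] D_{rho_i^+} + a_minus[i] D_{rho_i^-}."""
    r = len(B)
    d = np.zeros(r)
    for i in range(r):
        cp, cm = primitive_relation(i, B)
        d[i] = a_plus[i] + a_minus[i] - (cp @ a_plus + cm @ a_minus)
    return d

A_SL4 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]      # Cartan matrix of type A_3
_, _, B = rays([0, 1, 0], A_SL4)                   # word s_{alpha_1} s_{alpha_2} s_{alpha_1}
d = d_vector(np.array([3.0, 2.0, 1.0]), np.zeros(3), B)
print(d)                                           # [1. 1. 1.]: ample by Lemma 4.5
```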
Vanishing results on Cohomology of certain line bundles on BSDH varieties
Let X be a smooth projective variety. Recall N 1 (X) denote the real finite dimensional vector space of numerical classes of real divisors in X (see [Kle66, §1, Chapter IV]). The ample cone Amp(X) of X is the cone in N 1 (X) generated by classes of ample divisors. 5.1. Ample cone of the toric limit of BSDH variety. In [LT04], the ampleness of line bundles on BSDH variety Z(w) is studied. Now we compare the ample cone of the toric limit Yw with that of the BSDH-variety Z(w) as a consequence of Theorem 3.2.
Corollary 5.1. The ample cone Amp(Yw) of Yw can be identified with a subcone of the ample cone Amp(Z(w)) of Z(w).
Proof. By Theorem 3.2, π : X → A^1 is a smooth projective morphism with X_0 = Yw and X_u ≅ Z(w) for u ≠ 0; the claim then follows since ampleness is an open condition in such a family (see [Laz04, Theorem 1.2.17]).
For 1 ≤ i, j ≤ r, define integers
h^i_j := 0 for j > i;   1 for j = i;   −Σ_{k=j}^{i−1} β_ik h^k_j for j < i.
Let ǫ ∈ {+, −}. Define Σ(1)^ǫ := {ρ^ǫ_i : 1 ≤ i ≤ r}.
Then we can write a toric divisor D in Yw as follows:
D = ρ∈Σ(1) a ρ D ρ = ρ∈Σ(1) + a ρ D ρ + ρ∈Σ(1) − a ρ D ρ .
For 1 ≤ i ≤ r, let
g i := a ρ + i + r j=i a ρ − j h i j .
Recall d i from Section 4,
d i := (a ρ + i + a ρ − i − γ j ∈γ P i (1) c j a γ j ), Let D ′ = r i=1 g i Z i be a divisor in Z(w),
where Z i is as in Section 2 for 1 ≤ i ≤ r. Lemma 5.2. If d i ≥ 0 for all 1 ≤ i ≤ r, then H j (Z(w), D ′ ) = 0 for all j > 0.
Proof. If d_i ≥ 0 for all 1 ≤ i ≤ r, then by Lemma 4.5, Σ_{ρ∈Σ(1)} a_ρ D_ρ is a nef divisor in Yw. By Theorem 3.2, Z^x_i = Z_i for 0 ≠ x ∈ k and Z^0_i = D_{ρ^+_i}, and by [Cha, Corollary 3.3] we can write D = Σ_{ρ∈Σ(1)} a_ρ D_ρ ∼ Σ_{i=1}^{r} g_i D_{ρ^+_i}. Since a nef divisor on a smooth complete toric variety has vanishing higher cohomology, H^j(Yw, Σ_{i=1}^{r} g_i D_{ρ^+_i}) = 0 for all j > 0, and the claim for D′ = Σ_{i=1}^{r} g_i Z_i follows by the semicontinuity theorem (see [Har77, Theorem 12.8]).
The condition I:
(1) Special case: |η + i | = 0 and |η − i | = 0. This condition means that the expressionw is fully commutative without repeating the simple reflections. For example if G = SL(n, C) andw = s α 1 s α 3 · · · s αr , 1 < r ≤ n − 1 and r is odd, then |η + i | = 0 and |η − i | = 0 for all i. Hencew satisfies the condition I and also observe that in this case we have Yw ≃ Z(w) ≃ P 1 × · · · × P 1 (dim(Z(w)) times ).
(2) Let G = SL(n, C) and fix 1 ≤ j < r ≤ n − 1 such that j is even and r is odd. Letw = s α 1 s α 3 · · · s α j−3 s α j−1 s α j s α j+1 s α j+3 · · · s αr . Note that s α j appears only once in the expressionw and |η + i | = 0 for all i. Let p be the 'position of s α j ' in the expressionw, then |η − i | = 0 for all i = p, p − 1 and |η − p−1 | = 1 = |η − p | with β p−1p = −1 = β pp+1 . Hencẽ w satisfies condition I.
The condition II:
Again, let G = SL(n, C) and fix 1 ≤ j < r ≤ n − 1 such that j is even and r is odd. Letw = s α 1 s α 3 · · · s α j−3 s α j s α j−1 s α j+1 s α j+3 · · · s αr (observe that we interchanged s α j and s α j−1 in the example of condition II). Then |η + i | = 0 and |η − i | ≤ 2 for all i. Let p be the 'position of s α j ' in the expressionw, then |η − i | = 0 for all i = p and |η − p | = 2 with β pp+1 = −1 = β pp+1 . Hencew satisfies the condition II but not I. Letw = s α 1 s α 3 s α 1 . Then |η + 1 | = 1 with β 13 = 2, and |η − 1 | = |η + 2 | = η − 2 | = 0 . Hencew 1 satisfies II but not I.
Observe that the condition |η − i | = 1 and β il = −2, happens only in non-simply laced cases. Let G = SO(5, k) (i.e. G is of type B 2 ), letw 1 = s α 2 s α 1 andw 2 = s α 1 s α 2 . Recall that we have α 1 , α 2 = −2 and α 2 , α 1 = −1. Then Hencew 1 satisfies II but not I and w 2 satisfies I.
Let G be of type G_2 (with ⟨α_1, α̌_2⟩ = −1 and ⟨α_2, α̌_1⟩ = −3). Let w_1 = s_{α_2} s_{α_1} and w_2 = s_{α_1} s_{α_2}. Then β_12 = ⟨α_1, α̌_2⟩ = −1 for w_1, while β_12 = ⟨α_2, α̌_1⟩ = −3 for w_2. Hence w_1 satisfies I and w_2 does not satisfy any of the conditions I or II.
Now we have the following result:
Lemma 6.1.
(1) Yw is Fano if and only ifw satisfies I.
(2) Yw is weak Fano if and only ifw satisfies II.
Proof. This follows from Corollary 4.2 and [Cha, Theorem 6.3].
Recall the following (see for instance [Cha, Corollary 6.2]):
Lemma 6.2. Let X be a smooth projective variety and D be an effective divisor. Let supp(D) denote the support of D. If X \ supp(D) is affine, then D is big.
We prove the following: Theorem 6.3.
(1) Ifw satisfies I, then Z(w) is Fano.
(2) Ifw satisfies II, then Z(w) is weak Fano.
Proof. First recall that the canonical line bundle O Z(w) (K Z(w) ) of Z(w) is given by
O Z(w) (K Z(w) ) = O Z(w) (−∂Z(w)) ⊗ L(−δ),
where ∂Z(w) is the boundary divisor of Z(w) and δ ∈ N such that δ,α = 1 for all α ∈ S, whereα is the co-root of α (see [Kum12,Proposition 8.1.2] and also [Ram85,Proposition 2]). Note that if G is finite dimensional, δ is half sum of the positive roots.
By Theorem 3.2, φ : X → A 1 is a smooth projective morphism with X 0 = Yw and X u = Z(w) for 0 = u ∈ A 1 .
Proof of (1): By [Laz04, Theorem 1.2.17], if −K X 0 is ample then −K Xu is ample for u = 0. By Lemma 6.1, −K Yw is ample if and only ifw satisfies I. Hence we conclude that ifw satisfies I, then Z(w) is Fano.
Proof of (2): First we prove −K Z(w) is big. Let and L(δ) is nef, we conclude −K Z(w) is big, as tensor product of a big and a nef line bundles is again a big line bundle. By [Laz04,Theorem 1.4.14] and X u = Z(w) for u = 0, we can see that if −K X 0 is nef then −K Xu is also nef for u = 0. Therefore, (2) follows from Lemma 6.1(2).
There exists expressionsw such that the BSDH variety Z(w) is Fano (respectively, weak Fano) but the toric limit Yw is not Fano (respectively, not weak Fano).
Example 6.4. Let G = SL(4, C).
(1) Letw = s α 1 s α 1 . Then Z(w) ≃ P 1 × P 1 , which is Fano. The toric limit Yw ≃ P(O P 1 ⊕ O P 1 (2)). Sincew does not satisfy I, then by Lemma 6.1, Yw is not Fano.
(2) Letw = s α 1 s α 2 s α 1 . Then it can be seen Z(w) is Fano (see [Cha17, Example 5.4]).
By Lemma 6.1, the toric limit Yw is weak Fano but not Fano.
Example 6.5. Let G = SO(5, k), i.e. G is of type B 2 . Letw = s α 1 s α 2 s α 1 . By Lemma 6.1, the toric limit Yw is not weak Fano. Also we can see Z(w) is weak Fano but not Fano (see [Cha17,Theorem 5.3]).
6.2. Local rigidity of BSDH varieties. In this section we obtain some vanishing results for the cohomology of tangent bundle of the toric limit Yw and Z(w). Let T X denote the tangent bundle of X, where X = Yw or Z(w). Then we have Corollary 6.6.
(1) Ifw satisfies I, then H i (Yw, T Yw ) = 0 for all i ≥ 1. In particular, Yw is locally rigid.
(2) Ifw satisfies I, then H i (Z(w), T Z(w) ) = 0 for all i ≥ 1. In particular, Z(w) is locally rigid.
Proof. Proof of (1): Ifw satisfies I, then by Lemma 6.1, Yw is a Fano variety. By [BB96,Proposition 4.2], since Yw is a smooth Fano toric variety, we get H i (Yw, T Yw ) = 0 for all i ≥ 1.
Proof of (2): From Theorem 3.2, π : X → A 1 is a smooth projective morphism with X 0 = Yw and X u = Z(w) for u ∈ A 1 , u = 0. Hence (2) follows from (1) by semi-continuity theorem (see [Har77,Theorem 12.8]). 6.3. Log Fano BSDH varieties. In [And14] and [AS14] log Fanoness of Schubert varieties and BSDH varieties were studied respectively. Now we characterize the (suitably chosen) Q-divisors D ′ in Z(w)) for which (Z(w), D ′ ) is log Fano. Recall that Z i = {[(p 1 , . . . , p r )] ∈ Z(w) : p i ∈ B} is a divisor in Z(w) (see Section 2). Let γ i = s βr · · · s β i+1 (β i ) for 1 ≤ i ≤ r. Then,
L(δ) = r i=1 b i Z i with b i = δ,γ i = ht(γ i ), (6.1)
where δ is as in Section 6.1 (see page 10), L(δ) is the homogeneous line bundle on Z(w) corresponding to δ and ht(β) for a root β = n i=1 n i α i , is the height defined by ht(β) = n i=1 n i (see [MR85, Proof of Proposition 10]). Whenw is reduced, γ i is a positive root and we can see the relation (6.1) from the Chevalley formula for intersection of Schubert variety by a divisor (see [AS14,Page 410] or [Che94]). It is known that
− K Z(w) = r i=1 (b i + 1)Z i (6.2) (see [MR85, Proposition 4]). Let D ′ = r i=1 a i Z i be a effective Q-divisor in Z(w), with ⌊D ′ ⌋ = 0, where ⌊ i a i Z i ⌋ = i ⌊a i ⌋Z i , ⌊x⌋ is the greatest integer ≤ x.
Then by (6.2), we get
−(K Z(w) + D ′ ) = r i=1 (b i + 1 + a i )Z i . For 1 ≤ i ≤ r, define f i := (b i + 1 + a i ) − γ j ∈γ P i (1) + c j (b j + 1 + a j ),
where γ P i (1) + := γ P i (1) ∩ {ρ + l : 1 ≤ l ≤ r} and γ P i is the cone as in (2.1) for the toric limit Yw.
Recall that if X is smooth and
Z(w) + D ′ ) is ample. Now we prove −(K Z(w) + D ′ ) is ample if f i > 0 for all 1 ≤ i ≤ r. Recall that D ρ + i
is the divisor corresponding to ρ + i ∈ Σ(1) and Z x i = π −1 (x) ∩ Z i for x ∈ k (see Section 2 and Section 3).
By Theorem 3.2, we have
Z x i = Z i for x = 0 and Z 0 i = D ρ + i . (6.3)
Assume that f i > 0 for all 1 ≤ i ≤ r. By (6.3) and by semicontinuity (see [Laz04, Theorem 1.2.7]) to prove (Z(w), D ′ ) is log Fano it is enough to prove
r i=1 (b i + 1 + a i )D ρ + i is ample .
By Lemma 4.5, we see that
r i=1 (b i + 1 + a i )D ρ + i is ample if and only if f i = ((b i + 1 + a i ) − γ j ∈γ P i (1) + c j (b j + 1 + a j )) > 0 for all 1 ≤ i ≤ r.
Hence we conclude that (Z(w), D ′ ) is log Fano.
More results on the toric limit
In this section we are going to recover the results of [PK16] by using methods of toric geometry. In [PK16], they have assumed that G is a simple algebraic group. In our situation G is a Kac-Moody group. Recall the following:
(1)w = s β 1 · · · s βr andw ′ = s β 1 · · · s β r−1 .
(2) The toric morphism f r : Yw → Yw′ is induced by the lattice map f r : Z r → Z r−1 , the projection onto the first r − 1 coordinates.
As we discussed in Section 3, there are two disjoint toric sections for the P 1 -fibration f r : Yw → Yw′ (see Lemma 4.2).
Definition 7.1.
(1) Schubert and non-Schubert sections: We call the section corresponding to the maximal cone ρ + r (respectively, ρ − r ) in Σ F (the fan of the fiber of f r ) by 'Schubert section σ 0 r−1 ' (respectively, 'non-Schubert section σ 1 r−1 ' ).
(2) Schubert point: Let σ ∈ Σ be the maximal cone generated by {e + 1 , . . . , e + r }. We call the point in Yw corresponding to the maximal cone σ by 'Schubert point'.
(3) Schubert line: We call the fiber of f r over the Schubert point by 'Schubert line L r '.
Note that these definitions agree with that of in [PK16,Section 4]. Now onwards we denotew = (1, . . . , r) (respectively,w ′ = (1, . . . , r − 1)) for the expressionw = s β 1 · · · s βr (respectively,w ′ = s β 1 · · · s β r−1 ). Let I = (i 1 , . . . , i m ) be a subsequence ofw. Inductively we define the curve L I corresponding to I. Let L I ′ be the curve in Yw′ corresponding to the subsequence I ′ = (i 1 , . . . , i m−1 ) of I . Then define L I := σ 1 r−1 (L I ′ ) and σ 0 r−1 (L I ′ ) = L I ′ .
Recall some more notations. Let X be a smooth projective variety, we define
N 1 (X) Z := { finite a i C i : a i ∈ Z, C i irreducible curve in X}/ ≡ where ≡ is the numerical equivalence, i.e. Z ≡ Z ′ if and only if D · Z = D · Z ′ for all divisors D in X. We denote by [C] the class of C in N 1 (X) Z . Let N 1 (X) := N 1 (X) Z ⊗ R.
It is a well known fact that N 1 (X) is a finite dimensional real vector space dual to N 1 (X) (see [Kle66,Proposition 4, §1, Chapter IV]). We have the following result:
Lemma 7.2. The classes of Schubert lines L j , 1 ≤ j ≤ r form a basis of N 1 (Yw).
Proof. Proof is by induction on r. Assume that the result is true for r − 1. Since Yw is a projective bundle over Yw′ (see Lemma 4.2), then by [Bar71, Lemma 1.1], L r and σ 0 r−1 (L j ) for 1 ≤ j ≤ r (the image of L j by the Schubert section in Yw) form a basis of N 1 (Yw). By definition of L I , we have σ 0 r−1 (L j ) = L j for 1 ≤ j ≤ r − 1 and hence the result follows.
Let 1 ≤ j ≤ r. Let D := {e_l^{ε_l} : 1 ≤ l ≤ r and ε_l = + for all l}. Let D′_j := {e_l^{ε_l} : 1 ≤ l ≤ r and ε_l = + for all l ≠ j; ε_j = −}.
Lemma 7.3. Fix 1 ≤ j ≤ r. Then the Schubert line L j is given by
L j = V (τ j ) , with τ j = σ ∩ σ ′ j , intersection of two maximal cones in Σ, where σ (respectively, σ ′ j ) is generated by D (respectively, D ′ j ).
Proof. Let us consider the expressionw j = s β 1 · · · s β j for 1 ≤ j < r. Let Σ j be the fan of the toric variety Yw j . By Lemma 4.2,
f j : Yw j → Yw j−1
is a P 1 -fibration induced by f j : Z j → Z j−1 the projection onto the first j − 1 factors. Also note that the Schubert point in Yw j−1 corresponds to the maximal cone generated by {e + l : 1 ≤ l ≤ j − 1} and the fan of the fiber is given by {e + j , 0, e − j }. Let σ j (respectively, σ ′ j ) be the cone generated by {e + l : 1 ≤ l ≤ j} (respectively,
{e + l : 1 ≤ l ≤ j − 1} ∪ {e − j } )
. Then by definition of Schubert line L j , we can see that L j is the curve in Yw j given by L j = V (τ j ), where τ j ∈ Σ j and τ j = σ j ∩ σ ′ j . Since the Schubert section of f k for (j ≤ k ≤ r) corresponds to e + k , we see σ 0 r • · · · • σ 0 j+1 (L j ), by abuse of notation we also denote it again by L j in Yw, is given by
L j = V (τ j ) with τ = σ ∩ σ ′ j ,
where σ and σ ′ j are as described in the statement. This completes the proof of the lemma.
Let τ be a cone of dimension r −1 which is a wall, that is τ = σ ∩σ ′ for some σ, σ ′ ∈ Σ of dimension r. Let σ (respectively, σ ′ ) be generated by {u ρ 1 , u ρ 2 , . . . , u ρr } (respectively, by {u ρ 2 , . . . , u ρ r+1 }) and let τ be generated by {u ρ 2 , . . . , u ρr }. Then we get a linear relation,
u_{ρ_1} + Σ_{i=2}^{r} b_i u_{ρ_i} + u_{ρ_{r+1}} = 0. (7.1)
The relation (7.1) is called the wall relation, and for it we have
D_ρ · V(τ) = 1 if ρ = ρ_1 or ρ = ρ_{r+1}; b_i if ρ = ρ_i and 2 ≤ i ≤ r; 0 otherwise. (7.2)
Proposition 7.4. Let 1 ≤ j ≤ r and let L_j be the Schubert line in Yw. Then
K_{Yw} · L_j = −2 − Σ_{k>j} β_{kj}.
Proof. By definition of e − j , we have
e_j^+ + e_j^− + Σ_{k>j} β_{kj} e_k^+ = 0. (7.3)
By Lemma 7.3, we have L_j = V(τ), with τ = σ ∩ σ′, where σ (respectively, σ′) is generated by {e_l^{ε_l} : 1 ≤ l ≤ r, ε_l = + for all l} (respectively, {e_l^{ε_l} : ε_l = + for 1 ≤ l ≤ r and l ≠ j, ε_j = −}). Hence (7.3) is the wall relation for the curve L_j. Then by (7.2), we see that
D_ρ · L_j = 1 if ρ = ρ_j^+ or ρ_j^−; β_{kj} if ρ = ρ_k^+ and k > j; 0 otherwise.
Since K_{Yw} = −Σ_{ρ ∈ Σ(1)} D_ρ, we get K_{Yw} · L_j = −2 − Σ_{k>j} β_{kj}.
This completes the proof of the proposition. Now onwards we denote the subsequence (i_1, . . . , i_m) by I_{i_1}. Let
D′′_{i_1} := {e_l^{ε_l} : 1 ≤ l ≤ r, where ε_l = + if l ∉ I_{i_1} \ {i_1} and ε_l = − if l ∈ I_{i_1} \ {i_1}}.
Let
D′′′_{i_1} := {e_l^{ε_l} : 1 ≤ l ≤ r, where ε_l = + if l ∉ I_{i_1} and ε_l = − if l ∈ I_{i_1}}.
Proposition 7.5. The curve L I i 1 is given by
L I i 1 = V (τ i 1 ) with τ i 1 = σ i 1 ∩ σ ′ i 1 , where σ i 1 (respectively, σ ′ i 1 ) is the cone generated by D ′′ i 1 (respectively, D ′′′ i 1 ).
Proof. As in the proof of Lemma 7.3, we start with j = i 1 and L i 1 is the Schubert line in Yw i 1 . By Lemma 7.3, we have
L i 1 = V (τ i 1 ) with τ i 1 = σ i 1 ∩ σ ′ i 1 . By definition of L I , we have σ 0 i 2 −1 • · · · • σ 0 i 1 +1 (L i 1 ) = L i 1 in Yw i 2 −1 and σ 1 i 2 • σ 0 i 2 −1 • · · · • σ 0 i 1 +1 (L i 1 ) = L {i 1 ,i 2 } in Yw i 2 .
By repeating the process we conclude that
L I i 1 = V (τ i 1 ) with τ i 1 = σ i 1 ∩ σ ′ i 1 ,
where σ i 1 and σ ′ i 1 are as described in the statement. This completes the proof of the proposition.
Recall NE(X) is the real convex cone in N 1 (X) generated by classes of irreducible curves. The Mori cone NE(X) is the closure of NE(X) in N 1 (X) and it is a strongly convex cone of maximal dimension (see for instance [CLS11, Chapter 6, page 293]). Now we describe the Mori cone of the toric limit Yw in terms of the curves L I i j 's defined above. For this we need the following notation (see also [Cha]). Fix 1 ≤ i ≤ r. Define:
(1) Let r ≥ j > j_1 = i ≥ 1 and define a_{1,j} := β_{j_1 j}.
(2) Let r ≥ j_2 > j_1 be the least integer such that a_{1,j_2} > 0; then define, for j > j_2, a_{2,j} := β_{i j_2} β_{j_2 j} − β_{ij}.
(3) Let k > 2 and let r ≥ j_k > j_{k−1} be the least integer such that a_{k−1,j_k} < 0; then inductively define, for j > j_k, a_{k,j} := −a_{k−1,j_k} β_{j_k j} + a_{k−1,j}.
(4) Let Ĩ_i := {i = j_1, . . . , j_m}.
Example 7.6. Let G = SL(5, C) and let w̃ = s_{β_1} · · · s_{β_7} = s_{α_2} s_{α_1} s_{α_3} s_{α_1} s_{α_2} s_{α_1} s_{α_2}. Let i = 1. Then j_1 = 1 and
(1) a_{1,2} = β_{12} = ⟨β_2, β_1^∨⟩ = ⟨α_1, α_2^∨⟩ = −1;
(2) a_{1,3} = β_{13} = ⟨β_3, β_1^∨⟩ = ⟨α_3, α_2^∨⟩ = −1;
(3) a_{1,4} = β_{14} = ⟨β_4, β_1^∨⟩ = ⟨α_1, α_2^∨⟩ = −1;
(4) a_{1,5} = β_{15} = ⟨β_5, β_1^∨⟩ = ⟨α_2, α_2^∨⟩ = 2;
(5) a_{1,6} = β_{16} = ⟨β_6, β_1^∨⟩ = ⟨α_1, α_2^∨⟩ = −1;
(6) a_{1,7} = β_{17} = ⟨β_7, β_1^∨⟩ = ⟨α_2, α_2^∨⟩ = 2.
Then by definition of j_2, we have j_2 = 5 and
(1) a_{2,6} = β_{15} β_{56} − β_{16} = ⟨β_5, β_1^∨⟩⟨β_6, β_5^∨⟩ − ⟨β_6, β_1^∨⟩ = −1;
(2) a_{2,7} = β_{15} β_{57} − β_{17} = 2.
Then by definition of j_3, we have j_3 = 6 and a_{3,7} = −a_{2,6} β_{67} + a_{2,7} = −(−1)(−1) + 2 = 1. Therefore, we get Ĩ_1 = {1, 5, 6}.
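The recursion in (1)-(4) is easy to mechanize. The following short Python sketch (ours, not part of [Cha] or [PK16]) encodes the word of Example 7.6 through the Cartan matrix of A_4 and reproduces Ĩ_1 = {1, 5, 6}; the function names, the list encoding of the word and the final line, which evaluates the intersection numbers of Proposition 7.4 from the same data, are our own choices.

C = [[ 2, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  2]]        # Cartan matrix of A_4 (G = SL(5, C)); it is symmetric, so conventions are harmless here

word = [2, 1, 3, 1, 2, 1, 2]  # w~ = s_{a2} s_{a1} s_{a3} s_{a1} s_{a2} s_{a1} s_{a2}
r = len(word)

def beta(i, j):
    # beta_{ij} = <beta_j, beta_i^vee> for positions 1 <= i, j <= r in the word
    return C[word[i - 1] - 1][word[j - 1] - 1]

def I_tilde(i):
    # index set I~_i produced by the recursion (1)-(4)
    J = [i]
    a = {j: beta(i, j) for j in range(i + 1, r + 1)}
    piv = next((j for j in sorted(a) if a[j] > 0), None)        # step (2): least j with a_{1,j} > 0
    first = True
    while piv is not None:
        J.append(piv)
        if first:   # a_{2,j} = beta_{i,j2} * beta_{j2,j} - beta_{i,j}
            a = {j: a[piv] * beta(piv, j) - a[j] for j in a if j > piv}
            first = False
        else:       # a_{k,j} = -a_{k-1,jk} * beta_{jk,j} + a_{k-1,j}
            a = {j: -a[piv] * beta(piv, j) + a[j] for j in a if j > piv}
        piv = next((j for j in sorted(a) if a[j] < 0), None)    # steps (3)+: least j with a_{k-1,j} < 0
    return J

print(I_tilde(1))                                               # -> [1, 5, 6], as in Example 7.6
# Proposition 7.4: K_{Yw} . L_j = -2 - sum_{k > j} beta_{kj}
print([-2 - sum(beta(k, j) for k in range(j + 1, r + 1)) for j in range(1, r + 1)])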
Example 7.7. We use Example 7.6, for i = 1, we have I 1 = {1, 5, 6}. Then D ′′ 1 = {e + 1 , e + 2 , e + 3 , e + 4 , e − 5 , e − 6 , e + 7 } and D ′′′ 1 = {e − 1 , e + 2 , e + 3 , e + 4 , e − 5 , e − 6 , e + 7 }.
Fix 1 ≤ i ≤ r. Let I i :=Ĩ i = {i = j 1 , j 2 , . . . , j m } where j k 's are as above. With this notation we prove the following (see [PK16,Theorem 22]):
Theorem 7.8. The set {L I i : 1 ≤ i ≤ r} of classes of curves forms a basis of N 1 (Yw) Z and every torus invariant curve in N 1 (Yw) lie in the cone generated by {L I i : 1 ≤ i ≤ r}.
Proof. By [Cha,Proposition 4.16], for 1 ≤ i ≤ r the curve r(P i ) (see Section 2 for the definition of r(P i )) is given by r(P i ) = [V (τ i )], where τ i = σ i ∩ σ ′ i and σ i (respectively, σ ′ ) is generated by D ′′ i (respectively, D ′′′ i ). From Proposition 7.5, we see that the class of the curve L I i is r(P i ) in N 1 (Yw) Z . By [Cha,Theorem 4.7], we have
NE(Yw) = Σ_{i=1}^{r} R_{≥0} r(P_i).
Also by [Cha,Corollary 4.8], the set {r(P i ) : 1 ≤ i ≤ r} forms a basis of N 1 (Yw) Z . Hence we conclude the assertion of the theorem.
We recall some definitions: Let V be a finite dimensional vector space over R and let K be a (closed) cone in V . A subcone Q in K is called extremal if u, v ∈ K, u + v ∈ Q then u, v ∈ Q. A face of K is an extremal subcone. A one-dimensional face is called an extremal ray. Note that an extremal ray is contained in the boundary of K. Then we have (see [PK16,Theorem 30]):
Corollary 7.9. The extremal rays of the toric limit Yw are precisely the curves L I i for 1 ≤ i ≤ r.
Proof. This follows from the proof of the Theorem 7.8. Let X be a smooth projective variety. An extremal ray R in the Mori cone NE(X) ⊂ N 1 (X) is called Mori if R · K X < 0, where K X is the canonical divisor in X. Recall that NE(Yw) is a strongly convex rational polyhedral cone of maximal dimension in N 1 (Yw). We have the following (see [PK16,Theorem 35]):
Corollary 7.10. Fix 1 ≤ i ≤ r. The class of the curve L_{I_i} is a Mori ray if and only if either |γ_{P_i}(1)| = 0, or |γ_{P_i}(1)| = 1 with c_j = 1 for γ_j ∈ γ_{P_i}(1).
Proof. Since Yw is a Bott tower (see Corollary 4.2), the result follows from Proposition 7.5 and [Cha, Theorem 8.1].
Now we prove a general result for smooth projective toric varieties.
Lemma 7.11. Let X be a smooth projective toric variety of dimension r. Then X is Fano if and only if every extremal ray is Mori.
Proof. By [CLS11, Theorem 6.3.20] (Toric Cone Theorem), NE(X) = Σ_{τ ∈ Σ(r−1)} R_{≥0} [V(τ)]. (7.4)
If X is Fano, then by definition −K_X is ample. By the toric Kleiman criterion for ampleness [CLS11, Theorem 6.3.13], we see that −K_X · V(τ) > 0 for all τ ∈ Σ(r − 1). Then K_X · V(τ) < 0 for all τ ∈ Σ(r − 1). In particular, every extremal ray is Mori.
Conversely, let R_{≥0}[V(τ)] be an extremal ray; by assumption it is a Mori ray. Then by definition of a Mori ray, we have K_X · V(τ) < 0. This implies −K_X · V(τ) > 0. By (7.4), NE(X) is a polyhedral cone and hence the extremal rays generate the cone NE(X). Hence we see that −K_X · C > 0 for all classes of curves [C] in NE(X). Again by the toric Kleiman criterion for ampleness, we conclude that −K_X is ample and hence X is Fano.
Then we have the following (see [PK16,Corollary 36]):
Corollary 7.12. The toric limit Yw is Fano if and only if every extremal ray in NE(Yw) is Mori.
where β_{ij} := ⟨β_j, β_i^∨⟩. The following proposition will give the description of the fan of the toric variety Yw (see [Pas10, Proposition 1.4]).
AsP(E ) ≃ P(L ′ ⊗ E ) for any line bundle L ′ on Yw′ (see[Har77, Lemma 7.9]), we can assume without loss of generality E = O Yw′ ⊕ L for some unique line bundle L on Yw′. Hence Yw ≃ P(O Yw′ ⊕ L ) and this completes the proof of the lemma.
Definition 4.3. A Bott tower of height r is a sequence of projective bundles
the Bott towers bijectively correspond to the upper triangular matrices with integer entries (see [Civ05, Section 3]). Here the upper triangular matrix M_w corresponding to Yw is the matrix with 1's on the diagonal, entry β_{ij} in position (i, j) for i < j, and 0 below the diagonal (first row 1, β_{12}, β_{13}, . . . , β_{1r}; second row 0, 1, β_{23}, . . . , β_{2r}; and so on).
X_u = Z(w) for u ≠ 0. Let L = {L_u : u ∈ A¹} be a line bundle on π : X → A¹ such that L_0 is an ample line bundle on Yw. Note that the ampleness of a line bundle is an open condition for the proper morphism π, i.e. there exists an open subset U in A¹ containing 0 such that L_u is an ample line bundle on X_u for all u ∈ U (see [Laz04, Theorem 1.2.17]). Hence we can identify Amp(Yw) with a subcone of Amp(Z(w)). 5.2. Vanishing results. In [Pas10], B. Pasquier obtained vanishing theorems for the cohomology of certain line bundles on BSDH varieties, by using combinatorics of the toric limit (see [Pas10, Theorem 0.1]). Here we see some vanishing results for the cohomology of certain line bundles on BSDH varieties. Let 1 ≤ i ≤ r, define h_i
Z_0 := Z(w) \ ∂Z(w). Note that Z_0 is an open affine subset of Z(w). Then by Lemma 6.2, ∂Z(w) is big. Since O(−K_{Z(w)}) = O(∂Z(w)) ⊗ L(δ)
D is a normal crossing divisor, the pair (X, D) is log Fano if and only if ⌊D⌋ = 0 and −(K X + D) is ample (see [KM08, Lemma 2.30, Corollary 2.31 and Definition 2.34]). We prove, Corollary 6.7. The pair (Z(w), D ′ ) is log Fano if f i > 0 for all 1 ≤ i ≤ r. Proof. By definition of D ′ , the pair (Z(w), D ′ ) is log Fano if and only if −(K
Fano, Weak Fano and log Fano BSDH varities 6.1. Fano and weak Fano properties. In this section, we observe that Fano and weak Fano properties for BSDH variety Z(w) depend on the given expressionw. We use the terminology from Section 1. First we discuss the conditions I and II with some examples. We use the ordering of simple roots as in[Hum72, Page 58].Hence by (5.1), Theorem 3.2 and by semi-continuity theorem (see [Har77, Theorem 12.8]),
we get
H j (Z(w), D ′ ) = 0 for all j > 0.
6.
Acknowledgements: I would like to thank Michel Brion for valuable discussions and many critical comments.
D. Anderson, Effective divisors on Bott-Samelson varieties, arXiv preprint arXiv:1501.00034 (2014).
D. Anderson and A. Stapledon, Schubert varieties are log Fano over the integers, Proceedings of the American Mathematical Society 142 (2014), no. 2, 409-411.
C. M. Barton, Tensor products of ample vector bundles in characteristic p, American Journal of Mathematics 93 (1971), no. 2, 429-438.
F. Bien and M. Brion, Automorphisms and local rigidity of regular varieties, Compositio Mathematica 104 (1996), no. 1, 1-26.
R. Bott and H. Samelson, Applications of the theory of Morse to symmetric spaces, American Journal of Mathematics (1958), 964-1029.
M. Brion and S. Kumar, Frobenius splitting methods in geometry and representation theory, vol. 231, Springer Science & Business Media, 2007.
B. N. Chary, On Mori cone of Bott towers, preprint, arxiv.org/abs/1706.02139.
B. N. Chary, On Fano and weak Fano Bott-Samelson-Demazure-Hansen varieties, to appear in Journal of Pure and Applied Algebra, https://doi.org/10.1016/j.jpaa.2017.10.006.
B. N. Chary and S. S. Kannan, Rigidity of a Bott-Samelson-Demazure-Hansen variety for PSp(2n, C), Journal of Lie Theory 27 (2017), 435-468.
B. N. Chary, S. S. Kannan, and A. J. Parameswaran, Automorphism group of a Bott-Samelson-Demazure-Hansen variety for non reduced case, in preparation.
B. N. Chary, S. S. Kannan, and A. J. Parameswaran, Automorphism group of a Bott-Samelson-Demazure-Hansen variety, Transformation Groups 20 (2015), no. 3, 665-698.
C. Chevalley, Sur les décompositions cellulaires des espaces G/B, Algebraic Groups and their Generalizations: Classical Methods, American Mathematical Society (1994), 1-23.
Y. Civan, Bott towers, crosspolytopes and torus actions, Geometriae Dedicata 113 (2005), no. 1, 55-74.
D. A. Cox, J. B. Little, and H. K. Schenck, Toric varieties, Graduate Studies in Mathematics, American Mathematical Society, 2011.
M. Demazure, Désingularisation des variétés de Schubert généralisées, Annales scientifiques de l'École Normale Supérieure, vol. 7, Société mathématique de France, 1974, pp. 53-88.
W. Fulton, Introduction to toric varieties, no. 131, Princeton University Press, 1993.
M. Grossberg and Y. Karshon, Bott towers, complete integrability, and the extended character of representations, Duke Mathematical Journal 76 (1994), no. 1, 23-58.
H. C. Hansen, On cycles in flag manifolds, Mathematica Scandinavica 33 (1973), 269-274.
R. Hartshorne, Algebraic geometry, vol. 52, Springer Science & Business Media, 1977.
J. E. Humphreys, Introduction to Lie algebras and representation theory, vol. 9, Springer Science & Business Media, 1972.
S. L. Kleiman, Toward a numerical theory of ampleness, Annals of Mathematics (1966), 293-344.
J. Kollár and S. Mori, Birational geometry of algebraic varieties, vol. 134, Cambridge University Press, 2008.
S. Kumar, Kac-Moody groups, their flag varieties and representation theory, vol. 204, Springer Science & Business Media, 2012.
N. Lauritzen and J. F. Thomsen, Line bundles on Bott-Samelson varieties, Journal of Algebraic Geometry 13 (2004), 461-473.
R. K. Lazarsfeld, Positivity in algebraic geometry I: Classical setting: line bundles and linear series, vol. 48, Springer Science & Business Media, 2004.
V. B. Mehta and A. Ramanathan, Frobenius splitting and cohomology vanishing for Schubert varieties, Annals of Mathematics (2) 122 (1985), no. 1, 27-40.
T. Oda, Convex bodies and algebraic geometry - An introduction to the theory of toric varieties, vol. (3)15, Springer-Verlag, Berlin Heidelberg New York, 1988.
A. J. Parameswaran and P. Karuppuchamy, Toric degeneration of Bott-Samelson-Demazure-Hansen varieties, arXiv preprint arXiv:1604.01998 (2016).
B. Pasquier, Vanishing theorem for the cohomology of line bundles on Bott-Samelson varieties, Journal of Algebra 323 (2010), no. 10, 2834-2847.
A. Ramanathan, Schubert varieties are arithmetically Cohen-Macaulay, Inventiones mathematicae 80 (1985), no. 2, 283-294.
T. A. Springer, Linear algebraic groups, Springer Science & Business Media, 2010.
B. Narasimha Chary, Institut Fourier, UMR 5582 du CNRS, Université de Grenoble Alpes, CS 40700, 38058, Grenoble cedex 09, France. Email: [email protected]
| []
|
[
"Analytic solutions of the Madelung equation",
"Analytic solutions of the Madelung equation"
]
| [
"Imre F Barna \nWigner Research Centre\nHungarian Academy of Sciences\nKonkoly-Thege Miklósút 29 -331121BudapestHungary\n\nELI-HU Nonprofit Kft\nDugonics Tér 13H-6720SzegedHungary\n",
"Mihály A Pocsai \nWigner Research Centre\nHungarian Academy of Sciences\nKonkoly-Thege Miklósút 29 -331121BudapestHungary\n\nInstitute of Physics\nUniversity of Pécs\nIfjúságútja 6H-7624PécsHungary\n",
"L Mátyás \nDepartment of Bioengineering\nSapientia University\nLibertȃtii sq. 1530104Miercurea CiucRomania\n"
]
| [
"Wigner Research Centre\nHungarian Academy of Sciences\nKonkoly-Thege Miklósút 29 -331121BudapestHungary",
"ELI-HU Nonprofit Kft\nDugonics Tér 13H-6720SzegedHungary",
"Wigner Research Centre\nHungarian Academy of Sciences\nKonkoly-Thege Miklósút 29 -331121BudapestHungary",
"Institute of Physics\nUniversity of Pécs\nIfjúságútja 6H-7624PécsHungary",
"Department of Bioengineering\nSapientia University\nLibertȃtii sq. 1530104Miercurea CiucRomania"
]
| []
| We present analytic self-similar solutions for the one, two and three dimensional Madelung hydrodynamical equation for a free particle. There is a direct connection between the zeros of the Madelung fluid density and the magnitude of the quantum potential. | 10.4172/1736-4337.1000271 | [
"https://arxiv.org/pdf/1703.10482v1.pdf"
]
| 17,007,324 | 1703.10482 | 99f16af0150161ffcc2134d41e7d3564002f3617 |
Analytic solutions of the Madelung equation
30 Mar 2017
Imre F Barna
Wigner Research Centre
Hungarian Academy of Sciences
Konkoly-Thege Miklósút 29 -331121BudapestHungary
ELI-HU Nonprofit Kft
Dugonics Tér 13H-6720SzegedHungary
Mihály A Pocsai
Wigner Research Centre
Hungarian Academy of Sciences
Konkoly-Thege Miklósút 29 -331121BudapestHungary
Institute of Physics
University of Pécs
Ifjúságútja 6H-7624PécsHungary
L Mátyás
Department of Bioengineering
Sapientia University
Libertȃtii sq. 1530104Miercurea CiucRomania
Analytic solutions of the Madelung equation
30 Mar 2017(Dated: March 31, 2017)1
We present analytic self-similar solutions for the one, two and three dimensional Madelung hydrodynamical equation for a free particle. There is a direct connection between the zeros of the Madelung fluid density and the magnitude of the quantum potential.
I. INTRODUCTION
Finding classical physical basements of quantum mechanics is a great challenge since the advent of the theory. Madelung was one among the firsts who gave one explanation, this was the hydrodynamical foundation of the Schrödinger equation [1,2]. His exponential transformation simply indicates that one can model quantum statistics hydrodynamically.
The transformed equation has an attractive feature that the Planck's constant appears only once, as the coefficient of the quantum potential or pressure. Thus, the fluid dynamicist can gather experience of its effects by translating some of the elementary situations of the quantum theory into their corresponding fluid mechanical statements and vice versa.
The quantum potential also appears in the de Broglie-Bohm pilot wave theory [3,4] (in another context), which is a non-mainstream attempt to interpret quantum mechanics as a deterministic non-local theory. In the case of ℏ → 0 the Euler equation transforms to the Hamilton-Jacobi equation.
Later, many authors tried to generalize this Madelung's description for more compound quantum mechanical systems (like particles in external electromagnetic fields or with spin 1/2 ) [5]. Takabayashi [6] tried to interpret the Ansatz of Madelung as an ensemble of trajectories. Schönberg [7] developed a new type of hydrodynamical model for the quantum mechanics for any values of spin. The quantum potential appears as a combination of a pressure term arising from the turbulence.
In spite of the flaws of Madelung hydrodynamics (it cannot give a proper solution of the problem of atomic eigenstates and to the quantum description of emission or absorption processes), this approach turns to be fruitful in a number of applications like the stochastic quantum mechanics [8], quantum cosmology [9], description of quantum-like systems [10], the coherent properties of high-energy charged particle beams [11,12].
Terlecki applied the fluid dynamical interpretation of the quantum mechanical probability density and current for the trajectory method and evaluated the solution of the timedependent Schrödinger equation in atomic physics and calculated ionization and electron transfer cross sections for proton-hydrogen collision [13].
As an interesting peculiarity, Wallstrom showed with mathematical means that the initial-value problem of the Madelung equation is not well-defined and additional conditions are needed [14].
Tsubota summarized the hydrodynamical descriptions of quantum condensed fluids such as superfluid helium and Bose-Einstein condensates as quantum hydrodynamics based on the original Ansatz of Madelung [15].
Nowadays, hydrodynamical description of quantum mechanical systems is a popular technical tool in numerical simulations. Review articles on quantum trajectories can be found in a booklet edited by Huges in 2011 [16].
From general concepts as the second law of thermodynamics a weakly non-local extension of ideal fluid dynamics can be derived which leads to the Schrödinger-Madelung equation as well [17].
In the following study we investigate the Madelung equation with the self-similar Ansatz and present analytic solutions with discussion.
This way of investigation is a powerful method to study the global properties of the solutions of various non-linear partial differential equations(PDEs) [18]. Self-similar Ansatz describes the intermediate asymptotic of a problem: it is hold when the precise initial conditions are no longer important, but before the system has reached its final steady state. This is much more simpler than the full solutions and so easier to understand and study the different regions of the parameter space. A final reason for studying them is that they are solutions of a system of ordinary differential equations(ODEs) and hence do not suffer the extra inherent numerical problems of the full PDEs. In some cases self-similar solutions help to understand diffusion-like properties or the existence of compact supports of the solution.
In the last years we successfully applied the multi-dimensional generalization of the self-similar Ansatz to numerous viscous fluid equations [19,20], ending up with a book chapter [21].
II. THEORY AND RESULTS
Following the original paper of Madelung [2] the time-dependent Schrödinger equation
reads ΔΨ − (8π²m/h²) U Ψ − i (4πm/h) ∂Ψ/∂t = 0, (1)
where Ψ, U, m, h are the wave function, potential, mass and Planck's constant, respectively.
Taking the following Ansatz Ψ = √ ρe iS where ρ(x, t) and S(x, t) are time and space dependent functions. Substituting this trial function into Eq. (1) going through the derivations the real and the imaginary part give us the following continuity and Euler equations with the form of
∂ρ/∂t + ∇·(ρv) = 0, ∂v/∂t + (v·∇)v = (h²/(8π²m²)) ∇(Δ√ρ/√ρ) − (1/m)∇U, (2)
with v = (ℏ/m)∇S. The ρ is the density of the investigated fluid and v is the velocity field.
Madelung also showed that this is a rotation-free flow. The transformed equations has an attractive feature that the Planck's constant appears only once, at the coefficient of the quantum potential or pressure, which is the first term of the right hand side of the second equation. Note, that these are most general vectorial equation for the velocity field v which means that one, two or three dimensional motions can be investigated as well.
In the following we will consider the two dimensional flow motion v = (u, v) in Cartesian coordinates without any external field U = 0. The functional form of the three and one dimensional solutions will be mentioned briefly as well.
We are looking for the solution of Eqs. (2) with the self-similar Ansatz, which is well-known from [18]:
ρ(x, y, t) = t^{−α} f((x + y)/t^β) := t^{−α} f(η), u(x, y, t) = t^{−δ} g(η), v(x, y, t) = t^{−ε} h(η), (3)
where f, g and h are the shape functions of the density and the velocity field, respectively.
The similarity exponents α, β, δ, ε are of primary physical importance since α, δ, ε represent the rate of decay of the magnitude of the shape functions while β represents the spreading.
More about the general properties of the Ansatz can be found in our former papers [19,20].
Except some pathological cases all positive similarity exponents mean physically relevant dispersive solutions with decaying features at x, y, t → ∞. Substituting the Ansatz (3) into (2) and going through some algebraic manipulation the following ODE system can be obtained for the shape functions
−(1/2)f − (1/2)f′η + f′g + fg′ + f′h + fh′ = 0,
−(1/2)g − (1/2)g′η + gg′ + hg′ = (ℏ²/(2m²)) (f′³/(2f³) − f′f″/f² + f‴/(2f)),
−(1/2)h − (1/2)h′η + gh′ + hh′ = (ℏ²/(2m²)) (f′³/(2f³) − f′f″/f² + f‴/(2f)). (4)
Note that the particle mass appears in the denominator of the quantum potential term which is consistent with the experience of regular quantum mechanics that quantum features are relevant at small particle masses.
All the similarity exponents have the fixed value of +1/2, which is usual for regular heat conduction, diffusion or for Navier-Stokes equations [21]. Note that the two remaining free parameters are the mass of the particle m and ℏ, which is the Planck's constant divided by 2π. For better transparency we fix ℏ = 1.
At this point it is worth to mention, that the obtained ODE for the density shape function is very similar to Eq. (5) for different space dimensions, the only difference is a constant in the last term. For one, two or three dimensions the denominator has a factor of 1,2 or 3, The solution of (5) can be expressed with the help of the Bessel functions of the first and second kind [24] and has the following form of
f(η) = 2 [−J_{1/4}(√2 m η²/8)·c₁ + Y_{1/4}(√2 m η²/8)·c₂]² / (η³ m² [J_{−3/4}(√2 m η²/8)·Y_{1/4}(√2 m η²/8) − J_{1/4}(√2 m η²/8)·Y_{−3/4}(√2 m η²/8)]²), (6)
where c₁ and c₂ are the usual integration constants. The correctness of this solution can be easily verified via back substitution into the original ODE.
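As a numerical cross-check (ours, not part of the original derivation), the following Python sketch transcribes Eq. (6) with SciPy's Bessel functions and plots f(η) for the two masses used in Fig. 1; the grid, the plotting choices and ℏ = 1 are our own assumptions.

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import jv, yv

def f_shape(eta, m, c1=1.0, c2=1.0):
    # direct transcription of Eq. (6) with hbar = 1
    z = np.sqrt(2.0) * m * eta**2 / 8.0
    num = 2.0 * (-c1 * jv(0.25, z) + c2 * yv(0.25, z))**2
    den = eta**3 * m**2 * (jv(-0.75, z) * yv(0.25, z) - jv(0.25, z) * yv(-0.75, z))**2
    return num / den

eta = np.linspace(0.05, 12.0, 4000)        # eta = 0 is excluded, where the formula is a 0/0 limit
for m in (1.0, 0.5):
    plt.plot(eta, f_shape(eta, m), label="m = %.1f" % m)
plt.xlabel("eta"); plt.ylabel("f(eta)"); plt.legend(); plt.show()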
To imagine the complexity of these solutions Figure 1 presents f (η) for various m values.
It has a strong decay with a stronger and stronger oscillation at large arguments. The function is positive for all values of the argument (which is physical for a fluid density), but such oscillatory profiles are completely unknown in regular fluid mechanics [21]. The most interesting feature is the infinite number of zero values, which cannot be interpreted physically for a classical real fluid. Both the J_ν and Y_ν Bessel functions with linear argument form an orthonormal set, therefore integrable over the L² space. In our case, the integral ∫₀^∞ f(η) dη is finite as well; unfortunately, it can be evaluated only by numerical means. This is good news because f(η) is the density function of the original Schrödinger equation. In this sense √f is the fluid mechanical analogue of the real part of the wave function of the free quantum mechanical particle, which can be described with a Gaussian wave packet. To obtain the complete original wave function, the imaginary part has to be evaluated as well. It is trivial from η = (x + y)/t^{1/2} = 2(g + h) that
S = m ∫_{r₀}^{r₁} v dr = m (x + y)² / (4t). (7)
Ψ(x, y, t) = √2 t^{1/4} [−J_{1/4}(√2 m (x+y)²/(8t))·c₁ + Y_{1/4}(√2 m (x+y)²/(8t))·c₂] / ((x+y)^{3/2} m [J_{−3/4}(√2 m (x+y)²/(8t))·Y_{1/4}(√2 m (x+y)²/(8t)) − J_{1/4}(√2 m (x+y)²/(8t))·Y_{−3/4}(√2 m (x+y)²/(8t))]) · e^{i m (x+y)²/(4t)}. (8)
Figure 2 shows the projection of the real part of the wave function onto the x, t sub-space. At small times the oscillations are clear to see; however, at larger times the strong damping is evident.
For arbitrary quantum systems, the wave function can be evaluated according to the Schrödinger equation, however we never know directly how large is the quantum contribution to the classical one. Now, it is possible for a free particle to get this contribution.
Q = (h²/(8π²m²)) ∇(Δ√ρ/√ρ) = (h²/(8π²m²)) ∂/∂η [ −η²m² / (8 (c₁ J_{1/4}[mη²/(4√2)] − c₂ Y_{1/4}[mη²/(4√2)]) ) ]. (9)
Figure 3 shows the shape function of the quantum potential Q(η) compared to the shape function of the density f(η). Note that where the density has zeros the quantum potential is singular. Such singular potentials might appear in quantum mechanics; however, the corresponding wave function should compensate this effect. This is the main message of our study. Unfortunately, we cannot squarely state that this kind of property is true in general for all Madelung quantum potentials (e.g. for other Ansätze). Therefore additional investigations should be made to clarify this conjecture.
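A rough numerical illustration of this point can be obtained by differentiating √f on a grid. The sketch below is our own and keeps only the η-profile of (√f)″/√f that enters Eq. (9), with all prefactors omitted and m = c₁ = c₂ = 1; it should report the largest values of this profile clustering around the zeros of f, in line with Fig. 3.

import numpy as np
from scipy.special import jv, yv

def f_shape(eta, m=1.0, c1=1.0, c2=1.0):
    # same transcription of Eq. (6) as above, hbar = 1
    z = np.sqrt(2.0) * m * eta**2 / 8.0
    num = 2.0 * (-c1 * jv(0.25, z) + c2 * yv(0.25, z))**2
    den = eta**3 * m**2 * (jv(-0.75, z) * yv(0.25, z) - jv(0.25, z) * yv(-0.75, z))**2
    return num / den

eta = np.linspace(0.1, 10.0, 20001)
h = eta[1] - eta[0]
s = np.sqrt(f_shape(eta))                                      # eta-profile of sqrt(rho)
ratio = (s[2:] - 2.0 * s[1:-1] + s[:-2]) / (h**2 * s[1:-1])    # (sqrt f)'' / sqrt f, prefactors omitted
idx = np.argsort(-np.abs(ratio))[:5]
print("eta values where the profile blows up:", np.sort(eta[1:-1][idx]))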
III. SUMMARY AND OUTLOOK
After reviewing the historical development and interpretation of the Madelung equation, we introduced the self-similar Ansatz, which is a not-so-well-known but powerful tool to investigate non-linear PDEs. The free particle Madelung equation was investigated in two dimensions with this method (the one and three dimensional solutions were mentioned as well). We found analytic solutions for the fluid density, the velocity field and the original wave function. All can be expressed in terms of Bessel functions. The classical fluid density has interesting properties: it oscillates and has an infinite number of zeros, which is quite unusual and has not yet been seen in such analyses.
To our knowledge there are no direct analytic solutions available for the Madelung equation. Baumann and Nonnenmacher [22] exhaustively investigated the Madelung equation with Lie transformations and presented numerous ODEs; however, no exact and explicit solutions are presented in a transparent way. Additional numerous studies exist where the non-linear Schrödinger equation is investigated with the Madelung Ansatz, ending up with solitary wave solutions [23]; however, that is not the field of our present interest.
The first continuity equation can be integrated, giving us the mass as a conserved quantity and the parallel solution for the velocity fields η = 2(g + h) + c₀, where c₀ is the usual integration constant, which we set to zero. (A non-zero c₀ remains an additive constant in the final ODE (5) as well.) It is interesting, and unusual (in our practice), that even the Euler equation can be integrated once, giving us another constant of motion. For classical fluids this is not the case. After some additional algebraic steps a decoupled ODE can be derived for the shape function of the density: 2 f f″ − (f′)² + m²η²f²/(2ℏ²) = 0. (5)
space dependent potentials U (like a dipole, or harmonic oscillator interaction) in the original Schrödinger equation would generate an extra fourth term in Eq. (5) like η, f (η), or η 2 . Unfortunately, no other analytic closed form solutions can be found for such ODEs.
Fig. 1. The solution of Eq. (6) (c₁ = c₂ = 1); the yellow curve is for m = 1 and the blue curve is for m = 0.5.
Fig. 2. The projection of the real part of the wave function Ψ(x, t) from Eq. (8) for m = 1 = c₁ = c₂.
The presented form of the shape function cannot be simplified further; only the Y_ν's can be expressed with the help of the J_ν's [24]. Applying the recurrence formulas, the orders of the Bessel functions can be shifted as well. With the parabolic cylinder functions the Bessel functions with 3/4 and 1/4 orders can be expressed, too. Unfortunately, all these formulas and manipulations are completely useless now.
gives the Gaussian wave function for a freely propagating particle.) With the Madelung Ansatz we got the classical fluid dynamical analogue of the motion with the physical parameters ρ(x, y, t), v(x, y, t) which can be calculated analytically via the self-similar Ansatz thereafter original wave function Ψ(x, y, t) of the quantum problem can be evaluated as well. The magnitude of the quantum potential Q directly informs us where quantum effects are relevant. This can be evaluated from the classical density of the Madelung equation (2) via
Fig. 3. The shape function of the density f(η) is the blue curve and the shape function of the quantum potential Q(η) is the yellow solid line. All the corresponding parameters are c₁ = c₂ = m = 1.
[1] E. Madelung, Naturwissenschaften 14, 1004 (1926).
[2] E. Madelung, Z. Phys. 40, 332 (1927).
[3] L. de Broglie, Journ. de Phys. et la Rad. 38, 803 (1927).
[4] D. Bohm, Phys. Rev. 15, 166 (1952).
[5] L. Jánossy, Z. Physik 169, 79 (1962).
[6] T. Takabayashi, Prog. Theor. Phys. 8, 143 (1952).
[7] M. Schönberg, Il Nuovo Cimento 12, 103 (1954).
[8] G. Auletta (ed.), Foundation and Interpretations of Quantum Mechanics, World Scientific, Singapore, 2000.
[9] J. C. Vink, Nuclear Phys. B 369, 707 (1992).
[10] R. Fedele and P. K. Shukla (eds.), Quantum-like Models and Coherent Effects, World Scientific, Singapore, 1995.
[11] R. Fedele, D. Anderson and M. Lisak, Eur. Phys. J. B 49, 275 (2006).
[12] P. Chen (ed.), Quantum Aspects of Beam Physics, World Scientific, Singapore, 2002.
[13] G. Terlecki, N. Grün and W. Scheid, J. Phys. B: At. Mol. Phys. 17, 3719 (1984).
[14] T. C. Wallstrom, Phys. Lett. A 184, 229 (1994).
[15] M. Tsubota, M. Kobayashi and H. Takeuchi, Phys. Rep. 522, 191 (2013).
[16] K. H. Hughes and G. Parlant (eds.), Quantum Trajectories, CCP6, Daresbury Laboratory, 2011, ISBN 978-0-9545289-9-7, [http://www.ccp6.ac.uk/booklets/CCP6-2011 Quantum Trajectories.pdf].
[17] P. Ván and T. Fülöp, Proc. R. Soc. A 462, 541 (2006).
[18] L. Sedov, Similarity and Dimensional Methods in Mechanics, CRC Press, 1993.
[19] I. F. Barna and L. Mátyás, Fluid Dyn. Res. 46, 055508 (2014).
[20] I. F. Barna and L. Mátyás, Chaos, Solitons and Fractals 78, 249 (2015).
[21] D. Campos, Handbook on Navier-Stokes Equations, Theory and Applied Analysis, Nova Publishers, New York, 2017, Chapter 16, pp. 275-304.
[22] G. Baumann and T. F. Nonnenmacher, Journ. Math. Phys. 28, 1250 (1987).
[23] D. Grecu, T. Alexandru, A. T. Grecu and S. De Nicola, Journal of Nonlin. Math. Phys. 15, 209 (2008).
[24] F. W. J. Olver, D. W. Lozier, R. F. Boisvert and C. W. Clark, NIST Handbook of Mathematical Functions, Cambridge University Press, 2010.
|
[
"A model for calcium-mediated coupling between the membrane activity and the clock gene expression in the neurons of the suprachiasmatic nucleus",
"A model for calcium-mediated coupling between the membrane activity and the clock gene expression in the neurons of the suprachiasmatic nucleus"
]
| [
"J M Casado \nArea de Física Teórica\nUniversidad de Sevilla Apartado Correos 1085\n41080SevillaSpain\n",
"M Morillo \nArea de Física Teórica\nUniversidad de Sevilla Apartado Correos 1085\n41080SevillaSpain\n"
]
| [
"Area de Física Teórica\nUniversidad de Sevilla Apartado Correos 1085\n41080SevillaSpain",
"Area de Física Teórica\nUniversidad de Sevilla Apartado Correos 1085\n41080SevillaSpain"
]
| []
| Rhythms in electrical activity in the membrane of cells in the suprachiasmatic nucleus (SCN) are crucial for the function of the circadian timing system, which is characterized by the expression of the so-called clock genes. Intracellular Ca 2+ ions seem to connect, at least in part, the electrical activity of SCN neurons with the expression of clock genes. In this paper, we introduce a simple mathematical model describing the linking of membrane activity to the transcription of one gene by means of a feedback mechanism based on the dynamics of intracellular calcium ions. | null | [
"https://arxiv.org/pdf/1503.00908v2.pdf"
]
| 14,169,694 | 1503.00908 | 66c828733c56d041b22c29ba5b38db4907622863 |
A model for calcium-mediated coupling between the membrane activity and the clock gene expression in the neurons of the suprachiasmatic nucleus
J M Casado
Area de Física Teórica
Universidad de Sevilla Apartado Correos 1085
41080SevillaSpain
M Morillo
Area de Física Teórica
Universidad de Sevilla Apartado Correos 1085
41080SevillaSpain
A model for calcium-mediated coupling between the membrane activity and the clock gene expression in the neurons of the suprachiasmatic nucleus
Circadian rhythmsHindmarsh-Rose modelGoodwin modelintracellular calcium dynamics
Rhythms in electrical activity in the membrane of cells in the suprachiasmatic nucleus (SCN) are crucial for the function of the circadian timing system, which is characterized by the expression of the so-called clock genes. Intracellular Ca 2+ ions seem to connect, at least in part, the electrical activity of SCN neurons with the expression of clock genes. In this paper, we introduce a simple mathematical model describing the linking of membrane activity to the transcription of one gene by means of a feedback mechanism based on the dynamics of intracellular calcium ions.
Introduction
Circadian rhythms arise from the cooperative action of a number of endogenous biological oscillators generating daily patterns of many physiological and behavioral processes that persist even in the absence of the forcing provided by the external light-dark cycle. The master circadian clock in mammalians is located in the suprachiasmatic nucleus (SCN), a neuronal structure located in the anterior hypothalamus. The cells of the SCN behave as a set of cooperative autonomous oscillators with slightly distributed frequencies yielding a global circadian (∼24-hour period) rhythm. Thus the basic oscillatory mechanism leading to the emergence of circadian rhythms has an intracellular origin that relies on the negative self-regulation of gene expression through transcriptional/translational feedback loops. In the last few years, a number of genes involved in such a regulatory mechanism has been identified [1].
On the other hand, the neurons in the SCN show a characteristic firing pattern that changes dramatically with the circadian cycle. They are thought to encode the time of the day by adjusting their firing frequency to high rates during the day and lower ones at night. Different studies carried out in recent times both in Drosophila and in mammals suggest that the electrical activity of SCN cells provides the driving of the molecular clockwork. Also, keeping the SCN cells within an appropriate voltage range may be required for the generation of circadian rhythmicity of clock gene expression at the single cell level [1]. Thus, a fundamental question in circadian biology is how the electrical activity may regulate clock gene expression and, conversely, how this latter process may alter the electrical activity in SCN neurons.
The evidence accumulated in recent years suggests that the effect of the electrical activity on clock gene expression by SCN cells is probably mediated, at least in part, by Ca 2+ ions. In fact, a close relationship between electrical activity and Ca 2+ levels has been observed. Resting levels of Ca 2+ in SCN neurons exhibit a circadian rhythm that has been detected by using a calcium sensitive dye. During the peak in firing at midday, SCN neurons show resting Ca 2+ levels of around 150 mM, but these levels drop to about 75 mM during times of inactivity. The action potential itself is an important source of Ca 2+ in the SCN, regulating Ca 2+ influx into the soma through the opening of voltage-sensitive calcium channels [1]. This feature has been shown most clearly by a recent work in which a Ca 2+ levels and the firing of the SCN neurons were measured simultaneously [2]. Data from this study show that driving the frequency of action potentials in the SCN neurons to 5-10 Hz (daytime levels) induces a rise in somatic Ca 2+ levels. This effect can be attenuated by the application of a L-type Ca 2+ channel blocker [2].
In addition to the relative contribution of Ca 2+ influx to intracellular calcium levels, SCN neurons also have a rhythmically regulated reservoir of calcium that is not driven by membrane events. Both processes, inflow/outflow of calcium across the cell membrane and fixation/release of Ca 2+ from the calcium store, cooperate to generate intracellular Ca 2+ oscillations that seem to be crucial in driving a robust rhythm in gene expression. Indeed, rhythms in Ca 2+ levels, with its peak occurring during the day, seem to be a general feature of circadian systems [1].
It turns out that the oscillatory behavior of the intracellular concentration of Ca 2+ , and those of the variables of the genetic network as well, have a much slower time scale than that of the variables involved in the generation of action potentials by the cell membrane. Thus, from a theoretical point of view, the problem is to understand how fast variables can control the dynamics of slower ones and vice versa.
A conductance based model of spiking in cells of SCN has recently been presented in [3,4]. In those works, the huge variety of ionic currents that contribute to the membrane excitability was modeled in terms of the sodium, potassium and leakage currents characteristic of the Hodgkin-Huxley formalism plus a specific calcium current. They describe the periodic alternation between silent and excited states of the neuronal membrane in terms of direct and inverse Hopf bifurcations induced by the externally imposed time dependence of the ionic conductances. Nonetheless, in those works, no attempt is made to link this driving to changes in the intracellular level of either Ca 2+ or of any other substance.
The aim of this paper is to suggest a plausible mechanism linking the electrical activity in the cell membrane to the circadian rhythms of the genes expressions, taking into account the two very different time scales associated to the membrane voltage (fast) and the genetic (slow) processes. The basic ingredient is to admit that the intracellular level of Ca 2+ controls both the firing frequency and the transcription of a clock gene. The control of the firing frequency is achieved by the free Ca 2+ concentration that, in turn, is controlled by the Ca 2+ in the cell cytoplasm coming from reservoirs. The reservoir Ca 2+ concentration varies in a much longer time scale than the intracellular Ca 2+ one. At the same time, the intracellular Ca 2+ concentration evolves slowly compared to the other variables characterizing the membrane electrical activity. On the other hand, the control on the circadian variables is established by assuming that the activation of the clock gene expression is proportional to the free Ca 2+ concentration.
Ca 2+ controlled electrical activity
We take the electrical activity taking place in the neuron membrane to be described by the two-dimensional system
ẋ = y − ax³ + bx² + q − sz, (1)
ẏ = c − dx² − y. (2)
The variable x represents the (dimensionless) voltage across the cell membrane and y is a recovering variable that describes the currents restoring the polarity of the membrane after the emission of an action potential; q is a control parameter. In the first of these equations, z stands for the free calcium concentration inside the cell, whose dynamics is supposed to be governed by the reactions
Z → ∅ (rate k₂), ∅ → Z (rate k(x)), Z_R → Z (rate k₃). (3)
The first equation describes the outflow of Ca²⁺ from the cell whereas the second one expresses the inflow of these same ions while the voltage-sensitive calcium channels are open during the membrane activation. Thus, the rate of this process is made dependent on the voltage. The last reaction describes the release of calcium ions from an intracellular reservoir described by the variable z_R. The change in Ca²⁺ levels induced by the reservoir seems to be a critical part of the output pathway by which intracellular processes drive rhythms in neural activity [1]. The kinetics of calcium ions is completed by the reactions
Z → Z_R (rate µk₄), Z_R → ∅ (rate µk₅). (4)
where the first reaction describes the subtraction by the reservoir of free calcium ions and the second one corresponds to the direct loss of calcium ions by the reservoir. This set of reactions allows us to write the system of differential equations for the variables z and z_R:
ż = ε(k(x) − k₂z + k₃z_R), (5)
ż_R = µ(k₄z − k₅z_R). (6)
If we admit that the temporal scale for the evolution of the reservoir is much slower than that corresponding to the free calcium (µ ≪ ε), we have
ż_R = 0 ⇒ z_R = Γ, (7)
Γ being constant on this time scale. Thus, introducing this constant in Eq. (5) and assuming that k(x) = k₁x, we can write
ẋ = y − ax³ + bx² + q − sz, (8)
ẏ = c − dx² − y, (9)
ż = ε(k₁x − k₂z + g), (10)
with g = k 3 Γ. This three-dimensional dynamical system is formally identical to the Hindmarsh-Rose (HR) neuronal model [10]. Here, however, the slow adaptation variable z introduced by these authors is reinterpreted as the intracellular free Ca 2+ concentration, whose level controls the much faster spiking variables x and y. The HR model shares essential qualitative features with several conductance-based models of excitable cells giving rise to squarewave bursting [5,6,7]. All those models involve the interplay between a stable branch of stationary points and a limit cycle through a saddle middle branch in the equilibrium curve. By inducing the periodical transitions between these two coexisting attractors by means of a slow control variable, this dynamical mechanism allows the description of periodical silent and active phases in the neuron behavior. This same dynamical mechanism has recently been invoiced to interpret some experimental findings related to crustacean patterns generators [8].
The HR system with ε ≪ 1 has two very different time scales, one of them associated with a fast subsystem (x and y variables) and the other associated with the dynamics of a slow subsystem (z variable). In fact, ε ≪ 1 implies that on the time scale in which both x and y vary, z can be considered as a constant. Thus, calling sz = γ we are led to analyze the dynamics of the two-dimensional system
ẋ = y − ax³ + bx² + q − γ, ẏ = c − dx² − y, (11)
in terms of the values of the parameter γ. The equilibrium values x * of this two-dimensional system obey a cubic equation resulting from the crossing of both nullclines associated with Eq. (11),
ax 3 + (d − b)x 2 − q − c + γ = 0.(12)
Several regimes can be found as the parameter γ is varied. As seen in Fig.1 for small negative values of γ the only attractor is an equilibrium point (a stable focus) with non-zero x * -values. As γ increases, the system undergoes a Hopf bifurcation rendering unstable the branch of equilibrium points giving rise to a stable limit cycle corresponding to voltage spikes (point B in Fig.1). For still larger γ values, the cubic equation has two unstable values coexisting with a stable one. Furthermore, the limit cycle disappears through a homoclinic bifurcation when it collides with the unstable branch of x * (point A in the same figure). Notice the rather small interval of γ values for which the stable limit cycle coexists with the stable equilibrium point appearing through a saddle-node bifurcation. For still larger values of γ, just a stable stationary branch remains. Following a technique pioneered by J. Rinzel some decades ago [12] a rather useful picture of the HR model dynamics as given by Eq. (10) can be obtained by projecting its orbits onto the xz-plane, and superimposing it with the bifurcation diagram for the fast subsystem described by Eq. (11). In Fig.2, an enlarged version of the coexistence region is presented including the nullcline of the slow subsystem in (10) x = (k 2 z − g)/k 1 (the red dotdashed line). The solid black thin line in this figure corresponds to the bursting oscillations depicted in red in Fig. 1. This curve has been obtained from the x(t) and z(t) obtained by numerically integrating the full system of differential equations (10). It corresponds to the projection on the x − z plane of the global three dimensional attractor. As one can observe in Fig. 2, when z slowly moves to the left (notice thatż < 0 under the z−nullcline), the x variable evolves along the branch of stable equilibria (C → D) until it comes to an end at a saddle-node bifurcation (point D). At this point it is forced to jump to the limit cycle by crossing the z−nullcline and afterwards it is forced to move to the right (nowż > 0) while performing fast oscillations. This oscillatory behavior ends when the limit cycle disappears through the homoclinic bifurcation at A. At this point, the x variable must jump to the stable branch of equilibria and the whole process repeats itself.
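The Z-shaped skeleton of Fig. 1 can be reproduced directly from Eq. (12). The Python sketch below is ours: it solves the cubic for a few sample values of γ and classifies each equilibrium of the fast subsystem (11) through the eigenvalues of its Jacobian. The parameter values are the ones quoted in the caption of Fig. 1; scanning γ on a fine grid and plotting x*(γ) gives the full diagram.

import numpy as np

a, b, c, d, q = 1.0, 3.0, 1.0, 5.0, 0.3            # values quoted for Fig. 1

def branches(gamma):
    # real equilibria x* of Eq. (12) and the linear stability of each one in system (11)
    roots = np.roots([a, d - b, 0.0, gamma - q - c])
    out = []
    for x in roots[np.abs(roots.imag) < 1e-9].real:
        jac = np.array([[-3.0 * a * x**2 + 2.0 * b * x, 1.0],
                        [-2.0 * d * x, -1.0]])
        out.append((round(float(x), 4), bool(np.all(np.linalg.eigvals(jac).real < 0.0))))
    return out

for gamma in (-12.0, -11.0, -9.0, -6.0, -3.0):
    print(gamma, branches(gamma))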
The temporal evolution of x(t) and z(t) has been depicted in Fig.3. As we can see, the free running x(t) variable alternates periodic episodes of fast spiking with silent epochs of much slower evolution. On the other hand, z(t) evolves in the slower time scale. In this case, the temporal scale of the transitions between the silent and the spiking states of the neuron coincides with that of the z variable. Each period of the z oscillations corresponds to a cycle characterized by a rise of the calcium level associated with the burst of spikes and followed by a silent phase in which the concentration of Ca 2+ slowly decays. Note that our model describes the membrane activity without explicitly incorporating the ionic currents. Nonetheless, our equations show a bifurcation structure close to that of more complicated conductance models [4].
Coupling electrical and molecular activities
On the genetic side of the model, we have considered the transcription of only one clock gene. Thus the variables of interest are the intracellular concentration of clock gene mRNA, the corresponding level of a clock protein and that of a transcriptional inhibitor whose action closes the self-regulatory feedback loop. All these variables obey the differential equations of the wellknown Goodwin model [9].
To link the membrane electrical activity to the clock gene expression, we assume that the level of free calcium acts as an activator of the transcription of the mRNA. Then, our complete model is embodied in the six-variable system of differential equationṡ
ẋ = y − ax³ + bx² − sz + q + pY, (13)
ẏ = c − dx² − y, (14)
ż = ε(k₁x − k₂z + g), (15)
Ẋ = ε(αz/(1 + Z^h) − kX), (16)
Ẏ = ε(k_f X − kY), (17)
Ż = ε(k_f Y − kZ) (18)
where X, Y and Z describe the clock mRNA, the clock protein and the inhibitor of the transcription, respectively. Throughout this work we will take a = 1, b = 3, c = 1, d = 5, s = 1, h = 10, k = 2, k_f = 2 and α = 8. Note that the above equations contemplate a direct feedback mechanism by which the molecular clock is able to drive the electrical activity of the cell membrane through the term pY appearing in the voltage equation. At the same time, the variable z, representing the free Ca²⁺ concentration inside the cell, influences the clock gene concentration X through a Hill-like term. For more elaborated circadian models involving several clock genes [13,14] the coupling between the membrane and genetic activities could be formulated along the lines of our methodology by including terms describing the Ca²⁺ activation of the different gene expressions. We first study the case with no feedback from the genetic variables on the membrane ones (p = 0). In Fig. 3 we have depicted the temporal behavior of the voltage variable and that of the intracellular concentration of free calcium as well as the evolution of all the variables of the genetic subsystem. The parameter ε sets the spiking frequency of the system, so that the smaller it is, the higher the frequency of spiking achieved during the bursting phase. The value of q is critical to the character and persistence of these bursts. In order to save computation time we have set the value of ε small, but with no intention to fit quantitatively the experimental values for the neurons of the SCN. One could interpret the behavior observed in Fig. 3 by noting that the start of a burst of spikes activates the inflow of Ca²⁺, thus increasing its intracellular level until it forces the membrane back to its resting electrical state. On the other hand, the slow oscillation in the intracellular calcium concentration forces a slightly delayed oscillation in the level of mRNA (black line in the lower panel), leading to an additional delay in the evolution of the protein level (blue line) as well as in the inhibitor (red line). Note that there is no modulation of the burst amplitude due to the fact that we have chosen p = 0. Now we turn to the case with p ≠ 0. As we can see in Fig. 4, the existence of the pY feedback term in the first equation of the model generates a relatively complex temporal modulation of the voltage variable along the day. Again, the genetic variables oscillate at a slower time scale than the voltage variable and with a certain dephasing among them. As we can see in Fig. 4, the level of the clock gene mRNA peaks at the start of the silent phase of the neuron that takes place during the daylight hours (see Fig. 5) while it decreases during the night.
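For concreteness, the following Python sketch (our own; it is not the authors' code) integrates Eqs. (13)-(18) as written above with SciPy. The fixed parameters are the ones quoted in the text, and ε, q, p, k₁, k₂ and g are taken from the caption of Fig. 3; the initial conditions, the solver and the integration window are our own assumptions. Replacing the last line of parameters with k₁ = 4, k₂ = 0.6, q = 0.5 and p = 12.5 (or 13.5) should correspond to the regimes of Figs. 4 and 5.

import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, s = 1.0, 3.0, 1.0, 5.0, 1.0
h, k, kf, alpha = 10, 2.0, 2.0, 8.0
eps, q, p, k1, k2, g = 0.001, 0.3, 0.0, 1.0, 0.8, 1.23   # values used for Fig. 3

def rhs(t, u):
    x, y, z, X, Y, Z = u
    return [y - a * x**3 + b * x**2 - s * z + q + p * Y,   # Eq. (13)
            c - d * x**2 - y,                              # Eq. (14)
            eps * (k1 * x - k2 * z + g),                   # Eq. (15)
            eps * (alpha * z / (1.0 + Z**h) - k * X),      # Eq. (16)
            eps * (kf * X - k * Y),                        # Eq. (17)
            eps * (kf * Y - k * Z)]                        # Eq. (18)

sol = solve_ivp(rhs, (0.0, 6000.0), [-1.0, 1.0, 1.0, 0.1, 0.1, 0.1],
                method="LSODA", max_step=0.1)
print(sol.y[0].min(), sol.y[0].max())                      # range spanned by the voltage variable x(t)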
Concluding remarks
Experimental evidence indicates that the link between electrical activity and clock gene expression in SCN cells is provided, at least in part, by the intracellular dynamics of Ca 2+ ions associated with the existence of a reservoir of these ions inside the cell. The nature and localization of this intracellular reservoir are unknown at present, although it seems that this role is played by the endoplasmic and sarcoplasmic reticula [15].
In this work we have presented a mathematical model aiming to explore a simple dynamical mechanism for the driving of clock genes by the firing activity of neurons. Some work carried out in the last few years suggests the existence of a control of the firing frequency by clock gene expression, but very little is known about the mechanisms by which genetic oscillations are able to drive rhythms in neural activity [1]. In this work we have therefore explored this issue by assuming that the protein produced by the transcription of the clock gene modulates the neuronal firing. Similar results would have been obtained by using the mRNA variable or the inhibitor to control the neuronal firing. The role played by a calcium reservoir has been taken into account by considering the kinetics of calcium fixation and its release by the reservoir. It seems possible to extend the main ideas elaborated here, linking the genetic and membrane variables of the SCN, by using a conductance-based model for the membrane activity as well as dynamics involving more than one clock gene. Work along these lines is in progress.
Figure 1: Z-shaped steady-state curve x* = x*(γ) for the fast subsystem (Eq. 11). The stable branch is depicted by a continuous line, whereas the dashed lines represent unstable branches of equilibria. The limit cycle appearing at γ = −11.293 and ending at the homoclinic bifurcation at A appears in gray. The projection of the global attractor of the full three-variable system (in red) has been superimposed on this diagram. Parameter values are a = 1.0, b = 3.0, c = 1.0, d = 5.0, q = 0.3, k1 = 1.0, k2 = 0.1 and g = 1.33.
Figure 2: Enlarged version of the coexistence region depicted in Fig. 1. The homoclinic bifurcation occurs at A and the fold bifurcation at D. The red dot-dashed line corresponds to the z-nullcline. The solid black thin line corresponds to the bursting oscillations depicted in red in Fig. 1.
Figure 3: Square-wave bursting of the model with p = 0 as a function of (dimensionless) time t (upper panel). The evolution of X(t) (black), Y(t) (blue) and Z(t) (red), as well as the temporal behavior of z(t) (red thick line), have been depicted in the lower panel. Parameter values are 0.001 for the small timescale parameter, p = 0, q = 0.3, k1 = 1.0, k2 = 0.8 and g = 1.23.
Figure 4: Autonomous behavior of x(t) and z(t) for the model with p ≠ 0 (upper panel). The evolution of X(t) (black), Y(t) (blue) and Z(t) (red) has been depicted in the lower panel. Parameter values are k1 = 4, k2 = 0.6, q = 0.5 and p = 12.5.
Figure 5: Autonomous behavior of the voltage variable (upper panel) and of the concentration of intracellular calcium (lower panel) during a whole day. The silent phase, characterized by a strongly depolarized membrane, appears through an inverse Hopf bifurcation and ends at a direct bifurcation of the same kind. Parameter values are k1 = 4, k2 = 0.6, q = 0.5 and p = 13.5. The time origin has been shifted to coincide with that used in [4].
C. S. Colwell, Nature Neuroscience 12 (2011) 553.
R. Irwin, C. Allen, J. Neurosci. 27 (2007) 11748.
C. K. Shin, D. B. Forger, J. Biol. Rhythms 22 (2007) 445.
M. D. C. Belle, C. O. Diekman, D. B. Forger, H. D. Piggins, Science 326 (2009) 281.
T. R. Chay, J. Keizer, Biophys. J. 42 (1983) 181.
R. J. Butera, J. Rinzel, J. C. Smith, J. Neurophysiol. 82 (1999) 382.
D. Golomb, C. Yue, Y. Yaari, J. Neurophysiol. 96 (2006) 1912.
B. Marin, R. D. Pinto, R. C. Elson, E. Colli, Phys. Rev. E 90 (2014) 042718.
A. Woller, D. Gonze, T. Erneux, Phys. Rev. E 87 (2013) 032722.
J. L. Hindmarsh, R. M. Rose, Nature 296 (1982) 162; Proc. R. Soc. London B 221 (1984) 87.
S. Bernard, D. Gonze, B. Cajavek, H. Herzel, A. Kramer, PLOS Comp. Biol. 3 (2007) 0667.
J. Rinzel, in Ordinary and Partial Differential Equations, eds. B. D. Sleemon and R. J. Jarvis, Springer, New York, 1985.
S. Becker-Weinmann, J. Wolf, H. Herzel, A. Kramer, Biophys. J. 87 (2004) 3023.
J.-C. Leloup, A. Goldbeter, PNAS 100 (2003) 7051.
D. Noble, Y. Rudy, Phil. Trans. R. Soc. Lond. A 359 (2001) 1127.
| []
|
[
"Warped Ricci-flat reductions",
"Warped Ricci-flat reductions"
]
| [
"E Ó Colgáin \nC.N.Yang Institute for Theoretical Physics\nSUNY Stony Brook\n11794-3840NYUSA\n",
"M M Sheikh-Jabbari \nSchool of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nP.O.Box19395-5531TehranIran\n\nDepartment of Physics\nKyung Hee University\n130-701SeoulKorea\n",
"J F Vázquez-Poritz \nPhysics Department\nNew York City College of Technology\nThe City University of New York\n300 Jay Street11201BrooklynNYUSA\n\nKavli Institute for Theoretical Physics China\nInstitute of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina\n",
"H Yavartanoo \nInstitute of Theoretical Physics\nState Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina\n",
"Z Zhang \nPhysics Department\nNew York City College of Technology\nThe City University of New York\n300 Jay Street11201BrooklynNYUSA\n",
"\nDepartment of Mathematics\nUniversity of Surrey\nGU2 7XHGuildfordUK\n",
"\nThe Graduate School and University Center\nThe City University of New York\n365 Fifth Avenue10016New YorkNYUSA\n"
]
| [
"C.N.Yang Institute for Theoretical Physics\nSUNY Stony Brook\n11794-3840NYUSA",
"School of Physics\nInstitute for Research in Fundamental Sciences (IPM)\nP.O.Box19395-5531TehranIran",
"Department of Physics\nKyung Hee University\n130-701SeoulKorea",
"Physics Department\nNew York City College of Technology\nThe City University of New York\n300 Jay Street11201BrooklynNYUSA",
"Kavli Institute for Theoretical Physics China\nInstitute of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina",
"Institute of Theoretical Physics\nState Key Laboratory of Theoretical Physics\nChinese Academy of Sciences\n100190BeijingChina",
"Physics Department\nNew York City College of Technology\nThe City University of New York\n300 Jay Street11201BrooklynNYUSA",
"Department of Mathematics\nUniversity of Surrey\nGU2 7XHGuildfordUK",
"The Graduate School and University Center\nThe City University of New York\n365 Fifth Avenue10016New YorkNYUSA"
]
| []
| We present a simple class of warped-product vacuum (Ricci-flat) solutions to ten and eleven-dimensional supergravity, where the internal space is flat and the warp factor supports de Sitter (dS) and anti-de Sitter (AdS) vacua in addition to trivial Minkowski vacua. We outline the construction of consistent Kaluza-Klein (KK) reductions and show that, although our vacuum solutions are non-supersymmetric, these are closely related to the bosonic part of well-known maximally supersymmetric reductions on spheres. We comment on the stability of our solutions, noting that (A)dS3 vacua pass routine stability tests. | 10.1103/physrevd.90.045013 | [
"https://arxiv.org/pdf/1406.6354v1.pdf"
]
| 119,295,788 | 1406.6354 | 7718f01a8215e1f86b87e9c13b6af24c7a10b552 |
Warped Ricci-flat reductions
E Ó Colgáin
C.N.Yang Institute for Theoretical Physics
SUNY Stony Brook
11794-3840NYUSA
M M Sheikh-Jabbari
School of Physics
Institute for Research in Fundamental Sciences (IPM)
P.O.Box19395-5531TehranIran
Department of Physics
Kyung Hee University
130-701SeoulKorea
J F Vázquez-Poritz
Physics Department
New York City College of Technology
The City University of New York
300 Jay Street11201BrooklynNYUSA
Kavli Institute for Theoretical Physics China
Institute of Theoretical Physics
Chinese Academy of Sciences
100190BeijingChina
H Yavartanoo
Institute of Theoretical Physics
State Key Laboratory of Theoretical Physics
Chinese Academy of Sciences
100190BeijingChina
Z Zhang
Physics Department
New York City College of Technology
The City University of New York
300 Jay Street11201BrooklynNYUSA
Department of Mathematics
University of Surrey
GU2 7XHGuildfordUK
The Graduate School and University Center
The City University of New York
365 Fifth Avenue10016New YorkNYUSA
Warped Ricci-flat reductions
We present a simple class of warped-product vacuum (Ricci-flat) solutions to ten and eleven-dimensional supergravity, where the internal space is flat and the warp factor supports de Sitter (dS) and anti-de Sitter (AdS) vacua in addition to trivial Minkowski vacua. We outline the construction of consistent Kaluza-Klein (KK) reductions and show that, although our vacuum solutions are non-supersymmetric, these are closely related to the bosonic part of well-known maximally supersymmetric reductions on spheres. We comment on the stability of our solutions, noting that (A)dS3 vacua pass routine stability tests.
INTRODUCTION
Studying gravity in various dimensions has had many different motivations, at the classical and quantum levels, dating back to the 1920's and the seminal works of Kaluza and Klein. String theory, of course, prefers ten or eleven-dimensional supergravity theories and related compactifications or reductions to lower dimensions. Except for rather limited though important cases in lower dimensions, such as supersymmetric solutions to ungauged supergravity [1,2], we are not even close to classifying all solutions to a given (super)gravity theory. In the absence of supersymmetry, a specific class of simpler solutions that is often studied in gravity theories involves "vacuum" solutions, i. e. solutions to Einstein equations with vanishing energy-momentum tensor, or alternatively, Ricci-flat geometries.
AdS/CFT motivations have directed a lot of activity in gravity solution construction towards finding and classifying solutions that involve an AdS factor and an internal compact space. Such solutions are almost always not vacuum solutions and involve various form-field fluxes present in supergravity theories. Moreover, for AdS/CFT purposes and also for stability requirements, as well as having (quantum) corrections under control, it is often demanded that such solutions preserve a fraction of the global supersymmetry of the theory. This tendency has led to Ricci-flat solutions with AdS factors being largely overlooked.
Separately, solutions with a de Sitter factor have also been of particular interest within higher-dimensional supergravity and string theory setups, as the observed universe we are living in seems to be an asymptotically de Sitter space. Nonetheless, it has proven to be notoriously difficult to construct four-dimensional de Sitter solutions in a string theory setting which are classically and quantum mechanically stable and do not have a moduli problem [3,4]. A leading framework [5] and its uplifting procedure to produce dS vacua has recently been called into question [6], leading to a renewed interest in alternatives [7]. In this paper, we consider simple higher-dimensional gravity with no local sources or non-perturbative contributions and simply solve the equations of motion. As a result, the de Sitter vacua we find are some of the simplest in the literature and, while we forfeit supersymmetry from the onset, we still retain some control through a scaling limit.
Although our AdS solutions are singular -albeit in a "good" sense [8] -it is a striking feature of our de Sitter constructions that they are completely smooth. This should be contrasted with recent studies of persistent singularities in non-compact geometries where anti-branes are used to uplift AdS vacua [9] (see also [10]). Here, since we are considering vacuum solutions, we have no branes, and thus, no singularities. Any branes that do exist only appear when we turn on fluxes in the lowerdimensional theories to construct dS 3 vacua. Regardless of whether fluxes are turned on or not, our construction evades the well-known "no-go" theorem [11] on the basis that the internal space is non-compact.
The scaling limit we employ may be traced to "near-horizon" limits of Extremal Vanishing Horizon (EVH) black holes [12]. Within an AdS/CFT context, such warped-product solutions have been studied previously in [13,14] (more recently [15,16]), where consistent KK reductions to lower-dimensional theories were constructed. In contrast to usual AdS/CFT setups, the vacua of the lower-dimensional theories are supported exclusively through the warp factor and lift to vacuum solutions in ten and eleven dimensions. We show that if one neglects the possibility of a large number of internal dimensions, there are just three examples in this class. Furthermore, we demonstrate that the solutions can be easily generalized to de Sitter and that consistent KK reductions exist as scaling limits of well-known sphere reductions, for example [17].
We begin in the next section by considering a general D-dimensional warped-product spacetime Ansatz on the assumption that the internal space is Ricci-flat. While it is most natural to consider R q , the same analysis holds for Calabi-Yau cones and we expect a variant to hold for more generic Calabi-Yau. Locally, this construction encompasses cases for which the internal space is an Einstein space such as a sphere, which is conformally flat and the conformal factor is automatically included by way of the warp factor. We identify a class of vacuum solutions where the internal Ricci scalar can be made to vanish by tuning the dimensions. Somewhat surprisingly, this leads to three isolated examples, which only reside in ten or eleven dimensions, supporting (A)dS p vacua with p = 5, 6, 8. We remark that supersymmetry is broken.
In the following section, we focus on the warped (A)dS 5 solution to eleven-dimensional supergravity, where the internal space is R 6 . We explicitly construct a KK reduction Ansatz and note that the lower-dimensional theory one gets is five-dimensional U(1) 3 gauged supergravity [17] on the nose. Importantly, this observation guarantees stability within our truncation, though of course instabilities can arise from modes that are truncated out [18]. In the absence of warping, the (A)dS 5 vacuum becomes a Minkowski vacuum and one can compactify the internal space to get the usual KK reduction to ungauged supergravity on a Calabi-Yau manifold. We show that the KK reduction naturally arises as a scaling limit of the KK reduction of eleven-dimensional supergravity on S 7 , further truncated to the Cartan U(1) 4 [17]. Using a similar scaling argument, we also exhibit KK reductions from ten and eleven-dimensional supergravity on R 4 and R 3 , respectively.
Since none of our solutions are manifestly supersymmetric, the final part of our paper concerns (classical) stability. In constructing the KK reductions, we have assumed a product structure for the internal space and it is a well-known fact that reductions based on product spaces are prone to instabilities where the volume of one subspace increases whilst another decreases [19][20][21]. Neglecting all solutions to the five-dimensional theory, which are guaranteed to be stable within the truncation, we find that the AdS 6 and AdS 8 vacua are unstable. By constructing lower-dimensional Freund-Rubin type solutions within our AdS p , p = 5, 6, 8 truncations, we show that AdS 3 solutions are stable. On physical grounds, since these arise as the near-horizon of EVH black holes [13,14], they are expected to be classically stable.
In the final part of this paper, we construct an example of a dS 3 vacuum and study its stability. We show that the vacuum energy of the de Sitter solution can be tuned so as to stabilise the vacuum against tunneling. This guarantees that the vacuum is suitably long lived, adding it to the list of known dS 3 solutions in string theory [22]. We close the paper with some discussion of related open directions.
RICCI-FLAT SOLUTIONS
In this section we identify a class of Ricci-flat solutions in general dimension D = p + q. From the offset, we assume that the overall spacetime takes the form of a warped product,
ds^2_{p+q} = \Delta^m\, ds^2(M_p) + \Delta^n\, ds^2(\Sigma_q) ,   (1)
decomposed into a p-dimensional external spacetime M p and a q-dimensional Ricci-flat internal space Σ q . m, n denote constant exponents and the warp factor ∆ only depends on the coordinates of the internal space. Denoting external coordinates, a, b = 0, . . . , p − 1 and internal coordinates m, n = 1, . . . q, the vanishing of the internal Ricci scalar, i. e. g mn R mn , yields the equation:
[ mp 2 + n(q − 1)]∇ 2 ∆ = n(q − 1) − mp 2 (m − 1) − (q − 2)[ n 2 4 (q − 1) + mnp 4 ] ∆ −1 (∂∆) 2 .(2)
If the internal space is R q and r denotes its radial coordinate, then simple solutions to (2) are given by
\Delta = \left( c_1 + c_2\, r^{2-q} \right)^{\frac{1}{1+\kappa}} \quad (q \neq 2) , \qquad \Delta = \left( c_1 + c_2 \log r \right)^{\frac{1}{1+\kappa}} \quad (q = 2) ,   (3)
where κ is a constant that depends on m, n, p, q and c i denote integration constants. Geometries with these ∆ are singular at the origin r = 0.
One may hope to find non-singular solutions to (2), by forcing the bracketed terms to vanish:
m = 2 - \frac{4}{q} , \qquad n = -\frac{4}{q} , \qquad p = \frac{4(q-1)}{q-2} .   (4)
This condition defines our class of Ricci-flat solutions. Note that q = 2 is not a legitimate choice, and if one demands integer dimensions, we are hence led to only the following choices for (p, q): (5, 6), (6, 4) and (8, 3). Besides these choices, one can also formally consider the large-D limit q → ∞, where one encounters a four-dimensional vacuum (p = 4). It is a curious property of this class of solutions that they only exist in ten and eleven (and also infinite) dimensions, settings where we have low-energy effective descriptions for string theory. From the outset, there is nothing outwardly special about our Ansatz and one would assume that examples could be found for general D, yet we find that this is not the case.
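As a quick cross-check of this counting (an illustrative aside, not part of the original text), the condition p = 4(q−1)/(q−2) from (4) can be scanned over integer internal dimensions; only q = 3, 4, 6 give integer p, reproducing the (p, q) pairs quoted above together with the corresponding exponents m and n.

```python
# Enumerate integer solutions of p = 4(q - 1)/(q - 2) from Eq. (4),
# together with the warp-factor exponents m = 2 - 4/q and n = -4/q.
from fractions import Fraction

for q in range(3, 200):                 # q = 2 is excluded by Eq. (4)
    p = Fraction(4 * (q - 1), q - 2)
    if p.denominator == 1:              # keep only integer external dimensions
        m = Fraction(2) - Fraction(4, q)
        n = -Fraction(4, q)
        print(f"q = {q}: (p, q) = ({int(p)}, {q}), D = {int(p) + q}, m = {m}, n = {n}")
# Only q = 3, 4, 6 survive, giving (p, q) = (8, 3), (6, 4), (5, 6),
# i.e. D = 11, 10, 11; as q grows, p approaches 4 from above.
```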
To specify the overall spacetime, we simply now have to record the warp factor. Again, evoking the existence of a radial direction, we can write ∆ as
\Delta = 1 + \lambda r^2 ,   (5)
where we have normalized the integration constants; one of them, λ = −1 or +1, dictates whether the vacuum is anti-de Sitter or de Sitter spacetime, respectively. For λ = 0, the warp factor becomes trivial and the solution reduces to D-dimensional Minkowski spacetime. The radius \ell of M_p is expressed as
\ell^2 = \frac{1}{|\lambda|}\, \frac{p-1}{q-2} .   (6)
The other D-dimensional vacuum Einstein equations yield the following equation for ∆:
\Delta\, \nabla^2\Delta + (\partial\Delta)^2 = q\lambda ,   (7)
and, as a result, it is easy to infer through
\int_{\Sigma_q} \left[ \Delta\, \nabla^2\Delta + (\partial\Delta)^2 - q\lambda \right] = -\, q\lambda\, \mathrm{vol}(\Sigma_q) = 0 ,   (8)
that a compact internal space \Sigma_q requires a Minkowski vacuum, λ = 0. Therefore, all our (A)dS spacetimes will have non-compact internal spaces. Despite the existence of a covariantly constant spinor (which is a result of Ricci-flatness), it is easy to see that none of these geometries are supersymmetric, except when λ = 0. As a cross-check, we note that where complete classifications of supersymmetric solutions exist, for example [23], one can confirm that our solutions are not among them.
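The step from (7) to (8) uses the fact that Δ∇²Δ + (∂Δ)² is a total derivative, ∇·(Δ∇Δ), whose integral over a compact space vanishes. The snippet below is an illustrative check (not part of the original text) that verifies this identity symbolically in flat coordinates, here for q = 3 and a generic warp factor.

```python
# Symbolic check that D*Laplacian(D) + |grad D|^2 = div(D * grad D)
# in flat coordinates (q = 3 here), which underlies the compactness argument.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
D = sp.Function('Delta')(x, y, z)          # generic warp factor

grad = [sp.diff(D, c) for c in coords]
lhs = D * sum(sp.diff(D, c, 2) for c in coords) + sum(g**2 for g in grad)
rhs = sum(sp.diff(D * g, c) for g, c in zip(grad, coords))

print(sp.simplify(lhs - rhs))              # prints 0: the two sides agree
```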
As an extension to the Σ q = R q internal space, for q = 4 and 6, Σ q can easily be chosen to be a Calabi-Yau cone over a Sasaki-Einstein space, or more generally by a cone over an Einstein space. However, such cones have a conical singularity at their apex.
We also remark that for λ = −1, we encounter a curvature singularity at r = 1, where the warp factor vanishes. This does not affect the Ricci scalar, since the above solutions are Ricci-flat, but it does show up in contractions of the Riemann tensor, R_{MNPQ} R^{MNPQ}. This singularity can be seen to be of "good" type [8], a point that was made recently in [16] and to which we will return in due course. In contrast, the Minkowski and de Sitter solutions are smooth.
KK REDUCTION ON THE SOLUTIONS
Taking each of the warped-product solutions identified in the previous section in turn, one can construct simple consistent Kaluza-Klein reductions to the lowerdimensional theory. By "simple", in contrast to traditional reductions, we mean that there is a clear division between scalars in the metric and gauge fields in the fluxes of the higher-dimensional supergravity. This means that even with scalars the overall spacetime is Ricci-flat 1 , whereas the inclusion of gauge fields leads to a back-reaction externally, with the internal space remaining Ricci-flat. In this section we discuss the KK reduction of each of the three solutions independently.
KK Reduction on the R 6
We start with a reduction from eleven-dimensional supergravity to five dimensions on R 6 . The main idea for such a reduction was presented in [13]. Our warped Ansatz naturally generalises the known reduction to a Minkowski vacuum on Calabi-Yau, for example [25]. Since the lower-dimensional theory in that case is ungauged supergravity, here we present a simple Ansatz that recovers the bosonic sector of D = 5 U(1) 3 gauged supergravity [17]; via a sign flip (an appropriate double Wick rotation and analytic continuation), the AdS 5 vacuum becomes dS 5 . We recall the five-dimensional action of D = 5 U(1) 3 gauged supergravity:
\mathcal{L}_5 = R \ast 1 - \tfrac{1}{2} \sum_{i=1}^{2} d\varphi_i \wedge \ast d\varphi_i - \tfrac{1}{2} \sum_{i=1}^{3} X_i^{-2}\, F^i \wedge \ast F^i - 4\lambda \sum_{i=1}^{3} X_i^{-1}\, \mathrm{vol}_5 + F^1 \wedge F^2 \wedge A^3 .   (9)
In the above action, F i = dA i and the scalars X i are subject to the constraint
\prod_{i=1}^{3} X_i = 1 .
In terms of the unconstrained scalars ϕ i , they may be expressed as
X_1 = e^{-\frac{1}{2}\left( \frac{2}{\sqrt{6}}\varphi_1 + \sqrt{2}\,\varphi_2 \right)} , \qquad X_2 = e^{-\frac{1}{2}\left( \frac{2}{\sqrt{6}}\varphi_1 - \sqrt{2}\,\varphi_2 \right)} ,   (10)
where X_3 = (X_1 X_2)^{-1}. It is a well-known fact that the theory (9), with coupling g^2 = -\lambda, arises from a KK reduction of type IIB supergravity on S^5 [17]. Here, we provide an alternative higher-dimensional guise.
The U(1) 3 theory (9) arises from the following KK reduction Ansatz:
ds^2_{11} = \Delta^{4/3}\, ds^2(M_5) + \Delta^{-2/3} \sum_{i=1}^{3} X_i \left( d\mu_i^2 + \mu_i^2\, d\psi_i^2 \right) , \qquad G_4 = -\sum_i \mu_i\, d\mu_i \wedge d\psi_i \wedge dA^i ,   (11)
where the internal space is a product of three copies of R 2 and the warp factor is given in (5) in terms of the overall radius r, where r 2 = (µ 2 1 + µ 2 2 + µ 2 3 ). The fourform is largely self-selecting, since it is only this combination of external and internal forms that will scale in the same way with the Ricci tensor, R ab ∼ ∆ − 4 3R ab . In general, constructing KK reductions for warped product spacetimes is tricky and a useful rule of thumb is that the fluxes should scale in the same way as the Ricci tensor, so that the warp factor drops out of the Einstein equation. Further discussion can be found in [26].
The internal part of metric in the reduction Ansatz (11) is rather unusual in the sense that we have isolated U(1)'s in the internal metric but have not gauged the isometries. In fact, it can be shown that the inclusion of traditional KK vectors along the U(1)'s in the metric would result in terms that scale differently with the warp factor. Therefore, on their own, they are inconsistent but it may be possible to restore consistency, essentially by mixing the metric with the fluxes so that the overall factor that appears with the gauge fields scales correctly. This would involve a more complicated Ansatz-potentially one where Ricci-flatness is sacrificed-and we leave this to future work.
This reduction is performed at the level of the equations of motion and is, by definition, consistent. The flux equation of motion in eleven-dimensional supergravity, d * 11 G 4 + 1 2 G 4 ∧ G 4 = 0, leads to the lower-dimensional flux equations of motion, while Ricci-flatness along each copy of R 2 leads to an equation of motion for X i . The constraint on the X i comes from cross-terms in the Ricci tensor of the form
R_{a\mu_i} = -\,\mu_i\, \Delta^{-7/3}\, X_i^{-1/2}\, \partial_a \log \Big( \prod_i X_i \Big) = 0 .   (12)
The reduction proves to be inconsistent at the level of the action. This is probably due to the non-compactness of the internal space 2 . Some further comments are now in order. As stated, one may readily show that the D = 11 uplift of the AdS 5 vacuum is not supersymmetric. A priori, there is nothing to rule out the possibility that the solutions to U(1) 3 gauged supergravity, which are supported by the scalars X i and the gauge fields A i , are supersymmetric. However, we believe that this is unlikely. To back this up, we have confirmed that a three-parameter family of wrapped brane solutions considered in [28] (see [29] for earlier works) is not supersymmetric in the current context. The same solutions are supersymmetric when uplifted on S 5 .
As for compactness, for λ = −1 we have a natural cut-off on the internal spaces, namely \sum_{i=1}^{3} \mu_i^2 \leq 1, thus leading to a finite-volume internal space, where each R 2 subspace may be regarded as a disk. Curvature singularities of this type have been identified as the "good" type in the literature [8] and their CFT interpretation has been explored in [13]. For λ = +1 this is a smooth embedding of D = 5 U(1) 3 de Sitter gravity in eleven-dimensional supergravity, albeit with a non-compact internal space.
Origin of the KK reduction
It is striking that we have arrived at a class of vacua that only reside in ten and eleven-dimensional supergravity. In this section, we offer an explanation as to why that may be the case. Our observation hinges on a known "far from BPS" near-horizon limit of certain extremal black holes in D = 4 U(1) 4 and D = 5 U(1) 3 gauged supergravity [13,14] (see also [16]), where an AdS 3 near-horizon is formed by incorporating an internal circular direction with the scalars, in this case X i , all scaled appropriately.
Eschewing explicit solutions, we are free to apply the same scaling for the scalars X i directly to the KK reduction Ansatz presented in [17]. For concreteness, we consider the SO(8) KK reduction on S 7 , further truncated to the U(1) 4 Cartan subgroup. The reduction Ansatz may be written as [17]
ds^2_{11} = \Delta^{2/3}\, ds^2_4 + \Delta^{-1/3} \sum_{i=1}^{4} X_i^{-1} \left( d\mu_i^2 + \mu_i^2\, D\phi_i^2 \right) ,   (13)
G_4 = 2 \sum_i \left( X_i^2 \mu_i^2 - \Delta X_i \right) \mathrm{vol}_4 + \tfrac{1}{2} \sum_i \ast_4\, d\log X_i \wedge d(\mu_i^2) - \tfrac{1}{2} \sum_i X_i^{-2}\, d(\mu_i^2) \wedge D\phi_i \wedge \ast_4 F^i ,   (14)
where we have defined
D\phi_i = d\phi_i + A^i , \quad \Delta = \sum_{i=1}^{4} X_i \mu_i^2 , \quad F^i = dA^i , and the \mu_i are constrained by \sum_{i=1}^{4} \mu_i^2 = 1. The X_i are subject to the constraint \prod_{i=1}^{4} X_i = 1.
We now isolate X 1 and blow it up by taking the limit 3
X 1 = − 3 2X 1 , X i = 1 2X i , i = 2, 3, 4, φ 1 = −1 ϕ 1 , g 4 = g 4 ,(15)
with → 0. In the process, the internal φ 1 direction migrates and combines with the original four-dimensional metric to form a five-dimensional subspace. Performing this scaling at the level of the Ansatz yields
ds 2 11 = µ 2 3 1 X 2 3 1 ds 2 4 +X 1 − 4 3 dϕ 2 1 + µ − 2 3 1 4 i=2X 1 − 1 3X −1 i dµ 2 i + µ 2 i dφ 2 i ,(16)G 4 = − 1 2 4 i=2X −2 i d(µ 2 i ) ∧ dφ i ∧ * 4 F i ,(17)
where µ 1 is constrained, so the warp factor is
\mu_1^2 = 1 - \left( \mu_2^2 + \mu_3^2 + \mu_4^2 \right) .   (18)
Up to redefinitions, and the introduction of λ, this Ansatz is the same as the consistent KK reduction Ansatz identified in the previous section. Note that the * 4 F i in the original notation refers to a two-form, with the Hodge dual of the two-form leading to a two-form in the new five-dimensional spacetime, which appears wedged with the volume of the internal disks. It is interesting that the AdS vacuum is now sourced by the warp factor and not by the original vol 4 term in the four-form flux, G 4 , which is suppressed in the limiting procedure. Note that this limiting procedure naturally leads to a singularity, which conforms to the "good" type under the criterion of [8], since the scalar potential is bounded above in the lower-dimensional potential. In other words, from the perspective of the original D = 4 U(1) 4 gauged supergravity, the limiting procedure results in a steadily more negative potential.
One could first take a limit where the S 7 degenerates to S 5 × R 2 or S 3 × R 4 [30] and then apply the scaling limit of this paper. The result of this two-step process is that the warp factor does not depend on the R 2 or R 4 portion of the internal space that resulted from taking the first limit. One could then perform dimensional reduction and T-duality along these flat directions so that our KK reductions are reinterpreted as arising from scaling limits of S 5 or S 3 reductions of type IIB theory, along with a toroidal reduction along the remaining flat directions. The resulting lower-dimensional theory admits a domain wall, rather than (A)dS, as a vacuum solution.
KK reductions on R 4
In this subsection, we briefly record the other consistent KK reductions with warp factors. We begin by considering type IIB supergravity on the product R 4 ≡ R 2 × R 2 . It was previously shown in [14] that there is a consistent KK reduction to a six-dimensional theory admitting an AdS 6 vacuum with a single scalar and three-form flux. We recall the ten-dimensional Ansatz 4 but allow for the opposite sign in the warp factor, where ∆ = 1 + λ(µ 2 1 + µ 2 2 ), we have relabeled X i = e Yi to avoid logarithms and L is a length scale. The Bianchi identity for F 5 implies that we can define a twoform potential so that H 3 = dB 2 and the self-duality requirement dictates that both H 3 and its Hodge dual appear in the Ansatz for the five-form flux. In [14], it was found in the absence of a dilaton and axion that consistency of the reduction (considering cross-terms in the Ricci tensor and flatness condition) requires Y 1 = −Y 2 , leaving a single scalar in six dimensions. We recall that for the SO(6) reduction of type IIB supergravity on S 5 the dilaton and axion do not feature in the scalar potential (see for example [31]), so it is expected that one can reinstate them here.
ds^2_{10} = \Delta\, ds^2(M_6) + \Delta^{-1} \sum_{i=1}^{2} L^2 e^{Y_i} \left( d\mu_i^2 + \mu_i^2\, d\psi_i^2 \right) , \qquad F_5 = (1 + \ast_{10})\, H_3 \wedge \mu_1\, d\mu_1 \wedge d\psi_1 ,   (19)
After a conformal transformation to get to the Einstein frame, the six-dimensional action takes the form
\mathcal{L}_6 = \sqrt{-\hat{g}_6} \left[ \hat{R}_6 - \tfrac{1}{2}(\partial\Phi)^2 - \tfrac{1}{2} e^{2\Phi} (\partial\chi)^2 - \tfrac{1}{4} (\partial Z)^2 - \frac{8}{L^2} \lambda \cosh\frac{Z}{2} - \tfrac{1}{12} e^{-Z} H_3^2 \right] ,   (20)
where
Φ = Y 1 + Y 2 and Z = Y 1 − Y 2 .
It is worth noting that the three-form entering in the lower-dimensional action is not self-dual from the sixdimensional perspective but it is self-dual in the five-form flux of type IIB supergravity. Truncating out either Y 1 or Y 2 , we arrive at the action of [14], up to a symmetry of that action Z ↔ Z −1 .
Since the internal space is flat, the (A)dS 6 vacuum of the six-dimensional reduced theory could be embedded in either type IIB or type IIA supergravity, yet we have insisted on type IIB theory. This preference can be attributed to the fluxes, which will only scale correctly in type IIB supergravity, suggesting that the KK reduction carries some memory that it was originally a reduction of type IIB supergravity on S 5 . One could first take a limit where the S 5 degenerates to S 3 × R 2 [30] and then apply our scaling limit. This enables our KK reduction to be reinterpreted as arising from a scaling limit of an S 3 reduction of type IIA theory [32] followed by a toroidal reduction, though the lower-dimensional theory would have a domain wall as a vacuum solution rather than (A)dS.
We have omitted natural three-form fluxes, both NS-NS and RR, in our Ansatz. Though these scale correctly with the warp factor, we find that it is difficult to include them in an Ansatz, since they would require a decomposition into an external two-form and internal one-form. For the internal space R 4 , this is inconsistent with internal Ricci-flatness, and for Calabi-Yau reductions one generically does not have natural internal one-forms. It is expected that when λ = 0, modulo mirror symmetry (T-duality), we recover the Ansatz of the reduction of type IIB supergravity on CY 2 , e. g. [33].
KK reductions on R 3
Switching our attention to the final warped-product solution, which lives in eleven dimensions, we can construct a KK reduction Ansatz with the internal odd-dimensional space further split as R 3 ≡ R 2 × R.
The Ansatz is motivated by a scaling limit of D = 7 gauged supergravity and is given as
ds^2_{11} = \Delta^{2/3}\, ds^2(M_8) + \Delta^{-4/3} L^2 \left[ e^{-2Y} d\mu_0^2 + e^{Y} \left( d\mu_2^2 + \mu_2^2\, d\psi_2^2 \right) \right] , \qquad G_4 = H \wedge d\mu_0 ,   (21)
where ∆ = 1 + λ(µ 2 0 + µ 2 2 ) and H = dB 2 . The Ansatz for the four-form flux is largely self-selecting in that it preserves the symmetry of the R 2 factor and scales correctly with the warp factor ∆ in the Einstein equation. We remark that this choice appears to fall outside the Ansatz of Cvetic et al [17] but can be accommodated in reductions on S 4 , where three-forms are retained [34,35].
After the KK reduction, the eight-dimensional action is
\mathcal{L}_8 = \sqrt{-g_8} \left[ R_8 - \tfrac{3}{2} (\partial Y)^2 - \tfrac{1}{12} e^{2Y} H_{abc} H^{abc} - \frac{2}{L^2} \lambda \left( e^{2Y} + 2 e^{-Y} \right) \right] .   (22)
Note that one can also find a KK reduction Ansatz in type IIA theory by first taking a limit where the S 4 degenerates to S 3 × R [30,32], reducing along the R direction and then performing our scaling limit.
STABILITY OF ADS VACUA
It is a well-appreciated fact that in the absence of supersymmetry classical stability is a concern. In this section we focus on the stability of the (A)dS vacua. We begin with AdS stability, where violations of the Breitenlohner-Freedman (BF) bound [36] provide us with a simple litmus test for instability. We will turn our attention to dS vacua later.
An important caveat from the outset is that we will confine our attention to instabilities that arise within our truncation Ansatz. However, this does not preclude the possibility that instabilities arise from modes we have truncated out, for example see [18]. Within this restricted scope, we will explicitly show that lowerdimensional Freund-Rubin type solutions with AdS 3 factors enjoy greater stability than higher-dimensional vacua in the truncated theory. This may be attributed to the fact that these solutions correspond to the nearhorizon of known black holes and are expected to be classically stable.
That some of our geometries are unstable may come as no surprise. It is known that product spaces can be prone to instabilities where one space becomes uniformly larger while another shrinks so that the overall volume is kept fixed. Earlier examples of this instability include the spacetimes AdS 4 × M n × M 7−n [19] and AdS 7 × S 2 × S 2 [20]. Indeed, in a fairly comprehensive study of the classical stability of Freund-Rubin spacetimes of the form AdS p × S q [21], this is the primary instability observed.
In terms of our Ansatz, the scalars in the lower-dimensional theory control the volume of the internal spaces, which are further subject to a constraint. We will now investigate the stability of geometries with respect to these scalar modes, while at the same time taking into account breathing modes that arise from further reductions. Our analysis is not intended to be comprehensive but, since instabilities usually arise from products [21], experience suggests these are the most dangerous modes. We recall that the BF bound for AdS p with radius R AdS is [36]
m^2 R_{\mathrm{AdS}}^2 \geq -\frac{(p-1)^2}{4} .   (23)
Before proceeding to specialize to particular cases, here we record some preliminaries. We will in general consider spacetimes in dimension D and further reductions on constant curvature spaces of dimension (D − p) to a p-dimensional spacetime:
ds^2_D = e^{\frac{2A(D-p)}{(2-p)}}\, ds^2_p + e^{2A}\, ds^2(\Sigma_{D-p}) ,   (24)
which leads to scalars A, i.e. breathing modes, in the lower-dimensional theory. The above reduction Ansatz is designed to bring us to the Einstein frame in p dimensions. The Einstein-Hilbert term in the higher-dimensional action reduces to
\mathcal{L}_p = \sqrt{-g_p} \left[ R - \frac{(D-2)(D-p)}{(p-2)} (\partial A)^2 + e^{-\frac{2(D-2)}{(p-2)} A}\, \kappa\, (D-p-1)(D-p) \right] ,   (25)
where κ is the curvature of the internal space. When κ > 0, we will only consider the constant spherical harmonic, which appears with the lowest mass, \nabla^2_{S^q} A = 0. Higher spherical harmonics typically do not lead to instabilities, since they correspond to modes with a more positive mass squared. We can now specialize to the various potentials we have found. Since our lower-dimensional theory corresponds to the bosonic sector of a known supergravity, it is expected that solutions are stable. It is easy enough to check that the scalars precisely saturate the (unit radius) BF bound in five dimensions:
\nabla^2_{\mathrm{AdS}_5}\, \delta\varphi_i + 4\, \delta\varphi_i = 0 .   (26)
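As a quick consistency check of (26) against the action (9) and the parametrization (10), one can expand the scalar potential V = 4λ Σ_i X_i^{-1} (read off from the −4λ Σ_i X_i^{-1} vol_5 term) to quadratic order around φ_1 = φ_2 = 0 with λ = −1. The short symbolic computation below is an illustrative aside (not part of the original paper); it reproduces m² = −4 for both scalars at the unit-radius AdS_5 vacuum, saturating the BF bound (23).

```python
# Expand V(phi1, phi2) = 4*lambda*(1/X1 + 1/X2 + 1/X3) around the origin
# (lambda = -1, unit-radius AdS5) and read off the scalar mass matrix.
import sympy as sp

p1, p2 = sp.symbols('phi1 phi2', real=True)
X1 = sp.exp(-sp.Rational(1, 2) * (2 / sp.sqrt(6) * p1 + sp.sqrt(2) * p2))
X2 = sp.exp(-sp.Rational(1, 2) * (2 / sp.sqrt(6) * p1 - sp.sqrt(2) * p2))
X3 = 1 / (X1 * X2)                       # enforces X1*X2*X3 = 1
lam = -1
V = 4 * lam * (1 / X1 + 1 / X2 + 1 / X3)

grad = [sp.simplify(sp.diff(V, v).subs({p1: 0, p2: 0})) for v in (p1, p2)]
hess = sp.Matrix(2, 2, lambda i, j: sp.diff(V, (p1, p2)[i], (p1, p2)[j]))
masses = sp.simplify(hess.subs({p1: 0, p2: 0}))

print(grad)    # [0, 0]  -> the origin is a critical point (the AdS5 vacuum, V = -12)
print(masses)  # diag(-4, -4) -> m^2 = -4, saturating the BF bound for AdS5
```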
At this point, it is also instructive to make a comment regarding reductions in the absence of fluxes. The first non-trivial reduction with an AdS 3 vacuum involves a reduction on H 2 . We omit the details since a related reduction appeared recently in [37] where, in addition, the underlying three-dimensional gauged supergravity was identified. The mass-squared matrix for the fluctuations takes the following form:
\nabla^2_{\mathrm{AdS}_3} \begin{pmatrix} \delta\varphi_1 \\ \delta\varphi_2 \\ \delta A \end{pmatrix} = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 4 \end{pmatrix} \begin{pmatrix} \delta\varphi_1 \\ \delta\varphi_2 \\ \delta A \end{pmatrix} .   (27)
We observe that the scalars ϕ i have mass m 2 = −2 which violates the BF bound for AdS 3 . In contrast, the fluctuation of the scalar A does not affect the stability. Indeed, it is worth observing that we can also truncate out δϕ i , in which case the instability would not be observed. This is a pretty trivial example of the hidden instabilities noted in a higher-dimensional context in [18].
D = 10, p = 6, \Sigma_{D-p} = R^4
In contrast to the AdS 5 vacuum, the AdS 6 vacuum is unstable to fluctuations of the scalar Z with a mass squared m 2 = −10 for unit-radius AdS 6 . We will now consider whether a lower-dimensional AdS 3 × Σ 3 vacuum is stable to this mode. Projecting out the massless axion and the dilaton, which are less likely to source instabilities, the theory may be dimensionally reduced to give
\mathcal{L}_3 = \sqrt{-g_3} \Big[ R_3 - \tfrac{1}{4} (\partial Z)^2 - 12 (\partial A)^2 + 6\kappa\, e^{-8A} + \frac{8}{L^2} e^{-6A} \cosh\frac{Z}{2} - \tfrac{1}{2} e^{-12A} \left( p^2 e^{Z} + q^2 e^{-Z} \right) \Big] ,
where we have assumed
H_3 = p\, e^{Z - 12A}\, \mathrm{vol}(M_3) + q\, \mathrm{vol}(\Sigma_3) .   (28)
In the electric term in H 3 , we have imposed the lowerdimensional equation of motion. Extremizing the potential, we get
p 2 = 4e 4A−Z κ + L −2 e 1 2 Z+2A , q 2 = 4e 4A+Z κ + L −2 e − 1 2 Z+2A .(29)
Plugging these back into the action, we note that AdS 3 will have unit radius provided that
\frac{4}{L^2} \cosh\frac{Z}{2} = 2 e^{6A} - 2\kappa\, e^{-2A} .   (30)
The AdS 3 ×S 3 solutions appeared previously in [14]. If one considers variations of the breathing mode A and scalar Z, then the mass-squared matrix has eigenvalues
m 2 = 4e −16A L 4 2e 8A L 4 κ + 2e 10A L 2 cosh Z 2 ± e 20A L 4 (2 cosh Z − 1) .(31)
To simplify expressions, we can solve (30) for the length scale L in terms of κ, A and Z:
L = \sqrt{2}\, e^{-A} \sqrt{ \frac{ \cosh\frac{Z}{2} }{ e^{4A} - \kappa\, e^{-4A} } } .   (32)
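As a small consistency check (an illustrative aside, not part of the original paper), one can verify symbolically that (32) is simply (30) solved for L:

```python
# Check that L = sqrt(2) e^{-A} sqrt(cosh(Z/2) / (e^{4A} - kappa e^{-4A}))
# satisfies (4/L^2) cosh(Z/2) = 2 e^{6A} - 2 kappa e^{-2A}, i.e. Eq. (30).
import sympy as sp

A, Z, kappa = sp.symbols('A Z kappa', real=True)
L = sp.sqrt(2) * sp.exp(-A) * sp.sqrt(sp.cosh(Z / 2) / (sp.exp(4 * A) - kappa * sp.exp(-4 * A)))

eq30 = 4 / L**2 * sp.cosh(Z / 2) - (2 * sp.exp(6 * A) - 2 * kappa * sp.exp(-2 * A))
print(sp.simplify(eq30))   # prints 0, confirming (30) and (32) are equivalent
```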
Since (31) is symmetric under Z ↔ −Z, without loss of generality we can take Z ≥ 0. For both κ = 0 (T 3 ) and κ = 1 (S 3 ), we find that the masses are always strictly positive. Indeed, for κ = 0, all dependence on the critical value of A drops out and m 2 → 0 + as Z → ∞ for the lowest eigenvalue. When κ = 1, the dependence on A remains, but m 2 is again strictly positive. When κ = −1, we have a constraint on the range of A and Z, namely A ≥ Z/8, such that the solution is real, i.e. p 2 , q 2 ≥ 0. In this range, m 2 is always positive. Thus, we conclude that a Freund-Rubin type AdS 3 solution is stable to fluctuations that destabilize the AdS 6 vacuum of the six-dimensional theory.
D = 11, p = 8, \Sigma_{D-p} = R^3
The stability analysis for the eight-dimensional theory (22) parallels the six-dimensional theory (20), which we analyzed previously. Even in the absence of supersymmetry, where there is no dual (super)conformal theory in seven dimensions [38], the AdS 8 vacuum is puzzling, but it resolves itself by being unstable. In order to stabilize the vacuum, one can turn on the three-form. One can support a "magnetovac" solution, AdS 5 × Σ 3 ; however, the fluctuations of the scalars A (breathing) and Y have the following mass eigenvalues (at unit radius):
m^2 = 4\left( 3 - 4 e^{-3Y} \pm \sqrt{ 25 - 32 e^{-3Y} + 16 e^{-6Y} } \right) ,   (33)
and are thus unstable for all Σ 3 .
We can also consider an "electrovac" with an AdS 3 factor but this solution can be incorporated in a more general case:
ds^2 = e^{-2A} \left[ e^{-4B}\, ds^2(M_3) + e^{2B}\, ds^2(\Sigma_2) \right] + e^{2A}\, ds^2(\Sigma_3) , \qquad H = p\, e^{-2Y - 4A - 8B}\, \mathrm{vol}(M_3) + q\, \mathrm{vol}(\Sigma_3) ,   (34)
where we now have two breathing modes A and B, two Freund-Rubin-type flux terms p, q and a transverse space Σ 2 of constant curvature κ 2 . We can view the Ansatz as two successive reductions, one on Σ 3 , followed by a second on Σ 2 , where in each case one arrives in the Einstein frame.
The effective three-dimensional action may be written as
\mathcal{L}_3 = \sqrt{-g_3} \Big[ R_3 - \tfrac{3}{2} (\partial Y)^2 - 6 (\partial A)^2 - 6 (\partial B)^2 + \frac{2}{L^2} \left( e^{2Y} + 2 e^{-Y} \right) e^{-2A - 4B} - \tfrac{1}{2} p^2 e^{-2Y - 4A - 8B} + 6\kappa_1 e^{-4A - 4B} + 2\kappa_2 e^{-6B} - \tfrac{1}{2} q^2 e^{2Y - 8A - 4B} \Big] ,   (35)
where κ 1 denotes the constant curvature of Σ 3 . Extremizing the potential, we arrive at the critical point:
e^{2Y} = -L^2 \kappa_2\, e^{2A - 2B} , \qquad q^2 = 4\kappa_1 e^{-2Y + 4A} + \frac{2}{L^2} e^{6A} , \qquad p^2 = 4\kappa_1 e^{2Y + 4B} + \frac{4}{L^2} \left( e^{-Y} - \tfrac{1}{2} e^{2Y} \right) e^{2Y + 2A + 4B} .   (36)
We observe that the Riemann surface Σ 2 should be negatively curved, thus making it H 2 . We can set q = 0, \tfrac{1}{2}\kappa_1 = \tfrac{1}{4}\kappa_2 = \kappa and B = 2A to recover the electrovac solution, the details of which we have omitted. We can set AdS 3 to unit radius by choosing
L^2 = e^{-Y + 2A} \left( e^{4A + 4B} - \kappa_1 \right) .   (37)
With an additional breathing mode, the stability analysis is more complicated. In fact, even for the simpler case when κ 1 = 0, where expressions do not depend on the breathing modes A and B, we cannot find analytic expressions for the mass-squared matrix eigenvalues. For κ 1 = 0, we note that in order for the solution to remain real, Y is restricted to the range Y ≤ \tfrac{1}{3}\log 2. The mass as a function of the critical value of Y is plotted in FIG. 1. We see that for suitably negative values of Y, a range of stability exists. As can be seen from (36), this corresponds to values where the fluxes are larger.
When κ 1 = ±1, the mass-squared matrix only depends on Y and the combination A + B. It is easier to consider κ 1 = −1, since we have a constraint. From p 2 , q 2 ≥ 0, we find a constraint on Y in terms of A + B:
\tfrac{1}{2} \left( e^{4A + 4B} + 1 \right) \;\geq\; e^{-3Y} \;\geq\; \tfrac{1}{2}\, \frac{ e^{4A + 4B} + 1 }{ e^{4A + 4B} } .   (38)
Taking A + B to be a fixed value, for either value of κ 1 , we see that the eigenvalues of the mass matrix vary with the critical value of Y in essentially the same way as they do for the AdS 3 × T 3 × H 2 geometry shown in FIG. 1. Thus, the eigenvalues of the mass-squared matrix are largely insensitive to curvature, given our choice of normalization. This means that the same range of stability will exist for κ 1 = ±1. The only caveat here is that for κ 1 = −1, there is the added constraint above (38), so A + B has to be chosen to be large enough such that one has some overlap with the stable region.
DE SITTER VACUA
In this section we discuss a particular dS 3 solution in the five-dimensional theory (9), which via our consistent KK reduction may be regarded as a solution to eleven-dimensional supergravity. Neglecting time-dependent solutions, such as [39], static embeddings in the literature have either involved reductions on non-compact hyperbolic spaces, for example [40-42], or analytic continuations of known maximally supersymmetric solutions, such as AdS 5 × S 5 , leading to solutions of so-called type II * theories and their dimensional reductions [43,44].
Here we point out that the internal non-compact spaces need not be curved and can in fact be Ricci-flat. This evades the well-known "no-go" theorem [11] on noncompactness grounds.
Our effective three-dimensional theory supporting the dS 3 comes from a reduction of the U(1) 3 theory (9) on a Riemann surface of constant curvature κ. To do this, we employ the usual Ansatz,
ds^2_5 = e^{-4A}\, ds^2_3 + e^{2A}\, ds^2(\Sigma_2) , \qquad F^i = -a_i\, \mathrm{vol}(\Sigma_2) ,   (39)
where a i denote constants. The three-dimensional action may be recast as
\mathcal{L}_3 = \sqrt{-g_3} \left[ R_3 - \tfrac{1}{2} \sum_{i=1}^{3} (\partial W_i)^2 - V(W_i) \right] ,   (40)
where the potential V takes the form
V = -2\kappa\, e^{K} + \frac{4}{L^2}\, e^{K} \sum_{i=1}^{3} e^{W_i} + \tfrac{1}{2}\, e^{2K} \sum_{i=1}^{3} a_i^2\, e^{2W_i} .   (41)
In expressing terms this way, we have made use of the Kähler potential of the three-dimensional gauged supergravity, K = −(W 1 + W 2 + W 3 ), introduced a length scale L for the internal space, and imported the notation of [37], e Wi = e 2A X −1 i , where A denotes the warp factor. Up to the minus sign in front of the second term, this is the potential corresponding to magnetized wrapped brane solutions [37]. This potential has an underlying real superpotential provided κ = −(a 1 + a 2 + a 3 ). One advantage of working with the type II * embeddings is that flux terms appear with the "wrong" sign and the theories may be regarded as "supersymmetric". Solutions then follow from extremizing the fake superpotential. This is not the case here, since the flux terms do not have the wrong sign. We have checked that a fake superpotential can be found, but only when all the constants are equal, a i = a, and κ = 5 a L . One of the extrema of the potential in this case is AdS 3 , so we will ignore this possibility.
Extremizing (41), we arrive at conditions on the fluxes for a critical point to exist:
a_i^2 = e^{\sum_{j \neq i} W_j} \left( \kappa\, e^{-W_i} - \frac{4}{L^2} \right) .   (42)
For real solutions we recognize the immediate need for a reduction on a sphere (κ > 0). Inverting the above expression to get W i in terms of a i is, in general, problematic, so we consider the simplification where a i = a, W i = W . In this case, it is easy to locate the critical points of V ,
e^{W_\pm} = \frac{L^2}{8} \left( 1 \pm \sqrt{ 1 - 16 a^2 L^{-2} } \right) .   (43)
We note that we require 16a 2 < L 2 for two real extrema. Examining the second derivative of the potential, we identify the upper sign as a local maximum and the lower sign as a local minimum corresponding to our de Sitter vacuum. By tuning the parameter a relative to L, as we show in FIG. 2, it is possible to find a de Sitter vacuum, where the cosmological constant is arbitrarily small and positive.
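To make this concrete, the following short numerical check is an illustrative aside (not part of the original paper); it assumes κ = 1 and substitutes a_i = a, W_i = W, K = −3W into (41), and reproduces the situation of FIG. 2 with L = 1 and a = 0.236: two real critical points given by (43), with the lower sign a local minimum of small positive vacuum energy and the upper sign the local maximum forming the barrier.

```python
# Reduced single-field potential obtained from Eq. (41) with kappa = 1,
# a_i = a, W_i = W and K = -3W, evaluated for the FIG. 2 parameters.
import numpy as np

L, a, kappa = 1.0, 0.236, 1.0

def V(W):
    return (-2.0 * kappa * np.exp(-3.0 * W)
            + (4.0 / L**2) * np.exp(-3.0 * W) * 3.0 * np.exp(W)
            + 0.5 * np.exp(-6.0 * W) * 3.0 * a**2 * np.exp(2.0 * W))

# Critical points from Eq. (43): e^{W_pm} = (L^2/8) (1 +/- sqrt(1 - 16 a^2 / L^2)).
disc = np.sqrt(1.0 - 16.0 * a**2 / L**2)
W_plus = np.log(L**2 / 8.0 * (1.0 + disc))
W_minus = np.log(L**2 / 8.0 * (1.0 - disc))

print("V at W_minus (dS vacuum):", V(W_minus))   # small and positive
print("V at W_plus  (barrier)  :", V(W_plus))    # larger: the local maximum
# A crude scan confirms the curvature signs at the two critical points.
Ws = np.linspace(W_minus - 0.3, W_plus + 0.3, 2001)
d2V = np.gradient(np.gradient(V(Ws), Ws), Ws)
i_min, i_max = np.argmin(np.abs(Ws - W_minus)), np.argmin(np.abs(Ws - W_plus))
print("curvature signs (min, max):", np.sign(d2V[[i_min, i_max]]))
```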
To address stability, we follow the treatment presented in [5], which is based in part on [45]. Since we are working in D = 3, it is natural to consider an O(3)-invariant Euclidean spacetime with the metric,
ds^2 = d\tau^2 + a(\tau)^2\, d\Omega_2^2 ,   (44)
where a is the Euclidean scale factor. The scalars obey the following equations of motion,
3 W'' + 6\, \frac{a'}{a}\, W' = V_{,W} , \qquad a'' = -\frac{a}{2} \left( \tfrac{3}{2} W'^2 + V \right) ,   (45)
where primes denote derivatives with respect to τ . These equations admit a simple instantonic three-sphere solution, where the scalar sits at one of the extrema of the potential, W = W ± , and
a(\tau) = H^{-1} \sin( H\tau ) .   (46)
Here H denotes the inverse radius of the sphere, which in turn is related to the potential through H^2 = V/2. Given the two extrema, we have two trivial solutions of this type describing a time-independent field.
We now wish to consider Coleman-De Luccia instantons, which describe tunneling trajectories between the de Sitter vacuum and asymptotic Minkowski space (W > W + ). According to [45], the probability, P, for tunneling from a false vacuum at W = W − , with vacuum energy V 0 ≪ 1 (in Planck units), to Minkowski space is to first approximation given by
P ≈ exp(S 0 ),(47)
where S 0 ≡ S(W 0 ) is the Euclidean action evaluated in the vicinity of the de Sitter vacuum. S 0 , in turn, is determined from the tunneling action,
S(W) = \int d^3x \sqrt{g} \left[ -R_3 + \tfrac{3}{2} (\partial W)^2 + V(W) \right] ,   (48)
which describes trajectories beginning in the vicinity of the false vacuum, W = W − , at τ = 0 and reaching W = 0 (Minkowski) at τ = τ f , where a(τ f ) = 0. Using the trace of the Einstein equation, we can rewrite (48)
S(W) = -2 \int d^3x \sqrt{g}\, V(W) = -8\pi \int_0^{\tau_f} d\tau\, a^2(\tau)\, V(W(\tau)) .   (49)
The Euclidean action calculated for the false vacuum de Sitter solution at W = W_- is
S_0 = -\frac{8\sqrt{2}\, \pi^2}{\sqrt{V_0}} .   (50)
By tuning a and L appropriately, so that V 0 is small, we can find an arbitrarily long-lived dS 3 vacuum. We conclude that the dS 3 vacuum can be regarded as stable. Though we have only presented one example, we expect similar comments to hold for dS 3 vacua supported through the consistent reductions we have identified.
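As a rough illustration of this tuning (an aside, not from the original paper; it uses the expression (50) as reconstructed above and ignores the requirement V_0 ≪ 1 for the leading-order estimate to be quantitatively accurate), one can tabulate how quickly the tunneling probability is suppressed as the vacuum energy is lowered:

```python
# Tunneling suppression P ~ exp(S_0) with S_0 = -8*sqrt(2)*pi^2 / sqrt(V_0):
# lowering the de Sitter vacuum energy V_0 makes the vacuum exponentially longer lived.
import math

for V0 in (1.0, 0.1, 0.01, 0.001):
    S0 = -8.0 * math.sqrt(2.0) * math.pi**2 / math.sqrt(V0)
    print(f"V_0 = {V0:<6} S_0 = {S0:9.1f}   P ~ exp(S_0) = {math.exp(S0):.3e}")
```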
As it stands, our set-up needs some tweaking in order to incorporate dS 4 vacua. We have seen that a vacuum solution exists when the number of internal dimensions is large. In the absence of two-form flux in the six-dimensional theory (20), one could contemplate reducing on a 2d Riemann surface. For the dS 4 × S 2 solution without flux threading the S 2 , it is not surprising that one finds that the vacuum is unstable.
Before leaving the subject of de Sitter solutions, we make one final comment. The five-dimensional theory (9) also has solutions that smoothly interpolate between dS 2 × S 3 in the infinite past and a dS 5 -type spacetime in the infinite future [46]. These solutions can be obtained from the AdS black hole solutions in D = 5 U(1) 3 gauged supergravity simply by changing the sign of the scalar potential, and can all be embedded in eleven dimensions using the KK reduction Ansatz (11). Like the previously mentioned dS 3 , these solutions have a fake superpotential in the equal-charge case.
DISCUSSION
We have studied a class of non-supersymmetric Ricciflat solutions which are warped products of a flat internal space and an anti-de Sitter or de Sitter spacetime. We have found that these Ricci-flat solutions are limited to three cases: warped products of (A)dS 5 and R 6 in eleven dimensions, (A)dS 8 and R 3 in eleven dimensions, and (A)dS 6 and R 4 in ten dimensions. There is also a fourth potentially interesting case of (A)dS 4 in a spacetime with large dimension D. Given that these geometries are rather simple and do not involve any matter content, it is intriguing that so few examples exist and that they are mainly limited to ten and eleven dimensions. While singular in the anti-de Sitter cases, these geometries are completely smooth for de Sitter and are similar in structure to the "bubble of nothing" [47]. Unlike direct products of AdS and a sphere or warped products of de Sitter and a hyperbolic space, both of which are supported by flux, our solutions are supported entirely by the warp factor, are hence not bound by the no-go theorem [11], and do not appear to arise from a limit of the former solutions.
We construct consistent KK truncations for which the above solutions arise as vacuum solutions. These KK truncations are shown to arise as limits of the celebrated dimensional reductions on spheres (like those discussed in [17]) in which the lower-dimensional spacetime gets augmented by one of the spherical coordinates while the remaining directions along the sphere get flattened out. Unlike KK truncations on hyperbolic spaces which are associated with non-compact gauge groups, the truncations in this paper lead to the bosonic sector of gauged supergravities with compact gauge groups. This is because the gauge fields are associated only with the flux in the higher-dimensional theory and, rather surprisingly, not the geometry. Therefore, within this truncation the isometries of the internal space do not play an explicit role in the lower-dimensional (bosonic sector of gauged) supergravity. This KK reduction enables one to embed five-dimensional U(1) 3 de Sitter gravity in elevendimensional supergravity. It is an interesting open direction to consider generalizations where the gauge groups are non-Abelian. We expect that one can achieve this by considering a similar limit of the maximally supersymmetric SO(4) [35], SO(6) [31] and SO(8) reductions [48].
Given that our solutions do not preserve supersymmetry, it is important to study their stability. We have focused on possible classical instabilities associated with breathing modes, though there could be other instabilities associated with massive modes that have been truncated out. Within this limited setting, one finds that the AdS 5 solution, although the corresponding D = 11 solution is singular, is stable. A dual four-dimensional non-supersymmetric CFT is a rather intriguing notion, given that this is dual to pure D = 11 gravity (reduced on our Ricci-flat solutions), that D-branes are completely absent and that the eleven-dimensional solution is singular. Other stable solutions include AdS 3 × Σ 3 , where Σ 3 is S 3 , T 3 or H 3 , as well as AdS 3 × Σ 3 × H 2 for a certain range of its parameters. On the other hand, the AdS 8 , AdS 6 and AdS 5 × Σ 3 solutions are not stable. It would have been rather surprising if there had been a stable AdS 8 solution, since this would imply the existence of a corresponding seven-dimensional CFT, though it would be non-supersymmetric.
As for the de Sitter solutions, we find dS 3 × S 2 solutions that, in terms of the breathing modes, are stable. We expect similar solutions of the form dS 3 × S 3 and dS 3 × S 3 × H 2 to exist in the six-dimensional theory (20) and the eight-dimensional theory (22), respectively. All the dS 4 vacua we have found are either i) unstable or ii) require an infinite number of internal dimensions, and are thus unsatisfactory; we nevertheless hope that this line of inquiry will lead to simple stable dS 4 vacua in the future. The most positive angle is that a flat direction in the reduction on R 6 to D = 5 can be found and a dS 4 vacuum can be engineered from the dS 5 vacuum using the approach of [49].
As with warped-product solutions of de Sitter and a hyperbolic space [40], our solutions appear to be intrinsically higher dimensional, in that they are not amenable to either compactification or a braneworld scenario [50]. In particular, massless gravitational modes are not associated with normalizable wavefunctions and therefore cannot be localized on a brane. It is not clear as to whether one can use the proposed dS/CFT correspondence [51] to extract meaningful information directly from the elevendimensional embeddings of these de Sitter solutions.
While our construction remains intact if the internal space R n is replaced by a cone over any Einstein space with positive curvature, there is a conical singularity at the apex of the cone. Replacing the internal space with a smooth cone, such as a resolved or deformed conifold [52] for the case of a six-dimensional space, would be interesting but would necessitate a slightly different Ansatz than was considered in this paper.
D = 11, p = 5, \Sigma_{D-p} = R^6
FIG. 1. The mass-squared eigenvalues for scalar fluctuations around the geometry AdS 3 × T 3 × H 2 as a function of the critical value of Y.
FIG. 2. Plot of the potential for L = 1 and a = 0.236. By tuning a relative to L, we can increase the barrier to decay and stabilise the vacuum.
Superficially, this bears some similarity to the AdS/Ricci-flat correspondence[24] in that there is a connection between a Ricci-flat space and an AdS spacetime. However, our connection, which also works at the level of the equations of motion, does not involve an analytic continuation of dimensionality.
It was also observed in [27] that a non-compact reduction was inconsistent when performed at the level of the action.
In terms of the unconstrained scalars, ϕ i , i = 1, 2, 3, this simply corresponds to the limit ϕ i → −∞, so it is symmetric.
It should be noted that the original five-dimensional vacuum corresponds to a local maximum and is unstable.
ACKNOWLEDGEMENTSWe have enjoyed fruitful discussions with K. Balasubramanian, C. Hull
. G W Gibbons, C M Hull, Phys. Lett. B. 109190G. W. Gibbons and C. M. Hull, Phys. Lett. B 109, 190 (1982).
. K P Tod, Phys. Lett. B. 121241K. p. Tod, Phys. Lett. B 121, 241 (1983).
. J P Gauntlett, J B Gutowski, C M Hull, S Pakis, H S Reall, hep-th/0209114Class. Quant. Grav. 20J. P. Gauntlett, J. B. Gutowski, C. M. Hull, S. Pakis and H. S. Reall, Class. Quant. Grav. 20, 4587 (2003) [hep-th/0209114].
. J B Gutowski, D Martelli, H S Reall, hep-th/0306235Class. Quant. Grav. 205049J. B. Gutowski, D. Martelli and H. S. Reall, Class. Quant. Grav. 20 (2003) 5049 [hep-th/0306235].
. C P Burgess, R Kallosh, F Quevedo, hep-th/0309187JHEP. 031056C. P. Burgess, R. Kallosh and F. Quevedo, JHEP 0310, 056 (2003) [hep-th/0309187].
. A Saltman, E Silverstein, hep-th/0402135JHEP. 041166A. Saltman and E. Silverstein, JHEP 0411, 066 (2004) [hep-th/0402135].
. V Balasubramanian, P Berglund, J P Conlon, F Quevedo, hep-th/0502058JHEP. 05037V. Balasubramanian, P. Berglund, J. P. Conlon and F. Quevedo, JHEP 0503, 007 (2005) [hep-th/0502058].
. C Caviezel, P Koerber, S Kors, D Lust, T Wrase, M Zagermann, arXiv:0812.3551JHEP. 090410hep-thC. Caviezel, P. Koerber, S. Kors, D. Lust, T. Wrase and M. Zagermann, JHEP 0904, 010 (2009) [arXiv:0812.3551 [hep-th]].
. S S Haque, G Shiu, B Underwood, T Van Riet, arXiv:0810.5328Phys. Rev. D. 7986005hepthS. S. Haque, G. Shiu, B. Underwood and T. Van Riet, Phys. Rev. D 79, 086005 (2009) [arXiv:0810.5328 [hep- th]].
. L Covi, M Gomez-Reino, C Gross, J Louis, G A Palma, C A Scrucca, arXiv:0804.1073JHEP. 080657hep-thL. Covi, M. Gomez-Reino, C. Gross, J. Louis, G. A. Palma and C. A. Scrucca, JHEP 0806, 057 (2008) [arXiv:0804.1073 [hep-th]].
. S Kachru, R Kallosh, A D Linde, S P Trivedi, hep-th/0301240Phys. Rev. D. 6846005S. Kachru, R. Kallosh, A. D. Linde and S. P. Trivedi, Phys. Rev. D 68, 046005 (2003) [hep-th/0301240].
. I Bena, M Grana, N Halmagyi, arXiv:0912.3519JHEP. 100987hep-thI. Bena, M. Grana and N. Halmagyi, JHEP 1009, 087 (2010) [arXiv:0912.3519 [hep-th]].
. U Danielsson, G Dibitetto, arXiv:1312.5331JHEP. 140513hep-thU. Danielsson and G. Dibitetto, JHEP 1405, 013 (2014) [arXiv:1312.5331 [hep-th]].
. J Blaback, D Roest, I Zavala, arXiv:1312.5328hepthJ. Blaback, D. Roest and I. Zavala, arXiv:1312.5328 [hep- th].
. F Hassler, D Lust, S Massai, arXiv:1405.2325hepthF. Hassler, D. Lust and S. Massai, arXiv:1405.2325 [hep- th].
. R Kallosh, A Linde, B Vercnocke, T Wrase, arXiv:1406.4866hep-thR. Kallosh, A. Linde, B. Vercnocke and T. Wrase, arXiv:1406.4866 [hep-th].
. S S Gubser, hep-th/0002160Adv. Theor. Math. Phys. 4679S. S. Gubser, Adv. Theor. Math. Phys. 4, 679 (2000) [hep-th/0002160].
. I Bena, A Buchel, O J C Dias, arXiv:1212.5162Phys. Rev. D. 87663012hep-thI. Bena, A. Buchel and O. J. C. Dias, Phys. Rev. D 87, no. 6, 063012 (2013) [arXiv:1212.5162 [hep-th]];
. J Blaback, U H Danielsson, T Van Riet, arXiv:1202.1132JHEP. 130261hep-thJ. Blaback, U. H. Danielsson and T. Van Riet, JHEP 1302, 061 (2013) [arXiv:1202.1132 [hep-th]];
. I Bena, A Buchel, O J C Dias, arXiv:1212.5162Phys. Rev. D. 87663012hep-thI. Bena, A. Buchel and O. J. C. Dias, Phys. Rev. D 87, no. 6, 063012 (2013) [arXiv:1212.5162 [hep-th]].
. D Junghans, D Schmidt, M Zagermann, arXiv:1402.6040hep-thD. Junghans, D. Schmidt and M. Zagermann, arXiv:1402.6040 [hep-th].
. J M Maldacena, C Núñez, hep-th/0007018Int. J. Mod. Phys. A. 16J. M. Maldacena and C. Núñez, Int. J. Mod. Phys. A 16, 822 (2001) [hep-th/0007018].
. M M Sheikh-Jabbari, H Yavartanoo, arXiv:1107.5705JHEP. 111013hep-thM. M. Sheikh-Jabbari and H. Yavartanoo, JHEP 1110, 013 (2011) [arXiv:1107.5705 [hep-th]].
. R Fareghbal, C N Gowdigere, A E Mosaffa, M M Sheikh-Jabbari, arXiv:0805.0203Phys. Rev. D. 8146005hep-thR. Fareghbal, C. N. Gowdigere, A. E. Mosaffa and M. M. Sheikh-Jabbari, Phys. Rev. D 81, 046005 (2010) [arXiv:0805.0203 [hep-th]].
. R Fareghbal, C N Gowdigere, A E Mosaffa, M M Sheikh-Jabbari, arXiv:0801.4457JHEP. 080870hep-thR. Fareghbal, C. N. Gowdigere, A. E. Mosaffa and M. M. Sheikh-Jabbari, JHEP 0808, 070 (2008) [arXiv:0801.4457 [hep-th]].
. J Boer, M Johnstone, M M Sheikh-Jabbari, J Simon, arXiv:1112.4664Phys. Rev. D. 8584039hep-thJ. de Boer, M. Johnstone, M. M. Sheikh-Jabbari and J. Simon, Phys. Rev. D 85 (2012) 084039 [arXiv:1112.4664 [hep-th]].
. M Johnstone, M M Sheikh-Jabbari, J Simon, H Yavartanoo, arXiv:1301.3387JHEP. 130445M. Johnstone, M. M. Sheikh-Jabbari, J. Si- mon and H. Yavartanoo, JHEP 1304 (2013) 045 [arXiv:1301.3387].
. O Dewolfe, S S Gubser, C Rosen, arXiv:1312.7347hep-thO. DeWolfe, S. S. Gubser and C. Rosen, arXiv:1312.7347 [hep-th].
M. Cvetič, M. J. Duff, P. Hoxha, J. T. Liu, H. Lü, J. X. Lu, R. Martinez-Acosta, C. N. Pope, H. Sati and T. A. Tran, Nucl. Phys. B 558, 96 (1999) [hep-th/9903214].
. N Bobev, N Halmagyi, K Pilch, N P Warner, arXiv:1006.2546Class. Quant. Grav. 27235013hep-thN. Bobev, N. Halmagyi, K. Pilch and N. P. Warner, Class. Quant. Grav. 27, 235013 (2010) [arXiv:1006.2546 [hep-th]].
. M J Duff, B E W Nilsson, C N Pope, Phys. Lett. B. 139154M. J. Duff, B. E. W. Nilsson and C. N. Pope, Phys. Lett. B 139, 154 (1984).
. M Berkooz, S. -J Rey, hep-th/9807200Phys. Lett. B. 990168JHEPM. Berkooz and S. -J. Rey, JHEP 9901, 014 (1999) [Phys. Lett. B 449, 68 (1999)] [hep-th/9807200].
. O Dewolfe, D Z Freedman, S S Gubser, G T Horowitz, I Mitra, hep-th/0105047Phys. Rev. D. 6564033O. DeWolfe, D. Z. Freedman, S. S. Gubser, G. T. Horowitz and I. Mitra, Phys. Rev. D 65, 064033 (2002) [hep-th/0105047].
. X Dong, B Horn, E Silverstein, G Torroba, arXiv:1005.5403Class. Quant. Grav. 27245020hepthX. Dong, B. Horn, E. Silverstein and G. Torroba, Class. Quant. Grav. 27, 245020 (2010) [arXiv:1005.5403 [hep- th]].
. J P Gauntlett, D Martelli, J Sparks, D Waldram, hep-th/0402153Class. Quant. Grav. 21J. P. Gauntlett, D. Martelli, J. Sparks and D. Waldram, Class. Quant. Grav. 21, 4335 (2004) [hep-th/0402153].
. M M Caldarelli, J Camps, B Goutraux, K Skenderis, arXiv:1211.2815Phys. Rev. D. 87661502hep-thM. M. Caldarelli, J. Camps, B. Goutraux and K. Sk- enderis, Phys. Rev. D 87, no. 6, 061502 (2013) [arXiv:1211.2815 [hep-th]].
. M M Caldarelli, J Camps, B Goutraux, K Skenderis, arXiv:1312.7874JHEP. 140471hepthM. M. Caldarelli, J. Camps, B. Goutraux and K. Sk- enderis, JHEP 1404, 071 (2014) [arXiv:1312.7874 [hep- th]].
. H Elvang, R Emparan, D Mateos, H S Reall, hep-th/0408120Phys. Rev. D. 7124033H. Elvang, R. Emparan, D. Mateos and H. S. Reall, Phys. Rev. D 71, 024033 (2005) [hep-th/0408120].
. E Colgáin, O Varela, arXiv:1106.4781Phys. Lett. B. 703180hep-thE.Ó Colgáin and O. Varela, Phys. Lett. B 703, 180 (2011) [arXiv:1106.4781 [hep-th]].
G. Itsios, Y. Lozano, E. Ó Colgáin and K. Sfetsos, JHEP 1208, 132 (2012) [arXiv:1205.2274 [hep-th]].
. F Benini, N Bobev, arXiv:1302.4451JHEP. 13065hep-thF. Benini and N. Bobev, JHEP 1306, 005 (2013) [arXiv:1302.4451 [hep-th]].
. S Cucu, H Lü, J F Vázquez-Poritz, hep-th/0303211Phys. Lett. B. 568261S. Cucu, H. Lü and J. F. Vázquez-Poritz, Phys. Lett. B 568, 261 (2003) [hep-th/0303211].
. S Cucu, H Lü, J F Vázquez-Poritz, hep-th/0304022Nucl. Phys. B. 677181S. Cucu, H. Lü and J. F. Vázquez-Poritz, Nucl. Phys. B 677, 181 (2004) [hep-th/0304022].
. A Almuhairi, J Polchinski, arXiv:1108.1213hep-thA. Almuhairi and J. Polchinski, arXiv:1108.1213 [hep-th].
. M Cvetič, J T Liu, H Lü, C N Pope, hep-th/9905096Nucl. Phys. B. 560230M. Cvetič, J. T. Liu, H. Lü and C. N. Pope, Nucl. Phys. B 560, 230 (1999) [hep-th/9905096].
. M Cvetic, H Lu, C N Pope, A Sadrzadeh, T A Tran, arXiv:hep-th/0003103Nucl. Phys. B. 586275M. Cvetic, H. Lu, C. N. Pope, A. Sadrzadeh and T. A. Tran, Nucl. Phys. B 586 (2000) 275 [arXiv:hep- th/0003103];
. A Khavaev, K Pilch, N P Warner, arXiv:hep-th/9812035Phys. Lett. B. 48714A. Khavaev, K. Pilch and N. P. Warner, Phys. Lett. B 487 (2000) 14 [arXiv:hep-th/9812035].
. M Cvetič, H Lü, C N Pope, A Sadrzadeh, T A Tran, hep- th/0005137Nucl. Phys. B. 590233M. Cvetič, H. Lü, C. N. Pope, A. Sadrzadeh and T. A. Tran, Nucl. Phys. B 590, 233 (2000) [hep- th/0005137].
. M J Duff, H Lü, C N Pope, hep-th/9807173Nucl. Phys. B. 544145M. J. Duff, H. Lü and C. N. Pope, Nucl. Phys. B 544, 145 (1999) [hep-th/9807173].
. J T Liu, R Minasian, hep-th/9903269Phys. Lett. B. 45739J. T. Liu and R. Minasian, Phys. Lett. B 457, 39 (1999) [hep-th/9903269].
. H Nastase, D Vaman, P Van Nieuwenhuizen, hep-th/9905075Phys. Lett. B. 46996H. Nastase, D. Vaman and P. van Nieuwenhuizen, Phys. Lett. B 469, 96 (1999) [hep-th/9905075].
. P Breitenlohner, D Z Freedman, Annals Phys. 144249P. Breitenlohner and D. Z. Freedman, Annals Phys. 144, 249 (1982).
. P Karndumri, E Colgáin, arXiv:1302.6532Phys. Rev. D. 8710101902hep-thP. Karndumri and E.Ó Colgáin, Phys. Rev. D 87, no. 10, 101902 (2013) [arXiv:1302.6532 [hep-th]].
. P Karndumri, E Colgáin, arXiv:1307.2086JHEP. 131094P. Karndumri and E.Ó Colgáin, JHEP 1310, 094 (2013) [arXiv:1307.2086].
. W Nahm, Nucl. Phys. B. 135149W. Nahm, Nucl. Phys. B 135, 149 (1978).
. P K Townsend, M N R Wohlfarth, hep-th/0303097Phys. Rev. Lett. 9161302P. K. Townsend and M. N. R. Wohlfarth, Phys. Rev. Lett. 91, 061302 (2003) [hep-th/0303097].
. G W Gibbons, C M Hull, hep-th/0111072G. W. Gibbons and C. M. Hull, hep-th/0111072.
. H Lü, J F Vázquez-Poritz, hep-th/0308104Phys. Lett. B. 597394H. Lü and J. F. Vázquez-Poritz, Phys. Lett. B 597, 394 (2004) [hep-th/0308104].
. C M Hull, N P Warner, Class. Quant. Grav. 51517C. M. Hull and N. P. Warner, Class. Quant. Grav. 5, 1517 (1988).
. C M Hull, hep-th/9806146JHEP. 980721C. M. Hull, JHEP 9807, 021 (1998) [hep-th/9806146].
. J T Liu, W A Sabra, W Y Wen, hep-th/0304253JHEP. 04017J. T. Liu, W. A. Sabra and W. Y. Wen, JHEP 0401, 007 (2004) [hep-th/0304253].
. H Lü, J F Vázquez-Poritz, hep-th/0305250JCAP. 04024H. Lü and J. F. Vázquez-Poritz, JCAP 0402, 004 (2004) [hep-th/0305250].
. S R Coleman, F De Luccia, Phys. Rev. D. 213305S. R. Coleman and F. De Luccia, Phys. Rev. D 21, 3305 (1980).
. H Lü, C N Pope, J F Vázquez-Poritz, hep-th/0307001Nucl. Phys. B. 70947H. Lü, C. N. Pope and J. F. Vázquez-Poritz, Nucl. Phys. B 709, 47 (2005) [hep-th/0307001].
. I-S Yang, arXiv:0910.1397Phys. Rev. D. 81125020hep-thI-S. Yang, Phys. Rev. D 81, 125020 (2010) [arXiv:0910.1397 [hep-th]].
. J J Blanco-Pillado, B Shlaer, arXiv:1002.4408Phys. Rev. D. 8286015hep-thJ. J. Blanco-Pillado and B. Shlaer, Phys. Rev. D 82, 086015 (2010) [arXiv:1002.4408 [hep-th]].
. B De Wit, H Nicolai, Nucl. Phys. B. 281211B. de Wit and H. Nicolai, Nucl. Phys. B 281 (1987) 211.
. I R Klebanov, J M Maldacena, hep-th/0409133Int. J. Mod. Phys. A. 195003I. R. Klebanov and J. M. Maldacena, Int. J. Mod. Phys. A 19, 5003 (2004) [hep-th/0409133].
. L Randall, R Sundrum, hep-th/9906064Phys. Rev. Lett. 834690L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999) [hep-th/9906064].
. A Strominger, hep- th/0106113JHEP. 011034A. Strominger, JHEP 0110, 034 (2001) [hep- th/0106113].
. D Anninos, arXiv:1205.3855Int. J. Mod. Phys. A. 271230013hep-thD. Anninos, Int. J. Mod. Phys. A 27, 1230013 (2012) [arXiv:1205.3855 [hep-th]].
. P Candelas, X C De La Ossa, Nucl. Phys. 342246P. Candelas and X.C. de la Ossa, Nucl. Phys. B342, 246 (1990).
| []
|
[
"A Channel Coding Perspective of Recommendation Systems",
"A Channel Coding Perspective of Recommendation Systems"
]
| [
"S T Aditya [email protected] ",
"Onkar Dabeer ",
"Bikash Kumar Dey [email protected] ",
"\nDepartment of Electrical Engineering\nSchool of Technology and Computer Science\nIndian Institute of Technology Bombay Mumbai\nIndia\n",
"\nDepartment of Electrical Engineering Indian Institute of Technology Bombay Mumbai\nTata Institute of Fundamental Research Mumbai\nIndia, India\n"
]
| [
"Department of Electrical Engineering\nSchool of Technology and Computer Science\nIndian Institute of Technology Bombay Mumbai\nIndia",
"Department of Electrical Engineering Indian Institute of Technology Bombay Mumbai\nTata Institute of Fundamental Research Mumbai\nIndia, India"
]
| []
| Motivated by recommendation systems, we consider the problem of estimating block constant binary matrices (of size m × n) from sparse and noisy observations. The observations are obtained from the underlying block constant matrix after unknown row and column permutations, erasures, and errors. We derive upper and lower bounds on the achievable probability of error. For fixed erasure and error probability, we show that there exists a constant C1 such that if the cluster sizes are less than C1 ln(mn), then for any algorithm the probability of error approaches one as m, n → ∞. On the other hand, we show that a simple polynomial time algorithm gives probability of error diminishing to zero provided the cluster sizes are greater than C2 ln(mn) for a suitable constant C2. | 10.1109/isit.2009.5205549 | [
"https://arxiv.org/pdf/0901.1753v1.pdf"
]
| 12,105,101 | 0901.1753 | ba9892291882f4d8c27e664d227f2adee6248dd2 |
A Channel Coding Perspective of Recommendation Systems
13 Jan 2009
S T Aditya [email protected]
Onkar Dabeer
Bikash Kumar Dey [email protected]
Department of Electrical Engineering
School of Technology and Computer Science
Indian Institute of Technology Bombay Mumbai
India
Department of Electrical Engineering Indian Institute of Technology Bombay Mumbai
Tata Institute of Fundamental Research Mumbai
India, India
A Channel Coding Perspective of Recommendation Systems
13 Jan 2009
Motivated by recommendation systems, we consider the problem of estimating block constant binary matrices (of size m × n) from sparse and noisy observations. The observations are obtained from the underlying block constant matrix after unknown row and column permutations, erasures, and errors. We derive upper and lower bounds on the achievable probability of error. For fixed erasure and error probability, we show that there exists a constant C1 such that if the cluster sizes are less than C1 ln(mn), then for any algorithm the probability of error approaches one as m, n → ∞. On the other hand, we show that a simple polynomial time algorithm gives probability of error diminishing to zero provided the cluster sizes are greater than C2 ln(mn) for a suitable constant C2.
I. INTRODUCTION
Recommender systems are commonly used to suggest content (movies, books, etc.) that is relevant to a given buyer. The most common approach is to predict the rating that a potential buyer might assign to an item and use the predicted ratings to recommend items. The problem thus reduces to completion of the rating matrix based on a sparse set of observations. This problem has been popularized by the Netflix Prize ([1]). A number of methods have been suggested to solve this problem; see for example [2], [3], [4] and references therein. Recently, several authors ( [5], [6] [7]) have used the assumption of a low-rank rating matrix to propose provably good algorithms. For example, in [5], [6], a "compressed sensing" approach based on nuclear-norm minimization is proposed. It is shown in [6] that if the number of samples is larger than a lower bound (depending on the matrix size and rank), then with high probability, the proposed optimization problem exactly recovers the underlying low-rank matrix from the samples. In [7], the relationship between the "fit-error" and the prediction error is studied for large random matrices with bounded rank. An efficient algorithm for matrix completion is also proposed.
In this paper, we consider a different setup. We assume that there is an underlying "true" rating matrix, which has block constant structure. In other words, buyers (respectively items) are clustered into groups of similar buyers (respectively items), and similar buyers rate similar items by the same value. The observations are obtained from this underlying matrix (say M) as described below.
1) The rows and columns of M are permuted with unknown permutations, that is, the clusters are not known. 2) Many entries of M are erased by a memoryless erasure channel. This models the sparsity of the available ratings.
3) The non-erased entries are observed through a discrete memoryless channel (DMC). This channel models
• the residual error in the block constant model, and, • the "noisy" behavior of buyers who may rate the same item differently at different times.
One may also treat these two channels as a single effective DMC, but we prefer the above break-up for conceptual reasons. Our goal is to identify conditions on the cluster sizes under which the underlying matrix can be recovered with small probability of error. Our recommendation system model differs from [5], [6], and in particular, we do not seek completion of the observed matrix, but rather the recovery of the underlying M. As described above, our goal reduces to analyzing the error performance of the code of block-constant matrices over the channel described above. From a practical stand-point, it is desirable to consider the case when the parameters of the erasure channel and DMC are not known. However, in this paper, we consider the simpler case when these channel parameters are known. In particular, for simplicity, we consider the case when M is an m×n matrix with entries in {0, 1} and the DMC is a binary symmetric channel (BSC) with error probability p. The erasure probability is ǫ. Our main results are of the following nature.
• If the "largest cluster size" (defined precisely in Section III) is less than $C_1 \ln(mn)$, then the probability of error approaches unity for any estimator of M as mn → ∞ (Corollary 2, Part 2). • We analyze a simple algorithm, which clusters rows and columns first, and then estimates the cluster values. We show that if the "smallest cluster size" is greater than a constant multiple of ln(mn), then the probability of error for this algorithm (averaged over the rating matrices) approaches zero as mn → ∞ (Theorem 3, Part 2). Combined with the previous result, this implies that ln(mn) is a sharp threshold for exact recovery asymptotically.
• If we consider the probability of error for a fixed rating matrix, then the algorithm needs the smallest cluster size to be larger than a constant multiple of $\sqrt{mn\,\ln(m)\ln(n)}$. While we obtain the asymptotic results for fixed p and ε, the bounds we obtain in the process also apply to the case when p, ε depend on m, n.
The paper is organized as follows. In Section II, we describe our model. The main results are stated and proved in Section III. We conclude in Section IV.
II. OUR MODEL AND NOTATION
Suppose X is the unknown m × n rating matrix with entries in {0, 1}, where n is the number of buyers and m is the number of items. Let $A = \{A_i\}_{i=1}^{r}$ and $B = \{B_j\}_{j=1}^{t}$ be partitions of [1 : m] and [1 : n] respectively. The sets $A_i \times B_j$ are the clusters in the matrix X. We call the $A_i$'s (resp. $B_j$'s) the row (column) clusters. We denote the corresponding row and column cluster sizes by $m_i$ and $n_j$, and the number of row clusters and the number of column clusters by r and t respectively. (We note that the $A_i$'s (respectively $B_j$'s) need not consist of adjacent rows (respectively columns) and hence this notation is different from that in the introduction.) The entries of X are passed through the cascade of a memoryless erasure channel with erasure probability ε and a memoryless BSC with error probability p. While the erasure channel models the missing ratings, the BSC models noisy behavior of the buyers. The output of the channel, i.e. the observed rating matrix, is denoted by Y and its entries are in {0, 1, e}, where e denotes an erasure. We analyze the probability of error for a fixed rating matrix as well as the probability of error averaged over the rating matrices. We use the following probability law on the rating matrices. We assume that all row and column clusters have the same size $m_0$ and $n_0$ respectively, and the rt constant blocks (of size $m_0 n_0$) contain i.i.d. Bernoulli(1/2) random variables.
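As an aside, the generative model just described is easy to simulate; the following sketch (our own illustration, with hypothetical function and parameter names, not code from the paper) draws a block-constant matrix with r × t clusters of size m0 × n0, hides the cluster structure with random row/column permutations, and applies the erasure/BSC cascade.

```python
import numpy as np

def sample_observation(r, t, m0, n0, eps, p, seed=0):
    """Sample a block-constant rating matrix X and its sparse, noisy observation Y."""
    rng = np.random.default_rng(seed)
    blocks = rng.integers(0, 2, size=(r, t))                    # i.i.d. Bernoulli(1/2) per block
    X = np.kron(blocks, np.ones((m0, n0), dtype=int))           # expand to (r*m0) x (t*n0)
    X = X[rng.permutation(r * m0)][:, rng.permutation(t * n0)]  # unknown permutations
    flipped = rng.random(X.shape) < p                           # BSC flips (flipping before erasing
    Y = np.where(flipped, 1 - X, X).astype(object)              #  yields the same distribution)
    Y[rng.random(X.shape) < eps] = 'e'                          # erasures
    return X, Y

X, Y = sample_observation(r=4, t=5, m0=30, n0=40, eps=0.8, p=0.1)
```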
III. MAIN RESULTS
In Section III-A, we study the probability of error of the maximum likelihood decoder when the clusters A, B are known. This result provides a lower bound on the cluster size that ensures diminishing probability of error. In Section III-B, we analyze the probability of error in identifying the clusters for a specific algorithm. These results are integrated in Section III-C to obtain conditions on the cluster sizes for the overall probability of error to diminish to zero.
A. Probability of Error When Clustering is Known
In this section, we study the probability of error of the maximum likelihood decoder for a given rating matrix X when A and B are known. We denote this probability by P e|A,B (X). We note that the ML decoder ignores the erasures, counts the number of 0's and 1's in each cluster A i × B j , and takes a majority decision. Ties are resolved by tossing a fair coin. The following theorem provides simple upper and lower bounds on P e|A,B .
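Before stating the bounds, here is a minimal sketch (ours, not the authors' code) of this majority-vote decoder when the partitions are given as lists of row/column index arrays.

```python
import numpy as np

def ml_decode_known_clusters(Y, row_clusters, col_clusters, seed=1):
    """Majority vote per cluster A_i x B_j; erasures are ignored, ties broken by a fair coin."""
    rng = np.random.default_rng(seed)
    X_hat = np.zeros(Y.shape, dtype=int)
    for A_i in row_clusters:
        for B_j in col_clusters:
            block = Y[np.ix_(A_i, B_j)].ravel()
            ones, zeros = np.sum(block == 1), np.sum(block == 0)
            val = int(rng.integers(0, 2)) if ones == zeros else int(ones > zeros)
            X_hat[np.ix_(A_i, B_j)] = val
    return X_hat
```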
Theorem 1: Let 0 ≤ p ≤ 1/2, and let
$$p_1 = \epsilon + 2(1-\epsilon)\sqrt{p(1-p)}, \qquad G(u) = 1 - \prod_{i=1,\,j=1}^{r,\,t}\left(1 - u^{m_i n_j}\right).$$
Then the probability of error of the ML decoder satisfies the following bounds:
$$G(\epsilon) \;\le\; P_{e|A,B}(X) \;\le\; G(p_1). \qquad (1)$$
Proof: We note that when p = 0, we make an error in a cluster iff all the entries in the cluster are erased. Since the erasures in different clusters are independent, it follows that $P_{e|A,B}(X) = G(\epsilon)$ for p = 0. This gives the lower bound on $P_{e|A,B}(X)$ for p ≥ 0.
Next we prove the upper bound. Suppose in cluster A i × B j we have s non erased samples. Then the probability of correct decision in this cluster is given by
$$\Pr(E^c_{i,j,s}) = \begin{cases} \sum_{q=0}^{\lfloor s/2\rfloor}\binom{s}{q}p^q(1-p)^{s-q}, & \text{if } s \text{ is odd},\\[1mm] \sum_{q=0}^{s/2-1}\binom{s}{q}p^q(1-p)^{s-q} + \tfrac{1}{2}\binom{s}{s/2}p^{s/2}(1-p)^{s/2}, & \text{if } s \text{ is even}.\end{cases} \qquad (2)$$
Averaging over the number of non erased samples, the probability of correct decision in cluster A i × B j is given by
$$\Pr(E^c_{i,j}) = \sum_{s=0}^{m_i n_j}\binom{m_i n_j}{s}\,\epsilon^{m_i n_j - s}\,(1-\epsilon)^{s}\,\Pr(E^c_{i,j,s}). \qquad (3)$$
Since the erasure and BSC are memoryless
$$P_{e|A,B}(X) = \Pr\Big(\bigcup_{i=1,\,j=1}^{r,\,t} E_{i,j}\Big) = 1 - \prod_{i=1,\,j=1}^{r,\,t}\Pr\big(E^c_{i,j}\big). \qquad (4)$$
Equations (4), (3), and (2) specify the probability of error. The desired upper bound is obtained by deriving an upper bound on Pr(E c i,j,s ). First we note that from (2),
$$1 - \Pr(E^c_{i,j,s}) \;\le\; \sum_{q=\lceil s/2\rceil}^{s}\binom{s}{q}p^q(1-p)^{s-q}.$$
But for $0 \le p \le \tfrac12$ and $q \ge \tfrac{s}{2}$, $p^q(1-p)^{s-q} \le p^{s/2}(1-p)^{s/2}$, and hence
$$\Pr(E^c_{i,j,s}) \;\ge\; 1 - \big(2\sqrt{p(1-p)}\big)^{s}. \qquad (5)$$
From Equations (3) and (5), we have $\Pr(E^c_{i,j}) \ge 1 - p_1^{m_i n_j}$ and so from (4), $P_{e|A,B}(X) \le G(p_1)$. This completes the proof for the upper bound on $P_{e|A,B}(X)$.
Let us define the smallest cluster size as
$$s_*(X) := \min_{i,j}\, m_i n_j, \qquad (6)$$
and the largest cluster size as
$$s^*(X) := \max_{i,j}\, m_i n_j.$$
The following corollary gives simpler bounds on $P_{e|A,B}(X)$. Corollary 1: Let $N_X(s)$ be the number of clusters in X with exactly s elements. Let $s_*(X) \ge \frac{\ln 2}{\ln(1/p_1)}$.
Then
$$P_{e|A,B}(X) \ge 1 - \exp\Big(-\sum_{s=1}^{\infty} N_X(s)\,\epsilon^{s}\Big), \qquad P_{e|A,B}(X) \le 1 - \exp\Big(-2\ln(2)\sum_{s=1}^{\infty} N_X(s)\,p_1^{s}\Big). \qquad (7)$$
In particular,
$$P_{e|A,B}(X) \ge 1 - \exp\Big(-\frac{mn\,\epsilon^{s^*(X)}}{s^*(X)}\Big), \qquad P_{e|A,B}(X) \le 1 - \exp\Big(-\frac{2\ln(2)\,mn\,p_1^{s_*(X)}}{s_*(X)}\Big). \qquad (8)$$
Proof: The proof is based on upper and lower bounds for
G(u). We note that $(1-x) \le \exp(-x)$ and, for $x\in[0,1/2]$, $1-x \ge \exp(-2\ln(2)\,x)$. Hence
$$\exp\Big(-2\ln(2)\sum_{i=1,\,j=1}^{r,\,t} u^{m_i n_j}\Big) \;\le\; \prod_{i=1,\,j=1}^{r,\,t}\big(1-u^{m_i n_j}\big) \;\le\; \exp\Big(-\sum_{i=1,\,j=1}^{r,\,t} u^{m_i n_j}\Big),$$
where the first inequality holds for $u^{m_i n_j} \le \tfrac12$. The sum in the exponent can be written in terms of the size of the clusters:
$$\sum_{i=1,\,j=1}^{r,\,t} u^{m_i n_j} = \sum_{s=1}^{\infty} N_X(s)\,u^{s}.$$
The bounds (7) now follow from Theorem 1 by noting that $p_1^{m_i n_j} \le 1/2$ for $s_*(X) \ge \ln(2)/\ln(1/p_1)$. To prove (8), we note that
$$\sum_{s=1}^{\infty} N_X(s)\,u^{s} \;\le\; rt\,u^{s_*(X)} \;\le\; \frac{mn}{s_*(X)}\,u^{s_*(X)}.$$
This gives the upper bound in (8). The lower bound in (8) follows similarly. We are interested in studying the cluster sizes that guarantee correct decisions asymptotically. Though (7) is tighter than (8), the conditions arising out of (8) are cleaner and are stated below.
Corollary 2: Suppose we are given a sequence of rating matrices of increasing size, that is, mn → ∞. Then the following are true. 1) If $s_*(X) \ge \frac{\ln(mn)}{\ln(1/p_1)}$ then $P_{e|A,B}(X) \to 0$.
2) If $s^*(X) \le \frac{(1-\delta)\ln(mn)}{\ln(1/\epsilon)}$ for some δ > 0, then $P_{e|A,B}(X) \to 1$. Proof: First consider Part 1. From (8), using $e^{-x} \ge 1-x$ we get
$$P_{e|A,B}(X) \;\le\; \frac{2\ln(2)\,mn\,p_1^{s_*(X)}}{s_*(X)}.$$
The RHS is a decreasing function of $s_*(X)$ and hence substituting the lower bound on $s_*(X)$ we get $P_{e|A,B}(X) \le \frac{2\ln(2)\ln(1/p_1)}{\ln(mn)} \to 0$.
For Part 2, we note that $1 - \exp\big(-mn\,\epsilon^{s^*(X)}/s^*(X)\big)$ is a decreasing function of $s^*(X)$, and hence substituting the upper bound, we have from (8)
$$P_{e|A,B}(X) \;\ge\; 1 - \exp\Big(-\frac{\ln(1/\epsilon)\,(mn)^{\delta}}{(1-\delta)\ln(mn)}\Big).$$
But since (mn) δ / ln mn → ∞, we have P e|A,B → 1.
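To get a feel for these thresholds, a small numerical sketch (ours; the parameter values are purely illustrative) evaluates $p_1$ and the two cluster-size thresholds of Corollary 2.

```python
import math

def corollary2_thresholds(m, n, eps, p, delta=0.1):
    """Cluster-size thresholds of Corollary 2 for given channel parameters."""
    p1 = eps + 2 * (1 - eps) * math.sqrt(p * (1 - p))
    zero_error_above = math.log(m * n) / math.log(1 / p1)              # part 1: s_*(X) >= this
    failure_below = (1 - delta) * math.log(m * n) / math.log(1 / eps)  # part 2: s^*(X) <= this
    return p1, zero_error_above, failure_below

# illustrative only: 10^4 x 10^4 matrix, 80% erasures, 10% flips
print(corollary2_thresholds(10_000, 10_000, eps=0.8, p=0.1))
```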
B. Probability of Error in Clustering
Data mining researchers have developed several techniques for clustering data; see for example [8,Chapter 4]. In this section, we analyze a simple polynomial time clustering algorithm. The algorithm clusters rows and columns separately. To cluster rows, we compute the normalized Hamming distance between two rows over commonly sampled entries. For rows i, j, this distance is:
$$d_{ij} = \frac{1}{n}\sum_{k=1}^{n} \mathbf{1}\big(Y_{ik}\neq e,\ Y_{jk}\neq e\big)\,\mathbf{1}\big(Y_{ik}\neq Y_{jk}\big).$$
If this is less than a threshold d 0 , then the two rows are declared to be in the same cluster and otherwise they are declared to be in different clusters. We apply this process to all pairs of rows and all pairs of columns. Let I ij be equal to 1 if rows i, j belong to the same cluster and let it be 0 otherwise. The algorithm gives an estimate:
$$\hat I_{ij} = \begin{cases} 1, & d_{ij} < d_0,\\ 0, & d_{ij} \ge d_0.\end{cases}$$
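A direct implementation of this row-clustering rule could look as follows (our sketch; it returns the matrix of pairwise decisions).

```python
import numpy as np

def cluster_rows(Y, d0):
    """Pairwise clustering decisions from the thresholded, normalized Hamming distance."""
    m, n = Y.shape
    I_hat = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(i + 1, m):
            both = (Y[i] != 'e') & (Y[j] != 'e')          # commonly sampled entries
            d_ij = np.sum(both & (Y[i] != Y[j])) / n      # normalized by n, as in the text
            I_hat[i, j] = I_hat[j, i] = int(d_ij < d0)
    return I_hat
```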
We are interested in the probability that we make an error in row clustering averaged over the probability law on the rating matrices described in Section II: $\bar P_{e,rc} = \Pr\big(\hat I_{ij} \neq I_{ij} \text{ for some } i, j\big)$.
Once the rows are clustered, we can apply the same procedure to cluster columns. Below we analyze the error probability $\bar P_{e,rc}$; the probability of error in finding column clusters has similar behavior.
Theorem 2: Suppose we are given a sequence of rating matrices with n → ∞ and t n column clusters, such that lim sup n→∞ m/n < ∞. Let
$$\mu := 2p(1-p)(1-\epsilon)^2, \qquad \delta := (1-\epsilon)^2(1-2p)^2,$$
and choose $d_0 = \mu + \delta/3$. Then there exists a positive constant $C_0$ such that if $t_n > C_0\ln(n)$, then $\bar P_{e,rc} \to 0$.
Proof: We start by considering the choice of the threshold. When i, j are in the same cluster,
$$E[d_{ij}\mid I_{ij}=1, X] = 2p(1-p)(1-\epsilon)^2 = \mu.$$
When i, j are in different clusters, let s ij be the number of columns in which i, j disagree. Then
$$E[d_{ij}\mid I_{ij}=0, X] = \frac{(1-\epsilon)^2}{n}\Big[(p^2+(1-p)^2)\,s_{ij} + 2p(1-p)\,(n-s_{ij})\Big] = \mu + \frac{s_{ij}}{n}\,\delta.$$
We choose $d_0 = \mu + \frac{\alpha_n}{n}\,\delta$,
where α n is chosen below to obtain diminishing probability of error. First we bound the probability of error when I ij = 1. We note that in this case d ij is the average of n i.i.d. Bernoulli random variables with mean µ = 2p(1 − p)(1 − ǫ) 2 . Hence
$$\Pr\big(\hat I_{ij} \neq 1 \mid I_{ij}=1, X\big) = \Pr\Big(d_{ij} - \mu \ge \tfrac{\alpha_n}{n}\delta \,\Big|\, I_{ij}=1, X\Big) \;\le\; \exp\Big(-\frac{\delta^2\alpha_n^2}{\mu n}\Big) \qquad (9)$$
where in the last step we have used the Chernoff bound [9, Theorem 4.4, pp. 64]. Next consider the case I ij = 0. In this case, d ij is the average of n − s ij identically distributed Bernoulli random variables with mean µ and s ij identically distributed Bernoulli random variables with mean ν = (1 − ǫ) 2 [p 2 + (1 − p) 2 ], all the random variables being independent. So we have
$$\Pr\big(\hat I_{ij} \neq 0 \mid I_{ij}=0, X\big) \;\le\; \frac{(1-\mu+\mu e^{\theta})^{\,n-s_{ij}}\,(1-\nu+\nu e^{\theta})^{\,s_{ij}}}{e^{n d_0 \theta}}, \quad \theta < 0 \qquad (10)$$
$$\le\; \exp\big(n(e^{\theta}-1)\beta_{ij} - n d_0\theta\big), \qquad \beta_{ij} = \mu + \delta\,\frac{s_{ij}}{n} \qquad (11)$$
where in (10) we have used the Chernoff bound and in (11) we have used the inequality 1 + x ≤ exp(x). Choosing θ = max(0, ln(d 0 /β ij )) (which is the optimal choice), we have
$$\Pr\big(\hat I_{ij} \neq 0 \mid I_{ij}=0, X\big) \;\le\; \begin{cases}\exp\Big(n(d_0-\beta_{ij}) + n d_0\ln\frac{\beta_{ij}}{d_0}\Big), & \text{if } s_{ij} \ge \alpha_n,\\ 1, & \text{if } s_{ij} < \alpha_n.\end{cases} \qquad (12)$$
Note that for s ij ≥ α n , we have 0 ≤ (β ij − d 0 )/d 0 ≤ 1, and so
$$\ln\frac{\beta_{ij}}{d_0} \;\le\; \frac{\beta_{ij}-d_0}{d_0} - \frac{1}{6}\Big(\frac{\beta_{ij}-d_0}{d_0}\Big)^{2}.$$
Substituting in (12), if s ij ≥ α n , then
$$\Pr\big(\hat I_{ij} \neq 0 \mid I_{ij}=0, X\big) \;\le\; \exp\Big(-\frac{\delta^2(s_{ij}-\alpha_n)^2}{6(n\mu+\delta\alpha_n)}\Big). \qquad (13)$$
Taking expectation in (12) and using (13), we get,
$$E\Big[\Pr\big(\hat I_{ij} \neq 0 \mid I_{ij}=0, X\big)\Big] \;\le\; \Pr(s_{ij} \le \alpha_n) + E\Big[\exp\Big(-\frac{\delta^2(s_{ij}-\alpha_n)^2}{6(n\mu+\delta\alpha_n)}\Big)\Big] \;=:\; T_1 + T_2.$$
We note that s ij = n 0 X, where X is Binomial(t n ,1/2). Thus E[s ij ] = n 0 t n /2 = n/2 and var{s ij } = nn 0 /4. Thus if n 0 = o(n), then s ij concentrates around its mean. Hence to get a diminishing T 1 , we choose α n = n/3. Then
$$T_1 = \Pr\Big(s_{ij} \le \frac{n}{3}\Big) = \Pr\Big(X \le \frac{t_n}{3}\Big) \le \Pr\Big(|X - t_n/2| \ge \frac{t_n}{6}\Big) \le 2\exp\Big(-\frac{t_n}{54}\Big) \qquad (14)$$
where we have used the Chernoff bound [9, Corollary 4.6, pp. 67].
Substituting for $\alpha_n$ in $T_2$, we see that for a suitable positive constant c,
$$T_2 = E\Big[\exp\Big(-\frac{c\,n_0\,(X-t_n/3)^2}{t_n}\Big)\Big] \;\le\; \exp(-cn/81) + t_n\,2^{-t_n(1-h(4/9))}, \qquad (15)$$
where $h(\cdot)$ denotes the binary entropy function and we have bounded $\binom{t_n}{s} \le 2^{t_n h(s/t_n)}$.
From (14) and (15), it follows that
$$E\Big[\Pr\big(\hat I_{ij} \neq 0 \mid I_{ij}=0, X\big)\Big] \;\le\; T_1 + T_2 \;\le\; n\,c_1\exp(-c_2 t_n), \qquad (16)$$
where $c_1, c_2$ are positive constants. Since there are only $m(m-1)/2$ pairs of rows, the desired result follows. Remark: If we consider the probability of error in clustering for a fixed rating matrix, then to get diminishing probability of error asymptotically, we need $m_0 n_0 > C\sqrt{mn\,\ln(m)\ln(n)}$.
C. Estimation Under Unknown Clustering
In this section, we consider our full problem -estimation of the underlying rating matrix from noisy, sparse observations when clustering is not known. Our result is the following.
Theorem 3: Consider the collection of block constant matrices with the probability law described in Section II. Let m = βn, β > 0 fixed. Then there exist constants C i , 1 ≤ i ≤ 4 such that the following holds for t > C 3 ln(n), r > C 4 ln(m).
1) If $m_0 n_0 \le C_1\ln(mn)$, then for any estimator of X, $\bar P_e \to 1$ as n → ∞. 2) Consider an estimator which first clusters the rows and columns using the algorithm described in Section III-B and then uses ML decoding as in Section III-A assuming that the clustering is correct. If $m_0 n_0 \ge C_2\ln(mn)$, then for this algorithm $\bar P_e \to 0$ as n → ∞. Proof: When A, B are known, then under our model all feasible rating matrices are equally likely. Hence the ML decoder gives the minimum probability of error and so we have $\bar P_e \ge E[P_{e|A,B}(X)]$. To prove Part 1), we next lower bound $E[P_{e|A,B}(X)]$. Let T be the event that $s^*(X) > m_0 n_0$. We note that $X \in T$ iff for some pair of row clusters all the t column clusters have been generated equal or for some pair of column clusters all the r row clusters have been generated equal. Using the union bound, we get that,
$$\Pr(T) \;\le\; \binom{r}{2}2^{-t} + \binom{t}{2}2^{-r} \;\le\; m^2 2^{-t} + n^2 2^{-r}. \qquad (17)$$
We choose C 1 , C 2 to ensure that the above bound decays to zero and hence Pr(T ) → 0. Now,
$$E[P_{e|A,B}(X)] \;\ge\; E[P_{e|A,B}(X);\, T^c].$$
But on the event T c , s * (X) = m 0 n 0 and from the lower bound in (8) we get
$$\bar P_e \;\ge\; E[P_{e|A,B}(X)] \;\ge\; (1-\Pr(T))\Big(1 - \exp\Big(-\frac{\ln(1/\epsilon)\,(mn)^{\delta}}{(1-\delta)\ln(mn)}\Big)\Big) \qquad (18)$$
which → 1 as mn → ∞. This proves Part 1). Next we prove Part 2). Let D denote the event that the clustering is identified correctly. We note that the probability of error in estimating X averaged over the probability law on the block constant matrices satisfies $\bar P_e \le E[P_{e|A,B}(X)]\Pr(D) + \Pr(D^c) \le E[P_{e|A,B}(X)] + \bar P_{e,rc} + \bar P_{e,cc}$, where $\bar P_{e,cc}$ is the probability of error in column clustering. The desired result follows from Part 1) of Corollary 2, and Theorem 2.
Remark:
The above result states that for a fixed p, ǫ, the smallest cluster size that leads to zero error asymptotically is O(ln(mn)) = O(ln(n)). When p = 0, then we can also apply the method in [6] to our model, and this yields a smallest cluster size of O(n 1/2 (ln(n)) 2 ), which is strictly worse than our result. Remark: In [7], the focus is on rating matrices of rank O(1) and ǫ = c/n, which leads to O(n) observations. For our model, O(1) rank corresponds to a cluster size of Θ(mn), and for ǫ = c/n, our algorithm can be seen to give zero error asymptotically for any fixed rating matrix.
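Putting the pieces together, the two-stage estimator analyzed in Theorem 3, Part 2) can be sketched as below (our illustration; it reuses cluster_rows() and ml_decode_known_clusters() from the earlier sketches and uses a simplified greedy grouping of the pairwise clustering decisions).

```python
import numpy as np

def two_stage_estimate(Y, d0):
    """Cluster rows and columns by thresholding, then apply the majority-vote decoder."""
    def components(I_hat):
        # greedily group indices declared to be in the same cluster (a simplification)
        unassigned, groups = set(range(I_hat.shape[0])), []
        while unassigned:
            seed = unassigned.pop()
            members = [seed] + [j for j in unassigned if I_hat[seed, j]]
            unassigned -= set(members)
            groups.append(np.array(members))
        return groups
    row_groups = components(cluster_rows(Y, d0))
    col_groups = components(cluster_rows(Y.T, d0))
    return ml_decode_known_clusters(Y, row_groups, col_groups)
```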
IV. CONCLUSION
We considered the problem of estimating a block constant rating matrix. The observed matrix is obtained through unknown relabeling of the rows and columns of the underlying matrix, followed by an error and erasure channel. Our probability of error analysis showed that if the number of row clusters and the number of column clusters are Ω(ln(m)) and Ω(ln(n)) respectively, then the matrix can be clustered and estimated with vanishing probability of error if the cluster sizes are Ω(ln(mn)).
V. ACKNOWLEDGMENTS
The work of Onkar Dabeer was supported by the XI Plan Project from TIFR and the Homi Bhabha Fellowship. The work of Bikash Kumar Dey was supported by Bharti Centre for Communication in IIT Bombay.
Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. G Adomavicius, A Tuzhilin, IEEE Tran. Knowledge and Data Engineering. 176G. Adomavicius, A. Tuzhilin, "Toward the Next Generation of Rec- ommender Systems: A Survey of the State-of-the-Art and Possible Extensions," IEEE Tran. Knowledge and Data Engineering, vol. 17, no. 6, pp. 734-749, June 2005.
Guest Editor's Introduction: Recommender Systems. A Felfernig, G Friedrich, L Schmidt-Thieme, IEEE Intelligent Systems. 223A. Felfernig, G. Friedrich, L. Schmidt-Thieme, "Guest Editor's Intro- duction: Recommender Systems," IEEE Intelligent Systems, vol. 22 no. 3, pp. 18-21, May 2007.
Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model. Yehuda Koren, ACM Int. Conference on Knowledge Discovery and Data Mining (KDD'08). Yehuda Koren, "Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model," ACM Int. Conference on Knowledge Discovery and Data Mining (KDD'08), 2008.
Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. B Recht, M Fazel, P A Parrilo, preprint. submitted to SIAM ReviewB. Recht, M. Fazel, P. A. Parrilo, "Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization," preprint (2007), submitted to SIAM Review.
Exact Matrix Completion via Convex Optimization. E J Candes, B Recht, preprintE. J. Candes, B. Recht, "Exact Matrix Completion via Convex Optimization," preprint (2008), available at http://www.acm.caltech.edu/emmanuel/papers/MatrixCompletion.pdf
Learning low rank matrices from O(n) entries. R Keshavan, A Montanari, S Oh, AllertonR. Keshavan, A. Montanari, S. Oh, "Learning low rank matrices from O(n) entries," Allerton 2008.
Mining the Web. S Chakrabarti, Morgan Kaufmann PublishersSan FransiscoS. Chakrabarti, "Mining the Web," Morgan Kaufmann Publishers, San Fransisco, 2003.
M Mitzenmacher, E , Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University PressM. Mitzenmacher, E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Cambridge University Press, 2005.
| []
|
[
"Methods to integrate a language model with semantic information for a word prediction component",
"Methods to integrate a language model with semantic information for a word prediction component"
]
| [
"Tonio Wandmacher [email protected] \nLaboratoire d'Informatique (LI\nUniversité François Rabelais de Tours 3 place Jean-Jaurès\n41000BloisFrance\n",
"Jean-Yves Antoine [email protected] \nLaboratoire d'Informatique (LI)\nUniversité François Rabelais de Tours\n3 place Jean-Jaurès41000BloisFrance\n"
]
| [
"Laboratoire d'Informatique (LI\nUniversité François Rabelais de Tours 3 place Jean-Jaurès\n41000BloisFrance",
"Laboratoire d'Informatique (LI)\nUniversité François Rabelais de Tours\n3 place Jean-Jaurès41000BloisFrance"
]
| [
"Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning"
]
| Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In the past years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4gram baseline, and most of them to a simple cache model as well. | null | null | 51,977,123 | 0801.4716 | 7303ba9323b728b27d02a46cd491cc580e1a1e5e |
Methods to integrate a language model with semantic information for a word prediction component
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 2007. 2007
Tonio Wandmacher [email protected]
Laboratoire d'Informatique (LI
Université François Rabelais de Tours 3 place Jean-Jaurès
41000BloisFrance
Jean-Yves Antoine [email protected]
Laboratoire d'Informatique (LI)
Université François Rabelais de Tours
3 place Jean-Jaurès41000BloisFrance
Methods to integrate a language model with semantic information for a word prediction component
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language LearningPragueAssociation for Computational LinguisticsJune 2007. 2007
Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In the past years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4gram baseline, and most of them to a simple cache model as well.
Introduction: NLP for AAC systems
Augmented and Alternative Communication (AAC) is a field of research which concerns natural language processing as well as human-machine interaction, and which aims at restoring the communicative abilities of disabled people with severe speech and motion impairments. These people can be for instance cerebrally and physically handicapped persons or they suffer from a locked-in syndrome due to a cerebral apoplexy. Whatever the disease or impairment considered, oral communication is impossible for these persons who have in addition serious difficulties to control physically their environment. In particular, they are not able to use standard input devices of a computer. Most of the time, they can only handle a single switch device. As a result, communicating with an AAC system consists of typing messages by means of a virtual table of symbols (words, letters or icons) where the user successively selects the desired items.
Basically, an AAC system, such as FASTY (Trost et al. 2005) or SIBYLLE (Schadle et al, 2004), consists of four components. At first, one finds a physical input interface connected to the computer. This device is adapted to the motion capacities of the user. When the latter must be restricted to a single switch (eye glimpse or breath detector, for instance), the control of the environment is reduced to a mere Yes/No command.
Secondly, a virtual keyboard is displayed on screen. It allows the user to select successively the symbols that compose the intended message. In SIBYLLE, key selection is achieved by pointing letters through a linear scan procedure: a cursor successively highlights each key of the keyboard.
The last two components are a text editor (to write e-mails or other documents) and a speech synthesis module, which is used in case of spoken communication. The latest version of SIBYLLE works for French and German, and it is usable with any Windows™ application (text editor, web browser, mailer...), which means that the use of a specific editor is no longer necessary. The main weakness of AAC systems results from the slowness of message composition. On average, disabled people cannot type more than 1 to 5 words per minute; moreover, this task is very tiring. The use of NLP techniques to improve AAC systems is therefore of first importance. Two complementary approaches are possible to speed up communication. The first one aims at minimizing the duration of each item selection. Considering a linear scan procedure, one could for instance dynamically reorganize the keyboard in order to present the most probable symbols at first. The second strategy tries to minimize the number of keystrokes to be made. Here, the system tries to predict the words which are likely to occur just after those already typed. The predicted word is then either directly displayed after the end of the inserted text (a method referred to as "word completion", cf. Boissière and Dours, 1996), or a list of Nbest (typically 3 to 7) predictions is provided on the virtual keyboard. When one of these predictions corresponds to the intended word, it can be selected by the user. As can be seen in figure 1, the interface of the SIBYLLE system presents such a list of most probable words to the user.
Several approaches can be used to carry out word prediction. Most of the commercial AAC systems make only use of a simple lexicon: in this approach, the context is not considered.
On the other hand, stochastic language models can provide a list of word suggestions, depending on the n-1 (typically n = 3 or 4) last inserted words. It is obvious that such a model cannot take into account long-distance dependencies. There have been attempts to integrate part-of-speech information (Fazly and Hirst, 2003) or more complex syntactic models (Schadle et al, 2004) to achieve a better prediction. In this paper, we will nevertheless limit our study to a standard 4-gram model as a baseline to make our results comparable. Our main aim is here to investigate the use of long-distance semantic dependencies to dynamically adapt the prediction to the current semantic context of communication. Similar work has been done by Li and Hirst (2005) and Matiasek and Baroni (2003), who exploit Pointwise Mutual Information (PMI; Church and Hanks, 1989). Trnka et al. (2005) dynamically interpolate a high number of topic-oriented models in order to adapt their predictions to the current topic of the text or conversation.
Classically, word predictors are evaluated by an objective metric called Keystroke Saving Rate (ksr):
$$ksr_n = \Big(1 - \frac{k_p}{k_a}\Big)\cdot 100 \qquad (1)$$
with k p , k a being the number of keystrokes needed on the input device when typing a message with (k p ) and without prediction (k a = number of characters in the text that has been entered, n = length of the prediction list, usually n = 5). As Trost et al. (2005) and Trnka et al. (2005), we assume that one additional keystroke is required for the selection of a word from the list and that a space is automatically inserted afterwards. Note also that words, which have already occurred in the list, will not reappear after the next character has been inserted.
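For illustration, the ksr metric of Eq. (1) is a one-liner (our sketch):

```python
def ksr(k_p, k_a):
    """Keystroke saving rate of Eq. (1): relative reduction in keystrokes, in percent."""
    return (1 - k_p / k_a) * 100

# e.g. 5,800 keystrokes with prediction vs. 10,000 characters typed without it
print(ksr(5800, 10000))   # 42.0
```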
The perplexity measure, which is frequently used to assess statistical language models, proved to be less accurate in this context. We still present perplexities as well in order to provide comparative results.
Language modeling and semantics
Statistical Language Models
For about 10 to 15 years statistical language modeling has had a remarkable success in various NLP domains, for instance in speech recognition, machine translation, Part-of-Speech tagging, but also in word prediction systems. N-gram based language models (LM) estimate the probability of occurrence for a word, given a string of n-1 preceding words. However, computers have only recently become powerful enough to estimate probabilities on a reasonable amount of training data. Moreover, the larger n gets, the more important the problem of combinatorial explosion for the probability estimation becomes. A reasonable trade-off between performance and number of estimated events seems therefore to be an n of 3 to 5, including sophisticated techniques in order to estimate the probability of unseen events (smoothing methods).
Whereas n-gram-like language models are already performing rather well in many applications, their capacities are also very limited in that they cannot exploit any deeper linguistic structure. Long-distance syntactic relationships are neglected as well as semantic or thematic constraints.
In the past 15 years many attempts have been made to enrich language models with more complex syntactic and semantic models, with varying success (cf. (Rosenfeld, 1996), (Goodman, 2002) or in a word prediction task: (Fazly and Hirst, 2003), (Schadle, 2004), (Li and Hirst, 2005)). We want to explore here an approach based on Latent Semantic Analysis (Deerwester et al, 1990).
Latent Semantic Analysis
Several works have suggested the use of Latent Semantic Analysis (LSA) in order to integrate se-mantic similarity to a language model (cf. Bellegarda, 1997;Coccaro and Jurafsky, 1998). LSA models semantic similarity based on co-occurrence distributions of words, and it has shown to be helpful in a variety of NLP tasks, but also in the domain of cognitive modeling (Landauer et al, 1997).
LSA is able to relate coherent contexts to specific content words, and it is good at predicting the occurrence of a content word in the presence of other thematically related terms. However, since it does not take word order into account ("bag-ofwords" model) it is very poor at predicting their actual position within the sentence, and it is completely useless for the prediction of function words. Therefore, some attempts have been made to integrate the information coming from an LSA-based model with standard language models of the ngram type.
In the LSA model (Deerwester et al, 1990) a word w i is represented as a high-dimensional vector, derived by Singular Value Decomposition (SVD) from a term × document (or a term × term) co-occurrence matrix of a training corpus. In this framework, a context or history h (= w 1 , ... , w m ) can be represented by the sum of the (already normalized) vectors corresponding to the words it contains (Landauer et al. 1997):
$$\vec h = \sum_{i=1}^{m}\vec w_i \qquad (2)$$
This vector reflects the meaning of the preceding (already typed) section, and it has the same dimensionality as the term vectors. It can thus be compared to the term vectors by well-known similarity measures (scalar product, cosine).
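A minimal sketch (ours; `vectors` is assumed to map a word to its normalized LSA vector) of how such a context vector is built and compared:

```python
import numpy as np

def context_vector(history, vectors):
    """Sum of the (normalized) LSA vectors of the words typed so far (Eq. 2).
    Words absent from the LSA vocabulary (e.g. function words) are skipped."""
    dims = len(next(iter(vectors.values())))
    h = np.zeros(dims)
    for w in history:
        if w in vectors:
            h += vectors[w]
    return h

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```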
Transforming LSA similarities into probabilities
We make the assumption that an utterance or a text to be entered is usually semantically cohesive. We then expect all word vectors whose corresponding words belong to the semantic field of the context to be close to the current context vector. This forms the basis for a simple probabilistic model of LSA: after calculating the cosine similarity of each word vector $\vec w_i$ with the vector $\vec h$ of the current context, we could use the normalized similarities as probability values. This probability distribution, however, is usually rather flat (i.e. the dynamic range is low). For this reason a contrasting (or temperature) factor γ is normally applied (cf. Coccaro and Jurafsky, 1998), which raises the cosine to some power (γ is normally between 3 and 8). After normalization we obtain a probability distribution which can be used for prediction purposes. It is calculated as follows:
$$P_{LSA}(w_i \mid h) = \frac{\big(\cos(\vec w_i, \vec h) - \cos_{\min}(\vec h)\big)^{\gamma}}{\sum_k \big(\cos(\vec w_k, \vec h) - \cos_{\min}(\vec h)\big)^{\gamma}} \qquad (3)$$
where $w_i$ is a word in the vocabulary, h is the current context (history), $\vec w_i$ and $\vec h$ are their corresponding vectors in the LSA space, and $\cos_{\min}(\vec h)$ returns the lowest cosine value measured for $\vec h$. The denominator then normalizes each similarity value to ensure that the resulting values sum up to one. Let us illustrate the capacities of this model by giving a short example from the French version of our own LSA predictor, for the context "Mon père était professeur en mathématiques …". Example 1: Most probable words returned by the LSA model for the given context.
As can be seen in example 1, all ten predicted words are semantically related to the context; they should therefore be given a high probability of occurrence. However, this example also shows the drawbacks of the LSA model: it totally neglects the presence of function words as well as the syntactic structure of the current phrase. We therefore need to find an appropriate way to integrate the information coming from a standard n-gram model and the LSA approach.
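The probability of Eq. (3) can then be sketched as follows (ours; reuses cosine() from the sketch above):

```python
def p_lsa(h, vectors, gamma=6.0):
    """Eq. (3): shift cosines by the minimum, raise to the power gamma, renormalize."""
    sims = {w: cosine(v, h) for w, v in vectors.items()}
    c_min = min(sims.values())
    scores = {w: (s - c_min) ** gamma for w, s in sims.items()}
    total = sum(scores.values())
    if total == 0:                       # degenerate case: fall back to a uniform distribution
        return {w: 1.0 / len(scores) for w in scores}
    return {w: s / total for w, s in scores.items()}
```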
Density as a confidence measure
Measuring relation quality in an LSA space, Wandmacher (2005) pointed out that the reliability of LSA relations varies strongly between terms. He also showed that the entropy of a term does not correlate with relation quality (i.e. number of semantically related terms in an LSA-generated term cluster), but he found a medium correlation (Pearson coeff. = 0.56) between the number of semantically related terms and the average cosine similarity of the m nearest neighbors (density). The closer the nearest neighbors of a term vector are, the more probable it is to find semantically related terms for the given word. In turn, terms having a high density are more likely to be semantically related to a given context (i.e. their specificity is higher). We define the density of a term w i as follows:
$$D(w_i) = \frac{1}{m}\sum_{j=1}^{m}\cos\big(\vec w_i,\, NN_j(\vec w_i)\big) \qquad (4)$$
In the following we will use this measure (with m=100) as a confidence metric to estimate the reliability of a word being predicted by the LSA component, since it showed to give slightly better results in our experiments than the entropy measure.
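A brute-force sketch of the density measure (ours; for a real 80,000-word vocabulary one would precompute nearest neighbours, but the logic is the same):

```python
def density(w, vectors, m=100):
    """Eq. (4): mean cosine similarity of w to its m nearest LSA neighbours."""
    sims = sorted((cosine(vectors[w], v) for x, v in vectors.items() if x != w),
                  reverse=True)
    top = sims[:m]
    return sum(top) / len(top) if top else 0.0
```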
Integrating semantic information
In the following we present several different methods to integrate semantic information as it is provided by an LSA model into a standard LM.
Semantic cache model
Cache (or recency promotion) models have been shown to bring slight but constant gains in language modeling (Kuhn and De Mori, 1990). The underlying idea is that words that have already occurred in a text are more likely to occur another time. Therefore their probability is raised by a constant or exponentially decaying factor, depending on the position of the element in the cache. The idea of a decaying cache function is that the probability of reoccurrence depends on the cosine similarity of the word in the cache and the word to be predicted. The highest probability of reoccurrence is usually after 15 to 20 words. Similar to Clarkson and Robinson (1997), we implemented an exponentially decaying cache of length l (usually between 100 and 1000), using the following decay function for a word $w_i$ and its position p in the cache:
$$f_d(w_i, p) = \exp\Big(-\frac{(p-\mu)^2}{2\sigma^2}\Big), \qquad (5)$$
with σ = µ/3 if p < µ and σ = l/3 if p ≥ µ; the function returns 0 if $w_i$ is not in the cache, and it is 1 if p = µ. We extend this model by calculating for each element having occurred in the context its m nearest LSA neighbors ($NN_m(\vec w^{occ}, \theta)$, using cosine similarity), if their cosine lies above a threshold θ, and add them to the cache as well, right after the word that has occurred in the text ("Bring your friends" strategy). The size of the cache is adapted accordingly (for µ, σ and l), depending on the number of neighbors added. This results in the following cache function:
$$P_{cache}(w_i) = \beta\cdot\sum_{p=1}^{l} f_{\cos}\big(w_i^{occ}, w_i\big)\cdot f_d(w_i, p) \qquad (6)$$
with l = size of the cache. β is a constant controlling the influence of the component (usually β ≈ 0.1/l); $w_i^{occ}$ is a word that has already recently occurred in the context and is therefore added as a standard cache element, whereas $w_i$ is a nearest neighbor to $w_i^{occ}$. $f_{\cos}(w_i^{occ}, w_i)$ returns the cosine similarity between $\vec w_i^{occ}$ and $\vec w_i$. Since $f_{\cos}(w_i^{occ}, w_i^{occ}) = 1$, terms having actually occurred before will be given full weight, whereas all $w_i$ being only nearest LSA neighbors to $w_i^{occ}$ will receive a weight corresponding to their cosine similarity with $w_i^{occ}$, which is less than 1 (but larger than θ). $f_d(w_i, p)$ is the decay factor for the current position p of $w_i$ in the cache, calculated as shown in equation (5).
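A sketch of the semantic cache score (ours; it assumes the Gaussian decay reconstructed in (5) and a cache stored as (word, cosine-to-trigger, position) triples, where actually occurred words enter with cosine 1.0 and their LSA neighbours with their cosine to the trigger word):

```python
import math

def decay(p, mu=20, l=300):
    """Decay factor of Eq. (5): peaks (value 1) at cache position p = mu."""
    sigma = mu / 3 if p < mu else l / 3
    return math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

def p_cache(word, cache, beta):
    """Semantic cache score of Eq. (6) for a candidate word."""
    return beta * sum(cos_trig * decay(pos)
                      for w, cos_trig, pos in cache if w == word)
```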
Partial reranking
The underlying idea of partial reranking is to regard only the best n candidates from the basic language model for the semantic model in order to prevent the LSA model from making totally implausible (i.e. improbable) predictions. Words being improbable for a given context will be disregarded as well as words that do not occur in the semantic model (e.g. function words), because LSA is not able to give correct estimates for this group of words (here the base probability remains unchanged). For the best n candidates their semantic probability is calculated and each of these words is assigned an additional value, after a fraction of its base probability has been subtracted (jackpot strategy). For a given context h we calculate the ordered set BEST n (h) = <w 1 , … , w n >, so that P(w 1 |h) ≥ P(w 2 |h) ≥…≥P(w n |h)
For each w i in BEST n (h) we then calculate its reranking probability as follows:
$$P_{RR}(w_i) = \beta\cdot\cos(\vec w_i, \vec h)\cdot D(w_i)\cdot I\big(Best_n(h), w_i\big) \qquad (7)$$
β is a weighting constant controlling the overall influence of the reranking process, $\cos(\vec w_i, \vec h)$ returns the cosine of the word's vector and the current context vector, $D(w_i)$ gives the confidence measure of $w_i$ and I is an indicator function being 1, iff $w_i \in BEST_n(h)$, and 0 otherwise.
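The reranking bonus of Eq. (7) then becomes (our sketch; reuses cosine() and density() from above):

```python
def p_rerank(word, h, vectors, best_n, beta):
    """Eq. (7): only the n best n-gram candidates that live in the LSA space get a bonus."""
    if word not in best_n or word not in vectors:
        return 0.0
    return beta * cosine(vectors[word], h) * density(word, vectors)
```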
Standard interpolation
Interpolation is the standard way to integrate information from heterogeneous resources. While for a linear combination we simply add the weighted probabilities of two (or more) models, geometric interpolation multiplies the probabilities, which are weighted by an exponential coefficient (0≤λ 1 ≤1):
Linear Interpolation (LI):
$$P'(w_i) = \lambda_1\cdot P_b(w_i) + (1-\lambda_1)\cdot P_s(w_i) \qquad (8)$$
Geometric Interpolation (GI):
$$P'(w_i) = \frac{P_b(w_i)^{\lambda_1}\cdot P_s(w_i)^{(1-\lambda_1)}}{\sum_{j=1}^{n} P_b(w_j)^{\lambda_1}\cdot P_s(w_j)^{(1-\lambda_1)}} \qquad (9)$$
The main difference between the two methods is that the latter takes the agreement of two models into account. Only if each of the single models assigns a high probability to a given event will the combined probability be assigned a high value. If one of the models assigns a high probability and the other does not the resulting probability will be lower.
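Both combination schemes of Eqs. (8) and (9) fit into one small helper (our sketch; p_b and p_s are dictionaries over the same candidate list):

```python
def interpolate(p_b, p_s, lam, geometric=False):
    """Linear (Eq. 8) or geometric (Eq. 9) interpolation of base and semantic models."""
    if not geometric:
        return {w: lam * p_b[w] + (1 - lam) * p_s.get(w, 0.0) for w in p_b}
    raw = {w: (p_b[w] ** lam) * (p_s.get(w, 1e-12) ** (1 - lam)) for w in p_b}
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}
```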
Confidence-weighted interpolation
Whereas in standard settings the coefficients are stable for all probabilities, some approaches use confidence-weighted coefficients that are adapted for each probability. In order to integrate n-gram and LSA probabilities, Coccaro and Jurafsky (1998) proposed an entropy-related confidence measure for the LSA component, based on the observation that words that occur in many different contexts (i.e. have a high entropy) cannot be predicted well by LSA. We use here a density-based measure (cf. section 2.2), because we found it more reliable than entropy in preliminary tests. For interpolation purposes we calculate the coefficient of the LSA component as shown in (10), with β being a weighting constant to control the influence of the LSA predictor. For all experiments, we set β to 0.4 (i.e. $0 \le \lambda_i \le 0.4$), which proved to be optimal in pre-tests.
$$\lambda_i = \beta\cdot D(w_i) \ \text{ iff } D(w_i) > 0; \qquad \lambda_i = 0 \text{ otherwise.} \qquad (10)$$
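The density-based coefficient of Eq. (10) can be sketched as (ours; reuses density() from above):

```python
def lambda_confidence(word, vectors, beta=0.4, m=100):
    """Eq. (10): lambda_i = beta * D(w_i) if D(w_i) > 0, else 0."""
    d = density(word, vectors, m) if word in vectors else 0.0
    return beta * d if d > 0 else 0.0
```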
Results
We calculated our baseline n-gram model on a 44 million word corpus from the French daily Le Monde (1998-1999). Using the SRI toolkit (Stolcke, 2002) we computed a 4-gram LM over a controlled 141,000 word vocabulary, using modified Kneser-Ney discounting (Goodman, 2001), and we applied Stolcke pruning (Stolcke, 1998) to reduce the model to a manageable size (θ = 10^-7). (SRI Toolkit: www.speech.sri.com.)
The LSA space was calculated on a 100 million word corpus from Le Monde (1996 -2002). Using the Infomap toolkit 2 , we generated a term × term co-occurrence matrix for an 80,000 word vocabulary (matrix size = 80,000 × 3,000), stopwords were excluded. After several pre-tests, we set the size of the co-occurrence window to ±100. The matrix was then reduced by singular value decomposition to 150 columns, so that each word in the vocabulary was represented by a vector of 150 dimensions, which was normalized to speed up similarity calculations (the scalar product of two normalized vectors equals the cosine of their angle).
Our test corpus consisted of 8 sections from the French newspaper Humanité, (January 1999, from 5,378 to 8,750 words each), summing up to 58,457 words. We then calculated for each test set the keystroke saving rate based on a 5-word list (ksr 5 ) and perplexity for the following settings 3 : Using the results of our 8 samples, we performed paired t tests for every method with the baseline as well as with the cache model. All gains for ksr turned out to be highly significant (sig. level < 0.001), and apart from the results for CWLI, all perplexity reductions were significant as well (sig. level < 0.007), with respect to the cache results. We can therefore conclude that, with exception of CWLI, all methods tested have a beneficial effect, even when compared to a simple cache model. The highest gain in ksr (with respect to the baseline) was obtained for the confidence-weighted geometric interpolation method (CWGI; +1.05%), the highest perplexity reduction was measured for GI as well as for CWGI (-9.3% for both). All other methods (apart from IWLI) gave rather similar results (+0.6 to +0.8% in ksr, and -6.8% to -7.7% in perplexity).
We also calculated for all samples the correlation between ksr and perplexity. We measured a Pearson coefficient of -0.683 (Sig. level < 0.0001).
At first glance, these results may not seem overwhelming, but we have to take into account that our ksr baseline of 57.9% is already rather high, and at such a level, additional gains become hard to achieve (cf. Lesher et al, 2002).
The fact that CWLI performed worse than even simple LI was not expected, but it can be explained by an inherent property of linear interpolation: If one of the models to be interpolated overestimates the probability for a word, the other cannot compensate for it (even if it gives correct estimates), and the resulting probability will be too high. In our case, this happens when a word receives a high confidence value; its probability will then be overestimated by the LSA component.
Conclusion and further work
Adapting a statistical language model with semantic information, stemming from a distributional analysis like LSA, has shown to be a non-trivial problem. Considering the task of word prediction in an AAC system, we tested different methods to integrate an n-gram LM with LSA: A semantic cache model, a partial reranking approach, and some variants of interpolation.
We evaluated the methods using two different measures, the keystroke saving rate (ksr) and perplexity, and we found significant gains for all methods incorporating LSA information, compared to the baseline. In terms of ksr the most successful method was confidence-weighted geometric interpolation (CWGI; +1.05% in ksr); for perplexity, the greatest reduction was obtained for standard as well as for confidence-weighted geometric interpolation (-9.3% for both). Partial reranking and the semantic cache gave very similar results, despite their rather different underlying approach.
We could not provide here a comparison with other models that make use of distributional information, like the trigger approach by Rosenfeld (1996), Matiasek and Baroni (2003) or the model presented by Li and Hirst (2005), based on Pointwise Mutual Information (PMI). A comparison of these similarities with LSA remains to be done.
Finally, an AAC system has not only the function of simple text entry but also of providing cognitive support to its user, whose communicative abilities might depend entirely on it. Therefore, she or he might perceive the system as strongly improved if it can provide semantically plausible predictions, even though the actual gain in ksr might be modest or even slightly negative. For this reason we will perform an extended qualitative analysis of the presented methods with persons who use our AAC system SIBYLLE. This is one of the main aims of the recently started ESAC_IMC project. It is conducted at the Functional Reeducation and Rehabilitation Centre of Kerpape, Brittany, where SIBYLLE is already used by 20 children suffering from traumatisms of the motor cortex. They appreciate the system not only for communication but also for language learning purposes.
Moreover, we intend to make the word predictor of SIBYLLE publicly available (AFM Voltaire project) in the not-too-distant future.
Figure 1: User interface of the SIBYLLE AAC system.

σ = µ/3 if p < µ, and σ = l/3 if p ≥ µ. The function returns 0 if w_i is not in the cache, and it is 1 if p = µ. A typical graph for (5) can be seen in Figure 2.

Figure 2: Decay function with µ = 20 and l = 300.

3. 4-gram + LSA using linear interpolation with λ_LSA = 0.11 (LI). 4. 4-gram + LSA using geometric interpolation, with λ_LSA = 0.07 (GI). 5. 4-gram + LSA using linear interpolation and (density-based) confidence weighting (CWLI). 6. 4-gram + LSA using geometric interpolation and (density-based) confidence weighting (CWGI). (4000; m = 10; θ = 0.4, β = 0.0001.) Figures 3 and 4 display the overall results in terms of ksr and perplexity.

Figure 3: Results (ksr_5) for all methods tested.

Figure 4: Results (perplexity) for all methods tested.
The LSA probability of a word given the context is obtained by normalizing its cosine similarity:

\[ P_{LSA}(w_i \mid h) = \frac{\cos(\vec{w}_i, \vec{h}) - \cos_{\min}(\vec{h})}{\sum_{k=1}^{n} \bigl(\cos(\vec{w}_k, \vec{h}) - \cos_{\min}(\vec{h})\bigr)} \qquad (3) \]

where w_i is a word in the vocabulary, h is the current context (history), and the vectors in (3) are their corresponding representations in the LSA space; cos_min(h) returns the lowest cosine value measured for h. The denominator then normalizes each similarity value to ensure that Σ_{k=1}^{n} P_LSA(w_k | h) = 1.

Let us illustrate the capacities of this model by giving a short example from the French version of our own LSA predictor:

Context: "Mon père était professeur en mathématiques et je pense que " ("My dad has been a professor in mathematics and I think that ")

Rank  Word                                  P
1.    professeur ('professor')              0.0117
2.    mathématiques ('mathematics')         0.0109
3.    enseigné (participle of 'taught')     0.0083
4.    enseignait ('taught')                 0.0053
5.    mathematicien ('mathematician')       0.0049
6.    père ('father')                       0.0046
7.    mathématique ('mathematics')          0.0045
8.    grand-père ('grand-father')           0.0043
9.    sciences ('sciences')                 0.0036
10.   enseignant ('teacher')                0.0032
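A direct way to realize this computation is sketched below (a minimal numpy illustration; the construction of the context vector, e.g. as a sum of the preceding word vectors, and the variable names are assumptions for the sake of the example):

```python
import numpy as np

def lsa_probabilities(h_vec, word_vecs):
    # word_vecs: (n_words, dims) unit-length LSA vectors; h_vec: context vector
    h = h_vec / np.linalg.norm(h_vec)
    cos = word_vecs @ h                    # cosine of every word with the context
    shifted = cos - cos.min()              # subtract cos_min(h) so values are >= 0
    return shifted / shifted.sum()         # normalize: the probabilities sum to 1

# top-10 predictions:
# probs = lsa_probabilities(h_vec, word_vecs)
# best = np.argsort(-probs)[:10]
```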
² Infomap Project: http://infomap-nlp.sourceforge.net/
³ All parameter settings presented here are based on results of extended empirical pre-tests. We used held-out development data sets that had been randomly chosen from the Humanité corpus (8k to 10k words each). The parameters presented here were optimal for our test sets. For reasons of simplicity we did not use automatic optimization techniques such as the EM algorithm (cf. Jelinek, 1990).
Acknowledgements. This research is partially funded by the UFA (Université Franco-Allemande) and the French foundations APRETREIMC (ESAC_IMC project) and AFM (VOLTAIRE project). We also want to thank the developers of the SRI and the Infomap toolkits for making their programs available.
Bellegarda, J. (1997). "A Latent Semantic Analysis Framework for Large-Span Language Modeling", Proceedings of Eurospeech 97, Rhodes, Greece.
Boissière, Ph. and Dours, D. (1996). "VITIPI: Versatile interpretation of text input by persons with impairments", Proceedings of ICCHP'1996, Linz, Austria.
Church, K. and Hanks, P. (1989). "Word association norms, mutual information and lexicography", Proceedings of ACL, pp. 76-83.
Clarkson, P. R. and Robinson, A. J. (1997). "Language Model Adaptation using Mixtures and an Exponentially Decaying Cache", Proc. of the IEEE ICASSP-97, Munich.
Coccaro, N. and Jurafsky, D. (1998). "Towards better integration of semantic predictors in statistical language modeling", Proc. of the ICSLP-98, Sydney.
Deerwester, S. C., Dumais, S., Landauer, T., Furnas, G. and Harshman, R. (1990). "Indexing by Latent Semantic Analysis", JASIS 41(6), pp. 391-407.
Fazly, A. and Hirst, G. (2003). "Testing the efficacy of part-of-speech information in word completion", Proceedings of the Workshop on Language Modeling for Text Entry Methods at EACL, Budapest.
Goodman, J. (2001). "A Bit of Progress in Language Modeling", Extended Version, Microsoft Research Technical Report MSR-TR-2001-72.
Jelinek, F. (1990). "Self-organized Language Models for Speech Recognition", in: A. Waibel and K.-F. Lee (eds.), Readings in Speech Recognition, Morgan Kaufman Publishers, pp. 450-506.
Kuhn, R. and De Mori, R. (1990). "A Cache-Based Natural Language Model for Speech Reproduction", IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6), pp. 570-583.
Landauer, T. K., Laham, D., Rehder, B. and Schreiner, M. E. (1997). "How well can passage meaning be derived without using word order? A comparison of LSA and humans", Proceedings of the 19th Annual Meeting of the Cognitive Science Society, pp. 412-417, Erlbaum, Mahwah, NJ.
Lesher, G. W., Moulton, B. J., Higginbotham, D. J. and Alsofrom, B. (2002). "Limits of human word prediction performance", Proceedings of the CSUN 2002.
Li, J. and Hirst, G. (2005). "Semantic knowledge in a word completion task", Proc. of the 7th Int. ACM Conference on Computers and Accessibility, Baltimore.
Matiasek, H. and Baroni, M. (2003). "Exploiting long distance collocational relations in predictive typing", Proceedings of the EACL-03 Workshop on Language Modeling for Text Entry Methods, Budapest.
Rosenfeld, R. (1996). "A maximum entropy approach to adaptive statistical language modelling", Computer Speech and Language, 10(1), pp. 187-228.
Schadle, I., Antoine, J.-Y., Le Pévédic, B. and Poirier, F. (2004). "Sibyl - AAC system using NLP techniques", Proc. ICCHP'2004, Paris, France, LNCS 3118, Springer Verlag.
Stolcke, A. (1998). "Entropy-based pruning of backoff language models", Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop.
Stolcke, A. (2002). "SRILM - An Extensible Language Modeling Toolkit", Proc. of the Intl. Conference on Spoken Language Processing, Denver, Colorado.
Trnka, K., Yarrington, D., McCoy, K. F. and Pennington, C. (2006). "Topic Modeling in Fringe Word Prediction for AAC", Proceedings of the 2006 International Conference on Intelligent User Interfaces, pp. 276-278, Sydney, Australia.
Trost, H., Matiasek, J. and Baroni, M. (2005). "The Language Component of the FASTY Text Prediction System", Applied Artificial Intelligence, 19(8), pp. 743-781.
Wandmacher, T. (2005). "How semantic is Latent Semantic Analysis?", Proceedings of TALN/RECITAL 2005, Dourdan, France, 6-10 June.
| []
|
[
"On the Manev spatial isosceles three-body problem",
"On the Manev spatial isosceles three-body problem"
]
| [
"Daniel Paşca ",
"Cristina Stoica "
]
| []
| []
| We study the isosceles three-body problem with Manev interaction. Using a McGehee-type technique, we blow up the triple collision singularity into an invariant manifold, called the collision manifold, pasted into the phase space for all energy levels. We find that orbits tending to/ejecting from total collision are present for a large set of angular momenta. We also find that as the angular momentum is increased, the collision manifold changes its topology. We discuss the flow near-by the collision manifold, study equilibria and homographic motions, and prove some statements on the global flow. | 10.1007/s10509-019-3504-5 | [
"https://arxiv.org/pdf/1805.08364v2.pdf"
]
| 89,604,714 | 1805.08364 | d3866021373c2c406fabe9331e1329afe99e9236 |
On the Manev spatial isosceles three-body problem
May 23, 2018
Daniel Paşca
Cristina Stoica
On the Manev spatial isosceles three-body problem
May 23, 2018spatial isosceles three-body problemManev interactiontopology of the collision manifold
We study the isosceles three-body problem with Manev interaction. Using a McGehee-type technique, we blow up the triple collision singularity into an invariant manifold, called the collision manifold, pasted into the phase space for all energy levels. We find that orbits tending to/ejecting from total collision are present for a large set of angular momenta. We also find that as the angular momentum is increased, the collision manifold changes its topology. We discuss the flow near-by the collision manifold, study equilibria and homographic motions, and prove some statements on the global flow.
Introduction
In 1930, the Bulgarian physicist Georgy Manev proposed a gravitational law of the form
\[ U(r) = -\frac{\mu}{r} - \frac{3\mu^2}{2c^2}\,\frac{1}{r^2} \qquad (1) \]
where r is the distance between the bodies, µ the gravitational parameter, and c the speed of light. He showed that by applying a general action-reaction principle to classical mechanics, one is naturally led to the aforementioned law [Manev 1925], [Manev 1930]. Provided the constants are chosen appropriately, the Manev model can be used in calculations involving the perihelion advance of Mercury and the other inner planets. The N-body problem with Manev interaction was brought into focus in the early 90's by Diacu [Diacu 1993]. Due to its rich and interesting dynamics, it became the subject of many studies [Diacu & al. 1995], [Diacu & al. 2000], [Szenkovits & al. 1999], [Stoica 2000], [Diacu & Santoprete 2001], [Santoprete 2002], [Puta & Hedrea 2005], [Kyuldjiev 2007], [Balsas & al. 2009], [Llibre & Makhlouf 2012], [Lemou & al. 2012], [Alberti 2015], [Barrabés & al. 2017]. For instance, in contrast to its Newtonian counterpart, the Manev problem displays binary collisions for non-zero angular momenta: when approaching collision, two mass points spin infinitely many times around each other [Diacu & al. 1995], [Diacu & al. 2000]. In the celestial mechanics community, this dynamical behaviour is known as a black hole, somewhat in analogy with the black-hole gravitational effect [Diacu & al. 1995] in relativity.
In the relative two-body problem, the Manev interaction delineates two distinct types of near-collision dynamics. Let us consider the class of potentials of the form −1/r − B/r^α, with α > 0 and B > 0 small, so that the term −B/r^α may be thought of as a corrective augmentation to the Newtonian potential. It can be shown that for all α > 0 the collision manifold is a torus. For all 0 < α < 2, collision is possible only for zero angular momentum. The dynamics on the collision manifold is similar to the Newtonian case, with a gradient-like flow connecting circles of equilibria. Moreover, when α = 2(1 − 1/n), n ≥ 2, n ∈ N, the flow is regularizable in the sense of Levi-Civita [Stoica 2000]. For α = 2, the Manev case, collision is possible for angular momenta C with |C| ≤ b, where b > 0 is a constant depending on the masses; on the collision manifold the dynamics is trivial, displaying two circles of degenerate equilibria [Diacu & al. 2000]. For α > 2 the collision manifold is reached for all angular momenta. Its flow is gradient-like, connecting two circles of equilibria as well, but it is not regularizable [Stoica 1997]. An intuitive and physically reasonable explanation for the above is that the Manev corrective term (−B/r²) adds to the rotational inertial term C²/r² (the latter being a consequence of the angular momentum conservation).
We believe that, similar to the case of two bodies, the Manev interaction stands as the threshold between two distinct types of near-collision dynamics. This is suggested by the studies of the isosceles three-body problem with Newtonian [Devaney 1980, Shibayama & al. 2009], Manev [Diacu 1993] and Schwarzschild (−1/r − B/r³) interactions [Arredondo & al. 2014]. The present paper is a further step towards clarifying this problem.
In this paper we investigate the dynamics near total collapse in a three-body problem with Manev binary interaction. Considering two of the masses equal, we study the dynamics on the invariant manifold of isosceles configurations. Using a McGehee technique similar to that in [Devaney 1980], we blow up the collision singularity and replace it by an invariant collision manifold pasted into the phase space for all energy levels. While fictitious, due to the continuity of ODE solutions with respect to the initial data, the collision manifold provides information about orbits passing close to collision. Its flow is rendered by the evolution of three variables, v, θ and w, describing the (fictitious) rate of change of the size of the system, the shape of its configuration and the rate of change of the latter, respectively.
The Manev isosceles three-body problem, and in particular the near-collision dynamics, was studied by Diacu [Diacu 1993], but only for zero total angular momentum. The bodies were confined to a fixed plane, with the middle body oscillating above and below the line joining the other two. One of the open problems stated in Diacu's paper concerns the existence of non-zero angular momentum orbits ejecting from/tending asymptotically to triple collision. Are such orbits possible? In the present work we find that orbits tending to/ejecting from total collision are present for a large set of non-zero momenta.
We also detect an interesting feature of the Manev three-body problem: as the size C of the total angular momentum increases from zero, the collision manifold changes its topology from a sphere with 4 points removed, as in the Newtonian [Shibayama & al. 2009] and Schwarzschild [Arredondo & al. 2014] cases, to the union of a sphere with two lines, to the union of a point with two lines, and finally to two lines. To our knowledge, this phenomenon was not observed anywhere else. The lines that persist for all momenta correspond to (fictitious) double collisions.
On the collision manifold C, for all angular momenta, the double collision lines are filled with equilibria. For low momenta, we find six more equilibria, similar to the Newtonian case [Shibayama & al. 2009]. These points correspond to two distinct total collision limit configurations: one linear (with one of the bodies fixed on the midpoint between the other two) and one spatial (modulo a reflection symmetry), with the ratio of the triangle sides depending on the bodies' masses. As C is increased, the spatial limit configurations disappear. For high C, the linear limit configurations disappear as well and triple collision is reached (asymptotically) only by solutions with double collision as limit configuration. The flow on C is constant in the v coordinate: for low C, the orbits connect the double collision manifolds, whereas when C is diffeomorphic to the union of a sphere with the double collision lines, all orbits are either periodic or equilibria. We prove that none of these periodic orbits is an attractor for the global flow. We also prove that homographic motions, that is, motions with self-similar configurations, have linear configurations only.
The paper is organized as follows: in Section 2 we introduce the isosceles Manev three-body problem and reduce the dynamics to a two degrees of freedom using the angular momentum conservation. In Section 3 we regularize the equations of motion. In Section 4 we define the collision manifold, and classify its topology and investigate the associated dynamics. In Section 5 we discuss the flow near-by the collision manifold, study equilibria and homographic motions, and prove some statements on the global flow.
Dynamics
In cylindrical coordinates (R, φ, Z, p R , p φ , p Z ) (see Figure 1) the Hamiltonian is
\[ H(R,\varphi,Z,P_R,P_\varphi,P_Z) = \frac{1}{M}\left(P_R^2 + \frac{P_\varphi^2}{R^2}\right) + \frac{2M+m}{4Mm}\,P_Z^2 + U(R,Z), \]
with a Manev-type potential given by
\[ U(R,Z) = -\frac{GM^2}{R}\left(1 + \frac{\gamma_0}{R}\right) - \frac{4GMm}{\sqrt{R^2+4Z^2}}\left(1 + \frac{4\gamma}{\sqrt{R^2+4Z^2}}\right), \qquad (2) \]
where γ_0, γ > 0 and γ_0 ≠ γ. For reasons to be discussed later, we assume that

16γ > γ_0. (3)
Using the angular momentum conservation P_φ(t) = const. =: C, we reduce the dynamics to a two-degree-of-freedom Hamiltonian system determined by

\[ H_{red}(R,Z,P_R,P_Z;C) = \frac{1}{2}\,(P_R\;\;P_Z)\begin{pmatrix} \frac{2}{M} & 0 \\[2pt] 0 & \frac{2M+m}{2Mm} \end{pmatrix}\begin{pmatrix} P_R \\ P_Z \end{pmatrix} + U_{eff}(R,Z;C), \qquad (4) \]
where U_eff(R, Z; C) is the effective (or amended) potential

\[ U_{eff}(R,Z;C) := \frac{C^2}{MR^2} + U(R,Z), \qquad (5) \]

and C ∈ R is a parameter. The equations of motion are

\[ \dot R = \frac{2P_R}{M}, \qquad \dot Z = \frac{2M+m}{2Mm}\,P_Z, \qquad \dot P_R = \frac{2C^2}{MR^3} - \frac{\partial U(R,Z)}{\partial R}, \qquad \dot P_Z = -\frac{\partial U(R,Z)}{\partial Z}. \]
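For a quick numerical check of these equations one can integrate them directly (a sketch using scipy; the constants, the initial condition and the finite-difference partial derivatives are placeholder choices, not values taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

G, M, m, g0, g = 1.0, 1.0, 0.5, 0.1, 0.05   # placeholder constants
C = 0.2                                      # angular momentum (parameter)

def U(R, Z):
    rho = np.sqrt(R**2 + 4*Z**2)
    return -G*M**2/R*(1 + g0/R) - 4*G*M*m/rho*(1 + 4*g/rho)

def rhs(t, y):
    R, Z, PR, PZ = y
    eps = 1e-7
    dUdR = (U(R + eps, Z) - U(R - eps, Z)) / (2*eps)   # numerical partial derivatives
    dUdZ = (U(R, Z + eps) - U(R, Z - eps)) / (2*eps)
    return [2*PR/M,
            (2*M + m)/(2*M*m)*PZ,
            2*C**2/(M*R**3) - dUdR,
            -dUdZ]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.2, 0.0, 0.0], rtol=1e-9)
```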
Since the Hamiltonian is time-independent, along any solution the energy is conserved:

\[ H_{red}(R(t),Z(t),P_R(t),P_Z(t);C) = const. = h. \qquad (6) \]

Figure 1: The spatial isosceles three-body problem.
The regularized dynamics
We now regularize the equations of motion of the isosceles Manev three body problem. We follow closely the McGehee technique as used in the Newtonian isosceles problem by Devaney [Devaney 1980]. Denoting
x := (R, Z)^t, p := (P_R, P_Z)^t, and

\[ K = \begin{pmatrix} \frac{M}{2} & 0 \\[2pt] 0 & \frac{2Mm}{2M+m} \end{pmatrix}, \]

we introduce the coordinates (r, v, s, u) defined by

\[ r = \sqrt{x^{t}Kx}, \qquad v = r(s\cdot p), \qquad s = \frac{x}{r}, \qquad u = r\bigl(K^{-1}p - (s\cdot p)s\bigr). \qquad (7) \]
Notice that r = 0 corresponds to R = Z = 0, i.e., to the triple collision of the bodies. The coordinate v describes the rate of change of the size of the system as given by r, whereas the vector s describes R and Z separately. One may verify that in the new coordinates we have that s t K s = 1 and s t K u = 0.
The equations of motion arė
r = r −1 v, v = r −2 v 2 + r −2 u t Ku + r −2 2C 2 M s 2 1 − V (s) r + 2W (s) r 2 , s = r −2 u, u = −r −2 u t Ku − r −2 2C 2 M s 2 1 + r −2 V (s) r + 2W (s) r 2 s + r −1 2 M ∂V ∂s 1 2M + m 2M m ∂V ∂s 2 + r −2 ∂ ∂s 1 − 2C 2 M 2 s 2 1 0 + r −2 2 M ∂W ∂s 1 2M + m 2M m ∂W ∂s 2 , with V (s) = GM 2 s 1 + 4GM m s 2 1 + 4s 2 2 1 2 and W (s) = GM 2 γ 0 s 2 1 + 8GM mγ s 2 1 + 4s 2 2 .
We further introduce the change of coordinates
\[ s = K^{-1/2}(\cos\theta, \sin\theta)^{t}, \qquad u = u\,K^{-1/2}(-\sin\theta, \cos\theta)^{t}, \]

where −π/2 < θ < π/2, so that the boundaries θ = ±π/2 correspond in the original coordinates to R = 0, that is, to double collisions of the masses M. More precisely, at θ = π/2 we have R = 0 and Z > 0, whereas at θ = −π/2, R = 0 and Z < 0. Also, as θ varies, the ratio between R and Z varies as well; a direct calculation shows that

\[ Z\cos\theta = \frac{\sqrt{\mu}}{2}\,R\sin\theta. \qquad (8) \]

Thus, for instance, Z = 0 at θ = 0, and R = 0 at θ = ±π/2. One may also verify that u^{t}Ku = u^2 and that \dot u = (\dot u/u)\,u - u\dot\theta\, s. Denoting
\[ \mu := \frac{2M+m}{m} \qquad (9) \]

and applying the time re-parametrization dt = r²dτ, we obtain the system

\[ r' = rv, \qquad (10) \]
\[ v' = v^2 + u^2 + \frac{C^2}{\cos^2\theta} - rV(\theta) - 2W(\theta), \qquad (11) \]
\[ \theta' = u, \qquad (12) \]
\[ u' = -\frac{C^2\sin\theta}{\cos^3\theta} + r\,\frac{\partial V(\theta)}{\partial\theta} + \frac{\partial W(\theta)}{\partial\theta}, \qquad (13) \]
where
\[ V(\theta) = GM\left(\frac{M}{2}\right)^{1/2}\left[\frac{M}{\cos\theta} + \frac{4m}{(\cos^2\theta+\mu\sin^2\theta)^{1/2}}\right], \qquad (14) \]
\[ W(\theta) = GM\,\frac{M}{2}\left[\frac{M\gamma_0}{\cos^2\theta} + \frac{8m\gamma}{\cos^2\theta+\mu\sin^2\theta}\right]. \qquad (15) \]
In the new coordinates the energy integral is given by
\[ hr^2 = \frac{1}{2}\left(u^2 + v^2 + \frac{C^2}{\cos^2\theta}\right) - rV(\theta) - W(\theta). \qquad (16) \]
Potential functions V (θ) and W (θ)
First we notice that V(θ) and W(θ) are positive on their domain θ ∈ (−π/2, π/2). A direct calculation shows that V(θ) has three critical points, at θ = 0 and θ = ±θ_v, where

\[ \cos\theta_v = \sqrt{\frac{\mu}{\mu+3}}. \qquad (17) \]
Similarly, provided condition (3) is satisfied, W(θ) displays three critical points: one at θ = 0 and a symmetric pair of nonzero critical points. We leave for future work the case when the parameters γ_0 and γ do not obey (3) (that is, when γ_0 ≥ 16γ). It is immediate that the nonzero critical points of V(θ) and W(θ) coincide only if γ = γ_0, a case already excluded in our model; see equation (2).
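The critical points are easy to check numerically. The sketch below (with placeholder parameter values) scans V(θ) on a grid and compares the location of its nonzero critical points, which are minima, with the closed form (17):

```python
import numpy as np

G, M, m = 1.0, 1.0, 0.5          # placeholder constants
mu = (2*M + m) / m

theta = np.linspace(-np.pi/2 + 1e-4, np.pi/2 - 1e-4, 400001)
V = G*M*np.sqrt(M/2) * (M/np.cos(theta)
                        + 4*m/np.sqrt(np.cos(theta)**2 + mu*np.sin(theta)**2))

theta_v_numeric = abs(theta[np.argmin(V)])          # nonzero critical points of V are its minima
theta_v_closed = np.arccos(np.sqrt(mu/(mu + 3)))    # equation (17)
print(theta_v_numeric, theta_v_closed)              # the two values agree
```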
Regularized Equations of Motion
In the system (10)-(12) and the energy integral (16) we make the substitutions
\[ U(\theta) = W(\theta)\cos^2\theta, \qquad w = \frac{\cos^2\theta}{\sqrt{U(\theta)}}\,u, \qquad (19) \]
and introduce a new time parametrization given by dτ
dσ = cos 2 θ √ U (θ) to obtain r = cos 2 θ U (θ) rv, v = v 2 + U (θ) cos 4 θ w 2 + C 2 cos 2 θ − rV (θ) − 2 U (θ) cos 2 θ cos 2 θ U (θ) ,(20)θ = w, w = −C 2 sin 2θ 2U (θ) + r cos 4 θ V (θ) U (θ) + cos 2 θ U (θ) U (θ) + sin 2θ,
and

\[ 2hr^2\cos^4\theta = w^2 U(\theta) + v^2\cos^4\theta + C^2\cos^2\theta - 2r\cos^4\theta\,V(\theta) - 2\cos^2\theta\,U(\theta). \qquad (21) \]
Notice that U(θ) is smooth and U(θ) > 0 for all θ ∈ (−π/2, π/2); see its sketch in Figure 3.

Figure 3: The function U(θ).

Finally,
using the energy relation, we substitute the term containing the angular momentum C into the v equation and obtain
\[ r' = \frac{\cos^2\theta}{\sqrt{U(\theta)}}\,rv, \qquad (22) \]
\[ v' = r\,(2hr + V(\theta))\,\frac{\cos^2\theta}{\sqrt{U(\theta)}}, \qquad (23) \]
\[ \theta' = w, \qquad (24) \]
\[ w' = \frac{\cos\theta}{\sqrt{U(\theta)}}\left[\,r V'(\theta)\cos^3\theta + U'(\theta)\cos\theta - \bigl(C^2 - 2U(\theta)\bigr)\sin\theta\,\right]. \qquad (25) \]
The Triple Collision Manifold
The vector field (22)-(25) is analytic on [0, ∞) × R × (−π/2, π/2) × R, and thus the flow is well defined everywhere on its domain, including the points corresponding to triple collision (r = 0). The restriction of the energy relation (21) to r = 0,

\[ C := \left\{(v, \theta, w) \in R\times\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)\times R \;\middle|\; w^2 + \frac{v^2\cos^4\theta}{U(\theta)} + \frac{(C^2 - 2U(\theta))\cos^2\theta}{U(\theta)} = 0 \right\}, \qquad (26) \]
is a (fictitious) invariant set, called the triple collision manifold, pasted into the phase space for any level of energy. By continuity with respect to the initial data, the flow on the smooth subsets of C provides information about the orbits that pass close to collision.
Topology
Let us denote by U_m the minimum value of U(θ) (see Figure 3). We calculate

\[ U_m = U\!\left(\pm\tfrac{\pi}{2}\right) = \frac{GM^3\gamma_0}{2}. \qquad (27) \]

We also observe that the maximum value of U(θ) occurs at θ = 0 and it is given by

\[ U(0) = \frac{GM^2}{2}\,(M\gamma_0 + 8m\gamma). \qquad (28) \]
The collision manifold is non-void if C² − 2U(θ) ≤ 0. Considering the graph of 2U(θ) and the sign of C² − 2U(θ) as C² increases from zero, we distinguish the following cases:
1. If 0 ≤ |C| < √(2U_m), the collision manifold C is homeomorphic to a sphere with 4 points removed; see Figure 4. C is a smooth manifold everywhere, except at the (fictitious) double collision boundaries B_{l,r}:

\[ B_{l,r} = \left\{(v, \theta, w) \;\middle|\; v = v_0 \in R,\; \theta = \pm\tfrac{\pi}{2},\; w = 0\right\}. \qquad (29) \]

Figure 4: The collision manifold C for angular momenta 0 ≤ |C| ≤ √(2U_m).
2. If |C| ∈ (√(2U_m), √(2U(0))), then C consists of the union of a sphere with the lines B_{l,r}; see Figure 5.
3. If |C| = √(2U(0)), then C is the union of one point, the origin, with B_{l,r}.

4. If |C| > √(2U(0)), then C consists of the lines B_{l,r}.
Thus we have proved:
Proposition 4.1 As the momentum |C| is increased, the triple collision manifold changes its topology, from a sphere with 4 points removed, to the union of a sphere with two lines, to the union of a point with two lines and finally, to two lines.
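The case distinction above is easy to script; the following sketch (with placeholder constants, simply comparing C² against 2U_m and 2U(0) as computed in (27)-(28)) returns the topological type:

```python
def collision_manifold_type(C, G, M, m, g0, g):
    Um = G * M**3 * g0 / 2.0                 # minimum of U, eq. (27)
    U0 = G * M**2 * (M*g0 + 8*m*g) / 2.0     # maximum of U, eq. (28)
    c2 = C * C
    if c2 < 2.0 * Um:
        return "sphere with 4 points removed"
    if c2 < 2.0 * U0:
        return "sphere together with the two double-collision lines"
    if c2 == 2.0 * U0:
        return "a single point together with the two double-collision lines"
    return "two double-collision lines only"

print(collision_manifold_type(0.1, 1.0, 1.0, 0.5, 0.1, 0.05))
```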
Dynamics on the collision manifold
The vector field on the collision manifold is obtained by setting r = 0 in system (22)-(25), and it is given by

\[ v' = 0, \qquad (30) \]
\[ \theta' = w, \qquad (31) \]
\[ w' = \frac{\cos\theta}{\sqrt{U(\theta)}}\left[\,U'(\theta)\cos\theta - \bigl(C^2 - 2U(\theta)\bigr)\sin\theta\,\right]. \qquad (32) \]

Figure 5: The collision manifold C for angular momenta |C| ∈ (√(2U_m), √(2U(0))). The compact part C \ B_{l,r} of the collision manifold shrinks as the total angular momentum |C| increases, and it disappears for |C| > √(2U(0)).
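Orbits on the collision manifold can be traced numerically from (31)-(32), since v stays constant by (30). The sketch below uses placeholder constants, a finite-difference U′, and an arbitrary initial condition:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, M, m, g0, g, C = 1.0, 1.0, 0.5, 0.1, 0.05, 0.1   # placeholder values
mu = (2*M + m) / m

def U(th):
    return G*M**2/2 * (M*g0 + 8*m*g*np.cos(th)**2 / (np.cos(th)**2 + mu*np.sin(th)**2))

def dU(th, eps=1e-6):
    return (U(th + eps) - U(th - eps)) / (2*eps)

def flow(s, y):                       # equations (31)-(32); v is constant by (30)
    th, w = y
    return [w, np.cos(th)/np.sqrt(U(th)) * (dU(th)*np.cos(th) - (C**2 - 2*U(th))*np.sin(th))]

orbit = solve_ivp(flow, (0.0, 20.0), [0.4, 0.0], rtol=1e-9, dense_output=True)
```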
It is immediate that v is constant along the orbits, the flow being degenerate in this direction. For every v = const. = v_0, the restriction of the collision manifold C to the level v = const. = v_0 is

\[ V_{v_0} := \left\{(\theta, w) \in \left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)\times R \;\middle|\; w^2 + \frac{v_0^2\cos^4\theta}{U(\theta)} + \frac{(C^2 - 2U(\theta))\cos^2\theta}{U(\theta)} = 0 \right\}. \qquad (33) \]

When connected to C, the double collision lines B consist of degenerate equilibria. All orbits are horizontal.
For all momentum values for which the collision manifold C exists, that is, for

\[ |C| \le \sqrt{2U(0)} = \sqrt{GM^3\gamma_0 + 8GM^2m\gamma}, \qquad (34) \]

we have two equilibria located at

\[ P_\pm := \left(\pm\sqrt{2U(0) - C^2},\, 0,\, 0\right). \qquad (35) \]

For momenta

\[ |C| = \sqrt{2U_m} = \sqrt{GM^3\gamma_0} \qquad (36) \]

the equilibria P_± coalesce.

For lower momenta

\[ |C| \le \sqrt{2U_m} = \sqrt{GM^3\gamma_0} \qquad (37) \]

we also have four more equilibria located at

\[ E^1_\pm = (\pm v_0, -\theta_0, 0) \quad\text{and}\quad E^2_\pm = (\pm v_0, \theta_0, 0), \qquad (38) \]

where

\[ v_0 = \frac{1}{\sqrt{\mu}}\left[\sqrt{8GM^2m\gamma} + \sqrt{\frac{2M}{m}\,(GM^3\gamma_0 - C^2)}\right] \qquad (39) \]

and θ_0 ∈ (0, π/2) is such that

\[ \tan^2\theta_0 = \frac{1}{\mu}\left[\sqrt{\frac{16GM^3\gamma}{GM^3\gamma_0 - C^2}} - 1\right]. \qquad (40) \]
Consequently, for |C| ≤ √(GM³γ_0) we have the following types of orbits (see Figure 4):

– homoclinic connections joining a double collision equilibrium;
– heteroclinic connections joining a double collision equilibrium to one of the "E" points;
– homoclinic connections between two "E" points;
– heteroclinic connections joining two double collision equilibria.

On the edges B_{l,r} the system (30)-(32) may lose uniqueness of solutions. The double collisions are not regularizable (and thus they cannot be equivalent to elastic bounces, as in the Newtonian case), as it is known from [Diacu & al. 2000], [Stoica 2000]. For √(2U_m) < |C| < √(2U(0)), that is, for

\[ \sqrt{GM^3\gamma_0} < |C| < \sqrt{GM^3\gamma_0 + 8GM^2m\gamma}, \]
the flow wraps around C (see Figure 5).
The Near-Collision Flow
Equilibria and their stability
We now discuss the equilibria on the collision manifold as embedded in the full (r, v, θ, w) regularized phase space, and calculate their stability. We have:

– for all momenta |C| ≤ √(2U(0)), we find a pair of equilibria on C at

\[ P_\pm := \left(0,\, \pm\sqrt{2U(0) - C^2},\, 0,\, 0\right); \qquad (41) \]

– for momenta such that |C| ≤ √(2U_m), the flow also displays four fixed points

\[ E^1_\pm = (0, \pm v_0, -\theta_0, 0), \qquad E^2_\pm = (0, \pm v_0, \theta_0, 0), \qquad (42) \]

with v_0 and θ_0 given by (39) and (40), respectively. Also, we find an infinite number of equilibria on the edges B_{l,r}.
To determine the stability of P ± we start by writing the energy relation (21) as a level set:
\[ E := \{(r, v, \theta, w) \mid F(r, v, \theta, w) = 0\}, \qquad (43) \]

where

\[ F(r, v, \theta, w) := 2hr^2\cos^4\theta - w^2U(\theta) - v^2\cos^4\theta - C^2\cos^2\theta + 2rV(\theta)\cos^4\theta + 2U(\theta)\cos^2\theta. \qquad (44) \]
Next we calculate the spectrum of the linearization of system (22)-(25) at an equilibrium and then we restrict it to the tangent space of the manifold E. We denote by J the linearization of (22)-(25) and by J̃ its restriction to a tangent space.
At P ± = (0, ± 2U (0) − C 2 , 0, 0) we find
J = ± 2 − C 2 U (0) 0 0 0 V (0) √ U (0) 0 0 0 0 0 0 1 0 0 2 + U (0)−C 2 U (0) 0 .(45)
The tangent space to E at an equilibrium point P ± = (0, ± 2U (0) − C 2 , 0, 0) is
T P ± E = {(ρ 1 , ρ 2 , ρ 3 , ρ 4 ) | ∇F | P ± · (ρ 1 , ρ 2 , ρ 3 , ρ 4 ) = 0} = {(ρ 1 , ρ 2 , ρ 3 , ρ 4 ) | V (0)ρ 1 ± 2U (0) − C 2 ρ 2 = 0} .
For angular momenta |C| < 2U (0) = GM 2 (M γ 0 + 8mγ)/2, a basis for T P ± E is given by
ξ 1 = (± 2U (0) − C 2 , −V (0), 0, 0),
ξ 3 = (0, 0, 1, 0) and ξ 4 = (0, 0, 0, 1) and a representative ofJ in this basis is
± 2 − C 2 U (0) 0 0 0 0 1 0 2 + U (0)−C 2 U (0) 0 .(46)
The eigenvalues ofJ are given by
λ 1 = ± 2 − C 2 U (0) = ± GM 3 γ 0 − C 2 GM 3 γ 0 ∈ R and λ 2,3 = ±i (−2) (GM 3 (γ 0 − 16γ) − C 2 ) GM 2 (M γ 0 + 8mγ)(47)
where the quantity under square root is positive given that condition (3) is satisfied.
If |C| = ± 2U (0), the collision manifold collapses to a point, the origin O, which is also an equilibrium. We have T O E = {(ρ 1 , ρ 2 , ρ 3 , ρ 4 ) | ρ 1 = 0}. The linear part of the vector field (22) restricted to the tangent space is given byJ
= 0 0 0 0 0 0 0 0 0 0 0 1 0 0 U (0) U (0) 0 ,(48)
and so a basis for T ±P E is given by ξ 2 = (0, 1, 0, 0), ξ 3 = (0, 0, 1, 0) and ξ 4 = (0, 0, 0, 1). A representative ofJ in this basis is
0 0 0 0 0 1 0 U (0) U (0) 0 .(49)
The eigenvalues are given by λ 1 = 0 and λ 2,3 = ±4i mγµ/(M γ 0 + 8mγ) .
Now we study the behaviour near the points E 1,2 ± . We calculate the Jacobian matrix of system (22) evaluated at this points and find:
J = ±v 0 cos 2 θ 0 √ U (θ 0 ) 0 0 0 V (θ 0 ) cos 2 θ 0 √ U (θ 0 ) 0 0 0 0 0 0 1 V (θ 0 ) cos 4 θ 0 U (θ 0 ) 0 a 0 (50) where a = 16M m 2 (2M + m)γ sin 2 θ 0 cos 4 θ 0 M cos 2 θ 0 − M − m 2 2 1 (M 2 γ 0 − 4m 2 γ) cos 2 θ 0 − M M + m 2 γ 0 .(51)
The sign of the term a is decided by the sign of the expression
T := M 2 γ 0 − 4m 2 γ cos 2 θ 0 − M M + m 2 γ 0 .(52)
For this we calculate cos 2 θ 0 = 1/(1 + tan 2 θ 0 ) using (40) that we then substitute into (52). We obtain
T = − m(2M + m) 8mγ + M γ 0 16GM 3 γ GM 3 γ 0 −C 2 2 2M + m 16GM 3 γ GM 3 γ 0 −C 2 .(53)
Thus the sign of a is negative. The tangent space to the energy level manifold (43) at an equilibrium
point E 1 ± , E 2 ± is T E 1,2 ± E ={(ρ 1 , ρ 2 , ρ 3 , ρ 4 ) | cos 3 θ 0 V (θ 0 )ρ 1 − v 0 cos 3 θ 0 ρ 2 + [sin θ 0 (2v 2 0 cos 2 θ 0 + C 2 − 2U (θ 0 )) + cos θ 0 U (θ 0 )]ρ 3 = 0} .(54)
Then a basis forT E 1,2 ± E is given by ξ 1 = (1, 0, 0, 0), ξ 3 = (0, 0, 1, 0) and ξ 4 = (0, 0, 0, 1). A representative ofJ in this basis isJ
= ±v 0 cos 2 θ 0 √ U (θ 0 ) 0 0 0 0 1 V (θ 0 ) cos 4 θ 0 U (θ 0 ) a 0 .(55)
The eigenvalues are
λ 1 = v 0 cos 2 θ 0 U (θ 0 ) for E 1 + and E 2 + , λ 2 = −v 0 cos 2 θ 0 U (θ 0 ) for E 1 − and E 2 − .
and λ 2,3 = ±i √ −a . Thus we have proven:
Proposition 5.1 For every fixed energy level h and any fixed angular momentum |C| ∈ [0, √(2U(0))), the equilibria P_+ (P_−) have a one-dimensional unstable (stable) manifold and a two-dimensional centre manifold.

Proposition 5.2 For every fixed energy level h and any fixed angular momentum |C| ∈ [0, √(2U_m)), the equilibria E^{1,2}_+ (E^{1,2}_−) have a one-dimensional unstable (stable) manifold and a two-dimensional centre manifold.

Proposition 5.3 For every fixed energy level h and any fixed angular momentum |C| > √(2U(0)), the triple collision manifold is reached (asymptotically) by solutions with double collision as limit configuration (i.e., the limit configuration has R = 0).
Remark 5.4 When γ 0 ≥ 16γ, the functions V (θ) and W (θ) lose their critical points at θ = 0, and consequently, the collision manifold does not display a "hump". The only equilibria on C \ B l,r are those at P ± .
Homographic motions
Using similar arguments as in [Arredondo & al. 2014], one may prove that motions ejecting from/ending in the equilibria P_± are homographic, i.e., they maintain a self-similar shape of the triangle formed by the three bodies. In the Manev isosceles problem, homographic motions form the invariant manifold H := {(r, v, θ, w) | θ = 0, w = 0} (56)
of the system (22)-(25), and the dynamics on H are given by
\[ r' = \frac{\cos^2\theta}{\sqrt{U(\theta)}}\,rv, \qquad (57) \]
\[ v' = r\,(2hr + V(\theta))\,\frac{\cos^2\theta}{\sqrt{U(\theta)}}, \qquad (58) \]

with the energy integral

\[ v^2 + 2(-h)r^2 - 2rV(0) + C^2 - 2U(0) = 0. \qquad (59) \]

Since on H we have θ(t) = 0 for all t, physically the homographic motions have linear configurations, with the body m positioned midway between the other two. For h < 0 we re-write the energy relation (59) as

\[ \frac{v^2}{2(-h)} + \left(r - \frac{V(0)}{2(-h)}\right)^2 + \frac{1}{2(-h)}\left[C^2 - 2U(0) - \frac{V^2(0)}{2(-h)}\right] = 0, \qquad (60) \]

and notice that the motion is possible only for momenta C such that

\[ |C| < \sqrt{2U(0) + \frac{V^2(0)}{2(-h)}}. \qquad (61) \]

We also observe that for

\[ \sqrt{2U(0)} < |C| < \sqrt{2U(0) + \frac{V^2(0)}{2(-h)}} \qquad (62) \]

all orbits are periodic and non-collisional, and surround the equilibrium located at

\[ S = \left(\frac{V(0)}{2(-h)},\, 0\right). \qquad (63) \]
As mentioned, in physical space homographic motions correspond to motions with linear configuration. The homographic equilibrium is a rotating steady state with the outer bodies rotating at a fixed distance from the central body. The homographic periodic orbits are motions in which the outer bodies rotate and "pulsate" between a maximum and a minimum distance from the central body. For h > 0 all homographic orbits are unbounded. They either eject from/fall into the collision manifold or come from infinity, attain a configuration of minimal size, and return to infinity. A sketch of the phase portrait of homographic motions is given in Figure 6.
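This phase portrait can be reproduced directly from the energy relation (59): each homographic orbit is a level set of its left-hand side in the (r, v) plane. A matplotlib sketch with placeholder constants:

```python
import numpy as np
import matplotlib.pyplot as plt

G, M, m, g0, g, C, h = 1.0, 1.0, 0.5, 0.1, 0.05, 0.9, -0.3   # placeholder values
V0 = G*M*np.sqrt(M/2) * (M + 4*m)            # V(0), from (14) at theta = 0
U0 = G*M**2/2 * (M*g0 + 8*m*g)               # U(0), from (28)

r = np.linspace(1e-3, 4.0, 400)
v = np.linspace(-3.0, 3.0, 400)
R, Vv = np.meshgrid(r, v)
F = Vv**2 + 2*(-h)*R**2 - 2*R*V0 + C**2 - 2*U0   # left-hand side of (59)

plt.contour(R, Vv, F, levels=[0.0])          # the homographic orbit(s) for this (h, C)
plt.xlabel("r"); plt.ylabel("v")
plt.show()
```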
Other aspects of the global flow
Proposition 5.5 For every fixed h < 0 and |C| ∈ (√(2U_m), √(2U(0))), the set C \ (B_{l,r} ∪ P_±) is not an attractor.

Proof: Let h < 0 and |C| ∈ (√(2U_m), √(2U(0))) be fixed. In this case the collision manifold and its flow are depicted in Figure 5. The evolution of the r and v variables is driven by the equations (22) and (23); for the reader's convenience we re-write these equations below:

\[ r' = \frac{\cos^2\theta}{\sqrt{U(\theta)}}\,rv, \qquad (64) \]
\[ v' = 2h\,\frac{\cos^2\theta}{\sqrt{U(\theta)}}\,r^2 + rV(\theta)\,\frac{\cos^2\theta}{\sqrt{U(\theta)}}. \qquad (65) \]

We will show that for the given h and C no orbit can tend to C \ (B_{l,r} ∪ P_±). Assume that there is an orbit that approaches C \ (B_{l,r} ∪ P_±) asymptotically. This means that from some t_0 the function r(t) is monotone decreasing for all t > t_0. Looking at (64), this implies that v(t) < 0 for all t > t_0. Since h is finite, the term cos²θ/√(U(θ)) is bounded, and V(θ) > 0 for all θ, for r small enough the right-hand side of (65) becomes positive, making v' > 0. Then v starts increasing, becoming positive again at some t_1 > t_0, thus implying that r is increasing for t > t_1. But this contradicts the assumption that r(t) is decreasing for all t > t_0.
Corollary 5.6 The triple collision manifold is reached (asymptotically) by solutions for which the limit configurations have zero area, i.e., by solutions with limit configurations that are either linear (Z = 0), or vertical, with the equal-mass bodies in double collision (R = 0).

Using Propositions 5.1 and 5.2 we also deduce:

Proposition 5.7 For any fixed h < 0 and low angular momenta |C| < √(2U_m), the triple collision is attainable (either as an ejection or a collision) by solutions with spatial and linear limit configurations.

A direct analysis of the system (22)-(25) also implies:

Proposition 5.8 For h > 0, all orbits are unbounded.
Figure 2: The shape and the intersection of the functions V(θ) (bottom) and W(θ) (top). (The figure is generated for M = 10, m = 1, γ_0 = 1 and γ = 3.)

Figure 6: Homographic motions. Orbits with h < 0 and h > 0 are represented with solid lines and dashed lines, respectively.
Acknowledgments. CS was supported by an NSERC Discovery Grant.
[Arredondo & al. 2014] Arredondo J.A., Pérez-Chavela E. and Stoica C.: Dynamics in the Schwarzschild isosceles three body problem, Journal of Nonlinear Sciences 24, 997 (2014)
[Alberti 2015] Alberti A. and Vidal C.: New families of symmetric periodic solutions of the spatial anisotropic Manev problem, J. Math. Phys. 56, no. 1, 012901 (2015)
[Balsas & al. 2009] Balsas M. C., Guirao J. L., Jiménez E. S. and Vera J. A.: Qualitative analysis of the phase flow of a Manev system in a rotating reference frame, Int. Journal of Computational Mathematics 86, no. 10-11, 1817 (2009)
[Barrabés & al. 2017] Barrabés E., Cors J. M. and Vidal C.: Spatial collinear restricted four-body problem with repulsive Manev potential, Celest. Mech. Dyn. Astron. 129, no. 1-2, 153 (2017)
[Devaney 1980] Devaney R.: Collision in the Planar Isosceles Three Body Problem, Inventiones Math. 60, 249 (1980)
[Diacu 1993] Diacu F.: The planar isosceles problem for Maneff's gravitational law, J. Math. Phys. 34, 5671 (1993)
[Diacu & al. 1995] Diacu F., Mingarelli A., Mioc V. and Stoica C.: The Manev Two-Body Problem: quantitative and qualitative theory, in Dynamical systems and applications, World Sci. Ser. Appl. Anal. 4, 213 (1995)
[Diacu & al. 2000] Diacu F., Mioc V. and Stoica C.: Phase-Space Structure and Regularization of Manev-Type Problems, Nonlinear Analysis 41, 1029 (2000)
[Diacu & Santoprete 2001] Diacu F. and Santoprete M.: Nonintegrability and Chaos in the Anisotropic Manev Problem, Physica D 156, 39 (2001)
[Kyuldjiev 2007] Kyuldjiev A., Gerdjikov V., Marmo G. and Vilasi G.: On the symmetries of the Manev problem and its real Hamiltonian form, in Geometry, integrability and quantization, 221-233, Softex, Sofia (2007)
[Lemou & al. 2012] Lemou M., Méhats F. and Rigault C.: Stable ground states and self-similar blow-up solutions for the gravitational Vlasov-Manev system, SIAM J. Math. Anal. 44, no. 6, 3928 (2012)
[Llibre & Makhlouf 2012] Llibre J. and Makhlouf A.: Periodic orbits of the spatial anisotropic Manev problem, J. Math. Phys. 53, no. 12, 122903 (2012)
[Manev 1925] Maneff G.: Die Gravitation und das Prinzip von Wirkung und Gegenwirkung, Z. Phys. 31, 786 (1925)
[Manev 1930] Maneff G.: Le principe de la moindre action et la gravitation, C. R. Acad. Sci. Paris 190, 963 (1930)
[McGehee 1974] McGehee R.: Triple Collision in the Collinear Three-Body Problem, Inventiones Mathematicae 27, 191 (1974)
[Puta & Hedrea 2005] Puta M. and Hedrea I. C.: Some remarks on Manev's Hamiltonian system, Tensor (N.S.) 66, no. 1, 71 (2005)
[Santoprete 2002] Santoprete M.: Symmetric periodic solutions of the anisotropic Manev problem, J. Math. Phys. 43, no. 6, 3207 (2002)
[Shibayama & al. 2009] Shibayama M. and Yagasaki K.: Heteroclinic connections between triple collisions and relative periodic orbits in the isosceles three-body problem, Nonlinearity 22, no. 10, 2377 (2009)
[Simo & Martinez 1988] Simo C. and Martinez R.: Qualitative Study of the Planar Isosceles Three-Body Problem, Celest. Mech. Dyn. Astron. 41, 179 (1988)
[Stoica 1997] Stoica C. and Mioc V.: The Schwarzschild problem in astrophysics, Astrophys. Space Sci. 249, 161 (1997)
[Stoica 2000] Stoica C.: Particle systems with quasihomogeneous interaction, PhD Thesis, University of Victoria (2000)
[Szenkovits & al. 1999] Szenkovits F., Stoica C. and Mioc V.: The Manev-type problems: a topological view, Mathematica 41 (64), no. 1, 105 (1999)
| []
|
[
"Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes",
"Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes",
"Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes",
"Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes"
]
| [
"Tomáš Brázdil [email protected] ",
"Václav Brožek [email protected] ",
"Krishnendu Chatterjee ",
"Vojtěch Forejt ",
"Antonín Kučera [email protected] ",
"\nFaculty of Informatics\nSchool of Informatics\nMasaryk University Brno\nCzech Republic\n",
"\nUniversity of Edinburgh\nUK\n",
"\nIST Austria Klosterneuburg\nAustria\n",
"\nFaculty of Informatics Masaryk University Brno\nComputing Laboratory University of Oxford\nUK, Czech Republic\n",
"Tomáš Brázdil [email protected] ",
"Václav Brožek [email protected] ",
"Krishnendu Chatterjee ",
"Vojtěch Forejt ",
"Antonín Kučera [email protected] ",
"\nFaculty of Informatics\nSchool of Informatics\nMasaryk University Brno\nCzech Republic\n",
"\nUniversity of Edinburgh\nUK\n",
"\nIST Austria Klosterneuburg\nAustria\n",
"\nFaculty of Informatics Masaryk University Brno\nComputing Laboratory University of Oxford\nUK, Czech Republic\n"
]
| [
"Faculty of Informatics\nSchool of Informatics\nMasaryk University Brno\nCzech Republic",
"University of Edinburgh\nUK",
"IST Austria Klosterneuburg\nAustria",
"Faculty of Informatics Masaryk University Brno\nComputing Laboratory University of Oxford\nUK, Czech Republic",
"Faculty of Informatics\nSchool of Informatics\nMasaryk University Brno\nCzech Republic",
"University of Edinburgh\nUK",
"IST Austria Klosterneuburg\nAustria",
"Faculty of Informatics Masaryk University Brno\nComputing Laboratory University of Oxford\nUK, Czech Republic"
]
| []
| We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k k k reward functions, in the expectation objective the goal is to maximize the expected limitaverage value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the single-objective case, both randomization and memory are necessary for strategies, and that finite-memory randomized strategies are sufficient. Under the satisfaction objective, in contrast to the single-objective case, infinite memory is necessary for strategies, and that randomized memoryless strategies are sufficient for ε ε ε-approximation, for all ε > 0 ε > 0 ε > 0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be ε ε ε-approximated in time polynomial in the size of the MDP and 1 ε 1 ε 1 ε , and exponential in the number of reward functions, for all ε > 0 ε > 0 ε > 0. Our results also reveal flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, correct the flaws and obtain improved results.Related Work. In [4] MDPs with multiple discounted reward functions were studied. It was shown that memoryless strate- | 10.2168/lmcs-10(1:13)2014 | [
"https://arxiv.org/pdf/1104.3489v3.pdf"
]
| 125,883,809 | 1104.3489 | f03c59e77c2beea02f9385c76519b33e2b2a5208 |
Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes
18 Apr 2011
Tomáš Brázdil [email protected]
Václav Brožek [email protected]
Krishnendu Chatterjee
Vojtěch Forejt
Antonín Kučera [email protected]
Faculty of Informatics
School of Informatics
Masaryk University Brno
Czech Republic
University of Edinburgh
UK
IST Austria Klosterneuburg
Austria
Faculty of Informatics Masaryk University Brno
Computing Laboratory University of Oxford
UK, Czech Republic
Two Views on Multiple Mean-Payoff Objectives in Markov Decision Processes
18 Apr 2011arXiv:1104.3489v1 [cs.GT]
We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k reward functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the single-objective case, both randomization and memory are necessary for strategies, and that finite-memory randomized strategies are sufficient. Under the satisfaction objective, in contrast to the single-objective case, infinite memory is necessary for strategies, and randomized memoryless strategies are sufficient for ε-approximation, for all ε > 0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be ε-approximated in time polynomial in the size of the MDP and 1/ε, and exponential in the number of reward functions, for all ε > 0. Our results also reveal flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, correct the flaws and obtain improved results. Related Work. In [4] MDPs with multiple discounted reward functions were studied. It was shown that memoryless strate-
I. INTRODUCTION
Markov decision processes (MDPs) are the standard models for probabilistic dynamic systems that exhibit both probabilistic and nondeterministic behaviors [14], [7]. In each state of an MDP, a controller chooses one of several actions (the nondeterministic choices), and the system stochastically evolves to a new state based on the current state and the chosen action. A reward (or cost) is associated with each transition and the central question is to find a strategy of choosing the actions that optimizes the rewards obtained over the run of the system. One classical way to combine the rewards over the run of the system is the limit-average (or mean-payoff) function that assigns to every run the long-run average of the rewards over the run. MDPs with single mean-payoff functions have been widely studied in literature (see, e.g., [14], [7]). In many modeling domains, however, there is not a single goal to be optimized, but multiple, potentially dependent and conflicting goals. For example, in designing a computer system, the goal is to maximize average performance while minimizing average power consumption. Similarly, in an inventory management system, the goal is to optimize several potentially dependent costs for maintaining each kind of product. These motivate the study of MDPs with multiple mean-payoff functions.
Traditionally, MDPs with mean-payoff functions have been studied with only the expectation objective, where the goal is to maximize (or minimize) the expectation of the meanpayoff function. There are numerous applications of MDPs with expectation objectives in inventory control, planning, and performance evaluation [14], [7]. In this work we consider both the expectation objective and also the satisfaction objective for a given MDP. In both cases we are given an MDP with k reward functions, and the goal is to maximize (or minimize) either the k-tuple of expectations, or the probability of runs such that the mean-payoff value stays above a given vector.
To get some intuition about the difference between the expectation and satisfaction objectives, and to show that in some scenarios the satisfaction objective is preferable, consider a file-hosting system where users can download files at various speeds, depending on the current setup and the number of connected customers. For simplicity, let us assume that a user has a 20% chance of getting a 2000kB/sec connection, and an 80% chance of getting a slow 20kB/sec connection. Then, the overall performance of the server can be reasonably measured by the expected amount of transferred data per user and second (i.e., the expected mean payoff), which is 416kB/sec. However, a single user is more interested in her chance of downloading the files quickly, which can be measured by the probability of establishing and maintaining a reasonably fast connection (say, ≥ 1500kB/sec). Hence, the system administrator may want to maximize the expected mean payoff (by changing the internal setup of the system), while a single user aims at maximizing the probability of satisfying her preferences (she can achieve that, e.g., by buying priority access, waiting till 3 a.m., or simply connecting to a different server; obviously, she might also wish to minimize other mean payoffs such as the price per transferred bit). In other words, the expectation objective is relevant in situations when we are interested in the "average" behaviour of many instances of a given system, while the satisfaction objective is useful for analyzing and optimizing particular executions.
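For the numbers above, the quoted figure is just the expected value of the connection speed:

0.2 · 2000 kB/sec + 0.8 · 20 kB/sec = 400 kB/sec + 16 kB/sec = 416 kB/sec.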
In MDPs with multiple mean-payoff functions, various strategies may produce incomparable solutions, and consequently there is no "best" solution in general. Informally, the set of achievable solutions (i) under the expectation objective is the set of all vectors v such that there is a strategy to ensure that the expected mean-payoff value vector under the strategy is at least v; (ii) under the satisfaction objective is the set of tuples (ν, v) where ν ∈ [0, 1] and v is a vector such that there is a strategy under which with probability at least ν the mean-payoff value vector of a run is at least v. The "trade-offs" among the goals represented by the individual mean-payoff functions are formally captured by the Pareto curve, which consists of all minimal tuples (wrt. componentwise ordering) that are not strictly dominated by any achievable solution. Intuitively, the Pareto curve consists of "limits" of achievable solutions, and in principle it may contain tuples that are not achievable solutions (see Section III). Pareto optimality has been studied in cooperative game theory [12] and in multi-criterion optimization and decision making in both economics and engineering [10], [17], [16].
Our study of MDPs with multiple mean-payoff functions is motivated by the following fundamental questions, which concern both basic properties and algorithmic aspects of the expectation/satisfaction objectives:
Q.1 What type of strategies is sufficient (and necessary) for achievable solutions?
Q.2 Are the elements of the Pareto curve achievable solutions?
Q.3 Is it decidable whether a given vector represents an achievable solution?
Q.4 Given an achievable solution, is it possible to compute a strategy which achieves this solution?
Q.5 Is it decidable whether a given vector belongs to the Pareto curve?
Q.6 Is it possible to compute a finite representation/approximation of the Pareto curve?
We provide comprehensive answers to the above questions, both for the expectation and the satisfaction objective. We also analyze the complexity of the problems given in Q.3-Q.6. From a practical point of view, it is particularly encouraging that most of the considered problems turn out to be solvable efficiently, i.e., in polynomial time. More concretely, our answers to Q.1-Q.6 are the following:
1a. For the expectation objectives, finite-memory strategies are sufficient and necessary for all achievable solutions.
1b. For the satisfaction objectives, achievable solutions require infinite memory in general, but memoryless randomized strategies are sufficient to approximate any achievable solution up to an arbitrarily small ε > 0.
2. All elements of the Pareto curve are achievable solutions.
3. The problem whether a given vector represents an achievable solution is solvable in polynomial time.
4a. For the expectation objectives, a strategy which achieves a given solution is computable in polynomial time.
4b. For the satisfaction objectives, a strategy which ε-approximates a given solution is computable in polynomial time.
5. The problem whether a given vector belongs to the Pareto curve is solvable in polynomial time.
6. A finite description of the Pareto curve is computable in exponential time. Further, an ε-approximate Pareto curve is computable in time which is polynomial in 1/ε and the size of a given MDP, and exponential in the number of mean-payoff functions.
A more detailed and precise explanation of our results is postponed to Section III.
Let us note that MDPs with multiple mean-payoff functions under the expectation objective were also studied in [3], and it was claimed that randomized memoryless strategies are sufficient for ε-approximation of the Pareto curve, for all ε > 0, and an NP algorithm was presented to find a randomized memoryless strategy achieving a given vector. We show with an example that under the expectation objective there exists ε > 0 such that randomized strategies do require memory for ε-approximation, and thus reveal a flaw in the earlier paper (our results not only correct the flaws of [3], but also significantly improve the complexity of the algorithm for finding a strategy achieving a given vector).
Similarly to the related papers [4], [6], [8] (see Related Work), we obtain our results by a characterization of the set of achievable solutions by a set of linear constraints, and from the linear constraints we construct witness strategies for any achievable solution. However, our approach differs significantly from the previous works. In all the previous works, the linear constraints are used to encode a memoryless strategy either directly for the MDP [4], or (if memoryless strategies do not suffice in general) for a finite "product" of the MDP and the specification function expressed as automata, from which the memoryless strategy is then transferred to a finite-memory strategy for the original MDP [6], [8], [5]. In our setting, new problems arise. Under the expectation objective with mean-payoff functions, there is no immediate notion of a "product" of an MDP and a mean-payoff function, nor do memoryless strategies suffice. Moreover, even for memoryless strategies the linear constraint characterization is not as straightforward for mean-payoff functions as in the case of discounted [4], reachability [6], and total reward functions [8]: for example, in [3] no linear constraint characterization for mean-payoff functions was given even for memoryless strategies, and only an NP algorithm was presented. Our result, obtained by a characterization via linear constraints directly on the original MDP, requires an involved and intricate construction of witness strategies. Moreover, our results are significant and non-trivial generalizations of the classical results for MDPs with a single mean-payoff function, where memoryless pure optimal strategies exist, while for multiple functions both randomization and memory are necessary. Under the satisfaction objective, any finite product on which a memoryless strategy would exist is not feasible, as witness strategies for achievable solutions may in general need an infinite amount of memory. We establish a correspondence between the sets of achievable solutions under both types of objectives for strongly connected MDPs. Finally, we use this correspondence to obtain our result for satisfaction objectives.

Related work. MDPs with multiple discounted reward functions were studied in [4]. It was shown that memoryless strategies suffice for Pareto optimization, and a polynomial time algorithm was given to approximate (up to a given relative error) the Pareto curve by reduction to multi-objective linear programming and using the results of [13]. MDPs with multiple qualitative ω-regular specifications were studied in [6]. It was shown that the Pareto curve can be approximated in polynomial time; the algorithm reduces the problem to MDPs with multiple reachability specifications, which can be solved by multi-objective linear programming. In [8], the results of [6] were extended to combine ω-regular and expected total reward objectives. MDPs with multiple mean-payoff functions under expectation objectives were considered in [3]; our results reveal flaws in that paper, correct them, and present significantly improved results (a polynomial time algorithm for finding a strategy achieving a given vector, compared to the previously known NP algorithm). Moreover, the satisfaction objective has not been considered in the multi-objective setting before, and even in the single objective case it has been considered only in a very specific setting [1].
II. PRELIMINARIES
We use N, Z, Q, and R to denote the sets of positive integers, integers, rational numbers, and real numbers, respectively. Given two vectors v, u ∈ R^k, where k ∈ N, we write v ≤ u iff v_i ≤ u_i for all 1 ≤ i ≤ k, and v < u iff v ≤ u and v_i < u_i for some 1 ≤ i ≤ k.
We assume familiarity with basic notions of probability theory, e.g., probability space, random variable, or expected value. As usual, a probability distribution over a finite or countably infinite set X is a function f : X → [0, 1] such that ∑_{x∈X} f(x) = 1. We call f positive if f(x) > 0 for every x ∈ X, rational if f(x) ∈ Q for every x ∈ X, and Dirac if f(x) = 1 for some x ∈ X. The set of all distributions over X is denoted by dist(X).
Markov chains.
A Markov chain is a tuple M = (L, →, µ) where L is a finite or countably infinite set of locations, → ⊆ L × (0, 1] × L is a transition relation such that for each fixed ℓ ∈ L we have ∑_{ℓ x→ ℓ′} x = 1, and µ is the initial probability distribution on L.
A run in M is an infinite sequence ω = ℓ_1 ℓ_2 . . . of locations such that ℓ_i x→ ℓ_{i+1} for every i ∈ N. A finite path in M is a finite prefix of a run. Each finite path w in M determines the set Cone(w) consisting of all runs that start with w. To M we associate the probability space (Runs_M, F, P), where Runs_M is the set of all runs in M, F is the σ-field generated by all Cone(w), and P is the unique probability measure such that P(Cone(ℓ_1, . . . , ℓ_k)) = µ(ℓ_1) · ∏_{i=1}^{k−1} x_i, where ℓ_i x_i→ ℓ_{i+1} for all 1 ≤ i < k (the empty product is equal to 1).
Markov decision processes. A Markov decision process
(MDP) is a tuple G = (S, A, Act , δ) where S is a finite set of states, A is a finite set of actions, Act : S → 2 A \ ∅ is an action enabledness function that assigns to each state s the set Act(s) of actions enabled at s, and δ : S × A → dist (S) is a probabilistic transition function that given a state s and an action a ∈ Act(s) enabled at s gives a probability distribution over the successor states. For simplicity, we assume that every action is enabled in exactly one state, and we denote this state Src(a). Thus, henceforth we will assume that δ : A → dist (S).
A run in G is an infinite alternating sequence of states and actions ω = s 1 a 1 s 2 a 2 . . . such that for all i ≥ 1, Src(a i ) = s i and δ(a i )(s i+1 ) > 0. We denote by Runs G the set of all runs in G. A finite path of length k in G is a finite prefix w = s 1 a 1 . . . a k−1 s k of a run in G. For a finite path w we denote by last (w) the last state of w.
A pair (T, B) with ∅ ≠ T ⊆ S and B ⊆ ∪_{t∈T} Act(t) is an end component of G if (1) for all a ∈ B, whenever δ(a)(s′) > 0 then s′ ∈ T; and (2) for all s, t ∈ T there is a finite path w = s_1 a_1 . . . a_{k−1} s_k such that s_1 = s, s_k = t, and all states and actions that appear in w belong to T and B, respectively. (T, B) is a maximal end component (MEC) if it is maximal wrt. pointwise subset ordering. Given an end component C = (T, B), we sometimes abuse notation by using C instead of T or B, e.g., by writing a ∈ C instead of a ∈ B for a ∈ A.
Strategies and plays. Intuitively, a strategy in an MDP G is a "recipe" to choose actions. Usually, a strategy is formally defined as a function σ : (SA) * S → dist (A) that given a finite path w, representing the history of a play, gives a probability distribution over the actions enabled in last (w). In this paper, we adopt a somewhat different (though equivalent -see Appendix E) definition, which allows a more natural classification of various strategy types. Let M be a finite or countably infinite set of memory elements. A strategy is a triple σ = (σ u , σ n , α), where σ u : A × S × M → dist (M) and σ n : S × M → dist (A) are memory update and next move functions, respectively, and α is an initial distribution on memory elements. We require that for all (s, m) ∈ S × M , the distribution σ n (s, m) assigns a positive value only to actions enabled at s. The set of all strategies is denoted by Σ (the underlying MDP G will be always clear from the context).
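For readers who prefer concrete data structures, one possible encoding of the objects defined so far is sketched below in Python. This is purely illustrative; the identifiers are ours and not part of the formal development. A memoryless strategy is the special case in which the memory set M is a singleton.

from dataclasses import dataclass
from typing import Dict, List, Tuple

Dist = Dict[str, float]   # a finite probability distribution; values are assumed to sum to 1

@dataclass
class MDP:
    states: List[str]                    # S
    actions: List[str]                   # A
    act: Dict[str, List[str]]            # Act(s): nonempty set of actions enabled at s
    delta: Dict[str, Dist]               # delta(a): distribution over successor states
    src: Dict[str, str]                  # Src(a): the unique state at which a is enabled

@dataclass
class StochasticUpdateStrategy:
    alpha: Dist                                   # initial distribution on memory elements
    sigma_n: Dict[Tuple[str, str], Dist]          # (state, memory) -> distribution over actions
    sigma_u: Dict[Tuple[str, str, str], Dist]     # (action, state, memory) -> distribution over memory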
Let s ∈ S be an initial state. A play of G determined by s and a strategy σ is a Markov chain G^σ_s (or just G^σ if s is clear from the context) where the set of locations is S × M × A, the initial distribution µ is positive only on (some) elements of {s} × M × A, where µ(s, m, a) = α(m) · σ_n(s, m)(a), and

(t, m, a) x→ (t′, m′, a′)  iff  x = δ(a)(t′) · σ_u(a, t′, m)(m′) · σ_n(t′, m′)(a′) > 0.

Hence, G^σ_s starts in a location chosen randomly according to α and σ_n. In a current location (t, m, a), the next action to be performed is a, hence the probability of entering t′ is δ(a)(t′). The probability of updating the memory to m′ is σ_u(a, t′, m)(m′), and the probability of selecting a′ as the next action is σ_n(t′, m′)(a′). We assume that these choices are independent, and thus obtain the product above.
In this paper, we consider various functions over Runs_G that become random variables over Runs_{G^σ_s} after fixing some σ and s. For example, for F ⊆ S we denote by Reach(F) ⊆ Runs_G the set of all runs reaching F. Then Reach(F) naturally determines Reach^σ_s(F) ⊆ Runs_{G^σ_s} by simply "ignoring" the visited memory elements. To simplify and unify our notation, we write, e.g., P^σ_s[Reach(F)] instead of P^σ_s[Reach^σ_s(F)], where P^σ_s is the probability measure of the probability space associated to G^σ_s. We also adopt this notation for other events and functions, such as lr_inf(r⃗) or lr_sup(r⃗) defined in the next section, and write, e.g., E^σ_s[lr_inf(r⃗)] instead of E[lr_inf(r⃗)^σ_s].

Strategy types. In general, a strategy may use infinite memory, and both σ_u and σ_n may randomize. According to the use of randomization, a strategy σ can be classified as
• pure (or deterministic), if α is Dirac and both the memory update and the next move function give a Dirac distribution for every argument;
• deterministic-update, if α is Dirac and the memory update function gives a Dirac distribution for every argument;
• stochastic-update, if α, σ_u, and σ_n are unrestricted.
Note that every pure strategy is deterministic-update, and every deterministic-update strategy is stochastic-update. A randomized strategy is a strategy which is not necessarily pure. We also classify the strategies according to the size of memory they use. Important subclasses are memoryless strategies, in which M is a singleton, n-memory strategies, in which M has exactly n elements, and finite-memory strategies, in which M is finite. By Σ_M we denote the set of all memoryless strategies. Memoryless strategies can be specified as σ : S → dist(A). Memoryless pure strategies, i.e., those which are both pure and memoryless, can be specified as σ : S → A.
For a finite-memory strategy σ, a bottom strongly connected component (BSCC) of G^σ_s is a subset of locations W ⊆ S × M × A such that (i) for all ℓ_1 ∈ W and ℓ_2 ∈ S × M × A, if ℓ_2 is reachable from ℓ_1, then ℓ_2 ∈ W, and (ii) for all ℓ_1, ℓ_2 ∈ W we have that ℓ_2 is reachable from ℓ_1. Every BSCC W determines a unique end component ({s | (s, m, a) ∈ W}, {a | (s, m, a) ∈ W}) of G, and we sometimes do not strictly distinguish between W and its associated end component.
As we already noted, stochastic-update strategies can be easily translated into "ordinary" strategies of the form σ : (SA)*S → dist(A), and vice versa (see Appendix E). Note that a finite-memory stochastic-update strategy σ can be easily implemented by a stochastic finite-state automaton that scans the history of a play "on the fly" (in fact, G^σ_s simulates this automaton). Hence, finite-memory stochastic-update strategies can be seen as natural extensions of ordinary (i.e., deterministic-update) finite-memory strategies that are implemented by deterministic finite-state automata.

A running example (I). As an example, consider the MDP G = (S, A, Act, δ) of Fig. 1a. Here, S = {s_1, . . . , s_4}, A = {a_1, . . . , a_6}, Act is denoted using the labels on lines going from actions, e.g., Act(s_1) = {a_1, a_2}, and δ is given by the arrows, e.g., δ(a_4)(s_4) = 0.3. Note that G has four end components (two different on {s_3, s_4}) and two MECs. Let s_1 be the initial state and M = {m_1, m_2}. Consider a stochastic-update finite-memory strategy σ = (σ_u, σ_n, α) where α chooses m_1 deterministically, and σ_n(m_1, s_1) = [a_1 → 0.5, a_2 → 0.5], σ_n(m_2, s_3) = [a_4 → 1], and otherwise σ_n chooses self-loops. The memory update function σ_u leaves the memory intact except for the case σ_u(m_1, s_3), where both m_1 and m_2 are chosen with probability 0.5. The play G^σ_{s_1} is depicted in Fig. 1c.

Fig. 1: Example MDPs. ((a) the MDP G; (b) example of insufficiency of memoryless strategies; (c) the play G^σ_{s_1}.)
III. MAIN RESULTS
In this paper we establish basic results about Markov decision processes with expectation and satisfaction objectives specified by multiple limit average (or mean payoff ) functions. We adopt the variant where rewards are assigned to edges (i.e., actions) rather than states of a given MDP.
Let G = (S, A, Act, δ) be an MDP, and r : A → Q a reward function. Note that r may also take negative values. For every j ∈ N, let A_j : Runs_G → A be a function which to every run ω ∈ Runs_G assigns the j-th action of ω. Since the limit average function lr(r) : Runs_G → R given by

lr(r)(ω) = lim_{T→∞} (1/T) ∑_{t=1}^{T} r(A_t(ω))

may be undefined for some runs, we consider its lower and upper approximations lr_inf(r) and lr_sup(r), defined for all ω ∈ Runs_G as follows:

lr_inf(r)(ω) = lim inf_{T→∞} (1/T) ∑_{t=1}^{T} r(A_t(ω)),
lr_sup(r)(ω) = lim sup_{T→∞} (1/T) ∑_{t=1}^{T} r(A_t(ω)).
For a vector r = (r 1 , . . . , r k ) of reward functions, we similarly define the R k -valued functions lr( r) = (lr(r 1 ), . . . , lr(r k )), lr inf ( r) = (lr inf (r 1 ), . . . , lr inf (r k )), lr sup ( r) = (lr sup (r 1 ), . . . , lr sup (r k )).
Now we introduce the expectation and satisfaction objectives determined by r.
• The expectation objective amounts to maximizing or minimizing the expected value of lr(r⃗). Since lr(r⃗) may be undefined for some runs, we actually aim at maximizing the expected value of lr_inf(r⃗) or minimizing the expected value of lr_sup(r⃗) (wrt. componentwise ordering ≤).
• The satisfaction objective means maximizing the probability of all runs where lr(r⃗) stays above or below a given vector v⃗. Technically, we aim at maximizing the probability of all runs where lr_inf(r⃗) ≥ v⃗ or lr_sup(r⃗) ≤ v⃗.
The expectation objective is relevant in situations when we are interested in the average or aggregate behaviour of many instances of a system; in contrast, the satisfaction objective is relevant when we are interested in particular executions of a system and wish to optimize the probability of generating the desired executions. Since lr_inf(r⃗) = −lr_sup(−r⃗), the problems of maximizing and minimizing the expected value of lr_inf(r⃗) and lr_sup(r⃗) are dual. Therefore, we consider just the problem of maximizing the expected value of lr_inf(r⃗). For the same reason, we consider only the problem of maximizing the probability of all runs where lr_inf(r⃗) ≥ v⃗.
If k (the dimension of r⃗) is at least two, there might be several incomparable solutions to the expectation objective; and if v⃗ is slightly changed, the achievable probability of all runs satisfying lr_inf(r⃗) ≥ v⃗ may change considerably. Therefore, we aim not only at constructing a particular solution, but at characterizing and approximating the whole space of achievable solutions for the expectation/satisfaction objective. Let s ∈ S be some (initial) state of G. We define the sets AcEx(lr_inf(r⃗)) and AcSt(lr_inf(r⃗)) of achievable vectors for the expectation and satisfaction objectives as follows:

AcEx(lr_inf(r⃗)) = { v⃗ | ∃σ ∈ Σ : E^σ_s[lr_inf(r⃗)] ≥ v⃗ },
AcSt(lr_inf(r⃗)) = { (ν, v⃗) | ∃σ ∈ Σ : P^σ_s[lr_inf(r⃗) ≥ v⃗] ≥ ν }.

Intuitively, if v⃗, u⃗ are achievable vectors such that v⃗ > u⃗, then v⃗ represents a "strictly better" solution than u⃗. The set of "optimal" solutions defines the Pareto curve for AcEx(lr_inf(r⃗)) and AcSt(lr_inf(r⃗)). In general, the Pareto curve for a given set Q ⊆ R^k is the set P of all minimal vectors v⃗ ∈ R^k such that v⃗ < u⃗ for no u⃗ ∈ Q. Note that P may contain vectors that are not in Q (for example, if Q = {x ∈ R | x < 2}, then P = {2}). However, every vector v⃗ ∈ P is "almost" in Q in the sense that for every ε > 0 there is u⃗ ∈ Q with v⃗ ≤ u⃗ + ε⃗, where ε⃗ = (ε, . . . , ε). This naturally leads to the notion of an ε-approximate Pareto curve, P_ε, which is a subset of Q such that for every vector v⃗ ∈ P of the Pareto curve there is a vector u⃗ ∈ P_ε such that v⃗ ≤ u⃗ + ε⃗. Note that P_ε is not unique.
A running example (II). Consider again the MDP G of Fig. 1a, and the strategy σ constructed in our running example (I). Let r⃗ = (r_1, r_2), where r_1(a_6) = 1, r_2(a_3) = 2, r_2(a_4) = 1, and otherwise the rewards are zero. Let

ω = (s_1, m_1, a_2)(s_3, m_1, a_5)((s_3, m_2, a_4)(s_4, m_2, a_6))^ω.

Then lr(r⃗)(ω) = (0.5, 0.5). Considering the expectation objective, we have that E^σ_{s_1}[lr_inf(r⃗)] = (3/52, 22/13). Considering the satisfaction objective, we have that (0.5, 0, 2) ∈ AcSt(r⃗) because P^σ_{s_1}[lr_inf(r⃗) ≥ (0, 2)] = 0.5. The Pareto curve for AcEx(lr_inf(r⃗)) consists of the points {(3x/13, 10x/13 + 2(1−x)) | 0 ≤ x ≤ 0.5}, and the Pareto curve for AcSt(lr_inf(r⃗)) is
{(1, 0, 2)} ∪ {(0.5, x, 1 − x) | 0 < x ≤ 10/13}.
Now we are equipped with all the notions needed for understanding the main results of this paper. Our work is motivated by the six fundamental questions given in Section I. In the next subsections we give detailed answers to these questions.
A. Expectation objectives
The answers to Q.1-Q.6 for the expectation objectives are the following:
A.1 2-memory stochastic-update strategies are sufficient for all achievable solutions, i.e., for all v⃗ ∈ AcEx(lr_inf(r⃗)) there is a 2-memory stochastic-update strategy σ satisfying E^σ_s[lr_inf(r⃗)] ≥ v⃗.
A.2 The Pareto curve P for AcEx(lr_inf(r⃗)) is a subset of AcEx(lr_inf(r⃗)), i.e., all optimal solutions are achievable.
A.3 There is a polynomial time algorithm which, given v⃗ ∈ Q^k, decides whether v⃗ ∈ AcEx(lr_inf(r⃗)).
A.4 If v⃗ ∈ AcEx(lr_inf(r⃗)), then there is a 2-memory stochastic-update strategy σ constructible in polynomial time satisfying E^σ_s[lr_inf(r⃗)] ≥ v⃗.
A.5 There is a polynomial time algorithm which, given v⃗ ∈ R^k, decides whether v⃗ belongs to the Pareto curve for AcEx(lr_inf(r⃗)).
A.6 AcEx(lr_inf(r⃗)) is a convex hull of finitely many vectors that can be computed in exponential time. The Pareto curve for AcEx(lr_inf(r⃗)) is a union of all facets of AcEx(lr_inf(r⃗)) whose vectors are not strictly dominated by vectors of AcEx(lr_inf(r⃗)). Further, an ε-approximate Pareto curve for AcEx(lr_inf(r⃗)) is computable in time polynomial in 1/ε, |G|, and max_{a∈A} max_{1≤i≤k} |r_i(a)|, and exponential in k.

Let us note that A.1 is tight in the sense that neither memoryless randomized nor pure strategies are sufficient for achievable solutions. This is witnessed by the MDP of Fig. 1b with reward functions r_1, r_2 such that r_i(a_i) = 1 and r_i(a_j) = 0 for i ≠ j. Consider a strategy σ which initially selects between the actions a_1 and b randomly (with probability 0.5) and then keeps selecting a_1 or a_2, whichever is available. Hence, E^σ_{s_1}[lr_inf((r_1, r_2))] = (0.5, 0.5). However, the vector (0.5, 0.5) is not achievable by a strategy σ′ which is memoryless or pure, because then we inevitably have that E^{σ′}_{s_1}[lr_inf((r_1, r_2))] is equal either to (0, 1) or (1, 0).
On the other hand, the 2-memory stochastic-update strategy constructed in the proof of Theorem 1 can be efficiently transformed into a finite-memory deterministic-update randomized strategy, and hence the answers A.1 and A.4 are also valid for finite-memory deterministic-update randomized strategies (see Appendix C). Observe that A.2 can be seen as a generalization of the well-known result for single payoff functions which says that finite-state MDPs with mean-payoff objectives have optimal strategies (in this case, the Pareto curve consists of a single number known as the "value"). Also observe that A.2 does not hold for infinite-state MDPs (a counterexample is trivial to construct).
Finally, note that if σ is a finite-memory stochastic-update strategy, then G σ s is a finite-state Markov chain. Hence, for almost all runs ω in G σ s we have that lr( r)(ω) exists and it is equal to lr inf ( r)(ω). This means that there is actually no difference between maximizing the expected value of lr inf ( r) and the expected value of lr( r).
B. Satisfaction objectives
The answers to Q.1-Q.6 for the satisfaction objectives are presented below.
B.1 Achievable vectors require strategies with infinite memory in general. However, memoryless randomized strategies are sufficient for ε-approximate achievable vectors, i.e., for every ε > 0 and (ν, v⃗) ∈ AcSt(lr_inf(r⃗)), there is a memoryless randomized strategy σ with P^σ_s[lr_inf(r⃗) ≥ v⃗ − ε⃗] ≥ ν − ε. Here ε⃗ = (ε, . . . , ε).
B.2 The Pareto curve P for AcSt(lr_inf(r⃗)) is a subset of AcSt(lr_inf(r⃗)), i.e., all optimal solutions are achievable.
B.3 There is a polynomial time algorithm which, given ν ∈ [0, 1] and v⃗ ∈ Q^k, decides whether (ν, v⃗) ∈ AcSt(lr_inf(r⃗)).
B.4 If (ν, v⃗) ∈ AcSt(lr_inf(r⃗)), then for every ε > 0 there is a memoryless randomized strategy σ constructible in polynomial time such that P^σ_s[lr_inf(r⃗) ≥ v⃗ − ε⃗] ≥ ν − ε.
B.5 There is a polynomial time algorithm which, given ν ∈ [0, 1] and v⃗ ∈ R^k, decides whether (ν, v⃗) belongs to the Pareto curve for AcSt(lr_inf(r⃗)).
B.6 The Pareto curve P for AcSt(lr_inf(r⃗)) may be neither connected, nor closed. However, P is a union of finitely many sets whose closures are convex polytopes, and, perhaps surprisingly, the set {ν | (ν, v⃗) ∈ P} is always finite. The sets in the union that gives P (resp. the inequalities that define them) can be computed. Further, an ε-approximate Pareto curve for AcSt(lr_inf(r⃗)) is computable in time polynomial in 1/ε, |G|, and max_{a∈A} max_{1≤i≤k} |r_i(a)|, and exponential in k.

The algorithms of B.3 and B.4 are polynomial in the size of G and the size of binary representations of v⃗ and 1/ε. The result B.1 is again tight. In Appendix D we show that memoryless pure strategies are insufficient for ε-approximate achievable vectors, i.e., there are ε > 0 and (ν, v⃗) ∈ AcSt(lr_inf(r⃗)) such that for every memoryless pure strategy σ we have that P^σ_s[lr_inf(r⃗) ≥ v⃗ − ε⃗] < ν − ε. As noted in B.1, a strategy σ achieving a given vector (ν, v⃗) ∈ AcSt(lr_inf(r⃗)) may require infinite memory. Still, our proof of B.1 reveals a "recipe" for constructing such a σ by simulating the memoryless randomized strategies σ_ε which ε-approximate (ν, v⃗) (intuitively, for smaller and smaller ε, the strategy σ simulates σ_ε longer and longer; the details are discussed in Section V). Hence, for almost all runs ω in G^σ_s we again have that lr(r⃗)(ω) exists and it is equal to lr_inf(r⃗)(ω).
IV. PROOFS FOR EXPECTATION OBJECTIVES
The technical core of our results for expectation objectives is the following:
Theorem 1: Let G = (S, A, Act, δ) be an MDP, r⃗ = (r_1, . . . , r_k) a tuple of reward functions, and v⃗ ∈ R^k. Then there exists a system of linear inequalities L, constructible in polynomial time, such that
• every nonnegative solution of L induces a 2-memory stochastic-update strategy σ satisfying E^σ_{s_0}[lr_inf(r⃗)] ≥ v⃗;
• if v⃗ ∈ AcEx(lr_inf(r⃗)), then L has a nonnegative solution.

Fig. 2: System L of linear inequalities. (We define 1_{s_0}(s) = 1 if s = s_0, and 1_{s_0}(s) = 0 otherwise.)
As we already noted in Section I, the proof of Theorem 1 is non-trivial and it is based on novel techniques and observations. Our results about expectation objectives are corollaries to Theorem 1 and the arguments developed in its proof. For the rest of this section, we fix an MDP G, a vector of rewards, r = (r 1 , . . . , r k ), and an initial state s 0 (in the considered plays of G, the initial state is not written explicitly, unless it is different from s 0 ).
Consider the system L of Fig. 2 (parametrized by v). Obviously, L is constructible in polynomial time. Probably most demanding are Eqns. (1) and Eqns. (4). The equations of (1) are analogous to similar equalities in [6], and their purpose is clarified at the end of the proof of Proposition 2. The meaning of Eqns. (4) is explained in Lemma 1.
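The constraints that are spelled out explicitly in the text can also be checked mechanically. The following Python sketch (ours, using scipy and illustrative identifiers) encodes only Eqns. (4), Inqs. (5), nonnegativity, and a normalization of the x-variables, so it corresponds to the strongly connected case treated in Section V rather than to the full system L of Fig. 2, whose reachability constraints (1)-(3) are omitted here.

from scipy.optimize import linprog

def achievable_in_scc(states, actions, act, delta, rewards, v):
    """Feasibility of the fragment of L made of Eqns. (4), Inqs. (5),
    nonnegativity, and the normalization sum_a x_a = 1.
    delta[a][s] is a transition probability, rewards[i][a] the i-th reward of
    action a, and v the target vector.  Returns True iff a solution exists."""
    n = len(actions)
    col = {a: j for j, a in enumerate(actions)}

    A_eq, b_eq = [], []
    for s in states:                       # Eqns. (4): invariant flow on actions
        row = [0.0] * n
        for a in actions:
            row[col[a]] += delta[a].get(s, 0.0)
        for a in act[s]:
            row[col[a]] -= 1.0
        A_eq.append(row); b_eq.append(0.0)
    A_eq.append([1.0] * n); b_eq.append(1.0)   # normalization: sum_a x_a = 1

    A_ub = [[-r[a] for a in actions] for r in rewards]   # Inqs. (5), rewritten as <=
    b_ub = [-vi for vi in v]

    res = linprog(c=[0.0] * n, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq)    # default bounds keep x >= 0
    return res.success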
As both directions of Theorem 1 are technically involved, we prove them separately as Propositions 1 and 2.
Proposition 1: Every nonnegative solution of the system L induces a 2-memory stochastic-update strategy σ satisfying E^σ_{s_0}[lr_inf(r⃗)] ≥ v⃗.
Proof of Proposition 1: First, let us consider Eqn. (4) of L. Intuitively, this equation is solved by an "invariant" distribution on actions, i.e., each solution gives frequencies of actions (up to a multiplicative constant), defined for all a ∈ A, s ∈ S, and σ ∈ Σ by

freq(σ, s, a) := lim_{T→∞} (1/T) ∑_{t=1}^{T} P^σ_s[A_t = a],

assuming that the defining limit exists (which might not be the case; cf. the proof of Proposition 2). We prove the following:
Lemma 1: Assume that assigning (nonnegative) values x̄_a to x_a solves Eqn. (4). Then there is a memoryless strategy ξ such that for every BSCC D of G^ξ, every s ∈ D ∩ S, and every a ∈ D ∩ A, we have that freq(ξ, s, a) equals a common value freq(ξ, D, a) := x̄_a / ∑_{a′∈D∩A} x̄_{a′}.
A proof of Lemma 1 is given in Appendix A. Assume that the system L is solved by assigning nonnegative values x̄_a to x_a and ȳ_χ to y_χ, where χ ∈ A ∪ S. Let ξ be the strategy of Lemma 1. Using Eqns. (1), (2), and (3), we will define a 2-memory stochastic-update strategy σ as follows. The strategy σ has two memory elements, m_1 and m_2. A run of G^σ starts in s_0 with a given distribution on memory elements (see below). Then σ plays according to a suitable memoryless strategy (constructed below) until the memory changes to m_2, and then it starts behaving as ξ forever. Given a BSCC D of G^ξ, we denote by P^σ_{s_0}[switch to ξ in D] the probability that σ switches from m_1 to m_2 while in D. We construct σ so that

P^σ_{s_0}[switch to ξ in D] = ∑_{a∈D∩A} x̄_a.    (6)
Then freq(σ, s_0, a) = P^σ_{s_0}[switch to ξ in D] · freq(ξ, D, a) = x̄_a. Finally, we obtain the following:

E^σ_{s_0}[lr_inf(r_i)] = ∑_{a∈A} r_i(a) · x̄_a.    (7)
A complete derivation of Eqn. (7) is given in Appendix A2.
Note that the right-hand side of Eqn. (7) is greater than or equal to v i by Inequality (5) of L. So, it remains to construct the strategy σ with the desired "switching" property expressed by Eqn. (6). Roughly speaking, we proceed in two steps.
1. We construct a finite-memory stochastic-update strategy σ̃ satisfying Eqn. (6). The strategy σ̃ is constructed so that it initially behaves as a certain finite-memory stochastic-update strategy, but eventually this mode is "switched" to the strategy ξ, which is followed forever.
2. The only problem with σ̃ is that it may use more than two memory elements in general. This is solved by applying the results of [6] and reducing the "initial part" of σ̃ (i.e., the part before the switch) into a memoryless strategy. Thus, we transform σ̃ into an "equivalent" strategy σ which is 2-memory stochastic-update.
Now we elaborate the two steps.
Step 1. For every MEC C of G, we denote by y_C the number ∑_{s∈C} ȳ_s = ∑_{a∈A∩C} x̄_a. By combining the solution of L with the results of Sections 3 and 5 of [6] (the details are given in Appendix A, Lemma 2), one can construct a finite-memory stochastic-update strategy ζ which eventually stays in each MEC C with probability y_C.
The strategy σ̃ works as follows. For a run initiated in s_0, the strategy σ̃ plays according to ζ until a BSCC of G^ζ is reached. This means that every possible continuation of the path stays in the current MEC C of G. Assume that C has states s_1, . . . , s_k. We denote by x̄_s the sum ∑_{a∈Act(s)} x̄_a. At this point, the strategy σ̃ changes its behavior as follows: First, the strategy σ̃ strives to reach s_1 with probability one. Upon reaching s_1, it chooses (randomly, with probability x̄_{s_1}/y_C) either to behave as ξ forever, or to follow on to s_2. If the strategy σ̃ chooses to go on to s_2, it strives to reach s_2 with probability one. Upon reaching s_2, the strategy σ̃ chooses (randomly, with probability x̄_{s_2}/(y_C − x̄_{s_1})) either to behave as ξ forever, or to follow on to s_3, and so on, till s_k. That is, the probability of switching to ξ in s_i is x̄_{s_i}/(y_C − ∑_{j=1}^{i−1} x̄_{s_j}).

Since ζ stays in a MEC C with probability y_C, the probability that the strategy σ̃ switches to ξ in s_i is equal to x̄_{s_i}. However, then for every BSCC D of G^ξ satisfying D ∩ C ≠ ∅ (and thus D ⊆ C) we have that the strategy σ̃ switches to ξ in a state of D with probability ∑_{s∈D∩S} x̄_s = ∑_{a∈D∩A} x̄_a. Hence, σ̃ satisfies Eqn. (6).
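The telescoping behind these switching probabilities can be checked numerically. In the Python sketch below (identifiers are ours), xbar[i] plays the role of x̄_{s_i} and y_C = ∑_i x̄_{s_i}; the conditional probabilities used at the i-th stop indeed reproduce the unconditional switching probabilities x̄_{s_i}.

def conditional_switch_probs(xbar):
    """xbar[i] is the desired unconditional probability of switching at s_{i+1};
    returns the conditional probabilities the strategy uses at each s_i."""
    y_C = sum(xbar)
    conds, spent = [], 0.0
    for x in xbar:
        conds.append(x / (y_C - spent))
        spent += x
    return conds

def unconditional_from_conditional(conds, y_C):
    """Folds the conditional probabilities back into unconditional ones."""
    out, not_switched = [], y_C
    for p in conds:
        out.append(not_switched * p)
        not_switched *= (1.0 - p)
    return out

xbar = [0.2, 0.1, 0.4]
recovered = unconditional_from_conditional(conditional_switch_probs(xbar), sum(xbar))
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, xbar))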
Step 2. Now we show how to reduce the first phase of σ̃ (before the switch to ξ) into a memoryless strategy, using the results of [6, Section 3]. Unfortunately, these results are not applicable directly. We need to modify the MDP G into a new MDP G′ as follows: For each state s we add a new absorbing state, d_s. The only available action for d_s leads to a loop transition back to d_s with probability 1. We also add a new action, a_{d_s}, to every s ∈ S. The distribution associated with a_{d_s} assigns probability 1 to d_s. Let us consider a finite-memory stochastic-update strategy, σ′, for G′ defined as follows. The strategy σ′ behaves as σ̃ before the switch to ξ. Once σ̃ switches to ξ, say in a state s of G with probability p_s, the strategy σ′ chooses the action a_{d_s} with probability p_s. It follows that the probability of σ̃ switching in s is equal to the probability of reaching d_s in G′ under σ′. By [6, Theorem 3.2], there is a memoryless strategy, σ′′, for G′ that reaches d_s with probability p_s. We define σ in G to behave as σ′′ with the exception that, in every state s, instead of choosing an action a_{d_s} with probability p_s it switches to behave as ξ with probability p_s (which also means that the initial distribution on memory elements assigns p_{s_0} to m_2). Then, clearly, σ satisfies Eqn. (6) because

P^σ_{s_0}[switch in D] = ∑_{s∈D} P^{σ′′}_{s_0}[fire a_{d_s}] = ∑_{s∈D} P^{σ′}_{s_0}[fire a_{d_s}] = P^{σ̃}_{s_0}[switch in D] = ∑_{a∈D∩A} x̄_a.
This concludes the proof of Proposition 1.
Proposition 2: If v⃗ ∈ AcEx(lr_inf(r⃗)), then L has a nonnegative solution.
Proof of Proposition 2: Let ϱ ∈ Σ be a strategy such that E^ϱ_{s_0}[lr_inf(r⃗)] ≥ v⃗. In general, the frequencies freq(ϱ, s_0, a) of the actions may not be well defined, because the defining limits may not exist. A crucial trick to overcome this difficulty is to pick suitable "related" values, f(a), lying between lim inf_{T→∞} (1/T) ∑_{t=1}^{T} P^ϱ_{s_0}[A_t = a] and lim sup_{T→∞} (1/T) ∑_{t=1}^{T} P^ϱ_{s_0}[A_t = a], which can be safely substituted for x_a in L. Since every infinite sequence contains an infinite convergent subsequence, there is an increasing sequence of indices, T_0, T_1, . . ., such that the following limit exists for each action a ∈ A:

f(a) := lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t = a].
Setting x_a := f(a) for all a ∈ A satisfies Inqs. (5) and Eqns. (4) of L. Indeed, the former follows from E^ϱ_{s_0}[lr_inf(r⃗)] ≥ v⃗ and the following inequality, which holds for all 1 ≤ i ≤ k:

∑_{a∈A} r_i(a) · f(a) ≥ E^ϱ_{s_0}[lr_inf(r_i)].    (8)
A proof of Inequality (8) is given in Appendix A3. To prove that Eqns. (4) are satisfied, it suffices to show that for all s ∈ S we have

∑_{a∈A} f(a) · δ(a)(s) = ∑_{a∈Act(s)} f(a).    (9)

A proof of Eqn. (9) is given in Appendix A4. Now we have to set the values for y_χ, χ ∈ A ∪ S, and prove that they satisfy the rest of L when the values f(a) are assigned to x_a. Note that every run of G^ϱ eventually stays in some MEC of G (cf., e.g., [5, Proposition 3.1]). For every MEC C of G, let y_C be the probability of all runs in G^ϱ that eventually stay in C. Note that

∑_{a∈A∩C} f(a) = ∑_{a∈A∩C} lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t = a]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} ∑_{a∈A∩C} P^ϱ_{s_0}[A_t = a]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t ∈ C] = y_C.    (10)

Here the last equality follows from the fact that lim_{ℓ→∞} P^ϱ_{s_0}[A_{T_ℓ} ∈ C] is equal to the probability of all runs in G^ϱ that eventually stay in C (recall that almost every run stays eventually in a MEC of G) and the fact that the Cesàro sum of a convergent sequence is equal to the limit of the sequence.
To obtain y_a and y_s, we need to simplify the behavior of ϱ before reaching a MEC, for which we use the results of [6]. As in the proof of Proposition 1, we first need to modify the MDP G into another MDP G′ as follows: For each state s we add a new absorbing state, d_s. The only available action for d_s leads to a loop transition back to d_s with probability 1. We also add a new action, a_{d_s}, to every s ∈ S. The distribution associated with a_{d_s} assigns probability 1 to d_s. By [6, Theorem 3.2], the existence of ϱ implies the existence of a memoryless pure strategy ζ for G′ such that

∑_{s∈C} P^ζ_{s_0}[Reach(d_s)] = y_C.    (11)
Let U_a be a function over the runs in G′ returning the (possibly infinite) number of times the action a is used. We are now ready to define the assignment for the variables y_χ of L:

y_a := E^ζ_{s_0}[U_a]   for all a ∈ A,
y_s := E^ζ_{s_0}[U_{a_{d_s}}] = P^ζ_{s_0}[Reach(d_s)]   for all s ∈ S.
Note that [6, Lemma 3.3] ensures that all y_a and y_s are indeed well-defined finite values, and satisfy Eqns. (1) of L. Eqns. (3) of L are satisfied due to Eqns. (11) and (10). Eqn. (11) together with ∑_{a∈A} f(a) = 1 imply Eqn. (2) of L. This completes the proof of Proposition 2.

The item A.1 in Section III-A follows directly from Theorem 1. Let us analyze A.2. Suppose v⃗ is a point of the Pareto curve. Consider the system L′ of linear inequalities obtained from L by replacing the constants v_i in Inqs. (5) with new variables z_i. Let Q ⊆ R^n be the projection of the set of solutions of L′ to z_1, . . . , z_n. From Theorem 1 and the definition of the Pareto curve, the (Euclidean) distance of v⃗ to Q is 0. Because the set of solutions of L′ is a closed set, Q is also closed and thus v⃗ ∈ Q. This gives us a solution to L with variables z_i having values v_i, and we can use Theorem 1 to get a strategy witnessing that v⃗ ∈ AcEx(lr_inf(r⃗)).

Now consider the items A.3 and A.4. The system L is linear, and hence the problem whether v⃗ ∈ AcEx(lr_inf(r⃗)) is decidable in polynomial time by employing polynomial time algorithms for linear programming. A 2-memory stochastic-update strategy σ satisfying E^σ_s[lr_inf(r⃗)] ≥ v⃗ can be computed as follows (note that the proof of Proposition 1 is not fully constructive, so we cannot apply this proposition immediately). First, we find a solution of the system L, and we denote by x̄_a the value assigned to x_a. Let (T_1, B_1), . . . , (T_n, B_n) be the end components such that a ∈ ∪_{i=1}^{n} B_i iff x̄_a > 0, and T_1, . . . , T_n are pairwise disjoint. We construct another system of linear inequalities consisting of Eqns. (1) of L and the equations ∑_{s∈T_i} y_s = ∑_{s∈T_i} ∑_{a∈Act(s)} x̄_a for all 1 ≤ i ≤ n. Due to [6], there is a solution to this system iff in the MDP G′ from the proof of Proposition 1 there is a strategy that for every i reaches d_s for s ∈ T_i with probability ∑_{s∈T_i} ∑_{a∈Act(s)} x̄_a. Such a strategy indeed exists (consider, e.g., the strategy σ′ from the proof of Proposition 1). Thus, there is a solution to the above system, and we can denote by ŷ_s and ŷ_a the values assigned to y_s and y_a. We define σ by

σ_n(s, m_1)(a) = ŷ_a / ∑_{a′∈Act(s)} ŷ_{a′},   σ_n(s, m_2)(a) = x̄_a / ∑_{a′∈Act(s)} x̄_{a′},

and further σ_u(a, s, m_1)(m_2) = ŷ_s, σ_u(a, s, m_2)(m_2) = 1, and the initial memory distribution assigns (1 − ŷ_{s_0}) and ŷ_{s_0} to m_1 and m_2, respectively. Due to [6] we have P^σ_{s_0}[change memory to m_2 in s] = ŷ_s, and the rest follows similarly as in the proof of Proposition 1.
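The assembly of σ from the computed values is purely mechanical. The Python sketch below (identifiers are ours; the formulas are copied from the construction above) builds the two next-move distributions and the memory-update function from the values x̄_a, ŷ_a, and ŷ_s.

def next_move(values, act):
    """values[a]: weight of action a; act[s]: actions enabled at s.
    Returns sigma_n(s, .)(a) = values[a] / sum of values over Act(s)."""
    sigma_n = {}
    for s, actions in act.items():
        total = sum(values[a] for a in actions)
        if total > 0:
            sigma_n[s] = {a: values[a] / total for a in actions}
        else:
            sigma_n[s] = {a: 1.0 / len(actions) for a in actions}  # arbitrary choice
    return sigma_n

def two_memory_strategy(xbar, yhat_a, yhat_s, act, s0):
    # next-move distributions for the two memory elements m1 and m2
    sigma_n = {"m1": next_move(yhat_a, act), "m2": next_move(xbar, act)}
    # memory updates: from m1 switch to m2 in state s with probability yhat_s[s];
    # m2 is absorbing.  Initial memory: m2 with probability yhat_s[s0].
    sigma_u = {("m1", s): {"m2": yhat_s[s], "m1": 1.0 - yhat_s[s]} for s in act}
    sigma_u.update({("m2", s): {"m2": 1.0} for s in act})
    alpha = {"m1": 1.0 - yhat_s[s0], "m2": yhat_s[s0]}
    return sigma_n, sigma_u, alpha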
The item A.5 can be proved as follows: To test that v⃗ ∈ AcEx(lr_inf(r⃗)) lies on the Pareto curve, we turn the system L into a linear program LP by adding the objective to maximize ∑_{1≤i≤n} ∑_{a∈A} x_a · r_i(a). Then we check that there is no better solution than ∑_{1≤i≤n} v_i.
Finally, the item A.6 is obtained by considering the system L′ above and computing all (exponentially many) vertices of the polytope of all solutions. Then we compute projections of these vertices onto the dimensions z_1, . . . , z_n and retrieve all the maximal vertices. Moreover, if for every v⃗ ∈ {ℓ·ε | ℓ ∈ Z ∧ −M_r ≤ ℓ·ε ≤ M_r}^k, where M_r = max_{a∈A} max_{1≤i≤k} |r_i(a)|, we decide whether v⃗ ∈ AcEx(lr_inf(r⃗)), we can easily construct an ε-approximate Pareto curve.
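The grid-based construction just mentioned can be written down in a few lines. In the Python sketch below (ours), the parameter achievable stands for the polynomial-time membership test of item A.3; the enumeration is exponential in k, matching the bound stated in A.6.

import itertools, math

def approx_pareto(achievable, k, eps, M_r):
    """Enumerate the grid {l*eps | -M_r <= l*eps <= M_r}^k, keep the achievable
    points, and discard those dominated by another achievable grid point."""
    ls = range(-math.floor(M_r / eps), math.floor(M_r / eps) + 1)
    grid = [tuple(l * eps for l in point) for point in itertools.product(ls, repeat=k)]
    good = [v for v in grid if achievable(v)]
    dominates = lambda u, v: all(ui >= vi for ui, vi in zip(u, v)) and u != v
    return [v for v in good if not any(dominates(u, v) for u in good)]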
V. PROOFS FOR SATISFACTION OBJECTIVES
In this section we prove the items B.1-B.6 of Section III-B. Let us fix a MDP G, a vector of rewards, r = (r 1 , . . . , r k ), and an initial state s 0 . We start by assuming that the MDP G is strongly connected (i.e., (S, A) is an end component).
Proposition 3: Assume that G is strongly connected and that there is a strategy π such that P π s0 [lr inf ( r) ≥ v] > 0. Then the following is true.
1. There is a strategy ξ satisfying P^ξ_s[lr_inf(r⃗) ≥ v⃗] = 1 for all s ∈ S.
2. For each ε > 0 there is a memoryless randomized strategy ξ_ε satisfying P^{ξ_ε}_s[lr_inf(r⃗) ≥ v⃗ − ε⃗] = 1 for all s ∈ S.
Moreover, the problem whether there is some π such that P^π_{s_0}[lr_inf(r⃗) ≥ v⃗] > 0 is decidable in polynomial time. Strategies ξ_ε are computable in time polynomial in the size of G, the size of the binary representation of r⃗, and 1/ε.

Proof: By [2], [9], P^π_{s_0}[lr_inf(r⃗) ≥ v⃗] > 0 implies that there is a strategy ξ such that P^ξ_{s_0}[lr_inf(r⃗) ≥ v⃗] = 1 (the details are given in Appendix B). This gives us item 1 of Proposition 3 and also immediately implies v⃗ ∈ AcEx(lr_inf(r⃗)). It follows that there are nonnegative values x̄_a for all a ∈ A such that assigning x̄_a to x_a solves Eqns. (4) and (5) of the system L (see Fig. 2). Let us assume, w.l.o.g., that ∑_{a∈A} x̄_a = 1.
Lemma 1 gives us a memoryless randomized strategy ζ such that for all BSCCs D of G^ζ, all s ∈ D ∩ S, and all a ∈ D ∩ A we have that freq(ζ, s, a) = x̄_a / ∑_{a∈D∩A} x̄_a. We denote by freq(ζ, D, a) the value x̄_a / ∑_{a∈D∩A} x̄_a. Now we are ready to prove item 2 of Proposition 3. Let us fix ε > 0. We obtain ξ_ε by a suitable perturbation of the strategy ζ in such a way that all actions get positive probabilities and the frequencies of actions change only slightly. There exists an arbitrarily small (strictly) positive solution x′_a of Eqns. (4) of the system L (it suffices to consider a strategy τ which always takes the uniform distribution over the actions in every state and then assign freq(τ, s_0, a)/N to x_a for sufficiently large N). As the system of Eqns. (4) is linear and homogeneous, assigning x̄_a + x′_a to x_a also solves this system, and Lemma 1 gives us a strategy ξ_ε satisfying freq(ξ_ε, s_0, a) = (x̄_a + x′_a)/X. Here X = ∑_{a′∈A} (x̄_{a′} + x′_{a′}) = 1 + ∑_{a′∈A} x′_{a′}. We may safely assume that ∑_{a′∈A} x′_{a′} ≤ ε/(2·M_r), where M_r = max_{a∈A} max_{1≤i≤k} |r_i(a)|. Thus, we obtain

∑_{a∈A} freq(ξ_ε, s_0, a) · r_i(a) ≥ v_i − ε.    (12)
A proof of Inequality (12) is given in Appendix B1. As G^{ξ_ε} is strongly connected, almost all runs ω of G^{ξ_ε} initiated in s_0 satisfy

lr_inf(r⃗)(ω) = ∑_{a∈A} freq(ξ_ε, s_0, a) · r⃗(a) ≥ v⃗ − ε⃗.
This finishes the proof of item 2.
Concerning the complexity of computing ξ_ε, note that the binary representation of every coefficient in L has only polynomial length. As the values x̄_a are obtained as a solution of (a part of) L, standard results from linear programming imply that each x̄_a has a binary representation computable in polynomial time. The numbers x′_a are also obtained by solving a part of L, restricted by ∑_{a′∈A} x′_{a′} ≤ ε/(2·M_r), which allows to compute a binary representation of x′_a in polynomial time. The strategy ξ_ε, defined in the proof of Proposition 3, assigns to each action only small arithmetic expressions over x̄_a and x′_a. Hence, ξ_ε is computable in polynomial time.
To prove that the problem whether there is some ξ such that P ξ s0 [lr inf ( r) ≥ v] > 0 is decidable in polynomial time, we show that whenever v ∈ AcEx(lr inf ( r)), then (1, v) ∈ AcSt(lr inf ( r)). This gives us a polynomial time algorithm by applying Theorem 1. Let v ∈ AcEx(lr inf ( r)). We show that there is a strategy ξ such that P ξ s [lr inf ( r) ≥ v] = 1. The strategy σ needs infinite memory (an example demonstrating that infinite memory is required is given in Appendix D).
Since v⃗ ∈ AcEx(lr_inf(r⃗)), there are nonnegative rational values x̄_a for all a ∈ A such that assigning x̄_a to x_a solves Eqns. (4) and (5) of the system L. Assume, without loss of generality, that ∑_{a∈A} x̄_a = 1.
Given a ∈ A, let I_a : A → {0, 1} be the function given by I_a(a) = 1 and I_a(b) = 0 for all b ≠ a. For every i ∈ N, we denote by ξ_i a memoryless randomized strategy satisfying P^{ξ_i}_s[lr_inf(I_a) ≥ x̄_a − 2^{−i−1}] = 1. Note that for every i ∈ N there is κ_i ∈ N such that for all a ∈ A and s ∈ S we get

P^{ξ_i}_s[ inf_{T≥κ_i} (1/T) ∑_{t=0}^{T} I_a(A_t) ≥ x̄_a − 2^{−i} ] ≥ 1 − 2^{−i}.
Now let us consider a sequence n_0, n_1, . . . of numbers where n_i ≥ κ_i, (∑_{j<i} n_j)/n_i ≤ 2^{−i}, and κ_{i+1}/n_i ≤ 2^{−i}. We define ξ to behave as ξ_1 for the first n_1 steps, then as ξ_2 for the next n_2 steps, then as ξ_3 for the next n_3 steps, etc. In general, denoting by N_i the sum ∑_{j<i} n_j, the strategy ξ behaves as ξ_i between the N_i-th step (inclusive) and the N_{i+1}-th step (non-inclusive).
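Such a sequence can be picked greedily. The Python sketch below (ours) assumes that the thresholds κ_0, κ_1, . . . are given as a list kappa with at least count+1 entries, and chooses each n_i as the smallest integer satisfying the three constraints n_i ≥ κ_i, ∑_{j<i} n_j ≤ 2^{−i}·n_i, and κ_{i+1} ≤ 2^{−i}·n_i.

import math

def segment_lengths(kappa, count):
    """Returns n_0, ..., n_{count-1} meeting the three constraints above."""
    ns, prefix = [], 0
    for i in range(count):
        n_i = max(kappa[i],
                  math.ceil(prefix * 2 ** i),        # sum_{j<i} n_j <= 2^{-i} * n_i
                  math.ceil(kappa[i + 1] * 2 ** i))  # kappa_{i+1}  <= 2^{-i} * n_i
        ns.append(n_i)
        prefix += n_i
    return ns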
Let us give some intuition behind ξ. The numbers in the sequence n_0, n_1, . . . grow rapidly, so that after ξ_i is simulated for n_i steps, the part of the history when the ξ_j for j < i were simulated becomes relatively small and has only a minor impact on the current average reward (this is ensured by the condition (∑_{j<i} n_j)/n_i ≤ 2^{−i}). This gives us that almost every run has infinitely many prefixes on which the average reward w.r.t. I_a is arbitrarily close to x̄_a. To get that x̄_a is also the limit average reward, one only needs to be careful when the strategy ξ ends behaving as ξ_i and starts behaving as ξ_{i+1}, because then, up to the first κ_{i+1} steps, we have no guarantee that the average reward is close to x̄_a. This part is taken care of by picking n_i so large that the contribution (to the average reward) of the n_i steps according to ξ_i prevails over fluctuations introduced by the first κ_{i+1} steps according to ξ_{i+1} (this is ensured by the condition κ_{i+1}/n_i ≤ 2^{−i}). Let us now prove the correctness of the definition of ξ formally. We prove that almost all runs ω of G^ξ satisfy

lim inf_{T→∞} (1/T) ∑_{t=0}^{T} I_a(A_t(ω)) ≥ x̄_a.
Denote by E_i the set of all runs ω = s_0 a_0 s_1 a_1 . . . of G^ξ such that for some κ_i ≤ d ≤ n_i we have

(1/d) ∑_{j=N_i}^{N_i+d} I_a(a_j) < x̄_a − 2^{−i}.

We have P^ξ_{s_0}[E_i] ≤ 2^{−i} and thus ∑_{i=1}^{∞} P^ξ_{s_0}[E_i] ≤ ∑_{i=1}^{∞} 2^{−i} = 1 < ∞.
By the Borel-Cantelli lemma [15], almost surely only finitely many of the E_i take place. Thus, almost every run ω = s_0 a_0 s_1 a_1 . . . of G^ξ satisfies the following: there is ℓ such that for all i ≥ ℓ and all κ_i ≤ d ≤ n_i we have that

(1/d) ∑_{j=N_i}^{N_i+d} I_a(a_j) ≥ x̄_a − 2^{−i}.

Consider T ∈ N such that N_i ≤ T < N_{i+1}, where i > ℓ.
We prove the following (see Appendix B2):

(1/T) ∑_{t=0}^{T} I_a(a_t) ≥ (x̄_a − 2^{−i})(1 − 2^{1−i}).    (13)
Since the above bound converges to x̄_a as i (and thus also T) goes to ∞, we obtain

lim inf_{T→∞} (1/T) ∑_{t=0}^{T} I_a(a_t) ≥ x̄_a.
We are now ready to prove the items B.1, B.3 and B.4. Let C_1, . . . , C_ℓ be all MECs of G. We say that a MEC C_i is good for v⃗ if there is a state s of C_i and a strategy π satisfying P^π_s[lr_inf(r⃗) ≥ v⃗] > 0 that never leaves C_i when starting in s. Using Proposition 3, we can decide in polynomial time whether a given MEC is good for a given v⃗. Let C be the union of all MECs good for v⃗. Then, by Proposition 3, there is a strategy ξ such that for all s ∈ C we have P^ξ_s[lr_inf(r⃗) ≥ v⃗] = 1, and for each ε > 0 there is a memoryless randomized strategy ξ_ε, computable in polynomial time, such that for all s ∈ C we have P^{ξ_ε}_s[lr_inf(r⃗) ≥ v⃗ − ε⃗] = 1. Consider a strategy τ, computable in polynomial time, which maximizes the probability of reaching C. Denote by σ a strategy which behaves as τ before reaching C and as ξ afterwards. Similarly, denote by σ_ε a strategy which behaves as τ before reaching C and as ξ_ε afterwards. Note that σ_ε is computable in polynomial time.
Clearly, (ν, v⃗) ∈ AcSt(lr_inf(r⃗)) iff P^τ_{s_0}[Reach(C)] ≥ ν, because σ achieves v⃗ with probability P^τ_{s_0}[Reach(C)]. Thus, we obtain that ν ≤ P^τ_{s_0}[Reach(C)] ≤ P^{σ_ε}_{s_0}[lr_inf(r⃗) ≥ v⃗ − ε⃗]. Finally, to decide whether (ν, v⃗) ∈ AcSt(lr_inf(r⃗)), it suffices to decide whether P^τ_{s_0}[Reach(C)] ≥ ν in polynomial time.

Now we prove item B.2. Suppose (ν, v⃗) is a vector of the Pareto curve. We let C be the union of all MECs good for v⃗. Recall that the Pareto curve constructed for expectation objectives is achievable (item A.2). Due to the correspondence between AcSt and AcEx in strongly connected MDPs we obtain the following. There is λ > 0 such that for every MEC D not contained in C, every s ∈ D, and every strategy σ that does not leave D, it is possible to have P^σ_s[lr_inf(r⃗) ≥ u⃗] > 0 only if there is i such that v_i − u_i ≥ λ, i.e., when v⃗ is greater than u⃗ by λ in some component. Thus, for every ε < λ and every strategy σ such that P^σ_{s_0}[lr_inf(r⃗) ≥ v⃗ − ε⃗] ≥ ν − ε, it must be the case that P^σ_{s_0}[Reach(C)] ≥ ν − ε. Because for single objective reachability optimal strategies exist, we get that there is a strategy τ satisfying P^τ_{s_0}[Reach(C)] ≥ ν, and by using methods similar to the ones of the previous paragraphs we obtain (ν, v⃗) ∈ AcSt(lr_inf(r⃗)).
The polynomial-time algorithm mentioned in item B.5 works as follows. First check whether (ν, v) ∈ AcSt(lr inf ( r)) and if not, return "no". Otherwise, find all MECs good for v and compute the maximal probability of reaching them from the initial state. If the probability is strictly greater than ν, return "no". Otherwise, continue by performing the following procedure for every 1 ≤ i ≤ k, where k is the dimension of v: Find all MECs C for which there is ε > 0 such that C is good for u, where u is obtained from v by increasing the i-th component by ε (this can be done in polynomial time using linear programming). Compute the maximal probability of reaching these MECs. If for any i the probability is at least ν, return "no", otherwise return "yes".
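The procedure just described can be transcribed directly. In the Python sketch below (ours), the three parameters stand for the polynomial-time subroutines discussed in the text; they are assumptions of this sketch, not existing library calls.

def on_pareto_curve_satisfaction(nu, v, in_AcSt, reach_prob_good_mecs, reach_prob_good_mecs_slack):
    """in_AcSt(nu, v): membership test for AcSt (item B.3);
    reach_prob_good_mecs(v): maximal probability of reaching the MECs good for v;
    reach_prob_good_mecs_slack(v, i): the same, for MECs that remain good when the
    i-th component of v is increased by some eps > 0."""
    if not in_AcSt(nu, v):
        return False
    if reach_prob_good_mecs(v) > nu:
        return False          # the probability component nu could be improved
    for i in range(len(v)):
        if reach_prob_good_mecs_slack(v, i) >= nu:
            return False      # the i-th reward component of v could be improved
    return True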
The first claim of B.6 follows from Running example (II). The other claims of item B.6 require further observations and they are proved in Appendix F.
APPENDIX

A. Proofs of Section IV

1) Proof of Lemma 1: For all s ∈ S we set x̄_s = ∑_{b∈Act(s)} x̄_b and define ξ by ξ(s)(a) := x̄_a/x̄_s if x̄_s > 0, and arbitrarily otherwise. We claim that the vector of values x̄_s forms an invariant measure of G^ξ. Indeed, noting that ∑_{a∈Act(s)} ξ(s)(a) · δ(a)(s′) is the probability of the transition s → s′ in G^ξ:

∑_{s∈S} x̄_s · ∑_{a∈Act(s)} ξ(s)(a) · δ(a)(s′) = ∑_{s∈S} ∑_{a∈Act(s)} x̄_s · (x̄_a/x̄_s) · δ(a)(s′) = ∑_{a∈A} x̄_a · δ(a)(s′) = ∑_{a∈Act(s′)} x̄_a = x̄_{s′}.
As a consequence, x̄_s > 0 iff s lies in some BSCC of G^ξ. Choose some BSCC D, and denote by x̄_D the number ∑_{a∈D∩A} x̄_a = ∑_{s∈D∩S} x̄_s. Also denote by I^a_t the indicator of A_t = a, given by I^a_t = 1 if A_t = a and 0 otherwise. By the Ergodic theorem for finite Markov chains (see, e.g., [11, Theorem 1.10.2]), for all s ∈ D ∩ S and a ∈ D ∩ A we have

E^ξ_s[ lim_{T→∞} (1/T) ∑_{t=1}^{T} I^a_t ] = ∑_{s′∈D∩S} (x̄_{s′}/x̄_D) · ξ(s′)(a) = (x̄_{s′}/x̄_D) · (x̄_a/x̄_{s′}) = x̄_a/x̄_D,

where in the second step s′ = Src(a) is the only state contributing to the sum. Because freq(ξ, s, a) is equal to this expected value, we obtain freq(ξ, s, a) = x̄_a/x̄_D = freq(ξ, D, a). This finishes the proof of Lemma 1.

2) Proof of the equation (7):

E^σ_{s_0}[lr_inf(r_i)] = E^σ_{s_0}[ lim inf_{T→∞} (1/T) ∑_{t=1}^{T} r_i(A_t) ]
 = E^σ_{s_0}[ lim_{T→∞} (1/T) ∑_{t=1}^{T} r_i(A_t) ]
 = lim_{T→∞} (1/T) ∑_{t=1}^{T} E^σ_{s_0}[r_i(A_t)]
 = lim_{T→∞} (1/T) ∑_{t=1}^{T} ∑_{a∈A} r_i(a) · P^σ_{s_0}[A_t = a]
 = ∑_{a∈A} r_i(a) · lim_{T→∞} (1/T) ∑_{t=1}^{T} P^σ_{s_0}[A_t = a]
 = ∑_{a∈A} r_i(a) · freq(σ, s_0, a)
 = ∑_{a∈A} r_i(a) · x̄_a.

Here the second equality follows from the fact that the limit is almost surely defined, following from the Ergodic theorem applied to the BSCCs of the finite Markov chain G^σ. The third equality holds by the Lebesgue Dominated convergence theorem, because |r_i(A_t)| ≤ max_{a∈A} |r_i(a)|. The seventh equality follows because freq(σ, s_0, a) = x̄_a.
3) Proof of the inequality (8):

∑_{a∈A} r_i(a) · f(a) = ∑_{a∈A} r_i(a) · lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t = a]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} ∑_{a∈A} r_i(a) · P^ϱ_{s_0}[A_t = a]
 ≥ lim inf_{T→∞} (1/T) ∑_{t=1}^{T} ∑_{a∈A} r_i(a) · P^ϱ_{s_0}[A_t = a]
 ≥ lim inf_{T→∞} (1/T) ∑_{t=1}^{T} E^ϱ_{s_0}[r_i(A_t)]
 ≥ E^ϱ_{s_0}[lr_inf(r_i)].

Here, the first equality is the definition of f(a), and the second follows from the linearity of the limit. The first inequality is the definition of lim inf. The second inequality relies on the linearity of the expectation, and the final inequality is a consequence of Fatou's lemma (see, e.g., [15, Chapter 4, Section 3]); although the function r_i(A_t) may not be nonnegative, we can replace it with the non-negative function r_i(A_t) − min_{a∈A} r_i(a) and add the subtracted constant afterwards.
4) Proof of the equation (9):

∑_{a∈A} f(a) · δ(a)(s) = ∑_{a∈A} lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t = a] · δ(a)(s)
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} ∑_{a∈A} P^ϱ_{s_0}[A_t = a] · δ(a)(s)
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[S_{t+1} = s]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[S_t = s]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} ∑_{a∈Act(s)} P^ϱ_{s_0}[A_t = a]
 = ∑_{a∈Act(s)} lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[A_t = a]
 = ∑_{a∈Act(s)} f(a).

Here the first and the seventh equality follow from the definition of f. The second and the sixth equality follow from the linearity of the limit. The third equality follows by the definition of δ. The fourth equality follows from the following:

lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[S_{t+1} = s] − lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} P^ϱ_{s_0}[S_t = s]
 = lim_{ℓ→∞} (1/T_ℓ) ∑_{t=1}^{T_ℓ} (P^ϱ_{s_0}[S_{t+1} = s] − P^ϱ_{s_0}[S_t = s])
 = lim_{ℓ→∞} (1/T_ℓ) (P^ϱ_{s_0}[S_{T_ℓ+1} = s] − P^ϱ_{s_0}[S_1 = s]) = 0.
5) Proof of the existence of the strategy ζ:
Lemma 2: Consider numbers ȳ_χ for all χ ∈ S ∪ A such that the assignment y_χ := ȳ_χ is a part of some non-negative solution to L. Then there is a finite-memory stochastic-update strategy ζ which, starting from s_0, eventually stays in each MEC C with probability y_C := ∑_{s∈C} ȳ_s.
Proof: As in the proofs of Propositions 1 and 2, in order to be able to use the results of [6, Section 3] we modify the MDP G and obtain a new MDP G′ as follows: For each state s we add a new absorbing state, d_s. The only available action for d_s leads to a loop transition back to d_s with probability 1. We also add a new action, a_{d_s}, to every s ∈ S. The distribution associated with a_{d_s} assigns probability 1 to d_s. Let us call K the set of constraints of the LP on Figure 3 in [6]. From the values ȳ_χ we now construct a solution to K: for every state s ∈ S and every action a ∈ Act(s) we set y_{(s,a)} := ȳ_a, and y_{(s,a_{d_s})} := ȳ_s. The values of the rest of the variables in K are determined by the second set of equations in K. The non-negativity constraints in K are satisfied since the ȳ_χ are non-negative. Finally, the equations (1) from L imply that the first set of equations in K are satisfied, because the ȳ_χ are part of a solution to L.

By Theorem 3.2 of [6] we thus have a memoryless strategy ϱ for G′ such that P^ϱ_{s_0}[Reach(d_s)] ≥ ȳ_s for all s ∈ S. The strategy ζ then mimics the behavior of ϱ until the moment when ϱ chooses an action to enter some of the new absorbing states. From that point on, ζ may choose some arbitrary fixed behavior to stay in the current MEC. As a consequence, P^ζ_{s_0}[stay eventually in C] ≥ y_C, and in fact we get equality here, because of the equations (2) from L. Note that ζ only needs a finite constant amount of memory.
B. Proofs of Section V
Explanation of Proposition 3. The result that P π s0 [lr inf ( r) ≥ v] > 0 implies that there is a strategy π ′ such that P π ′ s0 [lr inf ( r) ≥ v] = 1 can be derived from the results of [2], [9] as follows. Since lr inf ( r) ≥ v is a tail or prefix-independent function, it follows from the results of [2] that if P π s0 [lr inf ( r) ≥ v] > 0, then there exists a state s in the MDP with value 1, i.e., there exists s such that sup π P π s [lr inf ( r) ≥ v] = 1. It follows from the results of [9] that in MDPs with tail functions, optimal strategies exist and thus it follows that there exist a strategy π 1 from s such that P π1 s [lr inf ( r) ≥ v] = 1. Since the MDP is strongly connected, the state s can be reached with probability 1 from s 0 by a strategy π 2 . Hence the strategy π 2 , followed by the strategy π 1 after reaching s, is the witness strategy π ′ such that P π ′ s0 [lr inf ( r) ≥ v] = 1.
1) Proof of the inequality (12):

∑_{a∈A} freq(ξ_ε, s_0, a) · r_i(a)
 = ∑_{a∈A} ((x̄_a + x′_a)/X) · r_i(a)   (def)
 = (1/X) · ∑_{a∈A} x̄_a · r_i(a) + (1/X) · ∑_{a∈A} x′_a · r_i(a)   (rearranging)
 = ∑_{a∈A} x̄_a · r_i(a) + ((1 − X)/X) · ∑_{a∈A} x̄_a · r_i(a) + (1/X) · ∑_{a∈A} x′_a · r_i(a)   (rearranging)
 ≥ ∑_{a∈A} x̄_a · r_i(a) − |((1 − X)/X) · ∑_{a∈A} x̄_a · r_i(a)| − |(1/X) · ∑_{a∈A} x′_a · r_i(a)|   (property of abs. value)
 ≥ ∑_{a∈A} x̄_a · r_i(a) − ((X − 1) · |∑_{a∈A} x̄_a · r_i(a)| + |∑_{a∈A} x′_a · r_i(a)|)   (from X > 1)
 ≥ ∑_{a∈A} x̄_a · r_i(a) − ((X − 1) · ∑_{a∈A} x̄_a · |r_i(a)| + ∑_{a∈A} x′_a · |r_i(a)|)
 ≥ ∑_{a∈A} x̄_a · r_i(a) − 2 · M_r · ∑_{a∈A} x′_a
 ≥ v_i − ε,

where the last two steps use X − 1 = ∑_{a∈A} x′_a, the bounds ∑_{a∈A} x̄_a · |r_i(a)| ≤ M_r and |r_i(a)| ≤ M_r, the assumption ∑_{a∈A} x′_a ≤ ε/(2·M_r), and Inqs. (5).

2) Proof of the inequality (13): First observe that

(1/T) ∑_{t=0}^{N_i} I_a(a_t) ≥ (x̄_a − 2^{−i}) · n_i/T.    (14)

Now, we distinguish two cases. First, if T − N_i ≤ κ_{i+1}, then

n_i/T ≥ n_i/(N_{i−1} + n_i + κ_{i+1}) = 1 − (N_{i−1} + κ_{i+1})/(N_{i−1} + n_i + κ_{i+1}) ≥ 1 − 2^{1−i},

and thus, by Equation (14),

(1/T) ∑_{t=0}^{T} I_a(a_t) ≥ (x̄_a − 2^{−i})(1 − 2^{1−i}).

Second, if T − N_i ≥ κ_{i+1}, then

(1/T) ∑_{t=N_i+1}^{T} I_a(a_t) = (1/(T − N_i)) ∑_{t=N_i+1}^{T} I_a(a_t) · (T − N_i)/T ≥ (x̄_a − 2^{−i−1}) (1 − (N_{i−1} + n_i)/T) ≥ (x̄_a − 2^{−i−1}) (1 − 2^{−i} − n_i/T),

and thus, by Equation (14),

(1/T) ∑_{t=0}^{T} I_a(a_t) ≥ (x̄_a − 2^{−i}) n_i/T + (x̄_a − 2^{−i−1}) (1 − 2^{−i} − n_i/T) ≥ (x̄_a − 2^{−i}) (n_i/T + 1 − 2^{−i} − n_i/T) ≥ (x̄_a − 2^{−i})(1 − 2^{−i}),

which finishes the proof of (13).
C. Deterministic-update Strategies for Expectation Objectives
Recall the system L of linear inequalities presented in Fig. 2. Proposition 4: Every nonnegative solution of the system L induces a finite-memory deterministic-update strategy σ satisfying E^σ_{s_0}[lr_inf(r⃗)] ≥ v⃗. Proof: The proof proceeds almost identically to the proof of Proposition 1. Let us recall the important steps from the said proof first. There we worked with the numbers x̄_a, a ∈ A, which, assigned to the variables x_a, formed a part of the solution to L. We also worked with two important strategies. The first one, a finite-memory deterministic-update strategy ζ, made sure that, starting in s_0, a run stays in a MEC C forever with probability y_C = ∑_{a∈A∩C} x̄_a. The second one, a memoryless strategy σ′, had the property that when the starting distribution was α(s) := x̄_s = ∑_{a∈Act(s)} x̄_a, then E^{σ′}_α[lr_inf(r⃗)] ≥ v⃗ (here we extend the notation in a straightforward way from a single initial state to a general initial distribution α). To produce the promised finite-memory deterministic-update strategy σ we now have to combine the strategies ζ and σ′ using only deterministic memory updates.
We now define the strategy σ. It works in three phases. First, it reaches every MEC C and stays in it with probability y_C. Second, it prepares the distribution α, and finally, third, it switches to σ′. It is clear how the strategy is defined in the third phase. As for the first phase, this is also identical to what we did in the proof of Proposition 1 for σ̃: The strategy σ follows the strategy ζ from the beginning until a bottom strongly connected component (BSCC) is reached in the associated finite-state Markov chain G^ζ. At that point the run has already entered its final MEC C, to stay in it forever, which happens with probability y_C. The last thing to solve is thus the second phase. Two cases may occur. Either there is a state s ∈ C such that |Act(s) ∩ C| > 1, i.e., there are at least two actions the strategy can take from s without leaving C. Let us denote these actions a and b. Consider an enumeration C = {s_1, . . . , s_k} of the vertices of C. Now we define the second phase of σ when in C. We start with defining the memory used in the second phase. We symbolically represent the possible contents of the memory as {WAIT_1, . . . , WAIT_k, SWITCH_1, . . . , SWITCH_k}. The second phase then starts with the memory set to WAIT_1. Generally, if the memory is set to WAIT_i then σ aims at reaching s with probability 1. This is possible (since s is in the same MEC) and it is a well known fact that it can be done without using memory. On visiting s, the strategy chooses the action a with probability x̄_{s_i}/(y_C − ∑_{j=1}^{i−1} x̄_{s_j}) and the action b with the remaining probability. In the next step the deterministic update function sets the memory either to SWITCH_i or WAIT_{i+1}, depending on whether the last action seen is a or b, respectively. (Observe that if i = k then the probability of taking b is 0.) The memory set to SWITCH_i means that the strategy aims at reaching s_i almost surely, and upon doing so, the strategy switches to the third phase, following σ′. It is easy to observe that, on the condition of staying in C, the probability of switching to the third phase in some s_i ∈ C is x̄_{s_i}/y_C; thus the unconditioned probability of doing so is x̄_{s_i}, as desired.
The remaining case to solve is when |Act(s)∩C| = 1 for all s ∈ C. But then switching to the third phase is solved trivially with the right probabilities, because staying in C inevitably already means mimicking σ ′ .
D. Sufficiency of Strategies for Satisfaction Objectives
Fig. 3: MDP for Appendix D.

Lemma 3: There is an MDP G, a vector of reward functions r⃗ = (r_1, r_2), and a vector (ν, v⃗) ∈ AcSt(lr_inf(r⃗)) such that there is no finite-memory strategy σ satisfying P^σ_s[lr_inf(r⃗) ≥ v⃗] ≥ ν.
Proof: We let G be the MDP from Fig. 3, and the reward function r_i (for i ∈ {1, 2}) returns 1 for b_i and 0 for all other actions. Let s_1 be the initial vertex. It is easy to see that (0.5, 0.5) ∈ AcEx(lr_inf(r⃗)): consider for example a strategy that first chooses both available actions in s_1 with uniform probabilities, and in subsequent steps chooses self-loops on s_1 or s_2 deterministically. From the results of Section V we subsequently get that (1, 0.5, 0.5) ∈ AcSt(lr_inf(r⃗)).
On the other hand, let σ be an arbitrary finite-memory strategy. The Markov chain it induces is by definition finite, and for each of its BSCCs C one of the following takes place:
s 1 be the initial state and M = {m 1 , m 2 }. Consider a stochastic-update finite-memory strategy σ = (σ u , σ n , α) where α chooses m 1 deterministically, and σ n (m 1 , s 1 ) = [a 1 → 0.5, a 2 → 0.5], σ n (m 2 , s 3 ) = [a 4 → 1] and otherwise σ n chooses self-loops. The memory update function σ u leaves the memory intact except for the case σ u (m 1 , s 3 ) where both
Fig. 1 :
1Example MDPs m 1 and m 2 are chosen with probability 0.5. The play G σ s1 is depicted inFig. 1c.
Fig. 2 :
2System L of linear inequalities. (We define 1 s0 (s) = 1 if s = s 0 , and 1 s0 (s) = 0 otherwise.) there exists a system of linear inequalities L constructible in polynomial time such that • every nonnegative solution of L induces a 2-memory stochastic-update strategy σ satisfying
… is given in Appendix A3. To prove that Eqns. (4) are satisfied, it suffices to show that for all s ∈ S we have $\sum_{a\in A} f(a)\cdot\delta(a)(s) = \sum_{a\in Act(s)} f(a)$.
… Eqn. (1) of L. Eqns. (3) of L are satisfied due to Eqns. (11) and (10). Eqn. (11) together with $\sum_{a\in A} f(a)=1$ imply Eqn. (2) of L. This completes the proof of Proposition 2.
Lemma 1: For all s ∈ S we set $\bar x_s = \sum_{b\in Act(s)} \bar x_b$ and define ξ by $\xi(s)(a) := \bar x_a/\bar x_s$ if $\bar x_s > 0$, and arbitrarily otherwise. We claim that the vector of values $\bar x_s$ forms an invariant measure of $G^\xi$. Indeed, noting that $\sum_{a\in Act(s)} \xi(s)(a)\cdot\delta(a)(s')$ is the probability of the transition from s to s' …
… and thus $\mathrm{freq}(\xi, s, a) = \bar x_a/\bar x_D = \mathrm{freq}(\xi, D, a)$. This finishes the proof of Lemma 1.
… $\sum_{a\in A} r_i(a)\cdot \mathrm{freq}(\sigma, s_0, a) = \sum_{a\in A} r_i(a)\cdot \bar x_a$.
… $I_a(a_t)\cdot \tfrac{n_i}{T} \ge (\bar x_a - 2^{-i})$ …
Here we extend the notation in a straightforward way from a single initial state to a general initial distribution, α.
• C contains both s_1 and s_2. Then by the Ergodic theorem, for almost every run ω we have lr(I_{a_1}, ω) + lr(I_{a_2}, ω) > 0, which means that lr(I_{b_1}, ω) + lr(I_{b_2}, ω) < 1, and thus necessarily lr_inf(r, ω) ≱ (0.5, 0.5).
• C contains only the state s_1 (resp. s_2), in which case all runs that enter it satisfy lr_inf(r, ω) = (1, 0) (resp. lr_inf(r, ω) = (0, 1)).

From the basic results of the theory of Markov chains we get P^σ_{s_1}[lr_inf(r) ≥ (0.5, 0.5)] = 0.

Lemma 4: There is an MDP G, a vector of reward functions r = (r_1, r_2), a number ε > 0 and a vector (ν, v) ∈ AcSt(lr_inf(r)) such that there is no memoryless-pure strategy σ satisfying P^σ_s[lr_inf(r) ≥ v − ε] > ν − ε.

Proof: We can reuse G and r from the proof of Lemma 3. We let ν = 1 and v = (0.5, 0.5). We have shown that (ν, v) ∈ AcSt(lr_inf(r)). Taking e.g. ε = 0.1, it is a trivial observation that no memoryless pure strategy satisfies P^σ_{s_1}[lr_inf(r) ≥ v − ε] > ν − ε.

E. Equivalence of Definitions of Strategies

In this section we argue that the definitions of strategies as functions (SA)*S → dist(A) and as triples (σ_u, σ_n, α) are interchangeable.

Note that formally a strategy π : (SA)*S → dist(A) gives rise to a Markov chain G^π with states (SA)*S and transitions w → was with probability π(w)(a)·δ(a)(s), for all w ∈ (SA)*S, a ∈ A and s ∈ S. Given σ = (σ_u, σ_n, α) and a run w = (s_0, m_0, a_0)(s_1, m_1, a_1)... of G^σ, denote w[i] = s_0 a_0 s_1 a_1 ... s_{i−1} a_{i−1} s_i. We define f(w) = w[0]w[1]w[2]....

We need to show that for every strategy σ = (σ_u, σ_n, α) there is a strategy π : (SA)*S → dist(A) (and vice versa) such that for every set of runs W of G^π we have P^σ_{s_0}[f^{−1}(W)] = P^π_{s_0}[W]. We only present the construction of the strategies and basic arguments; the technical part of the proof is straightforward.

Given π : (SA)*S → dist(A), one can easily define a deterministic-update strategy σ = (σ_u, σ_n, α) which uses memory (SA)*S. The initial memory element is the initial state s_0, the next move function is defined by σ_n(s, w) = π(w), and the memory update function σ_u is defined by σ_u(a, s, w) = was. The reader can observe that there is a naturally defined bijection between runs in G^π and in G^σ, and that this bijection preserves probabilities of sets of runs.

In the opposite direction, given σ = (σ_u, σ_n, α), we define π : (SA)*S → dist(A) as follows. Given w = s_0 a_0 ... s_{n−1} a_{n−1} s_n ∈ (SA)*S and a ∈ A, we denote by U^w_a the set of all paths in G^σ that have the form … for some m_1, ..., m_n. We put π(w)(a) = … . The key observation for the proof of correctness of this construction is that the probability of U^w_a in G^σ is equal to the probability of taking the path w and then the action a in G^π.

F. Details from the proof of B.6

Here we prove the rest of B.6. We start with proving that the set N := {ν | (ν, v) ∈ P}, where P is the Pareto curve for AcSt(lr_inf(r)), is indeed finite. As we already showed, for every fixed v there is a union $\mathcal{C}$ of MECs good for v, and (ν, v) ∈ AcSt(lr_inf(r)) iff $\mathcal{C}$ can be reached with probability at least ν. Hence $|N| \le 2^{|G|}$, because the latter is an upper bound on the number of unions of MECs in G.

To proceed with the proof of B.6, let us consider a fixed ν ∈ N. This gives us a collection R(ν) of all unions $\mathcal{C}$ of MECs which can be reached with probability at least ν. For a MEC C, let Sol(C) be the set AcEx(lr_inf(r)) of the MDP given by restricting G to C. Further, for every $\mathcal{C} \in R(\nu)$ we set $Sol(\mathcal{C}) := \bigcap_{C\in\mathcal{C}} Sol(C)$. Finally, $Sol(R(\nu)) := \bigcup_{\mathcal{C}\in R(\nu)} Sol(\mathcal{C})$.
From the analysis above we already know that Sol(R(ν)) = {v | (ν, v) ∈ AcSt(lr_inf(r))}. As a consequence, (ν, v) ∈ P iff ν ∈ N, v is maximal in Sol(R(ν)), and v ∉ Sol(R(ν′)) for any ν′ ∈ N with ν′ > ν. In other words, P is also the Pareto curve of the set Q := {(ν, v) | ν ∈ N, v ∈ Sol(R(ν))}. Observe that Q is a finite union of bounded convex polytopes, because every Sol(C) is a bounded convex polytope. Finally, observe that N can be computed using the algorithms for optimizing single-objective reachability. Further, the inequalities defining Sol(C) can also be computed using our results on AcEx. By a generalised convex polytope we denote a set of points described by a finite conjunction of linear inequalities, which may be both strict and non-strict.

Claim 1: Let X be a generalised convex polytope. The smallest convex polytope containing X is its closure, cl(X). Moreover, the set cl(X) \ X is a union of some of the facets of cl(X).

Proof: Let I be the set of inequalities defining X, and denote by I′ the modification of this set where all the inequalities are transformed into non-strict ones. The closure cl(X) is indeed a convex polytope, as it is described by I′. Since every convex polytope is closed, if it contains X then it must also contain its closure. Thus cl(X) is the smallest one containing X. Let α < β be a strict inequality from I. By I′(α = β) we denote the set I′ ∪ {α = β}. The points of cl(X) \ X form a union of convex polytopes, each one given by the set I′(α = β) for some α < β ∈ I. Thus, it is a union of facets of cl(X). The following lemma now finishes the proof of B.6:

Lemma 5: Let Q be a finite union of bounded convex polytopes Q_1, ..., Q_m. Then its Pareto curve P is a finite union of bounded generalised convex polytopes P_1, ..., P_n. Moreover, if the inequalities describing the Q_i are given, then the inequalities describing the P_i can be computed.

Proof: We proceed by induction on the number m of components of Q. If m = 0 then P = ∅ is clearly a bounded convex polytope, easily described by any two incompatible inequalities. For m ≥ 1 we denote $Q' := \bigcup_{i=1}^{m-1} Q_i$. By the induction hypothesis, the Pareto curve of Q′ is some $P' := \bigcup_{i=1}^{n'} P_i$ where every P_i, 1 ≤ i ≤ n′, is a bounded generalised convex polytope, described by some set of linear inequalities. Denote by dom(X) the (downward closed) set of all points dominated by some point of X. Observe that P, the Pareto curve of Q, is the union of all points which either are maximal in Q_m and do not belong to dom(P′) (observe that dom(P′) = dom(Q′)), or are in P′ and do not belong to dom(Q_m). In symbols: …

The set dom(P′) of all x for which there is some y ∈ P′ such that y ≥ x is a union of projections of generalised convex polytopes: just add the inequalities from the definition of each P_i, instantiated with y, to the inequality y ≥ x, and remove y by projecting. Thus, dom(P′) is a union of generalised convex polytopes itself. A difference of two generalised convex polytopes is a union of generalised convex polytopes. Thus the set "maximal from Q_m \ dom(P′)" is a union of generalised convex polytopes, and for the same reasons so is P′ \ dom(Q_m).

Finally, let us show how to compute P. This amounts to computing the projection and the set difference. For convex polytopes, efficient computation of projections is a problem studied since the 19th century.
One of the possible approaches, non-optimal from the complexity point of view but easy to explain, is to traverse the vertices of the convex polytope, project them individually, and then take the convex hull of the projected vertices. To compute a projection of a generalised convex polytope X, we first take its closure cl(X) and project the closure. Then we traverse all the facets of the projection and mark every facet to which at least one point of X projected. This can be verified by testing whether the inequalities defining the facet, in conjunction with the inequalities defining X, have a solution. Finally, we remove from the projection all facets which are not marked. Due to Claim 1, the difference of the projection of cl(X) and the projection of X is a union of facets. Every facet from the difference has the property that no point from X is projected to it. Thus we obtain the projection of X. Computing the set difference of two bounded generalised convex polytopes is easier: consider two polytopes given by sets I_1 and I_2 of inequalities. Then subtracting the second generalised convex polytope from the first yields the union of generalised polytopes given by the inequalities I_1 ∪ {α ⊀ β}, where α ≺ β ranges over all inequalities (strict or non-strict) in I_2.
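The vertex-based projection described above can be sketched in a few lines; the following example (using NumPy/SciPy and an arbitrarily generated bounded polytope, purely for illustration) projects the vertices of a three-dimensional convex polytope onto its first two coordinates and takes their convex hull, which is exactly the projection of the polytope.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

# Vertices of a bounded convex polytope in R^3 (convex hull of random points).
points = rng.random((30, 3))
hull3d = ConvexHull(points)
vertices3d = points[hull3d.vertices]      # vertices of the polytope

# Project every vertex onto the first two coordinates and take the convex
# hull of the projected vertices: this yields the projection of the polytope.
projected = vertices3d[:, :2]
hull2d = ConvexHull(projected)

print("3d polytope has", len(hull3d.vertices), "vertices")
print("its projection onto (x1, x2) has", len(hull2d.vertices), "vertices")

# The inequalities A x <= b describing the projection (facet equations).
A, b = hull2d.equations[:, :-1], -hull2d.equations[:, -1]
print("projection described by", A.shape[0], "linear inequalities")
```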
| []
|
[
"The partial vine copula: A dependence measure and approximation based on the simplifying assumption *",
"The partial vine copula: A dependence measure and approximation based on the simplifying assumption *"
]
| [
"Fabian Spanhel [email protected] \nDepartment of Statistics\nLudwig-Maximilians-Universität München\nAkademiestr. 180799MunichGermany\n",
"Malte S Kurz [email protected] \nDepartment of Statistics\nLudwig-Maximilians-Universität München\nAkademiestr. 180799MunichGermany\n"
]
| [
"Department of Statistics\nLudwig-Maximilians-Universität München\nAkademiestr. 180799MunichGermany",
"Department of Statistics\nLudwig-Maximilians-Universität München\nAkademiestr. 180799MunichGermany"
]
| []
| Simplified vine copulas (SVCs), or pair-copula constructions, have become an important tool in high-dimensional dependence modeling. So far, specification and estimation of SVCs has been conducted under the simplifying assumption, i.e., all bivariate conditional copulas of the vine are assumed to be bivariate unconditional copulas. We introduce the partial vine copula (PVC) which provides a new multivariate dependence measure and which plays a major role in the approximation of multivariate distributions by SVCs. The PVC is a particular SVC where to any edge a j-th order partial copula is assigned and constitutes a multivariate analogue of the bivariate partial copula.We investigate to what extent the PVC describes the dependence structure of the underlying copula. We show that the PVC does not minimize the Kullback-Leibler divergence from the true copula and that the best approximation satisfying the simplifying assumption is given by a vine pseudo-copula. However, under regularity conditions, stepwise estimators of pair-copula constructions converge to the PVC irrespective of whether the simplifying assumption holds or not. Moreover, we elucidate why the PVC is the best feasible SVC approximation in practice.extensively developed under the simplifying assumption[6,7,8,9,10], with studies showing the superiority of simplified vine copula models over elliptical copulas and nested Archimedean copulas (Aas and Berg[11],Fischer et al. [12]).Although some copulas can be expressed as a simplified vine copula, the simplifying assumption is not true in general. Hobaek Haff et al.[13]point out that the simplifying assumption is in general not valid and provide examples of multivariate distributions which do not satisfy the simplifying assumption. Stöber et al.[14]show that the Clayton copula is the only Archimedean copula for which the simplifying assumption holds, while the Student-t copula is the only simplified vine copula arising from a scale mixture of normal distributions. In fact, it is very unlikely that the unknown data generating process satisfies the simplifying assumption in a strict mathematical sense. As a result, researchers have recently started to investigate new dependence concepts that are related to the simplifying assumption and arise if it does not hold. In particular, studies on the bivariate partial copula, a generalization of the partial correlation coefficient, have (re-)emerged lately[15,16,17,18,19].We introduce the partial vine copula (PVC) which constitutes a multivariate analogue of the bivariate partial copula and which generalizes the partial correlation matrix. The PVC is a particular simplified vine copula where to any edge a j-th order partial copula is assigned. It provides a new multivariate dependence measure for a d-dimensional random vector in terms of d(d − 1)/2 bivariate unconditional copulas and can be readily estimated for high-dimensional data[20]. We investigate several properties of the PVC and show to what extent the dependence structure of the underlying distribution is captured. The PVC plays a crucial role in terms of approximating a multivariate distribution by a simplified vine copula (SVC). We show that many estimators of SVCs converge to the PVC if the simplifying assumption does not hold. However, we also prove that the PVC may not minimize the Kullback-Leibler divergence from the true copula and thus may not be the best approximation in the space of simplified vine copulas. 
This result is rather surprising, because it implies that it may not be optimal to specify the true copulas in the first tree of a simplified vine copula approximation. Moreover, joint and stepwise estimators of SVCs may not converge to the same probability limit any more if the simplifying assumption does not hold. Nevertheless, due to the prohibitive computational burden or simply because only a stepwise model selection and estimation is possible, the PVC is the best feasible SVC approximation in practice. Moreover, the PVC is used by [20] to construct a new non-parametric estimator of a multivariate distribution that can outperform classical non-parametric approaches and by[21]to test the simplifying assumption in high-dimensional vine copulas. All in all, these facts highlight the great practical importance of the PVC for multivariate dependence modeling.The rest of this paper is organized as follows. (Simplified) vine copulas, the simplifying assumption, The partial vine copula 3 conditional and partial copulas, are discussed in Section 2. The PVC and j-th order partial copulas are introduced in Section 3. Properties of the PVC and some examples are presented in Section 4. In Section 5 we analyze the role of the PVC for simplified vine copula approximations and explain why the PVC is the best feasible approximation in practical applications. A parametric estimator for the PVC is presented in Section 6 and implications for the stepwise and joint maximum likelihood estimator of simplified vine copulas are illustrated. Section 7 contains some concluding remarks.The following notation and assumptions are used throughout the paper. We write X 1:d := (X 1 , . . . , X d ), so that F X 1:d (x 1:d ) := P(∀i = 1, . . . , d : X i ≤ x i ), and dx 1:d := dx 1 . . . dx d to denote the variables of integration in f X 1:d (x 1:d )dx 1:d . C ⊥ refers to the independence copula. X ⊥ Y means that X and Y are stochastically independent. For 1 ≤ k ≤ d, the partial derivative of g w.r.t. the k-th argument is denoted by ∂ k g(x 1:d ). We write 1 1 {A} = 1 if A is true, and 1 1 {A} = 0 otherwise. For simplicity, we assume that all random variables are real-valued and continuous. In the following, let d ≥ 3, if not otherwise specified, and C d be the space of absolutely continuous d-dimensional copulas with positive density (a.s.).The distribution function of a random vector U 1:d with uniform margins is denoted by F 1:d = C 1:d ∈ C d . We set I d l := {(i, j) : j = l, . . . , d − 1, i = 1, . . . , d − j} and S ij := i + 1 : i + j − 1 := i + 1, . . . , i + j − 1. We focus on D-vine copulas, but all results carry over to regular vine copulas (Bedford and Cooke[22], Kurowicka and Joe [23]). An overview of the used notation can be found inTable 1. All proofs are deferred to the appendix. | null | [
"https://arxiv.org/pdf/1510.06971v2.pdf"
]
| 53,603,769 | 1510.06971 | 7b6e6042dadaf7c66e7853f0c83749d250c11e0b |
The partial vine copula: A dependence measure and approximation based on the simplifying assumption *
Fabian Spanhel [email protected]
Department of Statistics
Ludwig-Maximilians-Universität München
Akademiestr. 180799MunichGermany
Malte S Kurz [email protected]
Department of Statistics
Ludwig-Maximilians-Universität München
Akademiestr. 180799MunichGermany
The partial vine copula: A dependence measure and approximation based on the simplifying assumption *
Vine copulaPair-copula constructionSimplifying assumptionConditional copulaApproximation
Simplified vine copulas (SVCs), or pair-copula constructions, have become an important tool in high-dimensional dependence modeling. So far, specification and estimation of SVCs has been conducted under the simplifying assumption, i.e., all bivariate conditional copulas of the vine are assumed to be bivariate unconditional copulas. We introduce the partial vine copula (PVC) which provides a new multivariate dependence measure and which plays a major role in the approximation of multivariate distributions by SVCs. The PVC is a particular SVC where to any edge a j-th order partial copula is assigned and constitutes a multivariate analogue of the bivariate partial copula.We investigate to what extent the PVC describes the dependence structure of the underlying copula. We show that the PVC does not minimize the Kullback-Leibler divergence from the true copula and that the best approximation satisfying the simplifying assumption is given by a vine pseudo-copula. However, under regularity conditions, stepwise estimators of pair-copula constructions converge to the PVC irrespective of whether the simplifying assumption holds or not. Moreover, we elucidate why the PVC is the best feasible SVC approximation in practice.extensively developed under the simplifying assumption[6,7,8,9,10], with studies showing the superiority of simplified vine copula models over elliptical copulas and nested Archimedean copulas (Aas and Berg[11],Fischer et al. [12]).Although some copulas can be expressed as a simplified vine copula, the simplifying assumption is not true in general. Hobaek Haff et al.[13]point out that the simplifying assumption is in general not valid and provide examples of multivariate distributions which do not satisfy the simplifying assumption. Stöber et al.[14]show that the Clayton copula is the only Archimedean copula for which the simplifying assumption holds, while the Student-t copula is the only simplified vine copula arising from a scale mixture of normal distributions. In fact, it is very unlikely that the unknown data generating process satisfies the simplifying assumption in a strict mathematical sense. As a result, researchers have recently started to investigate new dependence concepts that are related to the simplifying assumption and arise if it does not hold. In particular, studies on the bivariate partial copula, a generalization of the partial correlation coefficient, have (re-)emerged lately[15,16,17,18,19].We introduce the partial vine copula (PVC) which constitutes a multivariate analogue of the bivariate partial copula and which generalizes the partial correlation matrix. The PVC is a particular simplified vine copula where to any edge a j-th order partial copula is assigned. It provides a new multivariate dependence measure for a d-dimensional random vector in terms of d(d − 1)/2 bivariate unconditional copulas and can be readily estimated for high-dimensional data[20]. We investigate several properties of the PVC and show to what extent the dependence structure of the underlying distribution is captured. The PVC plays a crucial role in terms of approximating a multivariate distribution by a simplified vine copula (SVC). We show that many estimators of SVCs converge to the PVC if the simplifying assumption does not hold. However, we also prove that the PVC may not minimize the Kullback-Leibler divergence from the true copula and thus may not be the best approximation in the space of simplified vine copulas. 
This result is rather surprising, because it implies that it may not be optimal to specify the true copulas in the first tree of a simplified vine copula approximation. Moreover, joint and stepwise estimators of SVCs may not converge to the same probability limit any more if the simplifying assumption does not hold. Nevertheless, due to the prohibitive computational burden or simply because only a stepwise model selection and estimation is possible, the PVC is the best feasible SVC approximation in practice. Moreover, the PVC is used by [20] to construct a new non-parametric estimator of a multivariate distribution that can outperform classical non-parametric approaches and by[21]to test the simplifying assumption in high-dimensional vine copulas. All in all, these facts highlight the great practical importance of the PVC for multivariate dependence modeling.The rest of this paper is organized as follows. (Simplified) vine copulas, the simplifying assumption, The partial vine copula 3 conditional and partial copulas, are discussed in Section 2. The PVC and j-th order partial copulas are introduced in Section 3. Properties of the PVC and some examples are presented in Section 4. In Section 5 we analyze the role of the PVC for simplified vine copula approximations and explain why the PVC is the best feasible approximation in practical applications. A parametric estimator for the PVC is presented in Section 6 and implications for the stepwise and joint maximum likelihood estimator of simplified vine copulas are illustrated. Section 7 contains some concluding remarks.The following notation and assumptions are used throughout the paper. We write X 1:d := (X 1 , . . . , X d ), so that F X 1:d (x 1:d ) := P(∀i = 1, . . . , d : X i ≤ x i ), and dx 1:d := dx 1 . . . dx d to denote the variables of integration in f X 1:d (x 1:d )dx 1:d . C ⊥ refers to the independence copula. X ⊥ Y means that X and Y are stochastically independent. For 1 ≤ k ≤ d, the partial derivative of g w.r.t. the k-th argument is denoted by ∂ k g(x 1:d ). We write 1 1 {A} = 1 if A is true, and 1 1 {A} = 0 otherwise. For simplicity, we assume that all random variables are real-valued and continuous. In the following, let d ≥ 3, if not otherwise specified, and C d be the space of absolutely continuous d-dimensional copulas with positive density (a.s.).The distribution function of a random vector U 1:d with uniform margins is denoted by F 1:d = C 1:d ∈ C d . We set I d l := {(i, j) : j = l, . . . , d − 1, i = 1, . . . , d − j} and S ij := i + 1 : i + j − 1 := i + 1, . . . , i + j − 1. We focus on D-vine copulas, but all results carry over to regular vine copulas (Bedford and Cooke[22], Kurowicka and Joe [23]). An overview of the used notation can be found inTable 1. All proofs are deferred to the appendix.
Introduction
Copulas constitute an important tool to model dependence [1,2,3]. While it is easy to construct bivariate copulas, the construction of flexible high-dimensional copulas is a sophisticated problem. The introduction of simplified vine copulas (Joe [4]), or pair-copula constructions (Aas et al. [5]), has been an enormous advance for high-dimensional dependence modeling. Simplified vine copulas are hierarchical structures, constructed upon a sequence of bivariate unconditional copulas, which capture the conditional dependence between pairs of random variables if the data generating process satisfies the simplifying assumption. In this case, all conditional copulas of the data generating vine collapse to unconditional copulas and the true copula can be represented in terms of a simplified vine copula. Vine copula methodology and application have been extensively developed under the simplifying assumption [6,7,8,9,10], with studies showing the superiority of simplified vine copula models over elliptical copulas and nested Archimedean copulas (Aas and Berg [11], Fischer et al. [12]).

Table 1. Notation for simplified D-vine copulas. U_{1:d} has standard uniform margins, d ≥ 3, (i, j) ∈ I^d_1, k = i, i + j.
Notation and explanation:
U_{k|S_ij}:  F_{k|S_ij}(U_k | U_{S_ij}), the conditional probability integral transform (CPIT) of U_k w.r.t. U_{S_ij}
C_{i,i+j; S_ij}:  bivariate conditional copula of F_{i,i+j|S_ij}, i.e., C_{i,i+j; S_ij} = F_{U_{i|S_ij}, U_{i+j|S_ij} | U_{S_ij}}
C^SVC_{i,i+j; S_ij}:  arbitrary bivariate (unconditional) copula that is used to model C_{i,i+j; S_ij}
C^P_{i,i+j; S_ij}:  partial copula of C_{i,i+j; S_ij}, i.e., C^P_{i,i+j; S_ij} = F_{U_{i|S_ij}, U_{i+j|S_ij}}

Fig. 1: (a) Simplified D-vine copula; (b) D-vine copula.

2. Simplified vine copulas, conditional copulas, and higher-order partial copulas
In this section, we discuss (simplified) vine copulas and the simplifying assumption. Thereafter, we introduce the partial copula which can be considered as a generalization of the partial correlation coefficient and as an approximation of a bivariate conditional copula.
Definition 2.1 (Simplified D-vine copula or pair-copula construction -Joe [4], Aas et al. [5])
For $(i,j)\in I^d_1$, let $C^{SVC}_{i,i+j;\,S_{ij}} \in \mathcal{C}_2$ with density $c^{SVC}_{i,i+j;\,S_{ij}}$. For $j=1$ and $i=1,\dots,d-j$, we set $C^{SVC}_{i,i+j;\,S_{ij}} = C^{SVC}_{i,i+1}$ and $u^{SVC}_{k|S_{ij}} = u_k$ for $k=i,i+j$. For $(i,j)\in I^d_2$, define
$u^{SVC}_{i|S_{ij}} := F^{SVC}_{i|S_{ij}}(u_i|u_{S_{ij}}) = \partial_2 C^{SVC}_{i,i+j-1;\,S_{i,j-1}}\big(u^{SVC}_{i|S_{i,j-1}}, u^{SVC}_{i+j-1|S_{i,j-1}}\big)$,
$u^{SVC}_{i+j|S_{ij}} := F^{SVC}_{i+j|S_{ij}}(u_{i+j}|u_{S_{ij}}) = \partial_1 C^{SVC}_{i+1,i+j;\,S_{i+1,j-1}}\big(u^{SVC}_{i+1|S_{i+1,j-1}}, u^{SVC}_{i+j|S_{i+1,j-1}}\big)$.
Then
$c^{SVC}_{1:d}(u_{1:d}) = \prod_{(i,j)\in I^d_1} c^{SVC}_{i,i+j;\,S_{ij}}\big(u^{SVC}_{i|S_{ij}}, u^{SVC}_{i+j|S_{ij}}\big)$
is the density of a d-dimensional simplified D-vine copula $C^{SVC}_{1:d}$. We denote the space of d-dimensional simplified D-vine copulas by $\mathcal{C}^{SVC}_d$.
From a graph-theoretic point of view, simplified (regular) vine copulas can be considered as an ordered sequence of trees, where j refers to the number of the tree and a bivariate unconditional copula $C^{SVC}_{i,i+j;\,S_{ij}}$ is assigned to each of the d − j edges of tree j (Bedford and Cooke [22]). The left hand side of Figure 1 shows the graphical representation of a simplified D-vine copula for d = 4, i.e.,
$c^{SVC}_{1:4}(u_{1:4}) = c^{SVC}_{12}(u_1,u_2)\, c^{SVC}_{23}(u_2,u_3)\, c^{SVC}_{34}(u_3,u_4)\, c^{SVC}_{13;2}\big(u^{SVC}_{1|2}, u^{SVC}_{3|2}\big)\, c^{SVC}_{24;3}\big(u^{SVC}_{2|3}, u^{SVC}_{4|3}\big)\, c^{SVC}_{14;2:3}\big(u^{SVC}_{1|2:3}, u^{SVC}_{4|2:3}\big)$.
The bivariate unconditional copulas C^SVC_{i,i+j; S_ij} are also called pair-copulas, so that the resulting model is often termed a pair-copula construction (PCC). By means of simplified vine copula models one can construct a wide variety of flexible multivariate copulas because each of the d(d − 1)/2 bivariate unconditional copulas C^SVC_{i,i+j; S_ij} can be chosen arbitrarily and the resulting model is always a valid d-dimensional copula. Moreover, a pair-copula construction does not suffer from the curse of dimensionality because it is built upon a sequence of bivariate unconditional copulas, which renders it very attractive for high-dimensional applications. Obviously, not every multivariate copula can be represented by a simplified vine copula. However, every copula can be represented by the following (non-simplified) D-vine copula.
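As an illustration of how such a construction is evaluated, the following sketch (with arbitrarily chosen Clayton pair-copulas and purely illustrative parameter values; it is not tied to any model used later in the paper) computes the density of a three-dimensional simplified D-vine copula from its three pair-copula densities and the h-functions that produce the pseudo-observations of the second tree.

```python
import numpy as np

def clayton_density(u, v, theta):
    """Density of the bivariate Clayton copula, theta > 0."""
    return ((1 + theta) * (u * v) ** (-theta - 1)
            * (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

def clayton_h(u, v, theta):
    """h-function h(u|v) = dC(u,v)/dv, i.e. the conditional cdf of U given V = v."""
    return (v ** (-theta - 1)
            * (u ** -theta + v ** -theta - 1) ** (-1 - 1 / theta))

def svc_density_3d(u1, u2, u3, th12, th23, th13_2):
    """Density of a simplified D-vine copula with Clayton pair-copulas."""
    # first tree
    dens = clayton_density(u1, u2, th12) * clayton_density(u2, u3, th23)
    # pseudo-observations entering the second tree
    u1_g2 = clayton_h(u1, u2, th12)   # F^SVC_{1|2}(u1|u2)
    u3_g2 = clayton_h(u3, u2, th23)   # F^SVC_{3|2}(u3|u2)
    # second tree
    return dens * clayton_density(u1_g2, u3_g2, th13_2)

print(svc_density_3d(0.3, 0.6, 0.8, th12=2.0, th23=1.5, th13_2=0.8))
```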
Definition 2.2 (D-vine copula -Kurowicka and Cooke [24])
Let U 1:d be a random vector with cdf F 1:d = C 1:d ∈ C d . For j = 1 and i = 1, . . . , d−j, we set C i,i+j; Sij = C i,i+1
and u k|Sij = u k for k = i, i + j. For (i, j) ∈ I d 2 , let C i,i+j; Sij denote the conditional copula of F i,i+j|Sij (Definition 2.5) and let u k|Sij := F k|Sij (u k |u Sij ) for k = i, i + j. The density of a D-vine copula decomposes the copula density of U 1:d into d(d − 1)/2 bivariate conditional copula densities c i,i+j; Sij according to the following factorization:
$c_{1:d}(u_{1:d}) = \prod_{(i,j)\in I^d_1} c_{i,i+j;\,S_{ij}}\big(u_{i|S_{ij}}, u_{i+j|S_{ij}} \mid u_{S_{ij}}\big).$
Contrary to a simplified D-vine copula in Definition 2.1, a bivariate conditional copula C i,i+j; Sij , which is in general a function of j + 1 variables, is assigned to each edge of a D-vine copula in Definition 2.2. The influence of the conditioning variables on the conditional copulas is illustrated by dashed lines in the right hand side of Figure 1. In applications, the simplifying assumption is typically imposed, i.e., it is assumed that all bivariate conditional copulas of the data generating vine copula degenerate to bivariate unconditional copulas.
Definition 2.3 (The simplifying assumption -Hobaek Haff et al. [13])
The D-vine copula in Definition 2.2 satisfies the simplifying assumption if c i,i+j; Sij (·, ·|u Sij ) does not depend on u Sij for all (i, j) ∈ I d 2 .
If the data generating copula satisfies the simplifying assumption, it can be represented by a simplified vine copula, resulting in fast and simple statistical inference. Several methods for the consistent specification and estimation of pair-copula constructions have been developed under this assumption (Hobaek Haff [25], Dißmann et al. [6]). However, in view of Definition 2.2 and Definition 2.1 it is evident that it is extremely unlikely that the data generating vine copula strictly satisfies the simplifying assumption in practical applications.
Several questions arise if the data generating process does not satisfy the simplifying assumption and a simplified D-vine copula model (Definition 2.1) is used to approximate a general D-vine copula (Definition 2.2).
First of all, what bivariate unconditional copulas C SVC i,i+j; Sij should be chosen in Definition 2.1 to model the bivariate conditional copulas C i,i+j; Sij in Definition 2.2 so that the best approximation w.r.t. a certain criterion is obtained? What simplified vine copula model do established stepwise procedures (asymptotically) specify and estimate if the simplifying assumption does not hold for the data generating vine copula?
What are the properties of an optimal approximation? Before we address these questions in Section 5, it is useful to recall the definition of the conditional and partial copula in the remainder of this section and to introduce and investigate the partial vine copula in Section 3 and Section 4 because it plays a major role in the approximation of copulas by simplified vine copulas.
Definition 2.4 (Conditional probability integral transform (CPIT))
Let U 1:d ∼ F 1:d ∈ C d , (i, j) ∈ I d 2 and k = i, i + j. We call U k|Sij := F k|Sij (U k |U Sij ) the conditional probability integral transform of U k w.r.t. U Sij .
It can be readily verified that, under the assumptions in Definition 2.4, U k|Sij ∼ U(0, 1) and U k|Sij ⊥ U Sij .
Thus, applying the random transformation F k|Sij (·|U Sij ) to U k removes possible dependencies between U k and U Sij and U k|Sij can be interpreted as the remaining variation in U k that can not be explained by U Sij .
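These two properties of the CPIT can be checked numerically; the following sketch (an illustrative example with an arbitrarily chosen Clayton copula) samples (U_1, U_2) from a Clayton copula, forms U_{1|2} = F_{1|2}(U_1|U_2), and verifies that it is approximately uniform and approximately uncorrelated with U_2, even though U_1 and U_2 are strongly dependent.

```python
import numpy as np
from scipy.stats import kstest, spearmanr

theta = 3.0                       # illustrative Clayton parameter
rng = np.random.default_rng(2)
n = 20_000

def clayton_h(u, v, theta):
    """h(u|v) = dC(u,v)/dv for the Clayton copula."""
    return v ** (-theta - 1) * (u ** -theta + v ** -theta - 1) ** (-1 - 1 / theta)

def clayton_h_inv(w, v, theta):
    """Inverse of h(.|v): returns u with h(u|v) = w (used for sampling)."""
    return ((w * v ** (theta + 1)) ** (-theta / (theta + 1))
            - v ** -theta + 1) ** (-1 / theta)

# Sample (U1, U2) from the Clayton copula via conditional inversion.
u2 = rng.uniform(size=n)
u1 = clayton_h_inv(rng.uniform(size=n), u2, theta)

# CPIT of U1 w.r.t. U2.
u1_given_2 = clayton_h(u1, u2, theta)

print("KS p-value against U(0,1):", kstest(u1_given_2, "uniform").pvalue)
print("Spearman corr(U1,   U2):", spearmanr(u1, u2)[0])
print("Spearman corr(U1|2, U2):", spearmanr(u1_given_2, u2)[0])
```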
This interpretation of the CPIT is crucial for understanding the conditional and partial copula, which are related to the (conditional) joint distribution of CPITs. The conditional copula has been introduced by Patton [26] and we restate its definition here.¹
Definition 2.5 (Bivariate conditional copula - Patton [26])
Let U 1:d ∼ F 1:d ∈ C d and (i, j) ∈ I d 2 .
The (a.s.) unique conditional copula C i,i+j; Sij of the conditional distribution F i,i+j|Sij is defined by
$C_{i,i+j;\,S_{ij}}(a,b\,|\,u_{S_{ij}}) := P\big(U_{i|S_{ij}} \le a,\, U_{i+j|S_{ij}} \le b \,\big|\, U_{S_{ij}} = u_{S_{ij}}\big) = F_{i,i+j|S_{ij}}\big(F^{-1}_{i|S_{ij}}(a|u_{S_{ij}}),\, F^{-1}_{i+j|S_{ij}}(b|u_{S_{ij}}) \,\big|\, u_{S_{ij}}\big).$
Equivalently, we have that
$F_{i,i+j|S_{ij}}(u_i, u_{i+j}\,|\,u_{S_{ij}}) = C_{i,i+j;\,S_{ij}}\big(F_{i|S_{ij}}(u_i|u_{S_{ij}}),\, F_{i+j|S_{ij}}(u_{i+j}|u_{S_{ij}}) \,\big|\, u_{S_{ij}}\big),$
so that the effect of a change in u_{S_ij} on the conditional distribution F_{i,i+j|S_ij}(u_i, u_{i+j}|u_{S_ij}) can be separated into two effects. First, the values of the CPITs, (F_{i|S_ij}(u_i|u_{S_ij}), F_{i+j|S_ij}(u_{i+j}|u_{S_ij})), at which the conditional copula is evaluated, may change. Second, the functional form of the conditional copula C_{i,i+j; S_ij}(·, ·|u_{S_ij}) may vary. In comparison to the conditional copula, which is the conditional distribution of two CPITs, the partial copula is the unconditional distribution and copula of two CPITs.
¹ Patton's notation for the conditional copula is given by C_{i,i+j|S_ij}. Originally, this notation has also been used in the vine copula literature [5,23,27]. However, the current notation for a(n) (un)conditional copula that is assigned to an edge of a vine is given by C_{i,i+j; S_ij}, and C_{i,i+j|S_ij} is used to denote F_{U_i, U_{i+j}|U_{S_ij}} [8,14,28]. In order to avoid possible confusion, we use C_{i,i+j; S_ij} to denote a conditional copula and C^SVC_{i,i+j; S_ij} to denote an unconditional copula.
Definition 2.6 (Bivariate partial copula -Bergsma [15])
Let U 1:d ∼ F 1:d ∈ C d and (i, j) ∈ I d 2 .
The partial copula C P i,i+j; Sij of the distribution F i,i+j|Sij is defined by
C P i,i+j; Sij (a, b) := P(U i|Sij ≤ a, U i+j|Sij ≤ b).
Since U i|Sij ⊥ U Sij and U i+j|Sij ⊥ U Sij , the partial copula represents the distribution of random variables which are individually independent of the conditioning vector U Sij . This is similar to the partial correlation coefficient, which is the correlation of two random variables from which the linear influence of the conditioning vector has been removed. The partial copula can also be interpreted as the expected conditional copula,
$C^{P}_{i,i+j;\,S_{ij}}(a,b) = \int_{\mathbb{R}^{j-1}} C_{i,i+j;\,S_{ij}}(a,b\,|\,u_{S_{ij}})\, dF_{S_{ij}}(u_{S_{ij}}),$
and be considered as an approximation of the conditional copula. Indeed, it is easy to show that the partial copula C P i,i+j; Sij minimizes the Kullback-Leibler divergence from the conditional copula C i,i+j; Sij in the space of absolutely continuous bivariate distribution functions. The partial copula is first mentioned by Bergsma [15] who applies the partial copula to test for conditional independence. Recently, there has been a renewed interest in the partial copula. Spanhel and Kurz [18] investigate properties of the partial copula and mention some explicit examples whereas Gijbels et al. [16,17] and Portier and Segers [19] focus on the non-parametric estimation of the partial copula.
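The averaging interpretation can be illustrated with a single conditioning variable; the following sketch (assuming, purely for illustration, the conditional copula family C_{13;2}(·,·|u_2) = FGM(1 − 2u_2) that also appears in Example 4.1 below) integrates the conditional copula over u_2 and recovers the partial copula, which in this particular case collapses to the independence copula.

```python
import numpy as np
from scipy.integrate import quad

def fgm(a, b, theta):
    """Bivariate FGM copula C(a, b; theta)."""
    return a * b * (1 + theta * (1 - a) * (1 - b))

def partial_copula(a, b):
    """C^P(a, b) = integral over u2 of C_{13;2}(a, b | u2), with U2 ~ U(0,1)."""
    value, _ = quad(lambda u2: fgm(a, b, 1 - 2 * u2), 0.0, 1.0)
    return value

for a, b in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.3)]:
    print(f"C_P({a}, {b}) = {partial_copula(a, b):.6f}   a*b = {a * b:.6f}")
```

Because the parameter 1 − 2u_2 averages to zero over the conditioning variable, the expected conditional copula is the independence copula here, even though every single conditional copula exhibits dependence.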
3. Higher-order partial copulas and the partial vine copula

A generalization of the partial correlation coefficient that is different from the partial copula is given by the higher-order partial copula. To illustrate this relation, let us recall the common definition of the partial correlation coefficient. Assume that all univariate margins of Y_{1:d} have zero mean and finite variance. For k = i, i+j, let P(Y_k|Y_{S_ij}) denote the best linear predictor of Y_k w.r.t. Y_{S_ij} which minimizes the mean squared error, so that Ẽ_{k|S_ij} = Y_k − P(Y_k|Y_{S_ij}) is the corresponding prediction error. The partial correlation coefficient of Y_i and Y_{i+j} given Y_{S_ij} is then defined by ρ_{i,i+j;S_ij} = Corr[Ẽ_{i|S_ij}, Ẽ_{i+j|S_ij}]. An equivalent definition is given as follows. For i = 1, . . . , d − 2, let
E i|i+1 := Y i − P(Y i |Y i+1 ), and E i+2|i+1 := Y i+2 − P(Y i+2 |Y i+1 ).
(3.1)
Moreover, for j = 3, . . . , d − 1, and i = 1, . . . , d − j, define
E i|Sij := E i|Si,j−1 − P(E i|Si,j−1 |E i+j−1|Si,j−1 ), E i+j|Sij := E i+j|Si+1,j−1 − P(E i+j|Si+1,j−1 |E i+1|Si+1,j−1 ). (3.2)
It is easy to show that E k|Sij =Ẽ k|Sij for all k = i, i + j and (i, j) ∈ I d 2 . That is, E k|Sij is the error of the best linear prediction of Y k in terms of Y Sij . Thus, ρ i,i+j;Sij = Corr[E i|Sij , E i+j|Sij ]. However, the interpretation of the partial correlation coefficient as a measure of conditional dependence is different depending on whether one considers it as the correlation of (Ẽ i|Sij ,Ẽ i+j|Sij ) or (E i|Sij , E i+j|Sij ). For instance,
ρ 14;23 = Corr[Ẽ 1|23 ,Ẽ 4|23 ]
can be interpreted as the correlation between Y_1 and Y_4 after each variable has been corrected for the linear influence of Y_{2:3}, i.e., Corr[g(Ẽ_{k|23}), h(Y_{2:3})] = 0 for all linear functions g and h. The idea of the partial copula is to replace the prediction errors E_{1|23} and E_{4|23} by the CPITs U_{1|23} and U_{4|23}, which are independent of Y_{2:3}. On the other side, ρ_{14;23} = Corr[E_{1|23}, E_{4|23}] is the correlation of (E_{1|2}, E_{4|3}) after E_{1|2} has been corrected for the linear influence of E_{3|2}, and E_{4|3} has been corrected for the linear influence of E_{2|3}. Consequently, a different generalization of the partial correlation coefficient emerges if we do not only decorrelate the involved random variables in (3.1) and (3.2) but render them independent by replacing each expression of the form X − P(X|Z) in (3.1) and (3.2) by the corresponding CPIT F_{X|Z}(X|Z). The joint distribution of a resulting pair of random variables is given by the j-th order partial copula, and the set of these copulas together with a vine structure constitutes the partial vine copula.

Definition 3.1 (Partial vine copula)
Consider the D-vine copula C_{1:d} ∈ C_d stated in Definition 2.2. In the first tree, we set for i = 1, . . . , d − 1:
C PVC i,i+1 = C i,i+1
, while in the second tree, we denote for i = 1, . . . , d − 2, k = i, i + 2 :
C PVC i,i+2; i+1 = C P i,i+2; i+1 and U PVC k|i+1 = U k|i+1 = F k|i+1 (U k |U i+1 ).
In the remaining trees j = 3, . . . , d − 1, for i = 1, . . . , d − j, we define
U PVC i|Sij := F PVC i|Sij (U i |U Sij ) := ∂ 2 C PVC i,i+j−1; Si,j−1 (U PVC i|Si,j−1 , U PVC i+j−1|Si,j−1 ), U PVC i+j|Sij := F PVC i+j|Sij (U i+j |U Sij ) := ∂ 1 C PVC i+1,i+j; Si+1,j−1 (U PVC i+1|Si+1,j−1 , U PVC i+j|Si+1,j−1 ), and C PVC i,i+j; Sij (a, b) := P(U PVC i|Sij ≤ a, U PVC i+j|Sij ≤ b).
We call the resulting simplified vine copula C PVC 1:d the partial vine copula (PVC) of C 1:d . Its density is given by
$c^{PVC}_{1:d}(u_{1:d}) := \prod_{(i,j)\in I^d_1} c^{PVC}_{i,i+j;\,S_{ij}}\big(u^{PVC}_{i|S_{ij}}, u^{PVC}_{i+j|S_{ij}}\big).$
For k = i, i + j, we call U PVC k|Sij the (j − 2)-th order partial probability integral transform (PPIT) of U k w.r.t. U Sij and C PVC i,i+j; Sij the (j − 1)-th order partial copula of F i,i+j|Sij that is induced by C PVC 1:d .
Note that the first-order partial copula coincides with the partial copula of a conditional distribution with one conditioning variable. If j ≥ 3, we call C PVC i,i+j; Sij a higher-order partial copula. It is easy to show that, for all (i, j)
∈ I d 1 , U PVC i|Sij is the CPIT of U PVC i|Si,j−1 w.r.t. U PVC i+j−1|Si,j−1 and U PVC i+j|Sij is the CPIT of U PVC i+j|Si+1,j−1 w.r.t. U PVC i+1|Si+1,j−1 .
Thus, PPITs are uniformly distributed and higher-order partial copulas are indeed copulas. Since U PVC i|Sij is the CPIT of U PVC i|Si,j−1 w.r.t. U PVC i+j−1|Si,j−1 , it is independent of U PVC i+j−1|Si,j−1 . However, in general it is not true that U PVC i|Sij ⊥ U Sij as the following proposition clarifies. For (i, j) ∈ I d 2 and k = i, i + j, it holds:
U PVC k|Sij ⊥ U Sij ⇔ U PVC k|Sij = U k|Sij (a.s.). Note that (U PVC i|Sij , U PVC i+j|Sij ) = (U i|Sij , U i+j|Sij ) (a.s.) if and only if C PVC i,i+j; Sij = C P i,i+j; Sij .
Consequently, if a higher-order partial copula does not coincide with the partial copula, it describes the distribution of a pair of uniformly distributed random variables which are neither jointly nor individually independent of the conditioning variables of the corresponding conditional copula. Thus, if the simplifying assumption holds, then C 1:d = C PVC 1:d , i.e., higher-order partial copulas, partial copulas and conditional copulas coincide. This insight is used by [21] to develop tests for the simplifying assumption in high-dimensional vine copulas.
Let k = i, i + j, and G PVC k|Sij (t k |t Sij ) = (F PVC k|Sij ) −1 (t k |t Sij ) denote the inverse of F PVC k|Sij (·|t Sij ) w.r.t. the first argument. A (j − 1)-th order partial copula is then given by
$C^{PVC}_{i,i+j;\,S_{ij}}(a,b) = P\big(U^{PVC}_{i|S_{ij}} \le a,\, U^{PVC}_{i+j|S_{ij}} \le b\big) = E\Big[P\big(U^{PVC}_{i|S_{ij}} \le a,\, U^{PVC}_{i+j|S_{ij}} \le b \,\big|\, U_{S_{ij}}\big)\Big] = \int_{[0,1]^{j-1}} C_{i,i+j;\,S_{ij}}\Big(F_{i|S_{ij}}\big(G^{PVC}_{i|S_{ij}}(a|t_{S_{ij}})\,\big|\,t_{S_{ij}}\big),\, F_{i+j|S_{ij}}\big(G^{PVC}_{i+j|S_{ij}}(b|t_{S_{ij}})\,\big|\,t_{S_{ij}}\big) \,\Big|\, t_{S_{ij}}\Big)\, dF_{S_{ij}}(t_{S_{ij}}).$
If j ≥ 3, C^{PVC}_{i,i+j; S_ij} depends on F_{i|S_ij}, F_{i+j|S_ij}, C_{i,i+j; S_ij}, and F_{S_ij}, i.e., it depends on C_{i:i+j}. Moreover, C^{PVC}_{i,i+j; S_ij} also depends on G^{PVC}_{i|S_ij} and G^{PVC}_{i+j|S_ij}, which are determined by the regular vine structure. Thus, the corresponding PVCs of different regular vines may be different. In particular, if the simplifying assumption does not hold, higher-order partial copulas of different PVCs which refer to the same conditional distribution may not be identical. This is different from the partial correlation coefficient or the partial copula, which do not depend on the structure of the regular vine.
In general, higher-order partial copulas do not share the simple interpretation of the partial copula because they can not be considered as expected conditional copulas. However, higher-order partial copulas can be more attractive from a practical point of view. The estimation of the partial copula of C i,i+j; Sij requires the estimation of the two j-dimensional conditional cdfs F i|Sij and F i+j|Sij to construct pseudo-observations from the CPITs (U i|Sij , U i+j|Sij ). As a result, a non-parametric estimation of the partial copula is only sensible if j is very small. In contrast, a higher-order partial copula is the distribution of two PPITs (U PVC i|Sij , U PVC i+j|Sij ) which are made up of only two-dimensional functions (Definition 3.1). Thus, the non-parametric estimation of a higher-order partial copula does not suffer from the curse of dimensionality and is also sensible for large j [20]. But also in a parametric framework the specification of the model family is much easier for a higher-order partial copula than for a conditional copula. This renders higher-order partial copulas very attractive from a modeling point of view to analyze and estimate bivariate conditional dependencies. As we show in Section 6, the PVC is also the probability limit of many estimators of pair-copula constructions and thus of great practical importance.
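The following sketch (with arbitrarily chosen FGM pair-copulas standing in for the lower-order partial copulas of a four-dimensional D-vine; all parameter values are illustrative) shows how the PPITs entering the third tree are obtained from bivariate h-functions only, which is why their computation does not suffer from the curse of dimensionality.

```python
import numpy as np

def fgm_h(a, b, theta):
    """h-function of the FGM copula: h(a|b) = dC(a,b)/db."""
    return a + theta * a * (1 - a) * (1 - 2 * b)

# Illustrative pair-copula parameters of a 4-dimensional D-vine.
th12, th23, th34 = 0.7, 0.5, 0.6      # tree 1
th13_2, th24_3 = 0.4, 0.3             # tree 2 (first-order partial copulas)

rng = np.random.default_rng(3)
u = rng.uniform(size=(10_000, 4))     # placeholder data with uniform margins

# Tree 1 -> tree 2 pseudo-observations.
u1_g2 = fgm_h(u[:, 0], u[:, 1], th12)     # U^PVC_{1|2}
u3_g2 = fgm_h(u[:, 2], u[:, 1], th23)     # U^PVC_{3|2}
u2_g3 = fgm_h(u[:, 1], u[:, 2], th23)     # U^PVC_{2|3}
u4_g3 = fgm_h(u[:, 3], u[:, 2], th34)     # U^PVC_{4|3}

# Tree 2 -> tree 3 pseudo-observations (first-order PPITs).
u1_g23 = fgm_h(u1_g2, u3_g2, th13_2)      # U^PVC_{1|2:3}
u4_g23 = fgm_h(u4_g3, u2_g3, th24_3)      # U^PVC_{4|2:3}

# If u were a sample from the data generating copula and the pair-copulas above
# were the (estimated) lower-order partial copulas, the empirical distribution
# of (u1_g23, u4_g23) would estimate the second-order partial copula C^PVC_{14;2:3}.
print(np.corrcoef(u1_g23, u4_g23)[0, 1])
```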
Properties of the partial vine copula and examples
In this section, we analyze to what extent the PVC describes the dependence structure of the data generating copula if the simplifying assumption does not hold. We first investigate whether the bivariate margins of C PVC 1:d match the bivariate margins of C 1:d and then take a closer look at conditional independence relations. By construction, the bivariate margins C PVC i,i+1 , i = 1, . . . , d−1, of the PVC given in Definition 3.1 are identical to the corresponding margins C i,i+1 , i = 1, . . . , d − 1, of C 1:d . That is because the PVC explicitly specifies these d − 1 margins in the first tree of the vine. The other bivariate margins C PVC i,i+j , where (i, j) ∈ I d 2 , are implicitly specified and given by
$C^{PVC}_{i,i+j}(u_i, u_{i+j}) = \int_{[0,1]^{j-1}} C^{PVC}_{i,i+j;\,S_{ij}}\big(F^{PVC}_{i|S_{ij}}(u_i|u_{S_{ij}}),\, F^{PVC}_{i+j|S_{ij}}(u_{i+j}|u_{S_{ij}})\big)\, dC^{PVC}_{S_{ij}}(u_{S_{ij}}).$
The relation between the implicitly given bivariate margins of the PVC and the underlying copula are summarized in the following lemma.
Lemma 4.1 (Implicitly specified margins of the PVC)
Let C 1:d ∈ C d \C SVC d , (i, j) ∈ I d 2 ,
and τ E and ρ E denote Kendall's τ and Spearman's ρ of the copula E ∈ C 2 . In general, it holds that
$C^{PVC}_{i,i+j} \neq C_{i,i+j}$, $\ \rho_{C^{PVC}_{i,i+j}} \neq \rho_{C_{i,i+j}}$, and $\ \tau_{C^{PVC}_{i,i+j}} \neq \tau_{C_{i,i+j}}$.
The next example provides an example of a three-dimensional PVC and illustrates the results of Lemma 4.1.

Example 4.1
Let $C_{FGM2}(\theta)$ denote the bivariate FGM copula
C F GM2 (u 1 , u 2 ; θ) = u 1 u 2 [1 + θ(1 − u 1 )(1 − u 2 )], |θ| ≤ 1,
and C A (γ) denote the following asymmetric version of the FGM copula ( [1], Example 3.16)
$C_A(u_1,u_2;\gamma) = u_1u_2\,[1 + \gamma u_1(1-u_1)(1-u_2)],\quad |\gamma|\le 1. \qquad (4.1)$
Assume that $C_{12} = C_A(\gamma)$, $C_{23} = C_\perp$, $C_{13;2}(\cdot,\cdot|u_2) = C_{FGM2}(\cdot,\cdot;\,1-2u_2)$ for all $u_2$, so that
$C_{1:3}(u_{1:3}) = \int_0^{u_2} C_{FGM2}\big(\partial_2 C_A(u_1,t_2),\, u_3;\, 1-2t_2\big)\, dt_2.$
Elementary computations show that the implicit margin is given by
$C_{13}(u_1,u_3) = u_1u_3\,[\gamma(u_1 - 3u_1^2 + 2u_1^3)(1-u_3) + 3]/3,$
which is a copula with quartic sections in $u_1$ and square sections in $u_3$ if $\gamma \neq 0$. The corresponding PVC is
$C^{PVC}_{1:3}(u_{1:3}) = \int_0^{u_2} C^{PVC}_{13;2}\big(F_{1|2}(u_1|t_2),\, F_{3|2}(u_3|t_2)\big)\, dt_2 \;\overset{C^{PVC}_{13;2}=C_\perp}{=}\; u_3\int_0^{u_2} \partial_2 C_A(u_1,t_2)\, dt_2,$
and the implicit margin of $C^{PVC}_{1:3}$ is $C^{PVC}_{13}(u_1,u_3) = C^{PVC}_{1:3}(u_1,1,u_3) = u_1u_3$. Moreover, $\rho_{C_{13}} = -\gamma/1080$, $\tau_{C_{13}} = -\gamma/135$, but $\rho_{C^{PVC}_{13}} = \tau_{C^{PVC}_{13}} = 0$.
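The implicit margin in Example 4.1 can be verified numerically; the following sketch (for an arbitrarily chosen value of γ) integrates the conditional copula over the conditioning variable and compares the result with the closed-form expression for C_13 stated above.

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.8   # illustrative value, |gamma| <= 1

def F1_2(u1, u2):
    """F_{1|2}(u1|u2) = d/du2 C_A(u1, u2; gamma)."""
    return u1 + gamma * u1 ** 2 * (1 - u1) * (1 - 2 * u2)

def fgm(a, b, theta):
    return a * b * (1 + theta * (1 - a) * (1 - b))

def C13_numeric(u1, u3):
    """Implicit margin: integrate C_{13;2}(F_{1|2}, F_{3|2} | u2) over u2
    (note that F_{3|2}(u3|u2) = u3 because C_23 is the independence copula)."""
    value, _ = quad(lambda t: fgm(F1_2(u1, t), u3, 1 - 2 * t), 0.0, 1.0)
    return value

def C13_closed(u1, u3):
    return u1 * u3 * (gamma * (u1 - 3 * u1 ** 2 + 2 * u1 ** 3) * (1 - u3) + 3) / 3

for u1, u3 in [(0.2, 0.4), (0.5, 0.9), (0.7, 0.3)]:
    print(u1, u3, C13_numeric(u1, u3), C13_closed(u1, u3))
```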
Higher-order partial copulas can also be used to construct new measures of conditional dependence. For instance, if X 1:d is a random vector with copula C 1:d ∈ C d , higher-order partial Spearman's ρ and Kendall's τ of X i and X i+j given X Sij are defined by
$\tau_{C^{PVC}_{i,i+j;\,S_{ij}}} = 4\int_{[0,1]^2} C^{PVC}_{i,i+j;\,S_{ij}}(a,b)\, dC^{PVC}_{i,i+j;\,S_{ij}}(a,b) - 1, \qquad \rho_{C^{PVC}_{i,i+j;\,S_{ij}}} = 12\int_{[0,1]^2} C^{PVC}_{i,i+j;\,S_{ij}}(a,b)\, da\, db - 3.$
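As a small numerical illustration of these formulas, the following sketch evaluates both integrals for an FGM copula with parameter 1/9 (the second-order partial copula that appears in Example 4.2 below); for the FGM family the known closed forms are τ = 2θ/9 and ρ = θ/3, which the quadrature reproduces.

```python
from scipy.integrate import dblquad

theta = 1 / 9   # parameter of the FGM copula used as illustration

def C(a, b):
    return a * b * (1 + theta * (1 - a) * (1 - b))

def c(a, b):
    """FGM copula density."""
    return 1 + theta * (1 - 2 * a) * (1 - 2 * b)

# Kendall's tau: 4 * int C dC - 1, with dC(a, b) = c(a, b) da db.
tau = 4 * dblquad(lambda b, a: C(a, b) * c(a, b), 0, 1, 0, 1)[0] - 1
# Spearman's rho: 12 * int C da db - 3.
rho = 12 * dblquad(lambda b, a: C(a, b), 0, 1, 0, 1)[0] - 3

print("tau:", tau, " closed form 2*theta/9 =", 2 * theta / 9)
print("rho:", rho, " closed form theta/3   =", theta / 3)
```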
Note that all dependence measures that are derived from a higher-order partial copula are defined w.r.t. a regular vine structure and that they coincide with their conditional analogues if the simplifying assumption holds. A partial correlation coefficient of zero is commonly interpreted as an indication of conditional independence, although this can be quite misleading if the underlying distribution is not close to a Normal distribution (Spanhel and Kurz [18]). Therefore, one might wonder to what extent higher-order partial copulas can be used to check for conditional independencies. If C PVC i,i+j; Sij equals the independence copula, we say that X i and X i+j are (j-th order) partially independent given X Sij and write X i
PVC ⊥ X i+j |X Sij .
The following theorem establishes that there is in general no relation between conditional independence and higher-order partial independence.
Let d ≥ 4, (i, j) ∈ I d 1 , and C 1:d ∈ C d \C SVC d
be the copula of X 1:d . It holds that
$X_i \perp X_{i+2}\,|\,X_{i+1} \;\Rightarrow\; X_i \overset{PVC}{\perp} X_{i+2}\,|\,X_{i+1}$, $\quad \forall j\ge 3:\; X_i \perp X_{i+j}\,|\,X_{S_{ij}} \;\not\Rightarrow\; X_i \overset{PVC}{\perp} X_{i+j}\,|\,X_{S_{ij}}$, and $\quad \forall j\ge 2:\; X_i \perp X_{i+j}\,|\,X_{S_{ij}} \;\not\Leftarrow\; X_i \overset{PVC}{\perp} X_{i+j}\,|\,X_{S_{ij}}$.
The next five-dimensional example illustrates higher-order partial copulas, higher-order PPITs, and the relation between partial independence and conditional independence.
Example 4.2
Consider the following exchangeable D-vine copula C 1:5 which does not satisfy the simplifying assumption:
$C_{12} = C_{23} = C_{34} = C_{45}$, $\ C_{13;2} = C_{24;3} = C_{35;4}$, $\ C_{14;2:3} = C_{25;3:4}$, $\ C_{12} = C_\perp$, $\qquad$ (4.2)
$C_{13;2}(a,b|u_2) = C_{FGM2}(a,b;\,1-2u_2)$ for all $(a,b,u_2)\in[0,1]^3$, $\qquad$ (4.3)
$C_{14;2:3} = C_\perp$, $\qquad$ (4.4)
$C_{15;2:4} = C_\perp$, $\qquad$ (4.5)
where $C_{i,i+j;\,S_{ij}} = C_\perp$ means that $C_{i,i+j;\,S_{ij}}(a,b|u_{S_{ij}}) = ab$ for all $(a,b,u_{S_{ij}})\in[0,1]^{j+1}$.
All conditional copulas of the vine copula in Example 4.2 correspond to the independence copula except for the second tree. Note that for all i = 1, 2, 3,
$(U_i, U_{i+1}, U_{i+2}) \sim C_{FGM3}(1)$, where $C_{FGM3}(u_{1:3};\theta) = \prod_{i=1}^{3} u_i + \theta\prod_{i=1}^{3} u_i(1-u_i)$, $|\theta|\le 1$, is the three-dimensional FGM copula.
The left panel of Figure 2 illustrates the D-vine copula of the data generating process. We now investigate the PVC of C_{1:5}, which is illustrated in the right panel of Figure 2. Since C_{1:5} and C^{PVC}_{1:5} are exchangeable copulas, we only report the PPITs U^{PVC}_{1|2}, U^{PVC}_{1|2:3} and U^{PVC}_{1|2:4} in the following lemma.

Fig. 2: (a) Vine copula in Example 4.2; (b) PVC of Example 4.2.

Lemma 4.2: It holds that
$C^{PVC}_{12} = C_\perp$, $\ C^{PVC}_{13;2} = C_\perp$, $\ C^{PVC}_{14;2:3}(a,b) = C_{FGM2}(a,b;\,1/9)$ for all $(a,b)\in[0,1]^2$, $\ C^{PVC}_{15;2:4} = C_\perp$,
and
$U^{PVC}_{1|2} = U_1 = U_{1|2}$,
$U^{PVC}_{1|2:3} = U_1 \neq U_{1|2:3} = U_1[1 + (1-U_1)(1-2U_2)(1-2U_3)]$,
$U^{PVC}_{1|2:4} = U_1[1 + \theta(1-U_1)(1-2U_4)] \neq U_{1|2:4} \neq U_{1|2:3}$, with $\theta = 1/9$.
Lemma 4.2 demonstrates that j-th order partial copulas may not be independence copulas, although the corresponding conditional copulas are independence copulas. In particular, under the data generating process the edges of the third tree of C 1:5 are independence copulas. Neglecting the conditional copulas in the second tree and replacing them with first-order partial copulas induces spurious dependencies in the third tree of C PVC 1:5 . The introduced spurious dependence also carries over to the fourth tree where we have (conditional) independence in fact. Nevertheless, the PVC reproduces the bivariate margins of C 1:5 pretty well. It can be readily verified that (C PVC 13 , C PVC 14 , C PVC 24 , C PVC 25 , C PVC 35 ) = (C 13 , C 14 , C 24 , C 25 , C 35 ), i.e., except for C PVC 15 , all bivariate margins of C PVC 1:5 match the bivariate margins of C 1:5 in Example 4.2. Moreover, the mutual information in the third and fourth tree are larger if higher-order partial copulas are used instead of the true conditional copulas. Thus, the spurious dependence in the third and fourth tree decreases the Kullback-Leibler divergence from C 1:5 and therefore acts as a countermeasure for the spurious (conditional) independence in the second tree. Lemma 4.2 also reveals that U 1|2:4 is a function of U 2 and U 3 , i.e. the true conditional distribution function F 1|2:4 depends on u 2 and u 3 . In contrast, F PVC 1|2:4 , the resulting model for F 1|2:4 which is implied by the PVC, depends only on u 4 . That is, the implied conditional distribution function of the PVC depends on the conditioning variable which actually has no effect.
Approximations based on the partial vine copula
The specification and estimation of SVCs is commonly based on procedures that asymptotically minimize the Kullback-Leibler divergence (KLD) in a stepwise fashion. For instance, if a parametric vine copula model is used, the step-by-step ML estimator (Hobaek Haff [29,25]), where one estimates tree after tree and sequentially minimizes the estimated KLD conditional on the estimates from the previous trees, is often employed in order to select and estimate the parametric pair-copula families of the vine. But also the non-parametric methods of Kauermann and Schellhase [9] and Nagler and Czado [20] proceed in a stepwise manner and asymptotically minimize the KLD of each pair-copula separately under appropriate conditions.
In this section, we investigate the role of the PVC when it comes to approximating non-simplified vine copulas.
Let C 1:d ∈ C d and C SVC 1:d ∈ C SVC d .
The KLD of C SVC 1:d from the true copula C 1:d is given by
$D_{KL}\big(C_{1:d}\,\|\,C^{SVC}_{1:d}\big) = E\Big[\log \frac{c_{1:d}(U_{1:d})}{c^{SVC}_{1:d}(U_{1:d})}\Big],$
where the expectation is taken w.r.t. the true distribution C 1:d . We now decompose the KLD into the Kullback-Leibler divergences related to each of the d − 1 trees. For this purpose, let j = 1, . . . , d − 1 and define
$\mathcal{T}_j := \Big\{\big(C^{SVC}_{i,i+j;\,S_{ij}}\big)_{i=1,\dots,d-j} :\; C^{SVC}_{i,i+j;\,S_{ij}} \in \mathcal{C}_2 \text{ for } 1\le i\le d-j\Big\},$
so that T 1:j = × j k=1 T k represents all possible SVCs up to and including the j-th tree. Let T j ∈ T j , T 1:j−1 ∈ T 1:j−1 . The KLD of the SVC associated with T 1:d−1 is given by
$D_{KL}\big(C_{1:d}\,\|\,T_{1:d-1}\big) = \sum_{j=1}^{d-1} D^{(j)}_{KL}\big(T_j(T_{1:j-1})\big), \qquad (5.1)$
where $D^{(1)}_{KL}(T_1(T_{1:0})) := D^{(1)}_{KL}(T_1) := \sum_{i=1}^{d-1} E\Big[\log \frac{c_{i,i+1}(U_i, U_{i+1})}{c^{SVC}_{i,i+1}(U_i, U_{i+1})}\Big]$
denotes the KLD related to the first tree, and for the remaining trees j = 2, . . . , d − 1, the related KLD is
$D^{(j)}_{KL}\big(T_j(T_{1:j-1})\big) := \sum_{i=1}^{d-j} E\Big[\log \frac{c_{i,i+j;\,S_{ij}}\big(U_{i|S_{ij}}, U_{i+j|S_{ij}} \mid U_{S_{ij}}\big)}{c^{SVC}_{i,i+j;\,S_{ij}}\big(U^{SVC}_{i|S_{ij}}, U^{SVC}_{i+j|S_{ij}}\big)}\Big].$
For instance, if d = 3, the KLD can be decomposed into the KLD related to the first tree $D^{(1)}_{KL}(T_1)$ and to the second tree $D^{(2)}_{KL}(T_2(T_1))$.
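The decomposition can be illustrated with a small Monte Carlo experiment; the following sketch (using an arbitrarily chosen non-simplified example with independence copulas in the first tree and C_{13;2}(·,·|u_2) = FGM(1 − 2u_2) in the second tree, so that the first-order partial copula is the independence copula) estimates the KLD of simplified D-vine copulas that place an FGM(θ) copula in the second tree. The estimated KLD is smallest near θ = 0, i.e., at the partial copula, which anticipates the tree-wise optimality of the PVC discussed below.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# DGP: C12 = C23 = independence, C_{13;2}(.,.|u2) = FGM(1 - 2*u2).
# Its copula density is c(u) = 1 + (1-2u1)(1-2u2)(1-2u3) (a trivariate FGM copula).
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)
w = rng.uniform(size=n)
# Sample U3 by inverting F(u3|u1,u2) = u3 + k*u3*(1-u3), with k = (1-2u1)(1-2u2).
k = (1 - 2 * u1) * (1 - 2 * u2)
k_safe = np.where(np.abs(k) < 1e-12, 1.0, k)
root = ((1 + k) - np.sqrt((1 + k) ** 2 - 4 * k * w)) / (2 * k_safe)
u3 = np.where(np.abs(k) < 1e-12, w, root)

log_c_true = np.log(1 + (1 - 2 * u1) * (1 - 2 * u2) * (1 - 2 * u3))

def kld_svc(theta):
    """Estimated KLD from the true copula of the SVC with independence copulas
    in tree 1 and an FGM(theta) copula in tree 2."""
    log_c_svc = np.log(1 + theta * (1 - 2 * u1) * (1 - 2 * u3))
    return np.mean(log_c_true - log_c_svc)

for theta in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print(f"theta = {theta:+.1f}   estimated KLD = {kld_svc(theta):.5f}")
```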
Note that the KLD related to tree j depends on the specified copulas in the lower trees because they determine at which values the copulas in tree j are evaluated. The following theorem shows that, if one sequentially minimizes the KLD related to each tree, then the optimal SVC is the PVC. can be used to further minimize the KLD. Stöber et al. [14] note in their appendix that if C 1:3 is a FGM copula and the copulas in the first tree are correctly specified, then the KLD from the true distribution has an extremum at C SVC 13; 2 = C ⊥ = C PVC 13; 2 . If C 13; 2 belongs to a parametric family of bivariate copulas whose parameter depends on u 2 , then C PVC 13; 2 is in general not a member of the same copula family with a constant parameter, see Spanhel and Kurz [18]. Together with Theorem 5.1 it follows that the proposed simplified vine copula approximations of Hobaek Haff et al. [13] and Stöber et al. [14] can be improved if the first-order partial copula is chosen in the second tree, and not a copula of the same parametric family as the conditional copula but with a constant dependence parameter such that the KLD is minimized.
Besides its interpretation as a generalization of the partial correlation matrix, the PVC can also be interpreted as the SVC that minimizes the KLD tree-by-tree. This sequential minimization neglects that the KLD related to a tree depends on the copulas that are specified in the former trees. For instance, if $d = 3$, the KLD of the first tree $D^{(1)}_{\mathrm{KL}}(T_1)$ is minimized over the copulas $(C^{\mathrm{SVC}}_{12}, C^{\mathrm{SVC}}_{23})$ in the first tree $T_1$, but the effect of the chosen copulas in the first tree on the KLD related to the second tree $D^{(2)}_{\mathrm{KL}}(T_2(T_1))$ is not taken into account. Therefore, we now analyze whether the PVC also globally minimizes the KLD. Note that specifying the wrong margins in the first tree $T_1$, e.g., $(C^{\mathrm{SVC}}_{12}, C^{\mathrm{SVC}}_{23}) \neq (C_{12}, C_{23})$, increases $D^{(1)}_{\mathrm{KL}}(T_1)$ in any case. Thus, without any further investigation, it is absolutely indeterminate whether the definite increase in $D^{(1)}_{\mathrm{KL}}(T_1)$ can be overcompensated by a possible decrease in $D^{(2)}_{\mathrm{KL}}(T_2(T_1))$ if another approximation is chosen. The next theorem shows that the PVC is in general not the global minimizer of the KLD.

Theorem 5.2 (Global KLD minimization if $C_{1:d} \in \mathcal{C}^{\mathrm{SVC}}_d$ or $C_{1:d} \in \mathcal{C}_d\backslash\mathcal{C}^{\mathrm{SVC}}_d$)
If $C_{1:d} \in \mathcal{C}^{\mathrm{SVC}}_d$, i.e., the simplifying assumption holds for $C_{1:d}$, then
$\arg\min_{C^{\mathrm{SVC}}_{1:d} \in \mathcal{C}^{\mathrm{SVC}}_d} D_{\mathrm{KL}}(C_{1:d}\,\|\,C^{\mathrm{SVC}}_{1:d}) = C^{\mathrm{PVC}}_{1:d}. \qquad (5.3)$
If the simplifying assumption does not hold for $C_{1:d}$, then $C^{\mathrm{PVC}}_{1:d}$ might not be a global minimum. That is,
$\exists\, C_{1:d} \in \mathcal{C}_d\backslash\mathcal{C}^{\mathrm{SVC}}_d$ such that $\arg\min_{C^{\mathrm{SVC}}_{1:d} \in \mathcal{C}^{\mathrm{SVC}}_d} D_{\mathrm{KL}}(C_{1:d}\,\|\,C^{\mathrm{SVC}}_{1:d}) \neq C^{\mathrm{PVC}}_{1:d}, \qquad (5.4)$
$\exists\, C_{1:d} \in \mathcal{C}_d\backslash\mathcal{C}^{\mathrm{SVC}}_d$ such that, for all $(T_2,\dots,T_{d-1})$, $\arg\min_{T_{1:d-1} \in \mathcal{T}_{1:d-1}} D_{\mathrm{KL}}(C_{1:d}\,\|\,T_{1:d-1}) \neq (T^{\mathrm{PVC}}_1, T_2, \dots, T_{d-1}). \qquad (5.5)$

Theorem 5.2 states that, if the simplifying assumption does not hold, the KLD may not be minimized by choosing the true copulas in the first tree, first-order partial copulas in the second tree and higher-order partial copulas in the remaining trees (see (5.4)). It follows that, if the objective is the minimization of the KLD, it may not be optimal to specify the true copulas in the first tree, no matter what bivariate copulas are specified in the other trees (see (5.5)). This rather puzzling result can be explained by the fact that, if the simplifying assumption does not hold, then the approximation error of the implicitly modeled bivariate margins is not minimized (see Lemma 4.1). For instance, if $d = 3$, a departure from the true copulas $(C_{12}, C_{23})$ in the first tree increases the KLD related to the first tree, but it can decrease the KLD of the implicitly modeled margin $C^{\mathrm{SVC}}_{13}$ from $C_{13}$. As a result, the increase in $D^{(1)}_{\mathrm{KL}}$ can be overcompensated by a larger decrease in $D^{(2)}_{\mathrm{KL}}$, so that the KLD can be decreased. Theorem 5.2 does not imply that the PVC never minimizes the KLD from the true copula. For instance, if $d = 3$ and $C^{\mathrm{PVC}}_{13;2} = C^{\perp}$, then $D_{\mathrm{KL}}(C_{1:3}\,\|\,C^{\mathrm{PVC}}_{1:3})$ is an extremum of $T_1 \mapsto D_{\mathrm{KL}}(C_{1:3}\,\|\,(T_1,(C^{\perp})))$.

It is an open problem whether and when the PVC can be the global minimizer of the KLD. Unfortunately, the simplified vine copula approximation that globally minimizes the KLD is not tractable. However, if the simplified vine copula approximation that minimizes the KLD does not specify the true copulas in the first tree, the random variables in the higher trees are not CPITs. Thus, it is not guaranteed that these random variables are uniformly distributed, and we could further decrease the KLD by assigning pseudo-copulas (Fermanian and Wegkamp [30]) to the edges in the higher trees. It can be easily shown that the resulting best approximation is then a pseudo-copula. Consequently, the best approximation satisfying the simplifying assumption is in general not an SVC but a simplified vine pseudo-copula if one considers the space of regular vines where each edge corresponds to a bivariate cdf.
While the PVC may not be the best approximation in the space of SVCs, it is the best feasible SVC approximation in practical applications. That is because the stepwise specification and estimation of an SVC is also feasible for (very) large dimensions which is not true for a joint specification and estimation.
For instance, if all pair-copula families of a parametric vine copula are chosen simultaneously and the selection is done by means of information criteria, we have to estimate $K^{d(d-1)/2}$ different models, where $d$ is the dimension and $K$ the number of possible pair-copula families that can be assigned to each edge.
On the contrary, a stepwise procedure only requires the estimation of Kd(d − 1)/2 models. To illustrate the computational burden, consider the R-package VineCopula [31] where K = 40. For this number of pair-copula families, a joint specification requires the estimation of 64,000 (d = 3) or more than four billion (d = 4) models whereas only 120 (d = 3) or 240 (d = 4) models are needed for a stepwise specification. For many nonparametric estimation approaches (kernels [20], empirical distributions [32]), only the sequential estimation of an SVC is possible. The only exception is the spline-based approach of Kauermann and Schellhase [9].
However, due to the large number of parameters and the resulting computational burden, a joint estimation is only feasible for d ≤ 5 [33].
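The combinatorial gap between joint and stepwise family selection can be checked directly; the following snippet reproduces the counts quoted above (K = 40 candidate families and d(d-1)/2 edges).

K = 40
for d in (3, 4):
    edges = d * (d - 1) // 2
    # d=3: 64,000 vs. 120 models; d=4: 4,096,000,000 vs. 240 models
    print(f"d={d}: joint selection {K**edges:,} models, stepwise selection {K*edges} models")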
Convergence to the partial vine copula
If the data generating process satisfies the simplifying assumption, consistent stepwise procedures for the specification and estimation of parametric and non-parametric simplified vine copula models asymptotically minimize the KLD from the true copula. Theorem 5.1 implies that this is not true in general if the data generating process does not satisfy the simplifying assumption. An implication of this result for the application of SVCs is pointed out in the next corollary.
Corollary 6.1
Denote the sample size by $N$. Let $C_{1:d} \in \mathcal{C}_d$ be the data generating copula and $C^{\mathrm{SVC}}_{1:d}(\theta) \in \mathcal{C}^{\mathrm{SVC}}_d$, $\theta \in \Theta$, be a parametric SVC such that there exists a unique $\theta^{\mathrm{PVC}} \in \Theta$ with $C^{\mathrm{SVC}}_{1:d}(\theta^{\mathrm{PVC}}) = C^{\mathrm{PVC}}_{1:d}$. The pseudo-true parameters which minimize the KLD from the true distribution are assumed to exist (see White [34] for sufficient conditions) and are denoted by $\theta^{\star} = \arg\min_{\theta\in\Theta} D_{\mathrm{KL}}(C_{1:d}\,\|\,C^{\mathrm{SVC}}_{1:d}(\theta))$. Let $\hat\theta_S$ denote the (semi-parametric) step-by-step ML estimator and $\hat\theta_J$ denote the (semi-parametric) joint ML estimator defined in Hobaek Haff [29, 25]. Under regularity conditions (e.g., Condition 1 and Condition 2 in [35]) and for $N \to \infty$, it holds that:

(i) $\hat\theta_S \overset{p}{\to} \theta^{\mathrm{PVC}}$. (ii) $\hat\theta_J \overset{p}{\to} \theta^{\star}$. (iii) $\exists\, C_{1:d} \in \mathcal{C}_d\backslash\mathcal{C}^{\mathrm{SVC}}_d$ such that $\hat\theta_S$ does not converge in probability to $\theta^{\star}$.
Corollary 6.1 shows that the step-by-step and joint ML estimator may not converge to the same limit (in probability) if the simplifying assumption does not hold for the data generating vine copula. For this reason, we investigate in the following the difference between the step-by-step and joint ML estimator in finite samples. Note that the convergence of kernel-density estimators to the PVC has been recently established by Nagler and Czado [20]. However, in this case, only a sequential estimation of a simplified vine copula is possible and thus the best feasible approximation in the space of simplified vine copulas is given by the PVC.
Difference between step-by-step and joint ML estimates
We compare the step-by-step and the joint ML estimator under the assumption that the pair-copula families of the PVC are specified for the parametric vine copula model. For this purpose, we simulate data from two three-dimensional copulas C 1:3 with sample sizes N = 500, 2500, 25000, perform a step-by-step and joint ML estimation, and repeat this 1000 times. For ease of exposition and because the qualitative results are not different, we consider copulas where C 12 = C 23 and only present the estimates for (θ 12 , θ 13;2 ).
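The sequential procedure itself is easy to sketch. The following Python example simulates data from a three-dimensional simplified D-vine and performs a step-by-step ML fit: the two first-tree parameters are estimated first, pseudo-observations are formed with the fitted h-functions, and the second-tree parameter is then estimated from them. To keep the sketch self-contained it uses FGM pair-copulas (closed-form density, h-function and inverse h-function) instead of the Frank and partial Frank families of Examples 6.1 and 6.2; the parameter values, sample size and seed are illustrative assumptions. A joint ML fit would instead optimize all three parameters simultaneously.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
N = 5000
theta12, theta23, theta13_2 = 0.7, 0.7, 0.4   # assumed simplified D-vine with FGM pair-copulas

def fgm_h(x, v, theta):
    # FGM h-function F(x | v) = dC(x, v)/dv
    return x * (1.0 + theta * (1.0 - x) * (1.0 - 2.0 * v))

def fgm_hinv(w, v, theta):
    # inverse of the FGM h-function in its first argument
    h = theta * (1.0 - 2.0 * v)
    h = np.where(np.abs(h) < 1e-10, 1e-10, h)
    return (1.0 + h - np.sqrt((1.0 + h) ** 2 - 4.0 * h * w)) / (2.0 * h)

def fit_fgm(u, v):
    # one-parameter ML fit of a bivariate FGM copula with density 1 + theta*(1-2u)*(1-2v)
    nll = lambda t: -np.sum(np.log1p(t * (1 - 2 * u) * (1 - 2 * v)))
    return minimize_scalar(nll, bounds=(-0.999, 0.999), method="bounded").x

# simulation: draw u2, draw (v1, v3) from C_{13;2}, then invert the first-tree h-functions
u2, v1, w = rng.uniform(size=(3, N))
v3 = fgm_hinv(w, v1, theta13_2)
u1 = fgm_hinv(v1, u2, theta12)
u3 = fgm_hinv(v3, u2, theta23)

# step-by-step (sequential) ML estimation
t12, t23 = fit_fgm(u1, u2), fit_fgm(u2, u3)
t13 = fit_fgm(fgm_h(u1, u2, t12), fgm_h(u3, u2, t23))   # second tree on pseudo-observations
print(t12, t23, t13)   # close to 0.7, 0.7, 0.4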
Example 6.1 (PVC of the Frank copula)
Let $C^{\mathrm{Fr}}(\theta)$ denote the bivariate Frank copula with dependence parameter $\theta$ and $C^{\text{P-Fr}}(\theta)$ be the partial Frank copula [18] with dependence parameter $\theta$. Let $C_{1:3}$ be the true copula with $(C_{12}, C_{23}, C_{13;2}) = (C^{\mathrm{Fr}}(5.74), C^{\mathrm{Fr}}(5.74), C^{\text{P-Fr}}(5.74))$, i.e., $C_{1:3} = C^{\mathrm{PVC}}_{1:3}$, and let $C^{\mathrm{SVC}}_{1:3}(\theta) = (C^{\mathrm{Fr}}(\theta_{12}), C^{\mathrm{Fr}}(\theta_{23}), C^{\text{P-Fr}}(\theta_{13;2}))$ be the parametric SVC that is fitted to data generated from $C_{1:3}$.

Example 6.1 presents a data generating process which satisfies the simplifying assumption, implying $\theta^{\mathrm{PVC}} = \theta^{\star}$. It is the PVC of the three-dimensional Frank copula with Kendall's $\tau$ approximately equal to 0.5. Figure 3 shows the corresponding box plots of joint and step-by-step ML estimates and their difference. The left panel confirms the results of Hobaek Haff [29, 25]. Although the joint ML estimator is more efficient, the loss in efficiency for the step-by-step ML estimator is negligible and both estimators converge to the true parameter value. Moreover, the right panel of Figure 3 shows that the difference between joint and step-by-step ML estimates is never statistically significant at a 5% level. Since the computational time for a step-by-step ML estimation is much lower than for a joint ML estimation [29], the step-by-step ML estimator is very attractive for estimating high-dimensional vine copulas that satisfy the simplifying assumption. Moreover, the step-by-step ML estimator is then inherently suited for selecting the pair-copula families in a stepwise manner. However, if the simplifying assumption does not hold for the data generating vine copula, the step-by-step and joint ML estimator can converge to different limits (Corollary 6.1), as the next example demonstrates.
Example 6.2 (Frank copula)
Let $C_{1:3}$ be the Frank copula with dependence parameter $\theta = 5.74$, i.e., $C_{1:3} \neq C^{\mathrm{PVC}}_{1:3}$, and let $C^{\mathrm{SVC}}_{1:3} = (C^{\mathrm{Fr}}(\theta_{12}), C^{\mathrm{Fr}}(\theta_{23}), C^{\text{P-Fr}}(\theta_{13;2}))$ be the parametric SVC that is fitted to data generated from $C_{1:3}$.

Example 6.2 is identical to Example 6.1, with the only difference that the conditional copula is varying in such a way that the resulting three-dimensional copula is a Frank copula. Although the Frank copula does not satisfy the simplifying assumption, it is pretty close to a copula for which the simplifying assumption holds, because the variation in the conditional copula is strongly limited for Archimedean copulas (Mesfioui and Quessy [36]). Nevertheless, the right panel of Figure 4 shows that the step-by-step and joint ML estimates for $\theta_{12}$ are significantly different at the 5% level if the sample size is 2500 observations. The difference between step-by-step and joint ML estimates for $\theta_{13;2}$ is less pronounced, but also highly significant for sample sizes with 2500 observations or more. Thus, only in Example 6.1 is the step-by-step ML estimator a consistent estimator of a simplified vine copula model that minimizes the KLD from the underlying copula, whereas the joint ML estimator is a consistent minimizer in both examples. A third example, where the distance between the data generating copula and the PVC, and thus the difference between the step-by-step and joint ML estimates, is more pronounced, is given in Appendix A.9.
Conclusion
We introduced the partial vine copula (PVC) which is a particular simplified vine copula that coincides with the data generating copula if the simplifying assumption holds. The PVC can be regarded as a generalization of the partial correlation matrix where partial correlations are replaced by j-th order partial copulas.
Consequently, it provides a new dependence measure of a d-dimensional distribution in terms of d(d − 1)/2 bivariate unconditional copulas. While a higher-order partial copula of the PVC is related to the partial copula, it does not suffer from the curse of dimensionality and can be estimated for high-dimensional data [20]. We analyzed to what extent the dependence structure of the underlying distribution is reproduced by the PVC. In particular, we showed that a pair of random variables may be considered as conditionally (in)dependent according to the PVC although this is not the case for the data generating process.
We also revealed the importance of the PVC for the modeling of high-dimensional distributions by means of simplified vine copulas (SVCs). Up to now, the estimation of SVCs has almost always been based on the assumption that the data generating process satisfies the simplifying assumption. Moreover, the implications that follow if the simplifying assumption is not true have not been investigated. We showed that the PVC is the SVC approximation that minimizes the Kullback-Leibler divergence in a stepwise fashion. Since almost all estimators of SVCs proceed sequentially, it follows that, under regularity conditions, many estimators of SVCs converge to the PVC also if the simplifying assumption does not hold. However, we also proved that the PVC may not minimize the Kullback-Leibler divergence from the true copula and thus may not be the best SVC approximation in theory. Nevertheless, due to the prohibitive computational burden or simply because only a stepwise model specification and estimation is possible, the PVC is the best feasible SVC approximation in practice.
The analysis in this paper showed the relative optimality of the PVC when it comes to approximating multivariate distributions by SVCs. Obviously, it is easy to construct (theoretical) examples where the PVC does not provide a good approximation in absolute terms. But such examples do not provide any information about the appropriateness of the simplifying assumption in practice. To investigate whether the simplifying assumption is true and the PVC is a good approximation in applications, one can use Lemma 3.1 to develop tests for the simplifying assumption, see Kurz and Spanhel [21]. Moreover, even in cases where the simplifying assumption is strongly violated, an estimator of the PVC can yield an approximation that is superior to competing approaches. Recently, it has been demonstrated in Nagler and Czado [20] that the structure of the PVC can be used to obtain a constrained kernel-density estimator that can be much closer to the data generating process than the classical unconstrained kernel-density estimator, even if the distance between the PVC and the data generating copula is large.
A.1. Proof of Lemma 3.1

$U^{\mathrm{PVC}}_{k|S_{ij}} = U_{k|S_{ij}}$ (a.s.) $\Rightarrow U^{\mathrm{PVC}}_{k|S_{ij}} \perp U_{S_{ij}}$ is true because $U_{k|S_{ij}}$ is a CPIT. For the converse, let $A := \times_{k=i+1}^{i+j-1} [0, u_k]$ and consider

$P(U^{\mathrm{PVC}}_{k|S_{ij}} \le a,\, U_{S_{ij}} \le u_{S_{ij}}) = \int_A F_{k|S_{ij}}\big((F^{\mathrm{PVC}}_{k|S_{ij}})^{-1}(a\,|\,t_{S_{ij}})\,\big|\,t_{S_{ij}}\big)\, \mathrm{d}C_{S_{ij}}(t_{S_{ij}}). \qquad (A.1)$

Since $U^{\mathrm{PVC}}_{k|S_{ij}} \sim \mathcal{U}(0,1)$, it follows that if $U^{\mathrm{PVC}}_{k|S_{ij}} \perp U_{S_{ij}}$ then $P(U^{\mathrm{PVC}}_{k|S_{ij}} \le a, U_{S_{ij}} \le u_{S_{ij}}) = a\,C_{S_{ij}}(u_{S_{ij}})$ for all $(a, u_{S_{ij}}) \in [0,1]^j$. This implies that

$P(U^{\mathrm{PVC}}_{k|S_{ij}} \le a,\, U_{S_{ij}} \le u_{S_{ij}}) = \int_A a\, \mathrm{d}C_{S_{ij}}(t_{S_{ij}})$

equals the right hand side of (A.1) for all $(a, u_{S_{ij}}) \in [0,1]^j$. It follows that the integrands must be identical (a.s.) as well, i.e., $F_{k|S_{ij}}\big((F^{\mathrm{PVC}}_{k|S_{ij}})^{-1}(a\,|\,t_{S_{ij}})\,\big|\,t_{S_{ij}}\big) = a$ for all $a \in [0,1]$ and almost every $u_{S_{ij}} \in [0,1]^j$. Thus $F_{k|S_{ij}} = F^{\mathrm{PVC}}_{k|S_{ij}}$ (a.s.), which is equivalent to $U^{\mathrm{PVC}}_{k|S_{ij}} = U_{k|S_{ij}}$ (a.s.).
A.2. Proof of Lemma 4.1
Let C 1:3 ∈ C SVC 3 be the SVC given in Example 4.1. We define C 1:d as follows. Let C 1,d−1; 2:d−2 = C 12 , C 2,d; 3:d−1 = C 23 , C 1,d; 2:d−1 = D 1,3; 2 , where D 1,3; 2 is the corresponding conditional copula in Example 4.1 and C i,i+j; Sij = E k,l ∈ C 2 , (k, l) ∈ I d 1 means that C i,i+j; Sij (a, b|u Sij ) = E k,l (a, b) for all (a, b, u Sij ) ∈ [0, 1] j+1 . Moreover, let C i,i+j; Sij = C ⊥ for (i, j) ∈ I d 1 \{(1, d − 2), (2, d − 2), (1, d − 1)}. The conclusion now follows from Example 4.1.
A.3. Proof of Theorem 4.1
W.l.o.g. assume that the margins of X 1:d are uniform. Let C F GM3 (u 1:
3 ; θ) = 3 i=1 u i +θ 3 i=1 u i (1−u i ), |θ| ≤ 1, be the three-dimensional FGM copula, d ≥ 4, and (i, j) ∈ I d 1 . It is obvious that C i,i+2; i+1 = C ⊥ ⇒ C PVC i,i+2; i+1 = C ⊥ is true. Let J ∈ {2, .
. . , d − 2} be fixed. Assume that C 1:d has the following D-vine copula representation of the non-simplified form
C 1,1+J; 2:J = ∂ 3 C F GM3 (u 1 , u 1+J , u 2 ; 1) C 2,2+J; 3:J+1 = ∂ 3 C F GM3 (u 2 , u 2+J , u 1+J ; 1)
and C i,i+j; Si,j = C ⊥ for all other (i, j) ∈ I d 1 . Using the same arguments as in the proof of Lemma 4.2 we obtain
C PVC i,i+J; i+1:J−1 = C ⊥ , i = 1, 2, C PVC 1,2+J; 2:J+1 = C F GM2 (1/9).
This proves that C i,i+2; i+1 = C ⊥ ⇐ C PVC i,i+2; i+1 = C ⊥ is not true in general and that, for j ≥ 3, neither the
statement C i,i+j; Sij = C ⊥ ⇒ C PVC i,i+j; Sij = C ⊥ nor the statement C i,i+j; Sij = C ⊥ ⇐ C PVC i,i+j; Sij = C ⊥ is true in general.
A.4. Proof of Lemma 4.2
We show a more general result and set C i,i+2; i+1 (u i , u i+2 |u i+1 ) = C F GM2 (u i , u i+2 ; g(u i+1 )) in For i = 1, 2, 3, the copula in the second tree of the PVC is given by
C PVC i,i+2; i+1 (a, b) = P(U i|i+1 ≤ a, U i+2|i+1 ≤ b) = [0,1] C i,i+2; i+1 (a, b|u i+1 )du i+1 (4.3) = ab 1 + (1 − a)(1 − b) [0,1] g(u i+1 )du i+1 (A.2) = ab, (A.3)
which is the independence copula. For i = 1, 2, k = i, i + 3, the true CPIT of U k w.r.t. U i+1:i+2 is a function of U i+1:i+2 because
U i|i+1:i+2 = U i [1 + g(U i+1 )(1 − U i )(1 − 2U i+2 )], (A.4) U i+3|i+1:i+2 = U i+3 [1 + g(U i+2 )(1 − U i+3 )(1 − 2U i+1 )]. (A.5)
However, for i = 1, 2, k = i, i + 3, the PPIT of U k w.r.t. U i+1:i+2 is not a function of U i+1:i+2 because
U PVC i|i+1:i+2 = F PVC i|i+1:i+2 (U i |U i+1:i+2 ) = F U i|i+1 |U i+2|i+1 (U i|i+1 |U i+2|i+1 ) = ∂ 2 C PVC i,i+2; i+1 (U i|i+1 , U i+2|i+1 ) (A.3) = U i|i+1 (4.2) = U i , (A.6)
and, by symmetry,
U PVC i+3|i+1:i+2 = U i+3 . (A.7)
For i = 1, 2, the joint distribution of these first-order PPITs is a copula in the third tree of the PVC which is given by
C PVC i,i+3; i+1:i+2 (a, b) = P(U PVC i|i+1:i+2 ≤ a, U PVC i+3|i+1:i+2 ≤ b) (A.6),(A.7) = P(U i ≤ a, U i+3 ≤ b) = C i,i+3 (a, b) (A.8) (4.4) = [0,1] 2 F i|i+1:i+2 (a|u i+1:i+2 )F i+3|i+1:i+2 (b|u i+1:i+2 )du i+1:i+2 (A.4),(A.5) = ab[1 + (1 − a)(1 − b) [0,1] g(u i+1 )(1 − 2u i+1 )du i+1 [0,1] g(u i+2 )(1 − 2u i+2 )du i+2 ] = ab[1 + θ(1 − a)(1 − b)] = C F GM2 (θ),
where θ := 4( [0,1] ug(u)du) 2 > 0, by the properties of g. Thus, a copula in the third tree of the PVC is a bivariate FGM copula whereas the true conditional copula is the independence copula.
The CPITs of U 1 or U 5 w.r.t.
= F U1|U4 (U 1 |U 4 ) = U 1|4 = ∂ 2 C 14 (U 1 , U 4 ) (A.8) = U 1 [1 + θ(1 − U 1 )(1 − 2U 4 )], (A.11) U PVC 5|2:4 = U 5 [1 + θ(1 − U 5 )(1 − 2U 2 )]. (A.12)
For the copula in the fourth tree of the PVC it holds
= P(U 1|4 ≤ a, U 5|2 ≤ b) = P(U 1 ≤ F −1 1|4 (a|U 4 ), U 5 ≤ F −1 5|2 (b|U 2 )) = [0,1] 3 F 15|2:4 (F −1 1|4 (a|u 4 ), F −1 5|2 (b|u= [0,1] 3 F −1 1|4 (a|u 4 )[1 + g(u 2 )(1 − F −1 1|4 (a|u 4 ))(1 − 2u 3 )] × F −1 5|2 (b|u 2 )[1 + g(u 4 )(1 − F −1 5|2 (b|u 2 ))(1 − 2u 3 )] × [1 + g(u 3 )(1 − 2u 2 )(1 − 2u 4 )]du 2:4 = [0,1] 2 F −1 1|4 (a|u 4 )F −1 5|2 (b|u 2 ) 1 + [0,1] (1 − 2u 3 ) 2 du 3 (1 − F −1 1|4 (a|u 4 ))(1 − F −1 5|2 (b|u 2 ))g(u 4 )g(u 2 ) + [0,1] (1 − 2u 3 )g(u 3 )du 3 (1 − F −1 5|2 (b|u 2 ))(1 − 2u 2 )(1 − 2u 4 )g(u 4 ) + [0,1] (1 − 2u 3 )g(u 3 )du 3 (1 − F −1 1|4 (a|u 4 ))(1 − 2u 4 )(1 − 2u 2 )g(u 2 ) du 2 du 4 , where we used that [0,1] (1 − 2u 3 )du 3 = 0, [0,1] g(u 3 )du 3 (A.2) = 0 and [0,1] (1 − 2u 3 ) 2 g(u 3 )du 3 (A.2)
= 0. By setting γ := −2 [0,1] ug(u)du we can write the copula function as Evaluating the density shows that C PVC 15; 2:4 is not the independence copula.
C PVC 15; 2:4 (a, b) = [0,1] 2 F −1 1|4 (a|u 4 )F −1 5|2 (b|u 2 ) 1 + 1 3 (1 − F −1 1|4 (a|u 4 ))(1 − F −1 5|2 (b|u 2 ))g(u 4 )g(u 2 ) + γ(1 − F −1 5|2 (b|u 2 ))(1 − 2u 2 )(1 − 2u 4 )g(u 4 ) + γ(1 − F −1 1|4 (a|u 4 ))(1 − 2u 4 )(1 − 2u 2 )g(u 2 ) du 2 du 4 = 1 0 F −1 1|4 (a|u 4 )du 4 1 0 F −1 5|2 (b|u 2 )du 2 + γ 1 0 (1 − 2u 2 )(F −1 5|2 (b|u 2 ) − (F −1 5|2 (b|u 2 )) 2 )du 2 1 0 (1 − 2u 4 )g(u 4 )F −1 1|4 (a|u 4 )du 4 + γ 1 0 (1 − 2u 4 )(F −1 1|4 (a|u 4 ) − (F −1 1|4 (a|u 4 )) 2 )du 4 1 0 (1 − 2u 2 )g(u 2 )F −1 5|2 (b|u 2 )du 2 + 1 3 1 0 g(u 4 )(F −1 1|4 (a|u 4 ) − (F −1 1|4 (a|u 4 )) 2 )du 4 1 0 g(u 2 )(F −1 5|2 (b|u 2 ) − (F −1 5|2 (b|u 2 )) 2 )du 2 If (U, V ) ∼ C F GM2 (θ),F −1 U |V (u|v) = 1 + h(v) − (1 + h(v)) 2 − 4h(v)u 2h(v) , with h(v) := θ(1 − 2v), which implies ∂ ∂u F −1 U |V (u|v) = 1 (1 + h(v)) 2 − 4h(v)u =: G(u, v),+ 1 γ 1 − 1 0 G(b, u 2 )du 2 1 0 (1 − 2u 4 )g(u 4 )G(a, u 4 )du 4 + 1 γ 1 − 1 0 G(a, u 4 )du 4 1 0 (1 − 2u 2 )g(u 2 )G(b, u 2 )du 2 + 1 3 1 0 g(u 4 ) h(u 4 ) [1 − G(a, u 4 )] du 4 1 0 g(u 2 ) h(u 2 ) [1 − G(b, u 2 )] du 2 = 1 4θ 2 log(σ(a)) log(σ(b)) + 1 γ 1 − 1 2θ log(σ(b)) 1 0 (1 − 2u 4 )g(u 4 )G(a, u 4 )du 4 + 1 γ 1 − 1 2θ log(σ(a)) 1 0 (1 − 2u 2 )g(u 2 )G(b, u 2 )du 2 + 1 3 1 0 g(u 4 ) h(u 4 ) [1 − G(a, u 4 )] du 4 1 0 g(u 2 ) h(u 2 ) [1 − G(b, u 2 )] du 2 , where σ(i) = (1 + θ) 2 − 4θi + 1 − 2i + θ (1 − θ) 2 + 4θi + 1 − 2i − θ for i ∈ {a, b}.
A.5. Proof of Theorem 5.1
The KLD related to tree j, D
KL (T j (T 1:j−1 )), is minimized when the negative cross entropy related to tree j is maximized. The negative cross entropy related to tree j is given by
H (j) (T j (T 1:j−1 )) := d−j i=1 E log c SVC i,i+j; Sij (F SVC i|Sij (U i |U Sij ), F SVC i+j|Sij (U i+j |U Sij )) =: d−j i=1 H (j) i (c SVC i,i+j; Sij , F SVC i|Sij , F SVC i+j|Sij ).
Obviously, to maximize H (j) (T j (T 1:j−1 )) w.r.t. T j we can maximize each H
(j) i (c SVC i,i+j; Sij , F SVC i|Sij , F SVC i+j|Sij ) in- dividually for all i = 1, . . . , d − j. If j = 1, then H (j) i (c SVC i,i+j; Sij , F PVC i|Sij , F PVC i+j|Sij ) = E log c i,i+1 (U i , U i+1 ) c SVC i,i+1 (U i , U i+1 ) which is maximized for C SVC i,i+1 = C i,H (n) i (c SVC i,i+n; Si,n , F PVC i|Si,n , F PVC i+n|Si,n ) = E log c SVC i,i+n; Si,n F PVC i|Si,n (U i |U Si,n ), F PVC i+n|Si,n (U i+n |U Si,n )
is maximized for all i = 1, . . . , d − n. Using the substitution u i = (F PVC i|Si,n ) −1 (t i |u Si,n ) = G i|Si,n (t i |u Si,n ) and × c i,i+n; Si,n F i|Si,n G i|Si,n (t i |u Si,n ) u Si,n , F i|Si,n G i+n|Si,n (t i+n |u Si,n ) u Si,n u Si,n × k=i,i+n f k|Si,n G k|Si,n (t k |u Si,n ) u Si,n k=i,i+n f PVC k|Si,n G k|Si,n (t k |u Si,n ) u Si,n c Si,n (u Si,n )du Si,n dt i dt i+n
u i+n = (F PVC i+n|Si,n ) −1 (t i+n |u Si,n ) = G i+n|Si,n (t i+n |u Si,n ),= [0,1] 2 log c SVC i,i+n; Si,n (t i , t i+n ) × [0,1] n−1 c i,i+n;
Si,n F i|Si,n G i|Si,n (t i |u Si,n ) u Si,n , F i|Si,n G i+n|Si,n (t i+n |u Si,n ) u Si,n u Si,n × k=i,i+n f k|Si,n G k|Si,n (t k |u Si,n ) u Si,n k=i,i+n f PVC k|Si,n G k|Si,n (t k |u Si,n ) u Si,n c Si,n (u Si,n )du Si,n dt i dt i+n
= [0,1] 2 log c SVC i,i+n; Si,n (t i , t i+n )c PVC i,i+n; Si,n (t i , t i+n )dt i dt i+n ,
which is maximized for c SVC i,i+n; Si,n = c PVC i,i+n; Si,n = c PVC i,i+(j+1); Si,j+1 by Gibbs' inequality.
A.6. Proof of Theorem 5.2
Equation (5.3) is obvious, since C PVC 1:d is the data generating process. Equation (5.5) immediately follows from the equations (5.1) and (5.4). Using the same arguments as in Appendix A.2, the validity of (5.4) for d = 3 implies the validity of (5.4) for d ≥ 3. However, even for d = 3, the KLD is a triple integral and does not exhibit an analytical expression if the data generating process is a non-simplified vine copula. Thus, the hard part is to show that there exists a data generating copula which does not satisfy the simplifying assumption and for which the PVC does not minimize the KLD. We prove equation (5.4) for d = 3 by means of the following example.
Example A.1
Let g : [0, 1] → [−1, 1] be a measurable function. Consider the data generating process
$C_{1:3}(u_{1:3}) = \int_0^{u_2} C^{\mathrm{FGM2}}\big(u_1, u_3; g(z)\big)\, \mathrm{d}z,$

i.e., the two unconditional bivariate margins $(C_{12}, C_{23})$ are independence copulas and the conditional copula is a FGM copula with varying parameter $g(u_2)$. The first-order partial copula is also a FGM copula, given by

$C^{\mathrm{SVC}}_{13;2}(u_1, u_3; \theta^{\mathrm{PVC}}_{13;2}) = u_1 u_3\big[1 + \theta^{\mathrm{PVC}}_{13;2}(1-u_1)(1-u_3)\big], \qquad \theta^{\mathrm{PVC}}_{13;2} := \int_0^1 g(u_2)\, \mathrm{d}u_2.$

We set $C^{\mathrm{SVC}}_{23} = C_{23}$, $C^{\mathrm{SVC}}_{13;2} = C^{\mathrm{PVC}}_{13;2}$, and specify a parametric copula $C^{\mathrm{SVC}}_{12}(\theta_{12})$, $\theta_{12} \in \Theta_{12} \subset \mathbb{R}$, with conditional cdf $F^{\mathrm{SVC}}_{1|2}(u_1|u_2; \theta_{12})$ and such that $C^{\mathrm{SVC}}_{12}(0)$ corresponds to the independence copula. Thus, $(C^{\mathrm{SVC}}_{12}(0), C_{23}, C^{\mathrm{PVC}}_{13;2}) = (C_{12}, C_{23}, C^{\mathrm{PVC}}_{13;2})$. We also assume that $c^{\mathrm{SVC}}_{12}(u_1, u_2; \theta_{12})$ and $\partial_{\theta_{12}} c^{\mathrm{SVC}}_{12}(u_1, u_2; \theta_{12})$ are both continuous on $(u_1, u_2, \theta_{12}) \in (0,1)^2 \times \Theta_{12}$.
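Before deriving the extremum conditions, the claim that the first-order partial copula of this construction is the FGM copula with parameter $\theta^{\mathrm{PVC}}_{13;2} = \int_0^1 g(u_2)\,\mathrm{d}u_2$ can be checked numerically. The Python sketch below uses the moment identity $\mathrm{E}[(1-2U)(1-2V)] = \theta/9$ for an FGM($\theta$) pair; the choice $g(u_2) = u_2^2$ (so that $\theta^{\mathrm{PVC}}_{13;2} = 1/3$), the sample size and the seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
N = 500_000
g = lambda u2: u2 ** 2   # assumed g; theta_PVC = int_0^1 g(u) du = 1/3

def fgm_hinv(w, v, theta):
    # inverse FGM h-function: solve x * (1 + theta * (1 - x) * (1 - 2 * v)) = w for x
    h = theta * (1.0 - 2.0 * v)
    h = np.where(np.abs(h) < 1e-10, 1e-10, h)
    return (1.0 + h - np.sqrt((1.0 + h) ** 2 - 4.0 * h * w)) / (2.0 * h)

# Example A.1: U2 uniform, and (U1, U3) | U2 = u2 distributed as FGM(g(u2))
u2, u1, w = rng.uniform(size=(3, N))
u3 = fgm_hinv(w, u1, g(u2))

# since U_{1|2} = U1 and U_{3|2} = U3 here, the partial copula is the joint copula of (U1, U3);
# for an FGM(theta) pair, E[(1-2U)(1-2V)] = theta/9, so 9 times the sample mean estimates theta
print(9.0 * np.mean((1 - 2 * u1) * (1 - 2 * u3)))   # close to 1/3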
We now derive necessary and sufficient conditions such that It depends on the data generating process whether the condition in Lemma A.1 is satisfied and D KL (C 1:3 ||C SVC 12 (0), C 23 , C PVC 13; 2 ) is an extremum or not as we illustrate in the following. If θ PVC 13;2 = 0, then K(u 1 ; θ PVC 13;2 ) = 0 for all u 1 ∈ (0, 1), or if g does not depend on u 2 , then h(u 1 ; g) = 0 for all u 1 ∈ (0, 1). Thus, the integrand in (A.16) is zero and we have an extremum if one of these conditions is true. Assuming θ PVC 13;2 = 0 and that g depends on u 2 , we see from (A.16) that g and C SVC 12 determine whether we have an extremum at θ 12 = 0. Depending on the copula family that is chosen for C SVC 12 , it may be possible that the copula family alone determines whether D KL (C 1:3 ||C SVC 12 (0), C 23 , C PVC 13; 2 ) is an extremum. For instance, if C SVC 12 is a FGM copula we obtain
h(u 1 ; g) = u 1 (1 − u 1 ) 1 0 (1 − 2u 2 )g(u 2 )du 2 so that h(0.5 + u 1 ; g) = h(0.5 − u 1 ; g), ∀u 1 ∈ (0, 0.5).
This symmetry of h across 0.5 implies that (A.16) is satisfied for all functions g.
If we do not impose any constraints on the bivariate copulas in the first tree of the simplified vine copula approximation, then D KL (C 1:3 ||C SVC 12 (0), C 23 , C PVC 13; 2 ) may not even be a local minimizer of the KLD. For instance, if C SVC 12 is the asymmetric FGM copula given in (4.1), we find that
h(u 1 ; g) = u 2 1 (1 − u 1 ) 1 0 (1 − 2u 2 )g(u 2 )du 2 .
If Λ := 1 0 (1 − 2u 2 )g(u 2 )du 2 = 0, e.g., g is a non-negative function which is increasing, say g(u 2 ) = u 2 , then, depending on the sign of Λ, either h(0.5 + u 1 ; g) > h(0.5 − u 1 ; g), ∀u 1 ∈ (0, 0.5), or h(0.5 + u 1 ; g) < h(0.5 − u 1 ; g), ∀u 1 ∈ (0, 0.5), so that the integrand in (A.16) is either strictly positive or negative and thus D KL (C 1:3 ||C 12 , C 23 , C PVC 13; 2 ) can not be an extremum. Since θ 12 ∈ [−1, 1], it follows that D KL (C 1:3 ||C SVC 12 (0), C 23 , C PVC 13; 2 ) is not a local minimum. As a result, we can, relating to the PVC, further decrease the KLD from the true copula if we adequately specify "wrong" copulas in the first tree and choose the first-order partial copula in the second tree of the simplified vine copula approximation.
+ (1 − 2u 1 )(1 − 2u 3 ) 1 0 ∂ θ12 F SVC 1|2 (u 1 |u 2 ; θ 12 ) θ12=0 g(u 2 )du 2 = (1 − 2u 1 )(1 − 2u 3 )h(u 1 ; g),
where the second equality follows because Note that if θ PVC 13;2 = 0, then K(u 1 ; θ PVC 13;2 ) = 0 for all u 1 ∈ (0, 1), or if g does not depend on u 2 , then h(u 1 ; g) = 0 for all u 1 ∈ (0, 1), so in both cases the integrand is zero and we have an extremum.
A.8. Proof of Corollary 6.1

Corollary 6.1 (i) and (ii) follow directly from Theorem 1 in Spanhel and Kurz [35], which states the asymptotic distribution of approximate rank Z-estimators if the data generating process is not nested in the parametric model family. Corollary 6.1 (iii) follows then from Theorem 5.2 and Theorem 5.1.
A.9. An example where the difference between $\hat\theta_S$ and $\hat\theta_J$ is more pronounced
Example A.2
Let $C^{\mathrm{BB1}}(\theta, \delta)$ denote the BB1 copula with dependence parameter $(\theta, \delta)$ and $C^{\mathrm{Sar}}(\alpha)$ be the Sarmanov copula with cdf $C(u,v;\alpha) = uv\big[1 + \big(3\alpha + 5\alpha^2 \prod_{i=u,v}(1-2i)\big)\prod_{i=u,v}(1-i)\big]$ for $|\alpha| \le \sqrt{7}/5$. The partial Sarmanov copula is given by $C^{\text{P-Sar}}(u,v;a,b) = uv\big[1 + \big(3a + 5b \prod_{i=u,v}(1-2i)\big)\prod_{i=u,v}(1-i)\big]$, where $|a| \le \sqrt{7}/5$ and $a^2 \le b \le (\sqrt{1-3a^2}+1)/5$. Define $S(x) = (1+\exp(x))^{-1}$ and $f(u_2) = 1 - 2S(10(u_2 - 0.5)) + 2(1-2u_2)S(-5)$, so that $g(u_2) = 0.1(\sqrt{7}+1)(1 - f(u_2)) - 0.2$. Let $C_{1:3}$ be the true copula with $(C_{12}, C_{23}, C_{13;2}) = (C^{\mathrm{BB1}}(2,2), C^{\mathrm{BB1}}(2,2), C^{\mathrm{Sar}}(g(u_2)))$ and let $C^{\mathrm{SVC}}_{1:3} = (C^{\mathrm{BB1}}(2,2), C^{\mathrm{BB1}}(2,2), C^{\text{P-Sar}}(a,b))$ be the parametric SVC that is fitted to data generated from $C_{1:3}$.

Note that $g$ is a sigmoid function, with $(g(0), g(1)) = (-0.2, \sqrt{7}/5)$, so that Spearman's rho of the conditional copula $C^{\mathrm{Sar}}(g(u_2))$ varies in the interval $(g(0), g(1)) = (-0.2, \sqrt{7}/5)$ because $\rho_{C^{\mathrm{Sar}}} = \alpha$. Figure 5 shows that the difference between step-by-step and joint ML estimates for the two parameters of the first copula in the first tree is already (individually) significant at the 5% level if the sample size is 500 observations. Thus, the difference between step-by-step and joint ML estimates can be relevant for moderate sample sizes if the variation in the conditional copula is strong enough. Once again, the difference between step-by-step and joint ML estimates is less pronounced for the parameters of $C^{\mathrm{SVC}}_{13;2}$, but it also becomes highly significant with sufficient sample size.
Notation.
$F_{1:d}$ or $C_{1:d}$: cdf and copula of $U_{1:d}$.
$\mathcal{C}_d$: space of $d$-dimensional copulas with positive density.
$\mathcal{C}^{\mathrm{SVC}}_d$: space of $d$-dimensional simplified D-vine copulas with positive density.
$I^d_1 := \{(i,j) : j = 1,\dots,d-1,\ i = 1,\dots,d-j\}$: the conditioned set of a D-vine copula density.
$S_{ij} := i+1:i+j-1 := i+1,\dots,i+j-1$: the conditioning set of an edge in a D-vine.
$U^{\mathrm{PVC}}_{k|S_{ij}} = F^{\mathrm{PVC}}_{k|S_{ij}}(U_k|U_{S_{ij}})$: $(j-2)$-th order partial probability integral transform (PPIT) of $U_k$ w.r.t. $U_{S_{ij}}$.
$C^{\mathrm{PVC}}_{1:d}$: partial vine copula (PVC) of $C_{1:d}$; if $d = 3$, then $c^{\mathrm{PVC}}_{1:3}(u_{1:3}) = c_{12}(u_1,u_2)\, c_{23}(u_2,u_3)\, c^{\mathrm{PVC}}_{13;2}(u_{1|2}, u_{3|2})$.
Figure 1: (Simplified) D-vine copula representation if d = 4. The influence of conditioning variables on the conditional copulas is indicated by dashed lines.
Definition 3.1 (Partial vine copula (PVC) and j-th order partial copulas)

Lemma 4.1.
Other examples of PVCs in three dimensions are given in Spanhel and Kurz [18].

(independence and j-th order partial independence)
Let $C_{1:5}$ be defined as in Example 4.2. Then $C^{\mathrm{PVC}}_{12} = C^{\mathrm{PVC}}_{23} = C^{\mathrm{PVC}}_{34} = C^{\mathrm{PVC}}_{45}$, $C^{\mathrm{PVC}}_{13;2} = C^{\mathrm{PVC}}_{24;3} = C^{\mathrm{PVC}}_{35;4}$, $C^{\mathrm{PVC}}_{14;2:3} = C^{\mathrm{PVC}}_{25;3:4}$,
Figure 2: The non-simplified D-vine copula given in Example 4.2 and its PVC. The influence of conditioning variables on the conditional copulas is indicated by dashed lines.
Figure 3: Box plots of joint (J) and sequential (S) ML estimates and their difference for sample sizes N = 500, 2500, 25000, if the data is generated from C_{1:3} in Example 6.1 and the pair-copula families of the SVC are given by the corresponding PVC. The dotted line indicates the pseudo-true parameter and zero, respectively. The end of the whiskers is 0.953 times the inter-quartile range, corresponding to approximately 95% coverage if the data is generated by a normal distribution.
Figure 4: Box plots of joint (J) and sequential (S) ML estimates and their difference for sample sizes N = 500, 2500, 25000, if the data is generated from C_{1:3} in Example 6.2 and the pair-copula families of the SVC are given by the corresponding PVC. The dotted line indicates the pseudo-true parameter and zero, respectively. The end of the whiskers is 0.953 times the inter-quartile range, corresponding to approximately 95% coverage if the data is generated by a normal distribution.
→ [−1, 1] is a non-constant measurable function such that ∀u ∈ [0.5, 1] : g(0.5 + u) = −g(0.5 − u). (A.2)
=
U 2:4 are given by U 1|2:4 = F 1|2:4 (U 1 |U 2:4 ) = ∂ 2 C 14; 2:3 (U 1|2:3 , U 4|2:3 |U 2:3 )U 1 [1 + g(U 2 )(1 − U 1 )(1 − 2U 3 )], (A.9) U 5|2:4 = U 5 [1 + g(U 4 )(1 − U 5 )(1 − 2U 3 )], (A.10)whereas the corresponding second-order PPITs are given by U PVC 1|2:4 = F PVC 1|2:4 (U 1 |U 2:4 ) = F
C
PVC 15; 2:4 (a, b) = P(U PVC 1|2:4 ≤ a, U PVC 5|2:4 ≤ b) (A.11),(A.12)
2 )|u 2:4 )c 2:4 (u 2:4 )du 2:4 = [0,1] 3 C 15; 2:4 (F 1|2:4 (F −1 1|4 (a|u 4 )|u 2:4 ), F 5|2:4 (F −1 5|2 (b|u 2 )|u 2:4 )|u 2:4 )c 2:4 (u 2:4 )du 2:4
(F −1 U |V (u|v)) 2 = 1 h(v) [(1 + h(v))G(u, v) − 1] . (A.14)For the density of the copula in the fourth tree of the PVC it follows
If we set g(u) := 1 − 2u, then θ = 1/9 and γ = 1− 9i + 5 − 9i √ 16 + 9i + 4 − 9i for i ∈ {a, b} and I a,b := {(a, b), (b, a)}.
1
1i+1 by Gibbs' inequality. Thus, if j = 1 ≤ j ≤ d − 2.To minimize the KLD related to tree j + 1 =: n w.r.t. T n , conditional on T 1:n−1 = T PVC 1:n−1 , we have to maximize the negative cross entropy which is maximized if
D∂
KL (C 1:3 ||C SVC 12 (θ 12 ), C 23 , C PVC 13; 2 ) := D KL (C 1:3 ||((C SVC 12 (θ 12 ), C 23 ), (C PVC 13; 2 ))) attains an extremum at θ 12 = 0. Lemma A.1 (Extremum of the KLD in Example A.1)Let C 1:3 be given as in Example A.1. For u 1 ∈ (0, 1), we defineh(u 1 ; g) θ12 F SVC 1|2 (u 1 |u 2 ; θ 12 ) log c SVC 13; 2 F SVC 1|2 (u 1 |u 2 ; θ 12 ), u 3 ; θ PVC 13;2 θ12=0c 1:3 (u 1:3 )du 2 du 3 . Then, ∀u 1 ∈ (0, 0.5) : K(0.5 + u 1 ; θ PVC 13;2 ) > 0 ⇔ θ PVC 13;2 > 0, and D KL (C 1:3 ||C SVC 12 (θ 12 ), C 23 , C PVC 13; 2 ) has an extremum at θ 12 = 0 if and only if ∂ θ12 D KL (C 1:3 ||C SVC 12 (θ 12 ), C 23 , C PVC u 1 ; θ PVC 13;2 )[h(0.5 + u 1 ; g) − h(0.5 − u 1 ; g)]du 1 = 0. (A.16)Proof. See Appendix A.7.
∂ 0 Fff
0θ12 F SVC 1|2 (u 1 |u 2 ; θ 12 )du 2 = ∂ θ12 1 SVC 1|2 (u 1 |u 2 ; θ 12 )du 2 = ∂ θ12 u 1 = 0.Thus, integrating out u 2 , we obtain∂ θ12 E[log c SVC 1:3 (U 1:3 ; θ 12 )] 1 , u 3 ; θ PVC 13; 2 )(1 − 2u 1 )(1 − 2u 3 )h(u 1 ; g)du 1 du 3 (u 1 , u 3 ; θ PVC 13;2 )h(u 1 ; g)du 1 du 3 , (A.17) where f (u 1 , u 3 ; θ PVC 13;2 ) := m(u 1 , u 3 ; θ PVC 13; 2 )(1 − 2u 1 )(1 − 2u 3 ). We note that ∀u 1 ∈ (0, 0.5), u 3 ∈ (0, 1):f (0.5 + u 1 , u 3 ; θ PVC 13;2 ) > 0 ⇔ θ PVC 13;2 > 0, f (0.5 − u 1 , u 3 ; θ PVC 13;2 ) = −f (0.5 + u 1 , 1 − u 3 ; θ PVC 13;2 ). So, if u 1 ∈ ((0.5 − u 1 , 1 − u 3 ; θ PVC 13;2 )du 3 = − 1 0 f (0.5 + u 1 , u 3 ; θ PVC 13;2 )du 3 .Thus, if we define K(u 1 ; θ PVC 13;2 ) :=1 0 f (u 1 , u 3 ; θ PVC 13;2 )du 3 we have that ∀u 1 ∈ (0, 0.5): K(0.5 + u 1 ; θ PVC 13;2 ) > 0 ⇔ θ PVC 13;2 > 0, K(0.5 − u 1 ; θ PVC 13;2 ) = −K(0.5 + u 1 ; θ PVC 13;2 ). (A.18) Plugging this into our integral (A.17) yields ∂ θ12 E[log c SVC 1:3 (U 1:3 ; θ 12 , θ PVC 13;2 )] u 1 ; θ PVC 13;2 )[h(0.5 + u 1 ; g) − h(0.5 − u 1 ; g)]du 1 .
Figure 5: Box plots of joint (J) and sequential (S) ML estimates and their difference for sample sizes N = 500, 2500, 25000, if the data is generated from C_{1:3} in Example A.2 and the pair-copula families of the SVC are given by the corresponding PVC. The dotted line indicates the pseudo-true parameter and zero, respectively. The end of the whiskers is 0.953 times the inter-quartile range, corresponding to approximately 95% coverage if the data is generated by a normal distribution.
log c SVC i,i+n; Si,n (t i , t i+n )we obtain
H
(n)
i (c SVC
i,i+n; Si,n , F PVC
i|Si,n , F PVC
i+n|Si,n ) =
[0,1] n+1
Acknowledgements
We would like to thank Harry Joe and Claudia Czado for comments which helped to improve this paper. We also would like to thank Roger Cooke and Irène Gijbels for interesting discussions on the simplifying assumption.
An introduction to copulas, Springer series in statistics. R B Nelsen, SpringerNew YorkR. B. Nelsen, An introduction to copulas, Springer series in statistics, Springer, New York, 2006.
Multivariate models and dependence concepts. H Joe, Chapman & HallLondonH. Joe, Multivariate models and dependence concepts, Chapman & Hall, London, 1997.
Quantitative risk management, Princeton series in finance. A J Mcneil, R Frey, P Embrechts, Princeton Univ. PressPrinceton NJA. J. McNeil, R. Frey, P. Embrechts, Quantitative risk management, Princeton series in finance, Prince- ton Univ. Press, Princeton NJ, 2005.
Families of m-Variate Distributions with Given Margins and m(m-1)/2 Bivariate Dependence Parameters. H Joe, Lecture Notes-Monograph Series. 28H. Joe, Families of m-Variate Distributions with Given Margins and m(m-1)/2 Bivariate Dependence Parameters, Lecture Notes-Monograph Series 28 (1996) 120-141.
K Aas, C Czado, A Frigessi, H Bakken, Pair-Copula Constructions of Multiple Dependence. 44K. Aas, C. Czado, A. Frigessi, H. Bakken, Pair-Copula Constructions of Multiple Dependence, Insur- ance: Mathematics and Economics 44 (2009) 182-198.
Selecting and estimating regular vine copulae and application to financial returns. J Dißmann, E C Brechmann, C Czado, D Kurowicka, Computational Statistics & Data Analysis. 59J. Dißmann, E. C. Brechmann, C. Czado, D. Kurowicka, Selecting and estimating regular vine copulae and application to financial returns, Computational Statistics & Data Analysis 59 (2013) 52-69.
Vine constructions of Lévy copulas. O Grothe, S Nicklas, Journal of Multivariate Analysis. 119O. Grothe, S. Nicklas, Vine constructions of Lévy copulas, Journal of Multivariate Analysis 119 (2013) 1-15.
H Joe, H Li, A K Nikoloulopoulos, Tail dependence functions and vine copulas. 101H. Joe, H. Li, A. K. Nikoloulopoulos, Tail dependence functions and vine copulas, Journal of Multi- variate Analysis 101 (2010) 252-270.
Flexible pair-copula estimation in D-vines using bivariate penalized splines. G Kauermann, C Schellhase, Statistics and Computing. G. Kauermann, C. Schellhase, Flexible pair-copula estimation in D-vines using bivariate penalized splines, Statistics and Computing (2013) 1-20.
Vine copulas with asymmetric tail dependence and applications to financial return data. A K Nikoloulopoulos, H Joe, H Li, Computational Statistics & Data Analysis. 56A. K. Nikoloulopoulos, H. Joe, H. Li, Vine copulas with asymmetric tail dependence and applications to financial return data, Computational Statistics & Data Analysis 56 (2012) 3659-3673.
Models for construction of multivariate dependence -a comparison study. K Aas, D Berg, The European Journal of Finance. 15K. Aas, D. Berg, Models for construction of multivariate dependence -a comparison study, The European Journal of Finance 15 (2009) 639-659.
An empirical analysis of multivariate copula models. M Fischer, C Köck, S Schlüter, F Weigert, Quantitative Finance. 9M. Fischer, C. Köck, S. Schlüter, F. Weigert, An empirical analysis of multivariate copula models, Quantitative Finance 9 (2009) 839-854.
On the simplified pair-copula construction -Simply useful or too simplistic?. I Haff, K Aas, A Frigessi, Journal of Multivariate Analysis. 101I. Hobaek Haff, K. Aas, A. Frigessi, On the simplified pair-copula construction -Simply useful or too simplistic?, Journal of Multivariate Analysis 101 (2010) 1296-1310.
Simplified pair copula constructions-Limitations and extensions. J Stöber, H Joe, C Czado, Journal of Multivariate Analysis. 119J. Stöber, H. Joe, C. Czado, Simplified pair copula constructions-Limitations and extensions, Journal of Multivariate Analysis 119 (2013) 101-118.
Testing conditional independence for continuous random variables. W Bergsma, W. Bergsma, Testing conditional independence for continuous random variables, 2004. URL: http: //eprints.pascal-network.org/archive/00000824/.
Estimation of a Copula when a Covariate Affects only Marginal Distributions. I Gijbels, M Omelka, N Veraverbeke, Scandinavian Journal of Statistics. 42I. Gijbels, M. Omelka, N. Veraverbeke, Estimation of a Copula when a Covariate Affects only Marginal Distributions, Scandinavian Journal of Statistics 42 (2015) 1109-1126.
Partial and average copulas and association measures. I Gijbels, M Omelka, N Veraverbeke, Electronic Journal of Statistics. 9I. Gijbels, M. Omelka, N. Veraverbeke, Partial and average copulas and association measures, Electronic Journal of Statistics 9 (2015) 2420-2474.
The partial copula: Properties and associated dependence measures. F Spanhel, M S Kurz, Statistics & Probability Letters. 119F. Spanhel, M. S. Kurz, The partial copula: Properties and associated dependence measures, Statistics & Probability Letters 119 (2016) 76 -83.
On the weak convergence of the empirical conditional copula under a simplifying assumption. F Portier, J Segers, arXiv:1511.06544ArXiv e-printsF. Portier, J. Segers, On the weak convergence of the empirical conditional copula under a simplifying assumption, ArXiv e-prints (2015). arXiv:1511.06544.
Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas. T Nagler, C Czado, Journal of Multivariate Analysis. 151T. Nagler, C. Czado, Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas, Journal of Multivariate Analysis 151 (2016) 69 -89.
Testing the simplifying assumption in high-dimensional vine copulas. M S Kurz, F Spanhel, arXiv:1706.02338M. S. Kurz, F. Spanhel, Testing the simplifying assumption in high-dimensional vine copulas, ArXiv e-prints (2017). arXiv:1706.02338.
Vines: A New Graphical Model for Dependent Random Variables. T Bedford, R M Cooke, The Annals of Statistics. 30T. Bedford, R. M. Cooke, Vines: A New Graphical Model for Dependent Random Variables, The Annals of Statistics 30 (2002) 1031-1068.
Dependence modeling. D. Kurowicka, H. JoeSingaporeWorld ScientificD. Kurowicka, H. Joe (Eds.), Dependence modeling, World Scientific, Singapore, 2011.
Uncertainty analysis with high dimensional dependence modelling. D Kurowicka, R Cooke, Wiley, ChichesterD. Kurowicka, R. Cooke, Uncertainty analysis with high dimensional dependence modelling, Wiley, Chichester, 2006.
Parameter estimation for pair-copula constructions. I , Hobaek Haff, Bernoulli. 19I. Hobaek Haff, Parameter estimation for pair-copula constructions, Bernoulli 19 (2013) 462-491.
Modelling Asymmetric Exchange Rate Dependence. A J Patton, International Economic Review. 47A. J. Patton, Modelling Asymmetric Exchange Rate Dependence, International Economic Review 47 (2006) 527-556.
Beyond simplified pair-copula constructions. E F Acar, C Genest, J Nešlehová, Journal of Multivariate Analysis. 110E. F. Acar, C. Genest, J. Nešlehová, Beyond simplified pair-copula constructions, Journal of Multi- variate Analysis 110 (2012) 74-90.
Factor copula models for multivariate data. P Krupskii, H Joe, Journal of Multivariate Analysis. 120P. Krupskii, H. Joe, Factor copula models for multivariate data, Journal of Multivariate Analysis 120 (2013) 85-101.
Comparison of estimators for pair-copula constructions. I , Hobaek Haff, Journal of Multivariate Analysis. 110I. Hobaek Haff, Comparison of estimators for pair-copula constructions, Journal of Multivariate Analysis 110 (2012) 91-105.
Time-dependent copulas. J.-D Fermanian, M H Wegkamp, Journal of Multivariate Analysis. 110J.-D. Fermanian, M. H. Wegkamp, Time-dependent copulas, Journal of Multivariate Analysis 110 (2012) 19-29.
U Schepsmeier, J Stoeber, E C Brechmann, B Graeler, T Nagler, T Erhardt, VineCopula: Statistical Inference of Vine Copulas. r package version 2.0.5U. Schepsmeier, J. Stoeber, E. C. Brechmann, B. Graeler, T. Nagler, T. Erhardt, VineCopula: Statistical Inference of Vine Copulas, 2016. URL: https://CRAN.R-project.org/package=VineCopula, r package version 2.0.5.
Nonparametric estimation of pair-copula constructions with the empirical. I Haff, J Segers, I. Hobaek Haff, J. Segers, Nonparametric estimation of pair-copula constructions with the empirical
| []
|
[
"Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection",
"Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection"
]
| [
"Student Member, IEEEAmbra Demontis ",
"Student Member, IEEEMarco Melis ",
"Senior Member, IEEEBattista Biggio ",
"Member, IEEEDavide Maiorca ",
"Daniel Arp ",
"Konrad Rieck ",
"Igino Corona ",
"Senior Member, IEEEGiorgio Giacinto ",
"Fellow, IEEEFabio Roli "
]
| []
| []
| To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection. However, its security against well-crafted attacks has not only been recently questioned, but it has been shown that machine learning exhibits inherent vulnerabilities that can be exploited to evade detection at test time. In other words, machine learning itself can be the weakest link in a security system. In this paper, we rely upon a previously-proposed attack framework to categorize potential attack scenarios against learning-based malware detection tools, by modeling attackers with different skills and capabilities. We then define and implement a set of corresponding evasion attacks to thoroughly assess the security of Drebin, an Android malware detector. The main contribution of this work is the proposal of a simple and scalable secure-learning paradigm that mitigates the impact of evasion attacks, while only slightly worsening the detection rate in the absence of attack. We finally argue that our secure-learning approach can also be readily applied to other malware detection tasks. | 10.1109/tdsc.2017.2700270 | [
"https://arxiv.org/pdf/1704.08996v1.pdf"
]
| 6,350,280 | 1704.08996 | a37f47deb561c3985de248026443112153f7fcd9 |
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection
Student Member, IEEEAmbra Demontis
Student Member, IEEEMarco Melis
Senior Member, IEEEBattista Biggio
Member, IEEEDavide Maiorca
Daniel Arp
Konrad Rieck
Igino Corona
Senior Member, IEEEGiorgio Giacinto
Fellow, IEEEFabio Roli
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection
Index Terms—Android Malware Detection, Static Analysis, Secure Machine Learning, Computer Security
To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection. However, its security against well-crafted attacks has not only been recently questioned, but it has been shown that machine learning exhibits inherent vulnerabilities that can be exploited to evade detection at test time. In other words, machine learning itself can be the weakest link in a security system. In this paper, we rely upon a previously-proposed attack framework to categorize potential attack scenarios against learning-based malware detection tools, by modeling attackers with different skills and capabilities. We then define and implement a set of corresponding evasion attacks to thoroughly assess the security of Drebin, an Android malware detector. The main contribution of this work is the proposal of a simple and scalable secure-learning paradigm that mitigates the impact of evasion attacks, while only slightly worsening the detection rate in the absence of attack. We finally argue that our secure-learning approach can also be readily applied to other malware detection tasks.
INTRODUCTION
During the last decade, machine learning has been increasingly applied in security-related tasks, in response to the increasing variability and sophistication of modern attacks [1], [3], [6], [27], [33]. One relevant feature of machine-learning approaches is their ability to generalize, i.e., to potentially detect never-before-seen attacks, or variants of known ones. However, as first pointed out by Barreno et al. [4], [5], machine-learning algorithms have been designed under the assumption that training and test data follow the same underlying probability distribution, which makes them vulnerable to well-crafted attacks violating this assumption. This means that machine learning itself can be the weakest link in the security chain [2]. Subsequent work has confirmed this intuition, showing that machine-learning techniques can be significantly affected by carefully-crafted attacks exploiting knowledge of the learning algorithm; e.g., skilled attackers can manipulate data at test time to evade detection, or inject poisoning samples into the training data to mislead the learning algorithm and subsequently cause misclassification errors [7], [11], [25], [32], [34], [42]-[45].
In this paper, instead, we show that one can leverage machine learning to improve system security, by following an adversary-aware approach in which the machine-learning algorithm is designed from the ground up to be more resistant against evasion. We further show that designing adversaryaware learning algorithms according to this principle, as advocated in [9], [10], does not necessarily require one to trade classification accuracy in the absence of carefullycrafted attacks for improving security.
We consider Android malware detection as a case study for our approach. The relevance of this task is witnessed by the fact that Android has become the most popular mobile operating system, with more than a billion users around the world, while the number of malicious applications targeting them has also grown simultaneously: anti-virus vendors detect thousands of new malware samples daily, and there is still no end in sight [28], [50]. Here we focus our analysis on Drebin (Sect. 2), i.e., a machine-learning approach that relies on static analysis for an efficient detection of Android malware directly on the mobile device [3].
Notably, in this work we do not consider attacks that can completely defeat static analysis [31], like those based on packer-based encryption [47] and advanced code obfuscation [17], [23], [24], [35], [36]. The main reason is that such techniques may leave detectable traces, suggesting the use of a more appropriate system for classification; e.g., the presence of system routines that perform dynamic loading of libraries or classes, potentially hiding embedded malware, demands for the use of dynamic analysis for a more reliable classification. For this reason, in this paper we aim to improve the security of Drebin against stealthier attacks, i.e., carefully-crafted malware samples that evade detection without exhibiting significant evidence of manipulation.
Fig. 1: A linear classifier is trained on an available set of labeled applications, to discriminate between malware and benign applications. During classification, unseen applications are evaluated by the classifier. If its output f(x) ≥ 0, they are classified as malware, and as benign otherwise. Drebin also provides an interpretation of its decision, by highlighting the most suspicious (or benign) features that contributed to the decision [3].

To perform a well-crafted security analysis of Drebin and, more generally, of Android malware detection tools against such attacks, we exploit an adversarial framework (Sect. 3) based on previous work on adversarial machine learning [4], [5], [9], [10], [25]. We focus on the definition of different classes of evasion attacks, corresponding to attack scenarios in which the attacker exhibits an increasing capability of manipulating the input data, and level of knowledge about the targeted system. To simulate evasion attacks in which the attacker does not exploit any knowledge of the targeted system, we consider some obfuscation techniques that are not specifically targeted against Drebin, by running an analysis similar to that reported in [30]. To this end, we make use of the commercial obfuscation tool DexGuard (https://www.guardsquare.com/dexguard), which has been originally designed to make reverse-engineering of benign applications more difficult. The obfuscation techniques exploited by this tool are discussed in detail in Sect. 4. Note that, even if considering obfuscation attacks is out of the scope of this work, DexGuard only partially obfuscates the content of Android applications. For this reason, the goal of this analysis is simply to empirically assess whether the static analysis performed by Drebin remains effective when Android applications are not thoroughly obfuscated, or when obfuscation is not targeted.
The main contribution of this work is the proposal of an adversary-aware machine-learning detector against evasion attacks (Sect. 5), inspired from the proactive design approach advocated in the area of adversarial machine learning [9], [10]. The secure machine-learning algorithm proposed in this paper is completely novel. With respect to previous techniques for secure learning [8], [15], [21], [26], it is able to retain computational efficiency and scalability on large datasets (as it exploits a linear classification function), while also being well-motivated from a more theoretical perspective. We empirically evaluate our method on real-world data (Sect. 6), including an adversarial security evaluation based on the simulation of the proposed evasion attacks. We show that our method outperforms state-of-the-art classification algorithms, including secure ones, without losing significant accuracy in the absence of well-crafted attacks, and can even guarantee some degree of robustness against DexGuardbased obfuscations. We finally discuss the main limitations of our work (Sect. 7), and future research challenges, including how to apply the proposed approach to other malware detection tasks (Sect. 8).
ANDROID MALWARE DETECTION
In this section, we give some background on Android applications. We then discuss Drebin and its main limitations.
Android Background
Android is the most used mobile operating system. Android applications are in the apk format, i.e., a zipped archive containing two files: the Android manifest and classes.dex. Additional xml and resource files are respectively used to define the application layout, and to provide additional functionality or multimedia content. As Drebin only analyzes the Android manifest and classes.dex files, below we provide a brief description of their characteristics.
Android Manifest. The manifest file holds information about the application structure. Such structure is organized in application components, i.e., parts of code that perform specific actions; e.g., one component might be associated to a screen visualized by the user (activity) or to the execution of audio in the background (services). The actions of each component are further specified through filtered intents; e.g., when a component sends data to other applications, or is invoked by a browser. Special types of components are entry points, i.e., activities, services and receivers that are loaded when requested by a specific filtered intent (e.g., an activity is loaded when an application is launched, and a service is activated when the device is turned on). The manifest also contains the list of hardware components and permissions requested by the application to work (e.g., Internet access).
Dalvik Bytecode (dexcode). The classes.dex file contains the compiled source code of an application. It contains all the user-implemented methods and classes. Classes.dex might contain specific API calls that can access sensitive resources such as personal contacts (suspicious calls). Moreover, it contains all system-related, restricted API calls whose functionality require permissions (e.g., using the Internet). Finally, this file can contain references to network addresses that might be contacted by the application.
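As an illustration of this kind of static analysis, the following Python sketch extracts some of the string features described above from an apk with the Androguard tool mentioned in Sect. 2.2.1. It is only a sketch: "app.apk" is a placeholder path, and the calls used (AnalyzeAPK, get_permissions, get_activities, get_methods, is_external) are assumed to be available in the installed Androguard version and may need to be adapted; this is not code from Drebin itself.

from androguard.misc import AnalyzeAPK

a, d, dx = AnalyzeAPK("app.apk")   # placeholder path

features = set()
features.update("permission::" + p for p in a.get_permissions())    # manifest: requested permissions
features.update("activity::" + act for act in a.get_activities())   # manifest: application components
for m in dx.get_methods():                                           # dexcode: names of external API calls
    if m.is_external():
        features.add("api_call::" + m.name)

print(len(features), "string features extracted")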
Drebin
Drebin conducts multiple steps and can be executed directly on the mobile device, as it performs a lightweight static analysis of Android applications. The extracted features are used to embed applications into a high-dimensional vector space and train a classifier on a set of labeled data. An overview of the system architecture is given in Fig. 1. In the following, we describe the single steps in more detail.
Feature Extraction
Initially, Drebin performs a static analysis of a set of available Android applications to construct a suitable feature space (we use here a modified version of Drebin that performs a static analysis based on the Androguard tool, available at https://github.com/androguard/androguard). All features extracted by Drebin are presented as strings and organized in 8 different feature sets, as listed in Table 1. Android applications are then mapped onto the
x = (x 1 , . . . , x d ) ∈ X = {0, 1} d ,
where each feature is set to 1 (0) if the corresponding string is present (absent) in the apk file z. An application encoded in feature space may thus look like the following:
x = Φ(z) → (…, 0, 1, …, 1, 0, …)⊤, where, for instance, the entries shown correspond to the features permission::SEND_SMS and permission::READ_SMS from feature set S2, and to api_call::getDeviceId and api_call::getSubscriberId from feature set S5.
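The mapping Φ itself amounts to indexing the union of observed feature strings and encoding presence or absence, as in the minimal Python sketch below; the two example feature sets are illustrative placeholders, not an actual Drebin vocabulary.

def build_vocabulary(apps):
    # one dimension per distinct feature string observed in the training applications
    vocab = sorted(set().union(*apps))
    return {feat: idx for idx, feat in enumerate(vocab)}

def to_binary_vector(app_features, vocab):
    x = [0] * len(vocab)
    for feat in app_features:
        if feat in vocab:        # strings never seen during training are ignored
            x[vocab[feat]] = 1
    return x

apps = [
    {"permission::SEND_SMS", "api_call::getDeviceId"},
    {"permission::INTERNET", "api_call::getSubscriberId"},
]
vocab = build_vocabulary(apps)
print(to_binary_vector(apps[0], vocab))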
Learning and Classification
Once Android applications are represented as feature vectors, Drebin learns a linear Support Vector Machine (SVM) classifier [18], [41] to discriminate between the class of benign and malicious samples. Linear classifiers are generally expressed in terms of a linear function f : X → R, given as:
$f(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x} + b, \qquad (1)$
where w ∈ R d denotes the vector of feature weights, and b ∈ R is the so-called bias. These parameters, to be optimized during training, identify a hyperplane in feature space, which separates the two classes. During classification, unseen applications are then classified as malware if f (x) ≥ 0, and as benign otherwise. During training, we are given a set of labeled samples
D = {(x_i, y_i)}_{i=1}^{n}
, where x i denotes an application in feature space, and y i ∈ {−1, +1} its label, being −1 and +1 the benign and malware class, respectively. The SVM learning algorithm is then used to find the parameters w, b of Eq. (1), by solving the following optimization problem:
$$ \min_{\mathbf{w}, b} \; L(\mathcal{D}, f) = \underbrace{\tfrac{1}{2}\, \mathbf{w}^{\top} \mathbf{w}}_{R(f)} \; + \; C \underbrace{\sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i\, f(\mathbf{x}_i)\bigr)}_{L(f,\, \mathcal{D})}, \qquad (2) $$
where L(f, D) denotes a loss function computed on the training data (exhibiting higher values if samples in D are not correctly classified by f ), R(f ) is a regularization term to avoid overfitting (i.e., to avoid that the classifier overspecializes its decisions on the training data, losing generalization capability on unseen data), and C is a trade-off parameter. As shown by the above problem, the SVM exploits an ℓ2 regularizer on the feature weights and the so-called hinge loss as the loss function. This allows the SVM algorithm to learn a hyperplane that separates the two classes with the highest margin [18], [41]. Note that the above formulation is quite general, as it represents different learning algorithms, depending on the chosen regularizer and loss function [16].
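As a concrete reference, the following Python sketch trains such a linear SVM with scikit-learn on a synthetic set of binary feature vectors; the data, the value of C, and the feature dimension are illustrative and do not correspond to the experiments reported later.

# Sketch of the SVM learning problem in Eq. (2) (hinge loss + l2 regularizer).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X_malware = (rng.rand(50, 20) < 0.6).astype(float)   # label y = +1
X_benign  = (rng.rand(50, 20) < 0.3).astype(float)   # label y = -1
X = np.vstack([X_malware, X_benign])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = LinearSVC(C=1.0, loss="hinge", dual=True)      # minimizes 1/2 w'w + C * hinge loss
clf.fit(X, y)

w, b = clf.coef_.ravel(), clf.intercept_[0]
scores = X @ w + b                                   # f(x) = w'x + b, Eq. (1)
print("training accuracy:", np.mean((scores >= 0) == (y > 0)))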
Limitations and Open Issues
Although Drebin has been shown to be capable of detecting malware with high accuracy, it exhibits intrinsic vulnerabilities that might be exploited by an attacker to evade detection. Since Drebin has been designed to run directly on the mobile device, its most obvious limitation is the lack of a dynamic analysis. Unfortunately, static analysis has clear limitations, as it is not possible to analyze malicious code that is downloaded or decrypted at runtime, or code that is thoroughly obfuscated [17], [23], [24], [31], [35], [36], [47]. For this reason, such attacks are outside the scope of our work. Our focus is rather to understand and to improve the security properties of learning algorithms against specifically-targeted attacks, in which the amount of manipulations performed by the attacker is limited. The rationale is that the manipulated malware samples should not only evade detection, but should also carry as few detectable traces of adversarial manipulation as possible. Although these limitations have been also discussed in [3], the effect of carefully-targeted attacks against Drebin has never been studied before. For this reason, in the following, we introduce an attack framework to provide a systematization of different, potential evasion attacks under limited adversarial manipulations. Then, we present a systematic evaluation of these attacks on Drebin, and a novel learning algorithm to alleviate their effects in practice.
ATTACK MODEL AND SCENARIOS
To perform a thorough security assessment of learningbased malware detection systems, we rely upon an attack model originally defined in [9], [10]. It is grounded on the popular taxonomy of Barreno et al. [4], [5], [25], which categorizes potential attacks against machine-learning algorithms along three axes: security violation, attack specificity and attack influence. The attack model exploits this taxonomy to define a number of potential attack scenarios that may be incurred by the system during operation, in terms of explicit assumptions on the attacker's goal, knowledge of the system, and capability of manipulating the input data.
Attacker's Goal
It is defined in terms of the desired security violation and the so-called attack specificity.
Security violation. Security can be compromised by violating system integrity, if malware samples are undetected; system availability, if benign samples are misclassified as malware; or privacy, if the system leaks confidential information about its users.
Attack specificity. It can be targeted or indiscriminate, depending on whether the attacker is interested in having some specific samples misclassified (e.g., a specific malware sample to infect a particular device), or if any misclassified sample meets her goal (e.g., if the goal is to launch an indiscriminate attack campaign).
Attacker's Knowledge
The attacker may have different levels of knowledge of the targeted system [4], [5], [9], [10], [25], [43]. In particular, she may have complete, partial, or no knowledge at all of: (i) the training data D; (ii) the feature extraction/selection algorithm Φ, and the corresponding feature set X , i.e., how features are computed from data, and selected; (iii) the learning algorithm L(D, f ), along with the decision function f (x) (Eq. 1) and, potentially, even its (trained) parameters w and b. In some applications, the attacker may also exploit feedback on the classifier's decisions to improve her knowledge of the system, and, more generally, her attack strategy [5], [9], [10], [25].
Attacker's Capability
It consists of defining the attack influence and how the attacker can manipulate data.
Attack Influence. It can be exploratory, if the attacker only manipulates data at test time, or causative, if she can also contaminate the training data (e.g., this may happen if a system is periodically retrained on data collected during operation that can be modified by an attacker) [5], [10], [25].
Data Manipulation. It defines how samples (and features)
can be modified, according to application-specific constraints; e.g., which feature values can be incremented or decremented without compromising the exploitation code embedded in the apk file. In many cases, these constraints can be encoded in terms of distances in feature space, computed between the source malware data and its manipulated versions [7], [15], [19], [21], [29], [40]. We refer the reader to Sect. 3.6 for a discussion on how Drebin features can be modified.
Attack Strategy
The attack strategy defines how the attacker implements her activities, based on the hypothesized goal, knowledge, and capabilities. To this end, we characterize the attacker's knowledge in terms of a space Θ that encodes knowledge of the data D, the feature space X , and the classification function f. Accordingly, we can represent the scenario in which the attacker has perfect knowledge of the attacked system as a vector θ = (D, X , f ) ∈ Θ. We characterize the attacker's capability by assuming that an initial set of samples A is given, and that it is modified according to a space of possible modifications Ω(A). Given the attacker's knowledge θ ∈ Θ and a set of manipulated attacks A′ ∈ Ω(A) ⊆ Z, the attacker's goal can be characterized in terms of an objective function W(A′, θ) ∈ R, which evaluates the extent to which the manipulated attacks A′ meet the attacker's goal. The optimal attack strategy can be thus given as:
$$ \mathcal{A}^{\star} = \operatorname*{arg\,max}_{\mathcal{A}' \in \Omega(\mathcal{A})} \; \mathcal{W}(\mathcal{A}'; \theta). \qquad (3) $$
Under this formulation, one can characterize different attack scenarios. The two main ones often considered in adversarial machine learning are referred to as classifier evasion and poisoning [4], [5], [7], [9]- [11], [25], [45]. In the remainder of this work we focus on classifier evasion, while we refer the reader to [10], [45] for further details on classifier poisoning.
Evasion Attacks
In an evasion attack, the attacker manipulates malicious samples at test time to have them misclassified as benign by a trained classifier, without having influence over the training data. The attacker's goal thus amounts to violating system integrity, either with a targeted or with an indiscriminate attack, depending on whether the attacker is targeting a specific machine or running an indiscriminate attack campaign. More formally, evasion attacks can be written as:
$$ z^{\star} = \operatorname*{arg\,min}_{z' \in \Omega(z)} \; \hat{f}\bigl(\Phi(z')\bigr) = \operatorname*{arg\,min}_{z' \in \Omega(z)} \; \hat{\mathbf{w}}^{\top} \mathbf{x}', \qquad (4) $$
where x′ = Φ(z′) is the feature vector associated to the modified attack sample z′, and ŵ is the weight vector estimated by the attacker (e.g., from the surrogate classifier f̂). With respect to Eq. (3), one can consider here one sample at a time, as they can be independently modified.
The above equation essentially tells the attacker which features should be modified to maximally decrease the value of the classification function, i.e., to maximize the probability of evading detection [7], [10]. Note that, depending on the manipulation constraints Ω(z) (e.g., if the feature values are bounded), the set of features to be manipulated is generally different for each malicious sample.
In the following, we consider different evasion scenarios, according to the framework discussed in the previous sections. In particular, we discuss five distinct attack scenarios, sorted for increasing level of attacker's knowledge. Note that, when the attacker knows more details of the targeted system, her estimate of the classification function becomes more reliable, thus facilitating the evasion task (in the sense of requiring less manipulations to the malware samples).
Zero-effort Attacks
This is the standard scenario in which malware data is neither obfuscated nor modified at all. From the viewpoint of the attacker's knowledge, this scenario is characterized by an empty knowledge-parameter vector θ = ().
DexGuard-based Obfuscation Attacks
As another attack scenario in which the attacker does not exploit any knowledge of the attacked system, for which θ = (), we consider a setting similar to that reported in [30]. In particular, we assume that the attacker attempts to evade detection by performing invasive code transformations on the classes.dex file, using the commercial Android obfuscation tool DexGuard. Note that this tool is designed to ensure protection against disassembling/decompiling attempts in benign applications, and not to obfuscate the presence of malicious code; thus, despite the introduction of many changes in the executable code, it is not clear whether and to what extent the obfuscations implemented by this tool may be effective against a learning-based malware detector like Drebin, i.e., how they will affect the corresponding feature values and classification output. The obfuscations implemented by DexGuard are described in more detail in Sect. 4.
Mimicry Attacks
Under this scenario, the attacker is assumed to be able to collect a surrogate dataset including malware and benign samples, and to know the feature space. Accordingly, θ = (D, X ). In this case, the attack strategy amounts to manipulating malware samples to make them as close as possible to the benign data (in terms of conditional probability distributions or, alternatively, distance in feature space). To this end, in the case of Drebin (which uses binary feature values), we can assume that the attacker still aims to minimize Eq. (4), but estimates each component of ŵ independently for each feature as ŵ_k = p(x_k = 1|y = +1) − p(x_k = 1|y = −1), k = 1, . . . , d. This will indeed induce the attacker to add (remove) first features which are more frequently present (absent) in benign files, making the probability distribution of malware samples closer to that of the benign data. It is worth finally remarking that this is a more sophisticated mimicry attack than those commonly used in practice, in which an attacker is usually assumed to merge a malware application with a benign one [43], [50].
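A small Python sketch of this surrogate weight estimate is given below; the synthetic data only illustrates how the per-feature probabilities would be computed from the surrogate dataset.

# Sketch of the mimicry surrogate weights: w_k = p(x_k=1|malware) - p(x_k=1|benign).
import numpy as np

rng = np.random.RandomState(0)
X_malware = (rng.rand(200, 10) < 0.7).astype(float)   # surrogate malware samples
X_benign  = (rng.rand(500, 10) < 0.2).astype(float)   # surrogate benign samples

p_mal = X_malware.mean(axis=0)          # p(x_k = 1 | y = +1)
p_ben = X_benign.mean(axis=0)           # p(x_k = 1 | y = -1)
w_hat = p_mal - p_ben                   # surrogate weight for each feature

# Removing features with large positive w_hat and adding features with large
# negative w_hat pushes malware towards the benign distribution.
print("features ranked by |w_hat|:", np.argsort(-np.abs(w_hat)))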
Limited-Knowledge (LK) Attacks
In addition to the previous case, here the attacker knows the learning algorithm L used by the targeted system, and can learn a surrogate classifier on the available data. The knowledge-parameter vector can be thus encoded as θ = (D, X , f̂), being f̂ the surrogate classifier used to approximate the true f. In this case, the attacker exploits the estimate of ŵ obtained from the surrogate classifier f̂ to construct the evasion samples, according to Eq. (4).
Perfect-Knowledge (PK) Attacks
This is the worst-case setting in which also the targeted classifier is known to the attacker, i.e., θ = (D, X , f ). Although it is not very likely to happen in practice that the attacker gets to know even the trained classifier's parameters (i.e., w and b in Eq. 1), this setting is particularly interesting as it provides an upper bound on the performance degradation incurred by the system under attack, and can be used as reference to evaluate the effectiveness of the system under the other simulated attack scenarios.
Malware Data Manipulation
As stated in Sect. 3.3, one has to discuss how the attacker can manipulate malware applications to create the corresponding evasion attack samples. To this end, we consider two main settings in our evaluation, detailed below.
Feature Addition. Within this setting, the attacker can independently inject (i.e., set to 1) every feature.
Feature Addition and Removal. This scenario simulates a more powerful attacker that can inject every feature, and also remove (i.e., set to 0) features from the dexcode.
These settings are motivated by the fact that malware has to be manipulated to evade detection, but its semantics and intrusive functionality must be preserved. In this respect, feature addition is generally a safe operation, in particular, when injecting manifest features (e.g., adding permissions does not influence any existing application functionality). With respect to the dexcode, one may also safely introduce information that is not actively executed, by adding code after return instructions (dead code) or with methods that are never called by any invoke type instructions. Listing 1 shows an example where a URL feature is introduced by adding a method that is never invoked in the code.
.method public addUrlFeature()V
    .locals 2
    new-instance v0, Ljava/net/URL;
    const-string v1, "http://www.example.com"
    invoke-direct {v0, v1}, Ljava/net/URL;-><init>(Ljava/lang/String;)V
    return-void
.end method

Listing 1. Smali code to add a URL feature.
However, this only applies when such information is not directly executed by the application, and could be stopped at the parsing level by analyzing only the methods belonging to the application call graph. In this case, the attacker would be enforced to change the executed code, and this requires considering additional and stricter constraints. For example, if she wants to add a suspicious API call to a dexcode method that is executed by the application, she should adopt virtual machine registers that have not been used before by the application. Moreover, the attacker should pay attention to possible artifacts or undesired functionalities that are brought by the injected calls, which may influence the semantics of the original program. Accordingly, injecting a large number of features may not always be feasible.
Feature removal is even a more complicated operation. Removing permissions from the manifest is not possible, as this would limit the application functionality. The same holds for intent filters. Some application component names can be changed but, as stated in Sect. 4, this operation is not easy to be automatically performed: the attacker must ensure that the application component names in the dexcode are changed accordingly, and must not modify any of the entry points. Furthermore, the feasible changes may only slightly affect the whole manifest structure (as shown in our experiments with automated obfuscation tools). With respect to the dexcode, multiple ways can be exploited to remove its features; e.g., it is possible to hide IP addresses (if they are stored as strings) by encrypting them with the introduction of additional functions, and decrypting them at runtime. Of course, this should be done by avoiding the addition of features that are already used by the system (e.g., function calls that are present in the training data).
With respect to suspicious and restricted API calls, the attacker should encrypt the method or the class invoking them. However, this could introduce other calls that might increase the suspiciousness of the application. Moreover, one mistake at removing such API references might completely destroy the application functionality. The reason is that Android uses a verification system to check the integrity of an application during execution (e.g., it will close the application, if a register passed as a parameter to an API call contains a wrong type), and chances of compromising this behavior increase if features are deleted carelessly.
For the aforementioned reasons, performing a fine-grained evasion attack that changes a lot of features may be very difficult in practice, without compromising the malicious application functionality. In addition, another problem for the attacker is getting to know precisely which features should be added or removed, which makes the construction of evasion attack samples even more complicated.
DEXGUARD-BASED OBFUSCATION ATTACKS
Although commercial obfuscators are designed to protect benign applications against reverse-engineering attempts, it has been recently shown that they can also be used to evade anti-malware detection systems [30]. We thus use DexGuard, a popular obfuscator for Android, to simulate attacks in which no specific knowledge of the targeted system is exploited, as discussed in Sect. 3.5.2. Recall that, although considering obfuscation attacks is out of the scope of this work, the obfuscation techniques implemented by DexGuard do not completely obfuscate the code. For this reason, we aim to understand whether this may make static analysis totally ineffective, and how it affects our strategy to improve classifier security. A brief description of the DexGuard-based obfuscation attacks is given below.

Trivial obfuscation. This strategy changes the names of implemented application packages, classes, methods and fields, by replacing them with random characters. Trivial obfuscation also performs negligible modifications to some manifest features by renaming some application components that are not entry points (see Sect. 2.1). As the application functionality must be preserved, Trivial obfuscation does not rename any system API or method imported from native libraries. Given that Drebin mainly extracts information from system APIs, we expect that its detection capability will be only barely affected by this obfuscation.
String Encryption. This strategy encrypts strings defined in the dexcode with the instruction const-string. Such strings can be visualized during the application execution, or may be used as variables. Thus, even if they are retrieved through an identifier, their value must be preserved during the program execution. For this reason, an additional method is added to decrypt them at runtime, when required. This obfuscation tends to remove URL features (S8) that are stored as strings in the dexcode. Features corresponding to the decryption routines extracted by Drebin (S7) are instead not affected, as the decryption routines added by DexGuard do not belong to the system APIs.
Reflection. This obfuscation technique uses the Java Reflection API to replace invoke-type instructions with calls that belong to the Java.lang.Reflect class. The main effect of this action is destroying the application call graph. However, this technique does not affect the system API names, as they do not get encrypted during the process. It is thus reasonable to expect that most of the features extracted by Drebin will remain unaffected.
Class Encryption. This is the most invasive obfuscation strategy, as it encrypts all the application classes, except entry-point ones (as they are required to load the application externally). The encrypted classes are decrypted at runtime by routines that are added during the obfuscation phase. Worth noting, the class encryption performed by DexGuard does not completely encrypt the application. For example, classes belonging to the API components contained in the manifest are not encrypted, as this would most likely compromise the application functionality. For the same reason, the manifest itself is preserved. Accordingly, it is still possible to extract static features using Drebin, and analyze the application. Although out of the scope of our work, it is still worth remarking here that using packers (e.g., [47]) to perform full dynamic loading of the application classes might completely evade static analysis.
Combined Obfuscations. The aforementioned strategies can also be combined to produce additional obfuscation techniques. As in [30], we will consider three additional techniques in our experiments, by respectively combining (i) trivial and string encryption, (ii) adding reflection to them, and (iii) adding class encryption to the former three.
ADVERSARIAL DETECTION
In this section, we introduce an adversary-aware approach to improve the robustness of Drebin against carefully-crafted data manipulation attacks. As for Drebin, we aim to develop a simple, lightweight and scalable approach. For this reason, the use of non-linear classification functions with computationally-demanding learning procedures is not suitable for our application setting. We have thus decided to design a linear classification algorithm with improved security properties, as detailed in the following.
Securing Linear Classification
As in previous work [8], [26], we aim to improve the security of our linear classification system by enforcing learning of more evenly-distributed feature weights, as this would intuitively require the attacker to manipulate more features to evade detection. Recall that, as discussed in Sect. 3.6, if a large number of features has to be manipulated to evade detection, it may not even be possible to construct the corresponding malware sample without compromising its malicious functionality. With respect to the work in [8], [26], where different heuristic implementations were proposed to improve the so-called evenness of feature weights (see Sect. 6), we propose here a more principled approach, derived from the idea of bounding classifier sensitivity to feature changes.
We start by defining a measure of classifier sensitivity as:
$$ \Delta f(\mathbf{x}, \mathbf{x}') = \frac{f(\mathbf{x}) - f(\mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|} = \frac{\mathbf{w}^{\top} (\mathbf{x} - \mathbf{x}')}{\|\mathbf{x} - \mathbf{x}'\|}, \qquad (5) $$
which evaluates the decrease of f when a malicious sample x is manipulated as x′, with respect to the required amount of modifications, given by ‖x − x′‖. Let us assume now, without loss of generality, that w has unit ℓ1-norm and that features are normalized in [0, 1]. 3 We also assume that, for simplicity, the ℓ1-norm is used to evaluate ‖x − x′‖. Under these assumptions, it is not difficult to see that ∆f ∈ [1/d, 1], where the minimum is attained for equal absolute weight values (regardless of the amount of modifications made to x), and the maximum is attained when only one weight is not null, confirming the intuition that more evenly-distributed feature weights should improve classifier security under attack. This can also be shown by selecting x, x′ to maximize ∆f(x, x′):
$$ \Delta f(\mathbf{x}, \mathbf{x}') \leq \frac{1}{K} \sum_{k=1}^{K} |w_{(k)}| \leq \max_{j = 1, \ldots, d} |w_j| = \|\mathbf{w}\|_{\infty}. \qquad (6) $$
Here, K = ‖x − x′‖ corresponds to the number of modified features, and |w(1)|, . . . , |w(d)| denote the weights sorted in descending order of their absolute values, such that |w(1)| ≥ . . . ≥ |w(d)|.
The last inequality shows that, to minimize classifier sensitivity to feature changes, one can minimize the ℓ∞-norm of w. This in turn tends to promote solutions which exhibit the same absolute weight values (a well-known effect of ℓ∞ regularization [13]). This is a very interesting result which has never been pointed out in the field of adversarial machine learning. We have shown that regularizing our learning algorithm by penalizing the ℓ∞-norm of the feature weights w can improve the security of linear classifiers, yielding classifiers with more evenly-distributed feature weights. This has only been intuitively motivated in previous work, and implemented with heuristic approaches [8], [26]. As we will show in Sect. 6, being derived from a more principled approach, our method is not only capable of finding more evenly-distributed feature weights with respect to the heuristic approaches in [8], [26], but it is also able to outperform them in terms of security.
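The following small numerical example illustrates this point; the weight vectors are illustrative, not learned from data.

# Worst-case decrease of f when K features are flipped (cf. Eq. 6): with equal
# l1-norm, evenly-distributed weights bound the per-feature sensitivity by 1/d,
# while a sparse weight vector can lose its whole mass on a single feature.
import numpy as np

d = 10
w_even = np.full(d, 1.0 / d)                # ||w||_1 = 1, ||w||_inf = 0.1
w_sparse = np.zeros(d)
w_sparse[0] = 1.0                           # ||w||_1 = 1, ||w||_inf = 1.0

def worst_case_drop(w, K):
    """Maximum decrease of f when the K highest-|w| features are flipped."""
    return np.sort(np.abs(w))[::-1][:K].sum()

for K in (1, 3):
    print(K, worst_case_drop(w_even, K), worst_case_drop(w_sparse, K))
# Flipping one feature changes f by at most 0.1 for w_even vs 1.0 for w_sparse.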
It is also worth noting that our approach preserves convexity of the objective function minimized by the learning algorithm. This gives us the possibility of deriving computationally-efficient training algorithms with (potentially strong) convergence guarantees. As an alternative to considering an additional term to the learner's objective function L, one can still control the ℓ∞-norm of w by adding a box constraint on it. This is a well-known property of convex optimization [13]. As we may need to apply different upper and lower bounds to different feature sets, depending on how their values can be manipulated, we prefer to follow the latter approach.
Secure SVM Learning Algorithm
According to the previous discussion, we define our Secure SVM learning algorithm (Sec-SVM) as:
$$ \min_{\mathbf{w}, b} \; \tfrac{1}{2}\, \mathbf{w}^{\top} \mathbf{w} + C \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i\, f(\mathbf{x}_i)\bigr), \qquad (7) $$
$$ \text{s.t.} \quad w_k^{\rm lb} \leq w_k \leq w_k^{\rm ub}, \quad k = 1, \ldots, d. \qquad (8) $$
Note that this optimization problem is identical to Problem (2), except for the presence of a box constraint on w. The lower and upper bounds on w are defined by the vectors w^lb = (w^lb_1, . . . , w^lb_d) and w^ub = (w^ub_1, . . . , w^ub_d), which should be selected with a suitable procedure (see Sect. 5.3). For notational convenience, in the sequel we will also denote the constraint given by Eq. (8) compactly as w ∈ W ⊆ R^d.
The corresponding learning algorithm is given as Algorithm 1. It is a constrained variant of Stochastic Gradient Descent (SGD) that also considers a simple line-search procedure to tune the gradient step size during the optimization. SGD is a lightweight gradient-based algorithm for efficient learning on very large-scale datasets, based on approximating the subgradients of the objective function using a single sample or a small subset of the training data, randomly chosen at each iteration [12], [49]. In our case, the subgradients of the objective function (Eq. 7) are given as:
$$ \nabla_{\mathbf{w}} L \approx \mathbf{w} + C \sum_{i \in \mathcal{S}} \nabla_i\, \mathbf{x}_i, \qquad (9) $$
$$ \nabla_{b} L \approx C \sum_{i \in \mathcal{S}} \nabla_i, \qquad (10) $$
where S denotes the subset of the training samples used to compute the approximation, and ∇ i is the gradient of the hinge loss with respect to f (x i ), which equals −y i , if y i f (x i ) < 1, and 0 otherwise. One crucial issue to ensure quick convergence of SGD is the choice of the initial gradient step size η (0) , and of a proper decaying function s(t), i.e., a function used to gradually reduce the gradient step size during the optimization process. As suggested in [12], [49], these parameters should be chosen based on preliminary experiments on a subset of the training data. Common choices for the function s(t) include linear and exponential decaying functions.
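For concreteness, a minimal Python sketch of this constrained SGD procedure (cf. Algorithm 1) is reported below. The line search on the step size is omitted, and the decaying schedule, batch size, box bounds and synthetic data are illustrative assumptions rather than the settings used in our experiments.

# Sketch of Sec-SVM training: SGD on Eq. (7) with projection onto the box in Eq. (8).
import numpy as np

def sec_svm_sgd(X, y, C=1.0, w_lb=-0.5, w_ub=0.5,
                eta0=0.5, n_iters=2000, batch=32, seed=0):
    rng = np.random.RandomState(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(1, n_iters + 1):
        S = rng.choice(n, size=batch, replace=False)
        margins = y[S] * (X[S] @ w + b)
        grad_i = -y[S] * (margins < 1)                 # hinge-loss subgradient wrt f(x_i)
        grad_w = w + C * X[S].T @ grad_i               # cf. Eq. (9)
        grad_b = C * grad_i.sum()                      # cf. Eq. (10)
        eta = eta0 / np.sqrt(t)                        # decaying step size s(t)
        w = np.clip(w - eta * grad_w, w_lb, w_ub)      # projection onto the box constraint
        b = b - eta * grad_b
    return w, b

rng = np.random.RandomState(1)
X = (rng.rand(400, 30) < 0.4).astype(float)
y = np.sign(X[:, 0] + X[:, 1] - 0.5)                   # toy labels in {-1, +1}
w, b = sec_svm_sgd(X, y)
print("max |w_k| =", np.abs(w).max())                  # bounded by the box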
We conclude this section by pointing out that our formulation is quite general; one may indeed select different combinations of loss and regularization functions to train different, secure variants of other linear classification algorithms. Our Sec-SVM learning algorithm is only an instance that considers the hinge loss and ℓ2 regularization, as the standard SVM [18], [41]. It is also worth remarking that, as the lower and upper bounds become smaller in absolute value, our method tends to yield (dense) solutions with weights equal to the upper or to the lower bound. A similar effect is obtained when minimizing the ℓ∞-norm directly [13].
We conclude from this analysis that there is an implicit trade-off between security and sparsity: while a sparse learning model ensures an efficient description of the learned decision function, it may be easily circumvented by just manipulating a few features. By contrast, a secure learning model relies on the presence of many, possibly redundant, features that make it harder to evade the decision function, yet at the price of a dense representation.
Parameter Selection
To tune the parameters of our classifiers, as suggested in [10], [48], one should not only optimize accuracy on a set of collected data, using traditional performance evaluation techniques like cross validation or bootstrapping. More properly, one should optimize a trade-off between accuracy and security, by accounting for the presence of potential, unseen attacks during the validation procedure. Here we optimize this trade-off, denoted with r(f µ , D), as:
$$ \mu^{\star} = \operatorname*{arg\,max}_{\mu} \; r(f_{\mu}, \mathcal{D}) = A(f_{\mu}, \mathcal{D}) + \lambda\, S(f_{\mu}, \mathcal{D}), \qquad (11) $$
where we denote with f µ the classifier learned with parameters µ (e.g., for our Sec-SVM, µ = {C, w lb , w ub }), with A a measure of classification accuracy in the absence of attack (estimated on D), with S an estimate of the classifier security under attack (estimated by simulating attacks on D), and with λ a given trade-off parameter.
Classifier security can be evaluated by considering distinct attack settings, or a different amount of modifications to the attack samples.

Algorithm 1 Sec-SVM Learning Algorithm
Input: D = {x_i, y_i}_{i=1}^n, the training data; C, the regularization parameter; w^lb, w^ub, the lower and upper bounds on w; |S|, the size of the sample subset used to approximate the subgradients; η^(0), the initial gradient step size; s(t), a decaying function of t; and ε > 0, a small constant.
Output: w, b, the trained classifier's parameters.
1: Set iteration count t ← 0.
2: Randomly initialize v^(t) = (w^(t), b^(t)) ∈ W × R.
3: Compute the objective function L(v^(t)) using Eq. (7).
4: repeat
5:   Compute (∇_w L, ∇_b L) using Eqs. (9)-(10).
6:   Increase the iteration count t ← t + 1.
7:   Set η^(t) ← γ η^(0) s(t), performing a line search on γ.
8:   Set w^(t) ← w^(t−1) − η^(t) ∇_w L.
9:   Project w^(t) onto the feasible (box) domain W.
10:  Set b^(t) ← b^(t−1) − η^(t) ∇_b L.
11:  Set v^(t) = (w^(t), b^(t)).
12:  Compute the objective function L(v^(t)) using Eq. (7).
13: until |L(v^(t)) − L(v^(t−1))| < ε
14: return: w = w^(t), and b = b^(t).

In our experiments, we will optimize
security in a worst-case scenario, i.e., by simulating a PK evasion attack with both feature addition and removal. We will then average the performance under attack over an increasing number of modified features m ∈ [1, M]. More specifically, we will measure security as:
$$ S = \frac{1}{M} \sum_{m=1}^{M} A(f_{\mu}, \mathcal{D}_{m}), \qquad (12) $$
where D_m is obtained by modifying a maximum of m features in each malicious sample in the validation set, 4 as suggested by the PK evasion attack strategy.
EXPERIMENTAL ANALYSIS
In this section, we report an experimental evaluation of our proposed secure learning algorithm (Sec-SVM) by testing it under different evasion scenarios (see Sect. 3.5).
Classifiers. We compare our Sec-SVM approach with the standard Drebin implementation (denoted with SVM), and with a previously-proposed technique that improves security of linear classifiers by using a Multiple Classifier System (MCS) architecture to obtain a linear classifier with more evenly-distributed feature weights [8], [26]. To this end, multiple linear classifiers are learned by sampling uniformly from the training set (a technique known as bagging [14]) and by randomly subsampling the feature set, as suggested by the random subspace method [22]. The classifiers are then combined by averaging their outputs, which is equivalent to using a linear classifier whose weights and bias are the average of the weights and biases of the base classifiers, respectively. With this simple trick, the computational complexity at test time remains thus equal to that of a single linear classifier [8]. As we use linear SVMs as the base classifiers, we denote this approach with MCS-SVM. We finally consider a version of our Sec-SVM trained using only manifest features, which we call Sec-SVM (M). The reason is to verify whether considering only features that cannot be removed prevents the attacker from closely mimicking benign data, thereby yielding a more secure system.

Datasets. In our experiments, we use two distinct datasets. The first (referred to as Drebin) includes the data used in [3], and consists of 121,329 benign applications and 5,615 malicious samples, labeled using the VirusTotal service. A sample is labeled as malicious if it is detected by at least five anti-virus scanners, whereas it is labeled as benign if no scanner flagged it as malware. The second (referred to as Contagio) includes the data used in [30], and consists of about 1,500 malware samples, obtained from the MalGenome 5 and the Contagio Mobile Minidump 6 datasets. Such samples have been obfuscated with the seven obfuscation techniques described in Sect. 4, yielding a total of about 10,500 samples.

Training-test splits. We average our results on 10 independent runs. In each repetition, we randomly select 60,000 applications from the Drebin dataset, and split them into two equal sets of 30,000 samples each, respectively used as the training set and the surrogate set (as required by the LK and mimicry attacks discussed in Sect. 3.5). As for the test set, we use all the remaining samples from Drebin.
In some attack settings (detailed below), we replace the malware data from Drebin in each test set with the malware samples from Contagio. This enables us to evaluate the extent to which a classifier (trained on some data) preserves its performance in detecting malware from different sources. 7

5. http://www.malgenomeproject.org/
6. http://contagiominidump.blogspot.com/
7. Note however that a number of malware samples in Contagio are also included in the Drebin dataset.

Feature selection. When running Drebin on the given datasets, more than one million features are found. For computational efficiency, we retain the most discriminant d features, for which |p(x_k = 1|y = +1) − p(x_k = 1|y = −1)|, k = 1, . . . , d, exhibits the highest values (estimated on training data). In our case, using only d = 10,000 features does not significantly affect the accuracy of Drebin. This is consistent with the recent findings in [37], as it is shown that only a very small fraction of features is significantly discriminant, and usually assigned a non-zero weight by Drebin (i.e., by the SVM learning algorithm). For the same reason, the sets of selected features turned out to be the same in each run. Their sizes are reported in Table 2.

Parameter setting. We run some preliminary experiments on a subset of the training set and noted that changing C did not have a significant impact on classification accuracy for all the SVM-based classifiers (except for higher values, which cause overfitting). Thus, also for the sake of a fair comparison among different SVM-based learners, we set C = 1 for all classifiers and repetitions. For the MCS-SVM classifier, we train 50 base linear SVMs on random subsets of 80% of the training samples and 50% of the features, as this ensures a sufficient diversification of the base classifiers, providing more evenly-distributed feature weights. The bounds of the Sec-SVM are selected through a cross-validation, following the procedure explained in Sect. 5.3. In particular, we set each element of w^ub (w^lb) as w^ub (w^lb), and optimize the two scalar values (w^ub, w^lb) ∈ {0.1, 0.5, 1} × {−1, −0.5, −0.1}. As for the performance measure A(f_µ, D) (Eq. 11), we consider the Detection Rate (DR) at 1% False Positive Rate (FPR), while the security measure S(f_µ, D) is simply given by Eq. (12). We set λ = 10^−2 in Eq. (11) to avoid worsening the detection of both benign and malware samples in the absence of attack to an unnecessary extent. Finally, as explained in Sect. 5.2, the parameters of Algorithm 1 are set by running it on a subset of the training data, to ensure quick convergence, as η^(0) = 0.5, γ ∈ {10, 20, . . . , 70} and s(t) = 2^(−0.01t)/√n.

Evasion attack algorithm. We discuss here the algorithm used to implement our advanced evasion attacks. For linear classifiers with binary features, the solution to Problem (4) can be found as follows. First, the estimated weights ŵ have to be sorted in descending order of their absolute values, along with the feature values x of the initial malicious sample. This means that, if the sorted weights and features are denoted respectively with ŵ(1), . . . , ŵ(d) and x(1), . . . , x(d), then |ŵ(1)| ≥ . . . ≥ |ŵ(d)|. Then, for k = 1, . . . , d:
• if x(k) = 1 and ŵ(k) > 0 (and the feature is not in the manifest sets S1-S4), then x(k) is set to zero;
• if x(k) = 0 and ŵ(k) < 0, then x(k) is set to one;
• else x(k) is left unmodified.
If the maximum number of modified features has been reached, the for loop is clearly stopped in advance (a compact sketch of this procedure is given below).
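The sketch below assumes binary features, estimated weights ŵ, and an index set of manifest features (sets S1-S4) that can only be added, never removed; all concrete values are illustrative.

# Sketch of the evasion attack on a linear classifier with binary features:
# features are visited in descending order of |w_hat|; manifest features
# (sets S1-S4) can only be added, never removed.
import numpy as np

def evade(x, w_hat, manifest_idx, max_changes):
    x_adv, changes = x.copy(), 0
    for k in np.argsort(-np.abs(w_hat)):
        if changes >= max_changes:
            break
        if x_adv[k] == 1 and w_hat[k] > 0 and k not in manifest_idx:
            x_adv[k] = 0                     # feature removal (dexcode features only)
            changes += 1
        elif x_adv[k] == 0 and w_hat[k] < 0:
            x_adv[k] = 1                     # feature addition (always allowed)
            changes += 1
    return x_adv

# Toy example: 6 features, the first two belonging to the manifest sets.
w_hat = np.array([0.9, -0.8, 0.7, -0.3, 0.2, 0.05])
x = np.array([1, 0, 1, 0, 1, 0])
x_adv = evade(x, w_hat, manifest_idx={0, 1}, max_changes=3)
print(x, "->", x_adv, "  score:", x @ w_hat, "->", x_adv @ w_hat)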
Experimental Results
We present our results by reporting the performance of the given classifiers against (i) zero-effort attacks, (ii) obfuscation attacks, and (iii) advanced evasion attacks, including PK, LK and mimicry attacks, with both feature addition, and feature addition and removal (see Sects. 3.5-3.6).
Zero-effort attacks. Results for the given classifiers in the absence of attack are reported in the ROC curves of Fig. 2. They report the Detection Rate (DR, i.e., the fraction of correctly-classified malware samples) as a function of the False Positive Rate (FPR, i.e., the fraction of misclassified benign samples) for each classifier. We consider two different cases: (i) using both training and test samples from Drebin (left plot); and (ii) training on Drebin and testing on Contagio (right plot), as previously discussed. Notably, MCS-SVM achieves the highest DR (higher than 96% at 1% FPR) in both settings, followed by SVM and Sec-SVM, which only slightly worsen the DR. Sec-SVM (M) performs instead significantly worse. In Fig. 3, we also report the absolute weight values (sorted in descending order) of each classifier, to show that Sec-SVM classifiers yield more evenly-distributed weights, also with respect to MCS-SVM.
DexGuard-based obfuscation attacks. The ROC curves reported in Fig. 4 show the performance of the given classifiers, trained on Drebin, against the DexGuard-based obfuscation attacks (see Sect. 3.5.2 and Sect. 4) on the Contagio malware. Here, Sec-SVM performs similarly to MCS-SVM, while SVM and Sec-SVM (M) typically exhibit lower detection rates. Nevertheless, as these obfuscation attacks do not completely obfuscate the malware code, and the feature changes induced by them are not specifically targeted against any of the given classifiers, the classification performances are not significantly affected. In fact, the DR at 1% FPR is never lower than 90%. As expected (see Sect. 4), strategies such as Trivial, String Encryption and Reflection do not affect the system performances significantly, as Drebin only considers system-based API calls, which are not changed by the aforementioned obfuscations. Among these attacks, Class Encryption is the most effective strategy, as it is the only one that more significantly modifies the S5 and S7 feature sets (in particular, the first one), as it can be seen in Fig. 5. Nevertheless, even in this case, as manifest-related features are not affected by DexGuard-based obfuscations, Drebin still exhibits good detection performances.
Advanced evasion. We finally report results for the PK, LK, and mimicry attacks in Fig. 6, considering both feature addition, and feature addition and removal. As we are not removing manifest-related features, Sec-SVM (M) is clearly tested only against feature-addition attacks. Worth noting, Sec-SVM can drastically improve security compared to the other classifiers, as its performance decreases more gracefully against an increasing number of modified features, especially in the PK and LK attack scenarios. In the PK case, while the DR of Drebin (SVM) drops to 60% after modifying only two features, the DR of the Sec-SVM decreases to the same amount only when fifteen feature values are changed. This means that our Sec-SVM approach can improve classifier security of about ten times, in terms of the amount of modifications required to create a malware sample that evades detection. The underlying reason is that Sec-SVM provides more evenly-distributed feature weights, as shown in Fig. 3. Note that Sec-SVM and Sec-SVM (M) exhibit a maximum absolute weight value of 0.5 (on average). This means that, in the worst case, modifying a single feature yields an average decrease of the classification function equal to 0.5, while for MCS-SVM and SVM this decrease is approximately 1 and 2.5, respectively. It is thus clear that, to achieve a comparable decrease of the classification function (i.e., a comparable probability of evading detection), more features should be modified in the former cases. Finally, it is also worth noting that mimicry attacks are less effective, as expected, as they exploit an inferior level of knowledge of the targeted system. Despite this, an interesting insight on the behavior of such attacks is reported in Fig. 7.

Fig. 7. Fraction of features equal to one in each set (averaged on 10 runs) for benign (first plot), non-obfuscated (second plot) and DexGuard-based obfuscated malware in Drebin, using PK (third plot) and mimicry (fourth plot) attacks. It is clear that the mimicry attack produces malware samples which are more similar to the benign data than those obtained with the PK attack.

After modifying a large number of features, the mimicry attack tends to produce a distribution that is very close to that of the benign data (even without removing any manifest-related features). This means that, in terms of their feature vectors, benign and malware samples become very similar. Under these circumstances, no machine-learning technique
can separate benign and malware data with satisfying accuracy. The vulnerability of the system may be thus regarded as intrinsic in the choice of the feature representation, rather than in how the classification function is learned. This clearly confirms the importance of designing features that are more difficult to manipulate for an attacker.

TABLE 3. Top 5 modified features by the PK evasion attack with feature addition (A) and removal (R), for SVM, MCS-SVM, and Sec-SVM (highlighted in bold). The probability of a feature being equal to one in malware data is denoted with p. For each classifier and each feature, we then report two values (averaged on 10 runs): (i) the probability q that the feature is modified by the attack (left), and (ii) its relevance (right), measured as its absolute weight divided by ‖w‖1. If the feature is not modified within the first 200 changes, we report that the corresponding values are only lower than the minimum ones observed. In the last column, we also report whether the feature has been added (↑) or removed (↓) by the attack.

Feature manipulation. To provide some additional insights, in Table 3 we report the top 5 modified features by the PK attack with feature addition and removal for SVM, MCS-SVM, and Sec-SVM. For each classifier, we select the top 5 features by ranking them in descending order of the probability of modification q′. This value is computed as follows. First, the probability q of modifying the kth feature in a malware sample, regardless of the maximum number of admissible modifications, is computed as:
$$ q = \mathbb{E}_{\mathbf{x} \sim p(\mathbf{x} | y = +1)} \{ x_k \neq x'_k \} = p^{\nu} (1 - p)^{1 - \nu}, \qquad (13) $$
where E denotes the expectation operator, p(x|y = +1) the distribution of malware samples, x_k and x′_k are the kth feature values before and after manipulation, and p is the probability of observing x_k = 1 in malware. Note that ν = 1 if x_k = 1, x_k does not belong to the manifest sets S1-S4, and the associated weight ŵ_k > 0, while ν = 0 if ŵ_k < 0 (otherwise the probability of modification q is zero). This formula denotes compactly that, if a feature can be modified, then it will be changed with probability p (in the case of deletion) or 1 − p (in the case of insertion). Then, to consider that features associated to the highest absolute weight values are modified more frequently by the attack, with respect to an increasing maximum number m of modifiable features, we compute q′ = E_m{q}. Considering m = 1, . . . , d, with uniform probability, each feature will be modified with probability q′ = q (d − r)/d, with r = 0 for the feature x(1) assigned to the highest absolute weight value, r = 1 for the second-ranked feature x(2), etc. In general, for the kth-ranked feature x(k), r = k − 1, for k = 1, . . . , d.
Thus, q′ decreases depending on the feature ranking, which in turn depends on the feature weights and the probability p of the feature being present in malware. Regarding Table 3, note first how the probability of modifying the top features, along with their relevance (i.e., their absolute weight value with respect to ‖w‖1), decreases from SVM to MCS-SVM, and from MCS-SVM to Sec-SVM. These two observations are clearly connected. The fact that the attack modifies features with a lower probability depends on the fact that weights are more evenly distributed. To better understand this phenomenon, imagine the limit case in which all features are assigned the same absolute weight value. It is clear that, in this case, the attacker could randomly modify any subset of features and obtain the same effect on the classification output; thus, on average, each feature will have the same probability of being modified.
The probability of modifying a feature, however, does not only depend on the weight assigned by the classifier, but also on the probability of being present in malware data, as mentioned before. For instance, if a (non-manifest) feature is present in all malware samples, and it has been assigned a very high positive weight, it will always be removed; conversely, if it only rarely occurs in malware, then it will be deleted only from a few samples. This behavior is clearly exhibited by the top features modified by Sec-SVM. In fact, since this classifier basically assigns the same absolute weight value to almost all features, the top modified ones are simply those appearing more frequently in malware. More precisely, in our experiments this classifier, as a result of our parameter optimization procedure, assigns a higher (absolute) weight to features present in malware, and a lower (absolute) weight to features present in benign data (i.e., |w^ub_k| > |w^lb_k|, k = 1, . . . , d). This is why, in contrast to SVM and MCS-SVM, the attack against Sec-SVM tends to remove features, rather than injecting them. To conclude, it is nevertheless worth pointing out that, in general, the most frequently-modified features clearly depend on the data distribution (i.e., on class imbalance, feature correlations, etc.), and not only on the probability of being more frequent in malware. In our analysis, this dependency is intrinsically captured by the dependency of q on the feature weights learned by the classifier.
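The following Python sketch reproduces this computation of q and q′ on a toy example; the weights, the malware frequencies p, and the manifest index set are illustrative.

# Sketch of Eq. (13) and of the rank-discounted probability q' = q * (d - r) / d.
import numpy as np

w = np.array([0.9, -0.8, 0.7, -0.3])     # classifier weights
p = np.array([0.9, 0.2, 0.4, 0.6])       # p(x_k = 1 | malware)
manifest = {1}                           # manifest features cannot be removed

d = len(w)
q = np.zeros(d)
for k in range(d):
    if w[k] > 0 and k not in manifest:
        q[k] = p[k]                      # removal: feature flipped iff x_k = 1
    elif w[k] < 0:
        q[k] = 1.0 - p[k]                # addition: feature flipped iff x_k = 0

rank = np.empty(d, dtype=int)
rank[np.argsort(-np.abs(w))] = np.arange(d)   # r = 0 for the largest |w_k|
q_prime = q * (d - rank) / d
print("q      :", q)
print("q_prime:", q_prime)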
Robustness and regularization.
Interestingly, a recent theoretical explanation behind the fact that more features should be manipulated to evade our Sec-SVM can also be found in [46]. In particular, in that work Xu et al. have shown that the regularized SVM learning problem, as given in Eq. (7), is equivalent to a non-regularized, robust optimization problem, in which the input data is corrupted by a worst-case ℓ2 (spherical) noise. Note that this noise is dense, as it tends to slightly affect all feature values. More generally, Xu et al. [46] have shown that the regularization term depends on the kind of hypothesized noise over the input data. Our evasion attacks are sparse, as the attacker aims to minimize the number of modified features, and thus they significantly affect only the most discriminant ones. This amounts to considering an ℓ1 worst-case noise over the input data. In this case, Xu et al. [46] have shown that the optimal regularizer is the ℓ∞-norm of w. In our Sec-SVM, the key idea is to add a box constraint on w, as given in Eq. (8), which is essentially equivalent to considering an additional ℓ∞ regularizer on w, consistently with the findings in [46].
LIMITATIONS AND OPEN ISSUES
Despite the very promising results achieved by our Sec-SVM, it is clear that such an approach exhibits some intrinsic limitations. First, as Drebin performs a static code analysis, it is clear that also Sec-SVM can be defeated by more sophisticated encryption and obfuscation attacks. However, it is also worth remarking that this is not a vulnerability of the learning algorithm itself, but rather of the chosen feature representation, and for this reason we have not considered these attacks in our work. A similar behavior is observed when a large number of features is modified by our evasion attacks, and especially in the case of mimicry attacks (see Sect. 6), in which the manipulated malware samples almost exactly replicate benign data (in terms of their feature vectors). This is again possible due to an intrinsic vulnerability of the feature representation, and no learning algorithm can clearly separate such data with satisfying accuracy. Nevertheless, this problem only occurs when malware samples are significantly modified and, as pointed out in Sect. 3.6, it might be very difficult for the attacker to do that without compromising their intrusive functionality, or without leaving significant traces of adversarial manipulation. For example, the introduction of changes such as reflective calls requires a careful manipulation of the Dalvik registers (e.g., verifying that old ones are correctly re-used and that new ones can be safely employed). A single mistake in the process can lead to verification errors, and the application might not be usable anymore (we refer the reader to [23], [24] for further details). Another limitation of our approach may be its unsatisfying performance under PK and LK attacks, but this can be clearly mitigated with simple countermeasures to prevent that the attacker gains sufficient knowledge of the attacked system, such as frequent system re-training and diversification of training data collection [9]. To summarize, although our approach is clearly not bulletproof, we believe that it significantly improves the security of the baseline Drebin system (and of the standard SVM algorithm).
CONCLUSIONS AND FUTURE WORK
Recent results in the field of adversarial machine learning and computer security have confirmed the intuition pointed out by Barreno et al. [4], [5], [25], namely, that machine learning itself can introduce specific vulnerabilities in a security system, potentially compromising the overall system security. The underlying reason is that machine-learning techniques have not been originally designed to deal with intelligent and adaptive attackers, who can modify their behavior to mislead the learning and classification algorithms.
The goal of this work has been, instead, to show that machine learning can be used to improve system security, if one follows an adversary-aware approach that proactively anticipates the attacker. To this end, we have first exploited a general framework for assessing the security of learning-based malware detectors, by modeling attackers with different goals, knowledge of the system, and capabilities of manipulating the data. We have then considered a specific case study involving Drebin, an Android malware detection tool, and shown that the performance of Drebin can be significantly downgraded in the presence of skilled attackers that can carefully manipulate malware samples to evade classifier detection. The main contribution of this work has been to define a novel, theoretically-sound learning algorithm to train linear classifiers with more evenly-distributed feature weights. This approach allows one to improve system security (in terms of requiring a much higher number of careful manipulations to the malware samples), without significantly affecting computational efficiency.
A future development of our work, which may further improve classifier security, is to extend our approach for secure learning to nonlinear classifiers, e.g., using nonlinear kernel functions. Although nonlinear kernels can not be directly used in our approach (due to the presence of a linear constraint on w), one may exploit a trick known as the empirical kernel mapping. It consists of first mapping samples onto an explicit (approximate) kernel space, and then learning a linear classifier on that space [39]. We would like to remark here that also investigating the trade-off between sparsity and security highlighted in Sect. 5.2 may provide interesting insights for future work. In this respect, the recent findings in [46] related to robustness and regularization of learning algorithms (briefly summarized at the end of Sect. 6) may provide inspiring research directions.
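As a hint of what such an extension could look like, the sketch below combines an approximate (Nyström) kernel mapping with a linear SVM; the linear learner is only a stand-in for the Sec-SVM, and all data and parameters are illustrative.

# Sketch of the empirical kernel mapping: map samples onto an approximate RBF
# kernel space, then train a linear classifier (here a plain LinearSVC as a
# placeholder for a box-constrained Sec-SVM) in that space.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = (rng.rand(300, 40) < 0.4).astype(float)
y = np.where(X[:, :3].sum(axis=1) > 1, 1, -1)     # toy labels

model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0),
    LinearSVC(C=1.0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))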
Another interesting future extension of our approach may be to explicitly consider, for each feature, a different level of robustness against the corresponding adversarial manipulations. In practice, however, the agnostic choice of assuming equal robustness for all features may be preferred, as it may be very difficult to identify features that are more difficult to manipulate. If categorizing features according to their robustness to adversarial manipulations is deemed feasible, instead, then this knowledge may be incorporated into the learning algorithm, such that higher (absolute) weight values are assigned to more robust features.
It is finally worth remarking that we have also recently exploited the proposed learning algorithm to improve the security of PDF and Javascript malware detection systems against sparse evasion attacks [20], [38]. This witnesses that our proposal does not only provide a first, concrete example of how machine learning can be exploited to improve security of Android malware detectors, but also of how our design methodology can be readily applied to other learning-based malware detection tasks.
Fig. 1. A schematic representation of the architecture of Drebin. First, applications are represented as vectors in a d-dimensional feature space.
Fig. 2. Mean ROC curves on Drebin (left) and Contagio (right) data, for classifiers trained on Drebin data.

Fig. 3. Absolute weight values in descending order (i.e., |w(1)| ≥ . . . ≥ |w(d)|), for each classifier (averaged on 10 runs). Flatter curves correspond to more evenly-distributed weights, i.e., more secure classifiers.

Fig. 4. Mean ROC curves for all classifiers against different obfuscation techniques, computed on the Contagio data.

Fig. 5. Fraction of features equal to one in each set (averaged on 10 runs), for non-obfuscated (leftmost plot) and obfuscated malware in Contagio, with different obfuscation techniques. While obfuscation deletes dexcode features (S5-S8), the manifest (S1-S4) remains mostly intact.

Fig. 6. Detection Rate (DR) at 1% False Positive Rate (FPR) for each classifier under the Perfect-Knowledge (left), Limited-Knowledge (middle), and Mimicry (right) attack scenarios, against an increasing number of modified features. Solid (dashed) lines are obtained by simulating attacks with feature addition (feature addition and removal).
A. Demontis ([email protected]), M. Melis ([email protected]), B. Biggio ([email protected]), D. Maiorca ([email protected]), I. Corona ([email protected]), G. Giacinto ([email protected]) and F. Roli ([email protected]) are with the Dept. of Electrical and Electronic Eng., University of Cagliari, Piazza d'Armi, 09123 Cagliari, Italy. D. Arp ([email protected]) and K. Rieck ([email protected]) are with the Institute of System Security, Technische Universität Braunschweig, Rebenring 56, 38106 Braunschweig, Germany.
TABLE 1
Overview of feature sets.

manifest:
S1 - Hardware components
S2 - Requested permissions
S3 - Application components
S4 - Filtered intents

dexcode:
S5 - Restricted API calls
S6 - Used permissions
S7 - Suspicious API calls
S8 - Network addresses
3. Note that this is always possible without affecting system performance, by dividing f by ‖w‖1, and normalizing feature values on a compact domain before classifier training.
4. Note that, as in standard performance evaluation techniques, data is split into distinct training-validation pairs, and then performance is averaged on the distinct validation sets. As we are considering evasion attacks, training data is not affected during the attack simulation, and only malicious samples in the validation set are thus modified.
Ambra Demontis (S'16) received the M.Sc. degree in Information Technology with honors from the University of Cagliari, Italy, in 2014. She is now a Ph.D. student in the Department of Electrical and Electronic Engineering, University of Cagliari. Her current research interests include machine learning, computer security and biometrics. She is a student member of the IEEE and of the IAPR.

In 2011, he visited the University of Tübingen, Germany, and worked on the security of machine learning against training data poisoning. His research interests include secure machine learning, multiple classifier systems, kernel methods, biometrics and computer security. Dr. Biggio serves as a reviewer for several international conferences and journals. He is a senior member of the IEEE and member of the IAPR.

His research interests are in the area of pattern recognition and its application to computer security, and image classification and retrieval. During his career Giorgio Giacinto has published more than 120 papers on international journals, conferences, and books. He is a senior member of the ACM and the IEEE. He has been involved in the scientific coordination of several research projects in the fields of pattern recognition and computer security, at the local, national and international level. Since 2012, he has been the organizer of the Summer School on Computer Security and Privacy "Building Trust in the Information Age" (http://comsec.diee.unica.it/summer-school).

Fabio Roli (F'12) received his Ph.D. in Electronic Engineering from the University of Genoa, Italy. He was a research group member of the University of Genoa ('88-'94). He was adjunct professor at the University of Trento ('93-'94). In 1995, he joined the Department of Electrical and Electronic Engineering of the University of Cagliari, where he is now professor of Computer Engineering and head of the research laboratory on pattern recognition and applications. His research activity is focused on the design of pattern recognition systems and their applications. He was a very active organizer of international conferences and workshops, and established the popular workshop series on multiple classifier systems. Dr. Roli is Fellow of the IEEE and of the IAPR.
VERY AMPLENESS AND PROJECTIVE NORMALITY ON CERTAIN CALABI-YAU AND HYPERKÄHLER VARIETIES

Jayan Mukherjee, Debaditya Raychaudhury

6 Apr 2019, arXiv:1902.00649v3 [math.AG]

Abstract. In this article we produce new results on effective very ampleness and projective normality on certain K_X trivial varieties. In the first part we produce an effective projective normality result for ample line bundles on regular fourfolds with trivial canonical bundle. In the second part we focus on the projective normality of powers of ample and globally generated line bundles on two classes of known examples (up to deformation) of projective hyperkähler varieties.
Introduction
Part 1: Regular, K X Trivial Varieties. Geometry of linear series on K X trivial varieties is a topic that has motivated a lot of research. The question of what multiple of an ample bundle is very ample was extensively studied by many mathematicians including Gallego, Oguiso, Peternell, Purnaprajna, Saint-Donat (see [9], [26], [27]). Saint-Donat proves the following theorem on a K3 surface (see [27]) which is defined as a smooth projective surface S with K S = 0 and H 1 (O S ) = 0.
Theorem A. Let S be a smooth projective K3 surface and let B be an ample line bundle on S . Then B ⊗n is very ample (in fact projectively normal) for n ≥ 3.
Gallego and Purnaprajna proved the following generalization of Saint-Donat's result on projective normality for smooth, projective, regular (H 1 (O X ) = 0) threefold with trivial canonical bundle (see [9]).
Theorem B. Let X be a smooth, projective threefold with K_X = 0 and H^1(O_X) = 0. Let B be an ample line bundle on X. Then B^{⊗n} is projectively normal for n ≥ 8. If B^3 > 1 then B^{⊗n} is projectively normal for n ≥ 6.
In order to prove the theorem above, Gallego and Purnaprajna gave a classification theorem for a regular, K X trivial threefold that maps onto a variety of minimal degree by a complete linear series of an ample and globally generated line bundle. Varieties that appear as covers of varieties of minimal degree play an important role in the geometry of algebraic varieties. They are extremal cases in a variety of geometric situations from algebraic curves to higher dimensional varieties (see [9], [10], [11], [14]). In this article, we prove the following classification theorem where we study the situation when a smooth regular K X trivial fourfold X maps to a variety of minimal degree by the complete linear system of an ample and base point free line bundle B.
Theorem 1. (See Theorem 2.3) Let X be a smooth regular K_X trivial fourfold. Let π be the morphism induced by an ample and base point free line bundle B on X with h^0(B) = r + 1 and let n be the degree of π. If π maps X to a variety of minimal degree Y then n ≤ 24(r − 1)/(r − 3) and one of the following happens.
(1) Y = P 4 .
(2) Y is a smooth quadric hypersurface in P 5 .
(3) Y is a smooth rational normal scroll of dimension 4 in P 6 or P 7 and X is fibered over P 1 and the general fibre is a smooth threefold G with K G = 0. (4) Y is a smooth rational normal scroll in P r for r ≥ 8 and X is fibered over P 1 and the general fibre is a three-fold G with K G = 0 and the degree n of π satisfies 2 ≤ n ≤ 18. (5) Y is a singular four-fold which is either a triple cone over a rational normal curve or a double cone over the Veronese surface in P 5 .
This result can be thought of as an analogue to the classification theorem obtained by Gallego and Purnaprajna that we mentioned before. As a consequence of the above theorem and Fujita's conjecture in dimension four that has been proved by Kawamata (see [16]), we are able to give an effective projective normality result and hence very ampleness result on smooth K X trivial fourfolds which can be thought of as a generalization of Theorem A and Theorem B.
Theorem 2. (See Theorem 2.4) Let X be a smooth fourfold with trivial canonical bundle and let A be an ample line bundle on X then (i) nA is very ample and embeds X as a projectively normal variety for n ≥ 16.
(ii) If H 1 (O X ) = 0 then nA is very ample and embeds X as a projectively normal variety for n ≥ 15.
We note that the standard methods of Castelnuovo-Mumford regularity (see Lemma 1.1.5) together with Theorem 1.3 of [9] yield, in the situation above, only that nA is projectively normal for n ≥ 21.
Part 2: Hyperkähler Varieties. Note that the definition of K3 surface is equivalent to having a holomorphic symplectic form on S . However in higher dimensions these two notions do not coincide which is clear from the fact that existence of a holomorphic symplectic form on a Kähler manifold demands that its dimension is even whereas there are examples of smooth projective algebraic varieties in odd dimensions with trivial canonical bundle and H 1 (O X ) = 0, for example smooth hypersurfaces of degree n + 1 in P n . So essentially we can have two different kinds of generalizations of a K3 surface. At this point we make a few definitions before stating a theorem of Beauville and Bogomolov that summarizes the importance of the study of the classes of varieties we just discussed.
Definition 0.1. A compact Kähler manifold M of dimension n ≥ 3 is called Calabi-Yau if it has trivial canonical bundle and the hodge numbers h p,0 (M) vanish for all 0 < p < n.
Definition 0.2. A compact Kähler manifold M is called hyperkähler if it is simply connected and its space of global holomorphic two forms is spanned by a symplectic form.
The following theorem is due to Beauville-Bogomolov (see [1], [2]).
Theorem C. Every smooth projective variety with c 1 (X) = 0 in H 2 (X, R) admits a finite cover isomorphic to a product of Abelian varieties, Calabi-Yau varieties and hyperkähler varieties.
Hence one can see that Calabi-Yau and hyperkähler varieties can be thought of as "building blocks" of varieties with trivial canonical bundle.
In the paper mentioned before (see [27]) Saint-Donat proves the following theorem for ample and base point free line bundles on K3 surfaces.
Theorem D. Let S be a smooth projective K3 surface and let B be an ample and base point free line bundle on S . Then (i) B ⊗2 is very ample and |B ⊗2 | embeds S as a projectively normal variety unless the morphism given by the complete linear system |B| maps S , 2 : 1 onto P 2 .
(ii) B is very ample and |B| embeds S as a projectively normal variety unless the morphism given by the complete linear system |B| maps S , 2 : 1 onto P 2 or to a variety of minimal degree.
Gallego and Purnaprajna proved the following generalization of Saint-Donat's result (see [9]).
Theorem E. Let X be a smooth, projective, regular, K X trivial threefold and let B be an ample and base point free line bundle on X. Then (i) B ⊗3 is very ample and |B ⊗3 | embeds X as a projectively normal variety unless the morphism given by the complete linear system |B| maps X, 2 : 1 onto P 3 .
(ii) B ⊗2 is very ample and |B ⊗2 | embeds X as a projectively normal variety unless the morphism given by the complete linear system |B| maps X, 2 : 1 onto P 3 or to a variety of minimal degree.
They also proved that 4B is projectively normal on smooth, projective, regular, K X trivial fourfolds when the morphism induced by the complete linear series of an ample and globally generated line bundle B is birational onto its image and h 0 (B) ≥ 7 (see Theorem 1.11, [9]). Niu proved an analogue of Theorem E in dimension four. In fact he proved a general result for smooth, projective, regular, K X trivial varieties in all dimensions (see [23]) with an additional assumption of
H 2 (O X ) = 0.
We see that it is a natural question to ask whether and to what extent these theorems generalize to the other class of higher dimensional analogues of K3 surfaces, namely hyperkähler varieties. Before stating our results we briefly recall the known examples of hyperkähler varieties.
There are many families of examples for Calabi-Yau varieties but only few classes of examples for hyperkähler varieties are known. Beauville first produced examples of two distinct deformation classes of compact hyperkähler manifolds in all even dimensions greater than or equal to 2 (see [1]). The first example is the Hilbert scheme S [n] of length n subschemes on a K3 surface S . The second one is the generalized Kummer variety K n (T ) which is the fibre over the 0 of an abelian variety T under the morphism φ • ψ (see the diagram below)
T^{[n+1]} --ψ--> T^{(n+1)} --φ--> T,

where T^{[n+1]} is the Hilbert scheme of length n + 1 subschemes on the abelian variety T, T^{(n+1)} is the symmetric product, ψ is the Hilbert-Chow morphism and φ is the addition on T. Two other distinct deformation classes of hyperkähler manifolds are given by O'Grady in dimensions 6 and 10, which appear as desingularizations of certain moduli spaces of sheaves over symplectic surfaces (see [24], [25]). All other known examples are deformation equivalent to one of these.
We prove structure theorems for two known deformation classes of polarized hyperkähler four, six, eight and tenfolds (X, L) for which the morphism given by L maps X to a variety of minimal degree. We use these structure theorems to prove new results on very ampleness and projective normality for ample and globally generated line bundles. These are analogues of Saint-Donat's result on K3 surfaces. We state below the results for dimension four in detail; Table 2 at the end of Section 3 gives the results for higher dimensions.
Our first result on hyperkähler varieties deformation equivalent to a Hilbert scheme of two points on a K3 surface is precisely as follows.
Theorem 3. (See Theorem 3.1.1) Suppose L be a base point free line bundle on a projective hyperkähler manifold X which is deformation equivalent to the Hilbert scheme of 2 points on a K3 surface S . Assume that the morphism given by L is generically finite onto its image. Then (i) The degree d of the generically finite morphism given by L is bounded above by 23.
(ii) If the morphism maps to a variety of minimal degree then one of the following happens: (a) X maps 6 : 1 to a quadric in P^5. (b) X maps 8 : 1 to a singular rational normal scroll of degree 6 in P^9 which is the triple cone over the rational normal curve in P^6.
The following is an analogous result for hyperkähler varieties deformation equivalent to a generalized Kummer variety.

Theorem 4. (See Theorem 3.1.4) Let L be a base point free line bundle on a projective hyperkähler manifold X which is deformation equivalent to a generalized Kummer variety of dimension four. Assume that the morphism given by L is generically finite onto its image. Then (i) the degree d of the generically finite morphism given by L is bounded above by 23; (ii) the morphism never maps X to a variety of minimal degree.

With these classification theorems we proceed to prove generalizations of Saint-Donat's theorem for the above classes of hyperkähler manifolds. We first state the result for hyperkähler varieties deformation equivalent to K3^[2].
Corollary 5. (See Corollary 3.1.3) Let X be a projective hyperkähler fourfold deformation equivalent to Hilbert scheme of two points on a K3 surface. Let B be an ample and base point free line bundle on X. Then (1) B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 4.
(2) B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 3 unless the complete linear series of |B| maps X to a variety of minimal degree, i.e, unless one of the two cases in Theorem 3 happens.
The result for generalized Kummer varieties is the following.
Corollary 6. (See Corollary 3.1.5) Let X be a projective hyperkähler fourfold deformation equivalent to a generalized Kummer variety. Let B be an ample and base point free line bundle on X.
Then B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 3.
At this point we recall Fujita's very ampleness conjecture, which states that for a smooth projective fourfold X with canonical bundle K_X, the line bundle K_X + 6B is very ample for any ample line bundle B. Hence, according to the conjecture, 6B is very ample on K_X trivial fourfolds. Here we prove that 3B is very ample if we take B to be ample and globally generated.
There are examples of X = S [2] , where S is a K3 surface, that maps onto a variety of minimal degree by the complete linear series of an ample and globally generated line bundle where the degree of the morphism is 6 (see Example 3.1.6) and we believe that under the same hypothesis it can not map 8:1 onto a variety of minimal degree. The above theorems crucially use two key characteristics of a hyperkähler variety which are the existence of a primitive integral quadratic form on the second integral cohomology group of the variety and Matsushita's theorem on fibre space structure of a hyperkähler manifold (see [19]).
Preliminaries and Notations
Throughout this article, X will always denote a smooth, projective variety over C. K or K X will denote its canonical bundle. We will use the multiplicative and the additive notation of line bundles interchangeably. Thus, for a line bundle L, L ⊗r and rL are the same. We have used the notation L −r for (L * ) ⊗r . We will use L r to denote the intersection product.
1.1. Background on projective normality. For a globally generated line bundle L on a smooth projective variety X, we have the following short exact sequence:

0 → M_L → H^0(L) ⊗ O_X → L → 0.   (*)
We have the following necessary and sufficient condition for the N_p property of an ample and base point free line bundle on X. Theorem 1.1.1. Let L be an ample, globally generated line bundle on X. If the group H^1(∧^{p'+1} M_L ⊗ L^{⊗k}) vanishes for all 0 ≤ p' ≤ p and for all k ≥ 1, then L satisfies the property N_p. If in addition H^1(L^{⊗r}) = 0 for all r ≥ 1, then the above is a necessary and sufficient condition for L to satisfy N_p.
Since we are working over a field of characteristic zero, we will always show that H^1(M_L^{⊗(p'+1)} ⊗ L^{⊗k}) = 0 in order to prove that L satisfies the N_p property.
We have made use of the following observation of Gallego and Purnaprajna (see for instance [9]) to show projective normality.
Observation 1.1.2. Let E and L_1, L_2, ..., L_r be coherent sheaves on a variety X. Consider the map

H^0(E) ⊗ H^0(L_1 ⊗ L_2 ⊗ ... ⊗ L_r) --ψ--> H^0(E ⊗ L_1 ⊗ ... ⊗ L_r)

and the following maps

H^0(E) ⊗ H^0(L_1) --α_1--> H^0(E ⊗ L_1),
H^0(E ⊗ L_1) ⊗ H^0(L_2) --α_2--> H^0(E ⊗ L_1 ⊗ L_2),
...
H^0(E ⊗ L_1 ⊗ ... ⊗ L_{r−1}) ⊗ H^0(L_r) --α_r--> H^0(E ⊗ L_1 ⊗ ... ⊗ L_r).

If α_1, α_2, ..., α_r are surjective then ψ is also surjective.
The technique we use to show projective normality of an ample and globally generated line bundle on a variety is to use the Koszul resolution to restrict the bundle to a smooth curve section and then show the surjectivity of an appropriate multiplication map. It is worth mentioning that the Koszul resolution is a special case of a particular complex, known as the Skoda complex, which we define below. Definition 1.1.3. Let X be a smooth projective variety of dimension n ≥ 2. Let B be a globally generated and ample line bundle on X.
(1) Take n − 1 general sections s_1, ..., s_{n−1} of H^0(B) so that the intersection of the divisors of zeroes B_i = (s_i)_0 is a nonsingular projective curve C, that is, C = B_1 ∩ ... ∩ B_{n−1}.
(2) Let I be the ideal sheaf of C and let W = span{s_1, ..., s_{n−1}} ⊆ H^0(B) be the subspace spanned by the s_i. Note that W ⊆ H^0(B ⊗ I). For i ≥ 1, define the Skoda complex I_i as

0 → ∧^{n−1}W ⊗ B^{−(n−1)} ⊗ I^{i−(n−1)} → ... → W ⊗ B^{−1} ⊗ I^{i−1} → I^i → 0,

where I^k stands for I^{⊗k}, and we use the convention that I^k = O_X for k ≤ 0.
In this article we only use I_1, which is the complex

0 → ∧^{n−1}W ⊗ B^{−(n−1)} → ... → ∧^2 W ⊗ B^{−2} → W ⊗ B^{−1} → I → 0,

and it is just the Koszul resolution of I. In fact, Lazarsfeld showed that the complex I_i is exact for any i ≥ 1 (see [18]).
Once we boil down our problem to a problem on curve, we use the following two results. The first one is a result of Green (see [11]).
Lemma 1.1.4. Let C be a smooth, irreducible curve. Let L and M be line bundles on C. Let W be a base point free linear subsystem of H^0(C, L). Then the multiplication map W ⊗ H^0(M) → H^0(L ⊗ M) is surjective if h^1(M ⊗ L^{−1}) ≤ dim(W) − 2.
The second one is known as the Castelnuovo-Mumford lemma (see [22]).

Lemma 1.1.5. Let L be a base point free line bundle on a variety X and let F be a coherent sheaf on X. If H^i(F ⊗ L^{−i}) = 0 for all i ≥ 1 then the multiplication map H^0(F ⊗ L^{⊗i}) ⊗ H^0(L) → H^0(F ⊗ L^{⊗(i+1)}) surjects for all i ≥ 0.

Now we give some background on hyperkähler varieties.

1.2. Background on hyperkähler varieties. For the definition of a hyperkähler manifold see Definition 0.2. We start with the following theorem of Beauville and Fujiki (see [1] and [8]), which we use crucially in our proofs. Theorem 1.2.1. Let X be a hyperkähler variety of dimension 2n. There exists a quadratic form q_X : H^2(X, C) → C and a positive constant c_X ∈ Q^+ such that for all α in H^2(X, C),

∫_X α^{2n} = c_X · q_X(α)^n.
The above equation determines c X and q X uniquely if one assumes the following two conditions. (I) q X is a primitive integral quadratic form on H 2 (X, Z);
(II) q_X(σ, σ̄) > 0 for all 0 ≠ σ ∈ H^{2,0}(X).
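For instance (a standard example, not specific to this paper): for n = 1, that is for a K3 surface S, the form q_S is the intersection form on H^2(S, Z) and c_S = 1, so the Fujiki relation simply reads ∫_S α^2 = q_S(α).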
Here q_X and c_X are called the Beauville form and Fujiki constant respectively. The Beauville form and Fujiki constant are fundamental invariants of a hyperkähler variety. The following table gives the list of Beauville forms, Fujiki constants and the lattice structure induced on H^2(X, Z) by the Beauville form for the first two classes of known examples of hyperkähler varieties, namely X = S^[n] (the Hilbert scheme of n points on S), where S is a K3 surface, and X = K_n(T) (the generalized Kummer variety of dimension 2n), where T is an Abelian variety.

Table 1:

  X        dim(X)   b_2(X)   c_X                        (H^2(X, Z), q_X)
  S^[n]    2n       23       (2n)!/(n! 2^n)             H^{⊕3} ⊕ (−E_8)^{⊕2} ⊕ (−2(n − 1))
  K_n(T)   2n       7        (2n)!/(n! 2^n) · (n + 1)   H^{⊕3} ⊕ (−2(n − 1))

In this table the lattice H is the standard hyperbolic plane, the lattice −E_8 is the unique negative definite even unimodular lattice of rank eight, (i) is the rank 1 lattice generated by an element whose square is i, and all direct sums are orthogonal.

Let us recall the following theorem, which helps to find the explicit form of the Riemann-Roch theorem on hyperkähler varieties.

Theorem 1.2.3. (See [8], [12]) Let X be a hyperkähler variety of dimension 2n. Assume that α ∈ H^{4j}(X, C) is of type (2j, 2j) on all small deformations of X. Then there exists a constant C(α) ∈ C depending on α such that ∫_X α · β^{2n−2j} = C(α) · q_X(β)^{n−j} for all β ∈ H^2(X, C).
Remark 1.2.4. As a consequence of the theorem above, we get the following form of the Riemann-Roch formula for a line bundle L on a hyperkähler variety of dimension 2n (see [15]):

χ(X, L) = \sum_{i=0}^{n} \frac{a_i}{(2i)!} q_X(c_1(L))^i,

where a_i = C(td_{2n−2i}(X)). Here the a_i's are constants depending only on the topology of X.
Remark 1.2.5. Ellingsrud-Göttsche-Lehn compute the rational constants of the Riemann-Roch expression for hyperkähler manifolds of deformation type K3^[n] (see [7]) and Britze-Nieper compute the same for generalized Kummer varieties of dimension 2n (see [3]).
If X is of K3^[n] type we have that

χ(L) = \binom{\frac{1}{2}q(L) + n + 1}{n}.

If X is a generalized Kummer variety of dimension 2n we have that

χ(L) = (n + 1) \binom{\frac{1}{2}q(L) + n}{n}.
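Since the fourfold case is the one used repeatedly in Section 3, it may help to record these formulas explicitly for 2n = 4. The expansions below are our own bookkeeping, obtained by expanding the two binomial expressions above and reading off c_X from Table 1 (c_X = 3 for the K3^[2] type and c_X = 9 for the four dimensional generalized Kummer type):

χ(L) = \binom{\frac{1}{2}q_X(L) + 3}{2} = \frac{1}{8}q_X(L)^2 + \frac{5}{4}q_X(L) + 3   (X of K3^[2] type),

χ(L) = 3\binom{\frac{1}{2}q_X(L) + 2}{2} = \frac{3}{8}q_X(L)^2 + \frac{9}{4}q_X(L) + 3   (X of generalized Kummer type),

while Theorem 1.2.1 gives L^4 = 3 q_X(L)^2 and L^4 = 9 q_X(L)^2 respectively.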
Now we are ready to give the proofs of our theorems.
Proof of the Main Result on Regular Fourfolds with Trivial Canonical Bundle
The main aim of this section is to prove results on effective very ampleness and projective normality on a four dimensional variety with trivial canonical bundle. We start with a general statement on projective normality and normal presentation. Corollary 2.1. Let X be a n-fold with trivial canonical sheaf. Let B be an ample and base point free line bundle on X. Let h 0 (B) ≥ n + 2. Then lB satisfies the property N 0 for all l ≥ n. Moreover, if X is Calabi-Yau, then lB satisfies the property N 1 for all l ≥ n.
Proof. Follows immediately from Theorem 2.3 and Theorem 3.4 of [21]. Now we want to find out what multiple of an ample line bundle is very ample on a four dimensional variety with trivial canonical bundle. We will use the Fujita freeness on four folds that has been proved by Kawamata in [16]. We begin with a lemma. Proof. We already know that B is base point free by Kawamata's proof of Fujita's base point freeness theorem on fourfolds (see [16]). We prove the statement for k = 1. For k > 1 the proof is exactly the same.
Let C be a smooth and irreducible curve section of the linear system |B| and let I be the ideal sheaf of C in X. We have the following commutative diagram with the two horizontal rows exact.
Here I is the ideal sheaf of C in X and V is the cokernel of the map H^0(B ⊗ I) → H^0(B):

0 → H^0(B ⊗ I) ⊗ H^0(3B + A) → H^0(B) ⊗ H^0(3B + A) → V ⊗ H^0(3B + A) → 0
0 → H^0((4B + A) ⊗ I) → H^0(4B + A) → H^0((4B + A)|_C) → 0

with the vertical maps given by multiplication of sections.
Now we claim that the leftmost vertical map is surjective. Consider the following exact sequence (see Definition 1.1.3):

0 → ∧^3 W ⊗ B^{−3} → ∧^2 W ⊗ B^{−2} → W ⊗ B^{−1} → I → 0.
Tensor it with 4B + A to get the following exact sequence:

0 → ∧^3 W ⊗ (B + A) --f_3--> ∧^2 W ⊗ (2B + A) --f_2--> W ⊗ (3B + A) --f_1--> (4B + A) ⊗ I → 0.
That gives us two short exact sequences:

0 → Ker(f_1) → W ⊗ (3B + A) --f_1--> (4B + A) ⊗ I → 0,
0 → ∧^3 W ⊗ (B + A) --f_3--> ∧^2 W ⊗ (2B + A) --f_2--> Ker(f_1) → 0.
Taking the long exact sequence of cohomology of the second sequence we get the following:

∧^2 W ⊗ H^1(2B + A) → H^1(Ker(f_1)) → ∧^3 W ⊗ H^2(B + A).
Hence H^1(Ker(f_1)) = 0 since the other terms of the exact sequence vanish by Kodaira vanishing. The long exact sequence of cohomology associated to the first sequence is the following:

W ⊗ H^0(3B + A) → H^0((4B + A) ⊗ I) → H^1(Ker(f_1)).
We showed that the last term is zero and hence we have the surjection of the map W ⊗H 0 (3B+A) → H 0 ((4B + A) ⊗ I )).
Since W ⊆ H 0 (B ⊗ I ) we have the surjection of the multiplication map H 0 (B ⊗ I ) ⊗ H 0 (3B + A) → H 0 ((4B + A) ⊗ I ).
In order to prove the lemma we are left to show that V ⊗ H 0 (3B + A) → H 0 ((4B + A)| C ) surjects. Since we have the surjection of H 0 (3B + A) → H 0 ((3B + A)| C ), it is enough to show the surjection of V ⊗ H 0 (3B + A| C ) → H 0 ((4B + A)| C ). Using Lemma 1.1.4 it is enough to prove the inequality
h 1 (2B + A| C ) ≤ dimV − 2.
Now consider the exact sequence

0 → ∧^3 W ⊗ B^{−2} --f_3--> ∧^2 W ⊗ B^{−1} --f_2--> W ⊗ O_X --f_1--> B ⊗ I → 0.
So we get the following two exact sequences:

0 → Ker(f_1) → W ⊗ O_X --f_1--> B ⊗ I → 0,
0 → ∧^3 W ⊗ B^{−2} --f_3--> ∧^2 W ⊗ B^{−1} --f_2--> Ker(f_1) → 0.
The long exact sequence of cohomology associated to the second sequence gives

∧^2 W ⊗ H^0(B^*) → H^0(Ker(f_1)) → ∧^3 W ⊗ H^1(B^{−2}).
Hence H^0(Ker(f_1)) = 0 since H^1(B^{−2}) = 0 by Kodaira vanishing and H^0(B^*) = 0 since B^* is the negative of an ample divisor. Taking cohomology once more we have the following exact sequence:

∧^2 W ⊗ H^1(B^*) → H^1(Ker(f_1)) → ∧^3 W ⊗ H^2(B^{−2}).
Hence H^1(Ker(f_1)) = 0 since the other terms of the exact sequence vanish by Kodaira vanishing. The long exact sequence of cohomology associated to the first sequence is the following:

H^0(Ker(f_1)) → W ⊗ H^0(O_X) → H^0(B ⊗ I) → H^1(Ker(f_1)).
But the first and last terms are zero by Kodaira Vanishing and hence h 0 (B⊗I ) = dimW ≤ 3. Hence
dimV − 2 ≥ h 0 (B) − 5.
On the other hand the canonical bundle of C is given by 3B|_C. Applying Serre duality it is enough to prove that h^0(B − A) ≤ h^0(B) − 5, i.e. h^0((n − 1)A) ≤ h^0(nA) − 5.
Applying the Riemann-Roch theorem we get that

h^0(nA) = (n^4/24) A^4 + (n^2/24) A^2 · c_2 + 2 − 2h^1(O_X) + h^2(O_X).

Similarly,

h^0((n − 1)A) = ((n − 1)^4/24) A^4 + ((n − 1)^2/24) A^2 · c_2 + 2 − 2h^1(O_X) + h^2(O_X).

Subtracting we get that

h^0(nA) − h^0((n − 1)A) = ((n^4 − (n − 1)^4)/24) A^4 + ((n^2 − (n − 1)^2)/24) A^2 · c_2.
Now using a result of Miyaoka (see [20]), we have that A^2 · c_2 ≥ 0, which gives h^0(nA) − h^0((n − 1)A) ≥ ((n^4 − (n − 1)^4)/24) A^4 ≥ 5 if n ≥ 5 (indeed, for n ≥ 5 one has (n^4 − (n − 1)^4)/24 ≥ 369/24 > 5 and A^4 ≥ 1), and hence we are done.

Now we give a classification theorem in which we classify the varieties which arise as the image of a regular fourfold with trivial canonical bundle under an ample, globally generated line bundle, with the additional property of being a variety of minimal degree. Theorem 2.3. Let X be a regular four-fold with trivial canonical bundle. Let π be the morphism induced by an ample and base point free line bundle B on X with h^0(B) = r + 1 and let n be the degree of π. If π maps X to a variety of minimal degree Y then n ≤ 24(r − 1)/(r − 3) and one of the following happens.
(1) Y = P 4 .
(2) Y is a smooth quadric hypersurface in P 5 .
(3) Y is a smooth rational normal scroll of dimension 4 in P 6 or P 7 and X is fibered over P 1 and the general fibre is a smooth threefold G with K G = 0 and the degree n satisfies
2 ≤ n ≤ min{6h^0(B|_G), 24(r − 1)/(r − 3)}.

If in addition G is regular we have the following:

2h^0(B|_G) − 6 ≤ n ≤ min{6(h^0(B|_G) − 1), 24(r − 1)/(r − 3)}   if n is even, and

2h^0(B|_G) − 5 ≤ n ≤ min{6(h^0(B|_G) − 1), 24(r − 1)/(r − 3)}   if n is odd.
(4) Y is a smooth rational normal scroll in P r for r ≥ 8 and X is fibered over P 1 and the general fibre is a three-fold G with K G = 0 and the degree n of π satisfies 2 ≤ n ≤ 18.
(5) Y is a singular four-fold which is either a triple cone over a rational normal curve or a double cone over the Veronese surface in P 5 .
Proof. We first prove the inequality. Using Riemann-Roch we can see that
h^0(B) = (1/24) B^4 + (1/24) B^2 · c_2 + 2,

and we also have that B^4 = n(r − 3) since Y is a variety of minimal degree. By Miyaoka's result (see [20]) we have that B^2 · c_2 ≥ 0, and hence we have the inequality n ≤ 24(r − 1)/(r − 3). We now describe the cases when Y is a smooth variety of minimal degree. We have that r ≥ 4.
Case 1. If r = 4, we have that Y = P 4 .
Case 2. If r = 5, we have that codimension of Y is one and degree is 2 which implies that Y is a smooth quadric hypersurface.
Case 3. If r ≥ 6, we have that Y is a smooth rational normal scroll and is hence fibered over P 1 . Let this map from Y to P 1 be φ. Composing this with π we get a map φ • π : X → P 1 . Hence X is fibered over P 1 and we have that the general fibre is a smooth threefold G with K G = 0 by adjunction. We first settle the case for r ≥ 8. Let the general fibre of Y be denoted by F and that of X is denoted by G. We have the following exact sequence of cohomology of line bundles on X.
0 → H^0(B(−G)) → H^0(B) → H^0(B ⊗ O_G) → H^1(B(−G)).
Now we claim that H − F is a nef and big divisor in Y, where H is a hyperplane section in Y. We have that Y = S(a_0, a_1, a_2, a_3), i.e., Y is the image of P(E), where E is the vector bundle O(a_0) ⊕ O(a_1) ⊕ O(a_2) ⊕ O(a_3), mapped to projective space by |O_{P(E)}(1)|. We have that H − F is nef (in fact base point free). Now we compute (H − F)^4 = H^4 − 4H^3 F (using F^2 = 0, since F is a fibre of the map to P^1). We have H^4 = (a_0 + a_1 + a_2 + a_3) H^3 F and r = a_0 + a_1 + a_2 + a_3 + 3. So r ≥ 8 gives a_0 + a_1 + a_2 + a_3 ≥ 5, which gives (H − F)^4 > 0 as H is ample.
Since G maps to F = P 3 we have that h 0 (B| G ) = 4. Now, the degree of π is also the degree of π| G for a general fibre G. Hence by a result of Gallego and Purnaprajna (see [9], Theorem 1.6) we have that 2 ≤ n ≤ 18. Now for the cases r = 5 or r = 6 we again use the fact that degree of π is equal to the degree of π| G and then use Riemann-Roch theorem on the threefold G noticing the fact that K G = 0 and that B| G is ample and base point free. This gives the upper bound 6h 0 (B| G ) since we have that B| G .c 2 ≥ 0 (see [20]). The lower bound 2 is due to the fact that G cannot be birational to P 3 . Now assuming G is regular and hence Calabi-Yau we have that h 0 (B| G ) ≥ 1 6 (B| G ) 3 + 1 and hence n ≤ 6(h 0 (B| G ) − 1). The lower bound is obtained by Proposition 2.2, part (1) of [17].
Case 4. Suppose the image Y of X under the morphism defined by |B| is a singular variety. If Y is a cone over a smooth 3 dimensional scroll or a double cone over a smooth 2 dimensional scroll then the codimension of the singular locus of Y is > 2. Then by [13] Proposition 2.1, part 2, the corresponding projective bundle Y ′ gives a small resolution of singularities of Y. Hence it follows that there exist a birational morphism from a variety (the fibre product X and Y ′ over Y) to X with the exceptional locus having no divisorial component which contradicts the factoriality of X. Hence Y can be either a triple cone over a rational normal curve or a double cone over the Veronese surface in P 5 . Now we prove our main result of this section using the first part of the previous theorem. We notice that since part (ii) requires a regular fourfold with trivial canonical bundle, we see that according to our definition, it holds for both hyperkähler and Calabi-Yau fourfolds in dimension four.
Theorem 2.4. Let X be a four fold with trivial canonical bundle and let A be an ample line bundle on X then (i) nA is very ample and embeds X as a projectively normal variety for n ≥ 16.
(ii) If H 1 (O X ) = 0 then nA is very ample and embeds X as a projectively normal variety for n ≥ 15.
Proof of (i). By the result of Kawamata (see [16]) we have that on a fourfold with trivial canonical bundle if A is ample then nA is base point free for n ≥ 5. Now using CM lemma (Lemma 1.1.5) we can easily prove that nA satisfies the property N 0 for n ≥ 21. If we set B = 5A then 20A = 4B and it satisfies the property N 0 by Corollary 2.1. Using Lemma 2.2, CM Lemma ( Lemma 1.1.5) and Observation 1.1.2, we can see that H 0 (nkA) ⊗ H 0 (nA) −→ H 0 ((nk + n)A) is surjective for k ≥ 2 and 16 ≤ n ≤ 19. So we are left to check the surjectivity of H 0 (nA) ⊗ H 0 (nA) −→ H 0 (2nA) for 16 ≤ n ≤ 19. We just prove it for n = 16. The rest of them follow similarly.
For n = 16, we have that H 0 (16A) ⊗ H 0 (5A) −→ H 0 (21A) and H 0 (21A) ⊗ H 0 (5A) −→ H 0 (26A) are surjective by Lemma 2.2 and CM Lemma (Lemma 1.1.5). Therefore, by Observation 1.1.2 we need to show that H 0 (26A) ⊗ H 0 (6A) −→ H 0 (32A) is surjective which also follows from CM lemma (Lemma 1.1.5).
Proof of (ii). Suppose H 1 (O X ) = 0. We just need to show that 15A satisfies the property N 0 . Let B = 5A which is ample and base point free (see [16]). Now By the result of Green (see [10]), 3B is projectively normal unless the image of the morphism induced by B is a variety of minimal degree. So, we have to show that the image of the morphism induced by 5A is not a variety of minimal degree.
Applying Riemann-Roch we get that h^0(5A) = (625/24) A^4 + (25/24) A^2 · c_2 + 2 ≥ 28. Now suppose that the image is a variety of minimal degree. However, since the codimension of the image is ≥ 24, we have that the image cannot be a quadric hypersurface or a cone over the Veronese embedding of P^2 in P^5. Hence the image is a rational normal scroll (which might be singular). Let h^0(B) = r + 1. Hence the degree of the image is r − 3. Also, let the degree of the finite morphism given by |B| be n.
We know that n ≤ 24(r − 1)/(r − 3) by Theorem 2.3. Using h^0(B) ≥ 28 we have that r ≥ 27 and hence n ≤ 26. Since the image of the morphism is a rational normal scroll of dimension 4 we can choose a general P^3 = F and then take the pullback of the divisor F under the morphism induced by |B| and call it G. The degree of the morphism restricted to G is again n. Since the degree of F in the image is 1 we have that n = B^3 · G = 125 A^3 · G ≥ 125 (since A is ample and G is effective), contradicting n ≤ 26. Hence the image cannot be a variety of minimal degree and we are done.
3. Proof of the Main Results on Hyperkähler Varieties 3.1. Hyperkähler fourfolds. First we prove the following theorem that studies the situation when a projective hyperkähler manifold of K3 2] type maps to a variety of minimal degree. This theorem will help us to get results on effective very ampleness. (2) If φ maps to a variety of minimal degree then the one of the following happens. (a) X maps 6 : 1 to a quadric in P 5 . (b) X maps 8 : 1 to a singular rational normal scroll of degree 6 in P 9 which is the triple cone over the rational normal curve in P 6 .
Proof. (1) Let Y be the image of φ. Then we have L 4 = d.deg (Y). We have the following
deg(Y) ≥ 1 + codim(Y) = 1 + h 0 (L) − 1 − 4 = h 0 (L) − 4.
Hence if q_X(L) = x, using the Riemann-Roch formula for X (see Remark 1.2.5) and noting that all the higher cohomologies of L vanish, we have that

24x^2 ≥ d(x^2 + 10x − 8), that is, (24 − d)x^2 − 10dx + 8d ≥ 0.

We have that L is nef and big and therefore q_X(L) = x > 0. Hence we have that d < 24.
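For the reader's convenience, here is the computation behind the displayed inequality, spelled out using the fourfold expansion of Remark 1.2.5 recorded in Section 1.2 (this expansion is ours): L^4 = c_X q_X(L)^2 = 3x^2 and h^0(L) = χ(L) = x^2/8 + 5x/4 + 3, so

3x^2 = L^4 = d · deg(Y) ≥ d(h^0(L) − 4) = d(x^2/8 + 5x/4 − 1),

and multiplying by 8 gives 24x^2 ≥ d(x^2 + 10x − 8).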
(2) Now consider φ maps to a variety of minimal degree. Then we have the following equation
(24 − d)x 2 − 10dx + 8d = 0.
Considering that we have an even integer solution of x for this equation and 0 < d < 24 we have that the only possible choices are d = 6, x = 2, h 0 (L) = 6 and d = 8, x = 4, h 0 (L) = 10. Now the image is either a quadric (possibly singular) or a smooth rational normal scroll or a cone over a smooth rational normal scroll by Eisenbud, Harris's classification of varieties of minimal degree (see [6]).
Suppose the image is not a quadric. We first claim that the image is not a smooth rational normal scroll. Suppose on the contrary the image is a smooth rational normal scroll. Since a smooth scroll admits a morphism to P 1 we have a composed morphism from X to P 1 . Now take the stein factorization of this morphism which has connected fibres and notice that since X is smooth this further factors through a normalization. So we get a morphism from X to a normal base of dimension 1 (hence smooth in this case) with connected fibres which contradicts Matsushita's result on the fibre space structure of a holomorphic symplectic manifold (see [19]). Hence our claim is proved. Now singular varieties of minimal degree are obtained by taking cones over over smooth scrolls or over a Veronese embedding of P 2 in P 5 . Now if the scroll is a single cone or a double cone over a smooth scroll then the codimension of singular locus is > 2. Hence the corresponding projective bundle produces a small resolution (see [13], Proposition 2.1) which contradicts the factoriality of X. Also the double cone over the veronese embedding is in P 7 which again contradicts the fact that the image is non-degenerate in either P 5 or P 9 . Hence the theorem is proved.
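As a quick check of the two cases singled out in part (2) of the theorem (the arithmetic here is ours): for (d, x) = (6, 2) one has (24 − 6)·2^2 − 10·6·2 + 8·6 = 72 − 120 + 48 = 0 with h^0(L) = 2^2/8 + 5·2/4 + 3 = 6 and L^4 = 3·2^2 = 12 = 6·2, the image being a quadric of degree 2 in P^5; for (d, x) = (8, 4) one has (24 − 8)·4^2 − 10·8·4 + 8·8 = 256 − 320 + 64 = 0 with h^0(L) = 4^2/8 + 5·4/4 + 3 = 10 and L^4 = 3·4^2 = 48 = 8·6, the image being a scroll of degree 6 in P^9.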
The upper bound on the degree of a generically finite morphism from X = K3^[2] given by a complete linear series of a base point free line bundle has an interesting corollary on the secant lines of an embedding of a K3 surface, which we state and prove next.

Corollary 3.1.2. Let (S, L) be a polarized K3 surface such that L is very ample. Consider the projective embedding of S in P^{h^0(L)−1} and the closed subvariety of Gr(2, h^0(L)) consisting of lines that intersect the K3 surface at a subscheme of length at least 2. Then a general such line intersects S at a subscheme of length ≤ 7.

Proof. Given the above conditions we construct a generically finite morphism f from X, the Hilbert scheme of two points on S, to Gr(2, h^0(L)). Given a point on X we take the length 2 subscheme it defines on S and send it to the linear span of the length two subscheme inside P^{h^0(L)−1}.
Since a general such line does not lie on S, it intersects S at finitely many points. So a general point in the image of the morphism f has finite fibre. Hence f is a generically finite morphism. Also this morphism is given by the complete linear series L − δ, where 2δ is the class of the divisor in S^[2] that parametrizes non-reduced subschemes of length 2 on the K3 surface S. By the above theorem deg(f) ≤ 23. Now if a line intersects S at k points then the line has \binom{k}{2} preimages under the morphism f. Hence for a general line \binom{k}{2} ≤ 23, and hence k ≤ 7 (since \binom{8}{2} = 28 > 23). Now we are ready to state our results on very ampleness and projective normality.
The following example is taken from the survey article of Debarre (see [5]). It gives an example of an ample and globally generated line bundle on a hyperkähler variety of the form K3 [2] that maps it 6:1 onto a variety of minimal degree.
Example 3.1.6 Let (S , L) be a polarized K3 surface with Pic(S ) = ZL and L 2 = 4. Then L is very ample and consequently we get a morphism φ : S [2] → G (2,4) to the Grassmannian. Now, L induces a line bundle L 2 on S [2] and it is known that Pic(S [2] ) = ZL 2 ⊕ Zδ. Moreover, the pullback of the Plücker line bundle on the Grassmannian has class L 2 − δ on S [2] . Therefore, if (S , L) is general then it contains no line and consequently φ will be finite of degree 4 2 = 6. This gives an example of a hyperkähler variety of the form K3 [2] getting mapped 6:1 onto a variety of minimal degree by the complete linear series of an ample and globally generated line bundle.
3.2.
Higher dimensional calculations. In this section, we carry out similar computations on hyperkähler six, eight and tenfolds which are deformation equivalent to K3 [n] or a generalized Kummer variety. More precisely, we want to figure out whether B ⊗2n−1 is very ample for an ample and globally generated line bundle B. As before, first we will find out q X (B) and the degree of the morphism induced by the complete linear system of |B| if it maps to a variety of minimal degree. Of course we will get an affirmative answer if it never maps to a variety of minimal degree.
Let X be a hyperkähler variety deformation equivalent to K3 [n] . Recall that we proved in Theorem 3.1.1 that for an ample and basepoint free line bundle B on a hyperkähler variety deformation equivalent to K3 [2] , 3B is very ample unless the morphism induced by the complete linear series of B maps it 6:1 or 8:1 onto S (0, 0, 0, 2) or S (0, 0, 0, 6) respectively where S (0, 0, .., 0, r) is the variety obtained by taking cones over a rational normal curve of degree r.
Calculations similar to the proof of that theorem shows that if the complete linear system of an ample and globally generated line bundle B maps onto a variety of minimal degree then we have the following equation
c_X · x^n = d [ \binom{\frac{1}{2}x + n + 1}{n} − 2n ],

where x = q_X(B), d is the degree of the morphism and c_X = (2n)!/(n! 2^n). As before, the degree of the morphism d satisfies 1 ≤ d ≤ (2n)! − 1. We are trying to find the positive even integer solutions of x; B^{⊗(2n−1)} will be very ample if there is none.

Similarly, if X is a hyperkähler variety deformation equivalent to a generalized Kummer variety K_n(T) then we have to work with

c_X · x^n = d [ (n + 1) \binom{\frac{1}{2}x + n}{n} − 2n ],

with c_X = (n + 1) (2n)!/(n! 2^n) in this case. Similar arguments as in the proof of the last part of Theorem 3.1.1 show that if the morphism induced by the ample and globally generated line bundle on a hyperkähler variety of one of the two above types maps to a variety of minimal degree, then the embedded variety will be obtained by taking cones over a rational normal curve.
We ran a computer program using Python to find those solutions for X a hyperkähler six, eight or tenfold deformation equivalent to K3^[n] or a generalized Kummer variety K_n(T); a minimal sketch of such a search is given after Table 2. The following table is the summary of the results we have obtained. Here 2n is the dimension of the variety, d is the degree of the morphism induced by the complete linear series of an ample and globally generated line bundle B, x = q_X(B), and r is the degree of the embedded variety. If all the entries of d, x and r are '-' in a row, that means it will never map to a variety of minimal degree.
Deformation type   d     x   r    Is (2n − 1)B projectively normal?
K3^[2]             6     2   2    Yes unless it maps 6:1 onto a quadric in P^5
K3^[2]             8     4   6    Yes unless it maps 8:1 onto S(0, 0, 0, 6)
K3^[3]             30    2   4    Yes unless it maps 30:1 onto S(0, 0, 0, 0, 0, 4)
K3^[4]             240   2   7    Yes unless it maps 240:1 onto S(0, 0, 0, 0, 0, 0, 0, 7)
K3^[5]             -     -   -    Yes
K_2(T)             -     -   -    Yes
K_3(T)             48    2   10   Yes unless it maps 48:1 onto S(0, 0, 0, 0, 0, 10)
K_4(T)             -     -   -    Yes
K_5(T)             -     -   -    Yes

Table 2
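The program itself is not reproduced in the paper; the following is a minimal sketch of such a search, written here for illustration. All names, the exact-rational arithmetic and the finite search bound X_MAX are our own choices rather than the authors' code; the equations being solved are the two displayed above, with chi taken from Remark 1.2.5 and c_X from Table 1. For 2n = 4 it should recover the rows (d, x, r) = (6, 2, 2) and (8, 4, 6) of Table 2.

from fractions import Fraction
from math import factorial

X_MAX = 10000  # assumed finite search bound on x = q_X(B) (our choice, not from the paper)

def binom(top, k):
    # generalized binomial coefficient binom(top, k) for rational top
    out = Fraction(1)
    for i in range(k):
        out *= Fraction(top) - i
    return out / factorial(k)

def chi(x, n, kind):
    # Riemann-Roch value chi(L) when q_X(L) = x (Remark 1.2.5)
    if kind == "K3[n]":
        return binom(Fraction(x, 2) + n + 1, n)
    return (n + 1) * binom(Fraction(x, 2) + n, n)  # generalized Kummer K_n(T)

def fujiki(n, kind):
    # Fujiki constant c_X (Table 1)
    c = Fraction(factorial(2 * n), factorial(n) * 2 ** n)
    return c if kind == "K3[n]" else (n + 1) * c

def minimal_degree_cases(n, kind):
    # all (d, x, r) with c_X * x^n = d * (chi(x) - 2n), x > 0 even,
    # and 1 <= d <= (2n)! - 1; here r = chi(x) - 2n is the degree of the image
    found = []
    c_X = fujiki(n, kind)
    for x in range(2, X_MAX + 1, 2):
        r = chi(x, n, kind) - 2 * n
        if r <= 0:
            continue
        d = c_X * x ** n / r
        if d.denominator == 1 and 1 <= d <= factorial(2 * n) - 1:
            found.append((int(d), x, int(r)))
    return found

for n in (2, 3, 4, 5):
    for kind in ("K3[n]", "Kummer"):
        print(2 * n, kind, minimal_degree_cases(n, kind))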
Lemma 2.2. Let X be a fourfold with trivial canonical bundle. Let A be an ample line bundle and let B = nA for n ≥ 5. Then the multiplication map H^0(3B + kA) ⊗ H^0(B) → H^0(4B + kA) is surjective for k ≥ 1.
Theorem 3.1.1. Let L be a base point free line bundle on a projective hyperkähler manifold X which is deformation equivalent to the Hilbert scheme of 2 points on a smooth, projective K3 surface S. Assume the morphism given by the complete linear series |L|, say φ, is generically finite. Then (1) the degree d of φ is bounded above by 23.
Corollary 3.1.3. Let X be a projective hyperkähler fourfold deformation equivalent to Hilbert scheme of two points on a smooth,projective K3 surface. Let B be an ample and base point free line bundle on X. Then (1) B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 4.(2) B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 3 unless the complete linear series of |B| maps X to a variety of minimal degree, i.e, unless one of the two cases in Theorem 3.1.1 happens.Corollary 3.1.5. Let X be a projective hyperkähler fourfold deformation equivalent to a generalized Kummer variety. Let B be an ample and base point free line bundle on X. Then B ⊗n is very ample and embeds X as a projectively normal variety for n ≥ 3.
Acknowledgements. We want to thank our advisor Prof. Purnaprajna Bangere for his encouragement, support and guidance, without which this work would have never been possible. We also express our gratitude to Mr. Sutirtha Paul for providing several technical details for the computer program we used.

Proof. (1) We notice from the Riemann-Roch formula for ample line bundles on X that h 0 (B) ≥ 6, and hence the proof follows by Theorem 2.3 of [21].

(2) We show the surjectivity of the multiplication map H 0 (3B) ⊗ H 0 (B) → H 0 (4B). Choose a smooth threefold section T of the ample and base point free line bundle B. We have the following commutative diagram. Hence it is enough to prove that the rightmost vertical map surjects. However, since we have the surjection from H 0 (3B) to H 0 (3B |T ) = H 0 (3K T ), it is enough to show the surjection of H 0 (3K T ) ⊗ H 0 (K T ) → H 0 (4K T ). Now we appeal to [10], p. 1089, (3) to see that the required map surjects unless X is mapped to a variety of minimal degree by |B|.

Now we give analogous theorems for a hyperkähler variety deformation equivalent to a generalized Kummer variety of dimension four.

Theorem 3.1.4. Let X be a projective hyperkähler fourfold deformation equivalent to a generalized Kummer variety. Let L be a base point free line bundle on X. Assume the morphism given by the complete linear system |L|, say φ, is generically finite. Then (1) the degree d of φ is bounded above by 23; (2) φ will never map X to a variety of minimal degree.

Proof. As in the proof of Theorem 3.1.1, we get (72 − 3d)x 2 − 18dx + 8d ≥ 0 using Riemann-Roch (see Remark 1.2.5), where q X (L) = x. Part (1) follows from the fact that this inequality forces d ≤ 23, since x is a positive even integer. Part (2) follows from the fact that (72 − 3d)x 2 − 18dx + 8d = 0 has no even integer solution for any 1 ≤ d ≤ 23.

As before, the theorem above gives the following corollary.
A. Beauville, Variétés Kähleriennes dont la première classe de Chern est nulle. J. Differential Geom. 18 (1983), no. 4, 755-782.
F. A. Bogomolov, The decomposition of Kähler manifolds with a trivial canonical class. Mat. Sb. (N.S.) 93(135) (1974), 573-575, 630.
M. Britze, M. A. Nieper, Hirzebruch-Riemann-Roch formulae on irreducible symplectic Kähler manifolds. arXiv:math/0101062.
Yalong Cao, Chen Jiang, Remarks on Kawamata's effective non-vanishing conjecture for manifolds with trivial first Chern classes. arXiv:1612.00184v2.
Olivier Debarre, Hyperkähler manifolds. arXiv:1810.02087v1.
David Eisenbud, Joe Harris, On varieties of minimal degree (a centennial account). Proceedings of Symposia in Pure Mathematics 46 (1987), no. 1, 3-13.
G. Ellingsrud, L. Göttsche, M. Lehn, On the cobordism class of the Hilbert scheme of a surface. J. Algebraic Geom. 10 (2001), no. 1, 81-100.
A. Fujiki, On the de Rham cohomology group of a compact Kähler symplectic manifold. Algebraic geometry, Sendai, 1985, 105-165, Adv. Stud. Pure Math. 10, North-Holland, Amsterdam, 1987.
Francisco Javier Gallego and B. P. Purnaprajna, Very ampleness and higher syzygies for Calabi-Yau threefolds. Math. Ann. 312 (1998), no. 1, 133-149.
Mark L. Green, The canonical ring of a variety of general type. Duke Mathematical Journal, Vol. 49, No. 4.
Mark L. Green, Koszul cohomology and the geometry of projective varieties. J. Differential Geometry 19 (1984), 125-171.
M. Gross, D. Huybrechts, D. Joyce, Calabi-Yau manifolds and related geometries. Lectures from the Summer School held in Nordfjordeid, June 2001. Universitext, Springer-Verlag, Berlin, 2003.
Jurgen Herzog and Gaetana Restuccia (Editors), Geometric and Combinatorial Aspects of Commutative Algebra. Lecture Notes in Pure and Applied Mathematics, Vol. 217.
Eiji Horikawa, Algebraic surfaces of general type with small c_1^2. Annals of Mathematics, Second Series 104 (1976), no. 2, 357-387.
D. Huybrechts, Compact hyper-Kähler manifolds: basic results. Invent. Math. 135 (1999), no. 1, 63-113; Erratum: "Compact hyper-Kähler manifolds: basic results", Invent. Math. 152 (2003), no. 2, 209-212.
Yujiro Kawamata, On Fujita's freeness conjecture for 3-folds and 4-folds. Math. Ann. 308 (1997), no. 3, 491-505.
Atsushi Kanazawa, P. M. H. Wilson, Trilinear forms and Chern classes of Calabi-Yau threefolds.
Robert Lazarsfeld, Positivity in algebraic geometry. II. Positivity for vector bundles, and multiplier ideals. Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge, A Series of Modern Surveys in Mathematics, Vol. 49. Springer-Verlag, Berlin, 2004.
D. Matsushita, On fibre space structures of a projective irreducible symplectic manifold. Topology 38 (1999), 79-83; Addendum, Topology 40 (2001), 431-432.
Y. Miyaoka, The Chern classes and Kodaira dimension of a minimal variety. Algebraic geometry, Sendai, 1985, 449-476, Adv. Stud. Pure Math. 10, North-Holland, Amsterdam, 1987.
Jayan Mukherjee, Debaditya Raychaudhury, On the projective normality and normal presentation on higher dimensional varieties with nef canonical bundle. arXiv:1810.06718v2.
D. Mumford, Varieties defined by quadratic equations. Corso CIME in Questions on Algebraic Varieties, Rome (1970), 30-100.
Wenbo Niu, On the projective normality and syzygies for Calabi-Yau varieties.
K. G. O'Grady, Desingularized moduli spaces of sheaves on a K3. J. Reine Angew. Math. 512 (1999), 49-117.
K. G. O'Grady, A new six-dimensional irreducible symplectic variety. J. Algebraic Geom. 12 (2003), no. 3, 435-505.
Keiji Oguiso and Thomas Peternell, On polarized canonical Calabi-Yau threefolds. Math. Ann. 301 (1995), no. 2, 237-248.
Saint-Donat, On projective models of K3 surfaces. Amer. J. Math. 96 (1974), 602-639.
| []
|
[
"The SAMI Galaxy Survey: Satellite galaxies undergo little structural change during their quenching phase",
"The SAMI Galaxy Survey: Satellite galaxies undergo little structural change during their quenching phase"
]
| [
"L Cortese \nInternational Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia\n\nARC Centre of Excellence for All Sky Astrophysics in\n\n",
"J Van De Sande \nARC Centre of Excellence for All Sky Astrophysics in\n\n\nDimensions (ASTRO 3D\n\n\nSydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia\n",
"C P Lagos \nInternational Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia\n\nARC Centre of Excellence for All Sky Astrophysics in\n\n",
"B Catinella \nInternational Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia\n\nARC Centre of Excellence for All Sky Astrophysics in\n\n",
"L J M Davies \nInternational Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia\n",
"S M Croom \nARC Centre of Excellence for All Sky Astrophysics in\n\n\nDimensions (ASTRO 3D\n\n\nSydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia\n",
"S Brough \nARC Centre of Excellence for All Sky Astrophysics in\n\n\nSchool of Physics\nUniversity of New South Wales\n2052NSWAustralia\n",
"J J Bryant \nARC Centre of Excellence for All Sky Astrophysics in\n\n\nDimensions (ASTRO 3D\n\n\nSydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia\n\nAustralian Astronomical Optics\nSchool of Physics\nAAO-USydney\nUniversity of Sydney\n2006NSWAustralia\n",
"J S Lawrence \nAustralian Astronomical Optics -Macquarie\nMacquarie University\n2109NSWAustralia\n",
"M. SOwers \nDepartment of Physics and Astronomy\nMacquarie University\n2109NSWAustralia\n",
"S N Richards \nSOFIA Science Center\nUSRA\nNASA Ames Research Center\nBuilding N232, M/S 232-12P.O. Box 194035-0001Moffett FieldCAUSA\n",
"S M Sweet \nARC Centre of Excellence for All Sky Astrophysics in\n\n\nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nPO Box 2183122HawthornVICAustralia\n"
]
| [
"International Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"Dimensions (ASTRO 3D\n",
"Sydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia",
"International Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"International Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"International Centre for Radio Astronomy Research\nThe University of Western Australia\n35 Stirling Hw6009CrawleyWAAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"Dimensions (ASTRO 3D\n",
"Sydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"School of Physics\nUniversity of New South Wales\n2052NSWAustralia",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"Dimensions (ASTRO 3D\n",
"Sydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nSydneyNew South WalesAustralia",
"Australian Astronomical Optics\nSchool of Physics\nAAO-USydney\nUniversity of Sydney\n2006NSWAustralia",
"Australian Astronomical Optics -Macquarie\nMacquarie University\n2109NSWAustralia",
"Department of Physics and Astronomy\nMacquarie University\n2109NSWAustralia",
"SOFIA Science Center\nUSRA\nNASA Ames Research Center\nBuilding N232, M/S 232-12P.O. Box 194035-0001Moffett FieldCAUSA",
"ARC Centre of Excellence for All Sky Astrophysics in\n",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nPO Box 2183122HawthornVICAustralia"
]
| [
"MNRAS"
]
| At fixed stellar mass, satellite galaxies show higher passive fractions than centrals, suggesting that environment is directly quenching their star formation. Here, we investigate whether satellite quenching is accompanied by changes in stellar spin (quantified by the ratio of the rotational to dispersion velocity V/σ) for a sample of massive (M * >10 10 M ) satellite galaxies extracted from the SAMI Galaxy Survey. These systems are carefully matched to a control sample of main sequence, high V /σ central galaxies. As expected, at fixed stellar mass and ellipticity, satellites have lower star formation rate (SFR) and spin than the control centrals. However, most of the difference is in SFR, whereas the spin decreases significantly only for satellites that have already reached the red sequence. We perform a similar analysis for galaxies in the EAGLE hydro-dynamical simulation and recover differences in both SFR and spin similar to those observed in SAMI. However, when EAGLE satellites are matched to their true central progenitors, the change in spin is further reduced and galaxies mainly show a decrease in SFR during their satellite phase. The difference in spin observed between satellites and centrals at z ∼0 is primarily due to the fact that satellites do not grow their angular momentum as fast as centrals after accreting into bigger halos, not to a reduction of V /σ due to environmental effects. Our findings highlight the effect of progenitor bias in our understanding of galaxy transformation and they suggest that satellites undergo little structural change before and during their quenching phase. | 10.1093/mnras/stz485 | [
"https://arxiv.org/pdf/1902.05652v1.pdf"
]
| 118,903,115 | 1902.05652 | 0d690e6abf561c360d9289d03b638c051d5adb5a |
The SAMI Galaxy Survey: Satellite galaxies undergo little structural change during their quenching phase
L Cortese
International Centre for Radio Astronomy Research
The University of Western Australia
35 Stirling Hw6009CrawleyWAAustralia
ARC Centre of Excellence for All Sky Astrophysics in
J Van De Sande
ARC Centre of Excellence for All Sky Astrophysics in
Dimensions (ASTRO 3D
Sydney Institute for Astronomy
School of Physics
The University of Sydney
SydneyNew South WalesAustralia
C P Lagos
International Centre for Radio Astronomy Research
The University of Western Australia
35 Stirling Hw6009CrawleyWAAustralia
ARC Centre of Excellence for All Sky Astrophysics in
B Catinella
International Centre for Radio Astronomy Research
The University of Western Australia
35 Stirling Hw6009CrawleyWAAustralia
ARC Centre of Excellence for All Sky Astrophysics in
L J M Davies
International Centre for Radio Astronomy Research
The University of Western Australia
35 Stirling Hw6009CrawleyWAAustralia
S M Croom
ARC Centre of Excellence for All Sky Astrophysics in
Dimensions (ASTRO 3D
Sydney Institute for Astronomy
School of Physics
The University of Sydney
SydneyNew South WalesAustralia
S Brough
ARC Centre of Excellence for All Sky Astrophysics in
School of Physics
University of New South Wales
2052NSWAustralia
J J Bryant
ARC Centre of Excellence for All Sky Astrophysics in
Dimensions (ASTRO 3D
Sydney Institute for Astronomy
School of Physics
The University of Sydney
SydneyNew South WalesAustralia
Australian Astronomical Optics
School of Physics
AAO-USydney
University of Sydney
2006NSWAustralia
J S Lawrence
Australian Astronomical Optics -Macquarie
Macquarie University
2109NSWAustralia
M. S. Owers
Department of Physics and Astronomy
Macquarie University
2109NSWAustralia
S N Richards
SOFIA Science Center
USRA
NASA Ames Research Center
Building N232, M/S 232-12P.O. Box 194035-0001Moffett FieldCAUSA
S M Sweet
ARC Centre of Excellence for All Sky Astrophysics in
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
PO Box 2183122HawthornVICAustralia
The SAMI Galaxy Survey: Satellite galaxies undergo little structural change during their quenching phase
MNRAS
000, 000. Preprint 18 February 2019. Compiled using MNRAS LaTeX style file v3.0. Key words: galaxies: evolution - galaxies: fundamental parameters - galaxies: kinematics and dynamics
INTRODUCTION
Observational evidence that galaxy properties vary as a function of environment has been presented since at least Hubble & Humason (1931). After almost a century, it is now clear that the structure (usually quantified via visual classification or two-dimensional surface brightness decomposition) and star formation activity of galaxies depend on their location within the large scale structure (e.g., Dressler 1980; Lewis et al. 2002; Gómez et al. 2003; Wetzel et al. 2012).
It is also firmly established that these trends, generally referred to as 'morphology-density' and 'star formation rate-density' relations, are not simply two different manifestations of the same evolutionary paths. For example, there is plenty of evidence for the existence of a large population of rotationally-supported, disky systems with low (or no) star formation in groups and clusters (e.g., van den Bergh 1976;Poggianti et al. 1999;Gavazzi et al. 2006;Lisker et al. 2006;Boselli et al. 2008;Bamford et al. 2009;Cortese & Hughes 2009;Toloba et al. 2009;Bundy et al. 2010;Hester 2010). Thus, separating between quenching and structural transformation becomes critical to reveal what shaped the environmental trends that we see today.
The advent of large-area spectroscopic surveys and the refinement of large-scale cosmological simulations have also highlighted that the way in which we define 'environment' does matter (e.g., Muldrew et al. 2012;Fossati et al. 2015). There is no 'golden environmental ruler', every metric has its advantages and disadvantages and the definition of environment should be tuned to the particular issue being addressed (e.g., Brown et al. 2017). Nevertheless, it is now well established that one of the best ways to isolate galaxies most likely to be affected by environment is to focus on satellites. Central galaxies dominate in number at all stellar masses (e.g., Tempel et al. 2009;Yang et al. 2009), and it is still debated whether or not their evolution is significantly affected by environment (e.g., Blanton & Berlind 2007;van den Bosch et al. 2008;Wilman et al. 2010). Thus, including centrals in the analysis would significantly reduce or completely wash out any signatures of environmentally-driven transformation.
Interestingly, while the focus on satellite galaxies has reduced the disagreement between some observational results, this approach turns out not to be sufficient to separate the relative importance of quenching and morphological transformation in the life of satellite galaxies. Indeed, observational evidence supporting seemingly opposite transformation scenarios has been presented, namely simultaneous quenching and morphological transformation on one side (e.g., Moss & Whittle 2000; Christlein & Zabludoff 2004; Cappellari 2013; George et al. 2013; Omand et al. 2014; Kawinwanichakij et al. 2017), and quenching-only followed by no or minor structural transformation on the other (e.g., Larson et al. 1980; Blanton et al. 2005; Cortese & Hughes 2009; Woo et al. 2017; Rizzo et al. 2018). There are various potential reasons behind these conflicting results, but our view is that most of the difference can be ascribed to two equally important limitations.
First, the techniques used to quantify structure/morphology vary significantly in the literature, encompassing both visual classification (generally used to isolate early-from late-type galaxies) and structural parameters obtained via two-dimensional surface brightness decomposition of optical images. Arguably, neither of the two has a direct connection to the kinematic properties of galaxies, as it has now been demonstrated that they are not able to distinguish between rotationally-and dispersion-supported systems (e.g., Emsellem et al. 2011;Krajnović et al. 2013;Cortese et al. 2016b). Thus, to identify and quantify truly structural transformation, and separate it from visual changes simply due to quenching and disk fading, information on the kinematic properties of stars is vital.
Second, it is now well established that, for massive satellite galaxies (stellar masses M* > 10^10 M⊙), full quenching takes at least a few Gyr after infall (e.g., Cortese & Hughes 2009; Weinmann et al. 2010; Wetzel et al. 2013; Oman & Hudson 2016; Bremer et al. 2018), a time during which central star-forming systems have grown significantly (van der Wel et al. 2014). This means that today's centrals cannot be naively assumed to be representative of the progenitor population of local satellites and used to quantify the effect of nurture on galaxy evolution, an issue generally referred to as progenitor bias (van Dokkum & Franx 2001; Woo et al. 2017). Only by identifying the real progenitors of satellites at the time of infall can we reveal how satellites have been transformed by environment. While this is still out of reach from an observational perspective, the improvement of cosmological simulations is starting to make it possible to use models to quantify the effect of progenitor bias and try to correct for it.
In this paper, we revisit the issue of satellite transformation with the goal of quantifying the change in star formation activity and structure separately, and of determining whether they happen simultaneously or on different time-scales. Our analysis improves on previous works by directly addressing the two limitations discussed above. First, we take advantage of optical integral field spectroscopic observations obtained as part of the SAMI Galaxy Survey (Bryant et al. 2015) to directly trace the stellar kinematics of galaxies. Second, we compare our findings with predictions from the Evolution and Assembly of GaLaxies and their Environments (EAGLE; Schaye et al. 2015) cosmological simulation, and use it to quantify the effect of progenitor bias. The use of a cosmological simulation such as EAGLE turns out to be critical for a less biased interpretation of SAMI data, highlighting the danger of inferring galaxy evolutionary histories from single-epoch snapshots.
This paper is organized as follows. In Sec. 2 we describe how our sample is extracted from the SAMI Galaxy Survey, the stellar kinematic parameters, as well as the ancillary data used in this paper. In Sec. 3, we compare the star formation and kinematic properties of satellites and centrals, and contrast our results with the predictions from the EAGLE simulation; this section includes the main results of this work. Lastly, the implications of our results are discussed in Sec. 4.
Throughout this paper, we use a flat Λ cold dark matter concordance cosmology: H0 = 70 km s−1 Mpc−1, Ω0 = 0.3, ΩΛ = 0.7.
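For reference, the adopted cosmology can be instantiated with astropy as follows; this is only an illustrative choice of tool, not a statement about the software actually used in our analysis.

from astropy.cosmology import FlatLambdaCDM

# Flat LambdaCDM with H0 = 70 km/s/Mpc and Omega_m = 0.3 (Omega_Lambda = 0.7 follows).
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
# Example: luminosity distance at the upper redshift limit of the survey.
print(cosmo.luminosity_distance(0.095))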
THE DATA
The SAMI Galaxy Survey has observed ∼3000 individual galaxies in the redshift range 0.004 < z < 0.095 and with stellar masses greater than ∼10^7.5 M⊙, taking advantage of the Sydney-AAO Multi-object Integral field spectrograph (SAMI; Croom et al. 2012), installed at the Anglo-Australian Telescope. SAMI is equipped with photonic imaging bundles ('hexabundles', Bland-Hawthorn et al. 2011; Bryant et al. 2014) to simultaneously observe 12 galaxies across a 1 degree field of view. Each hexabundle is composed of 61 optical fibres, each with a diameter of ∼1.6 arcsec, covering a total circular field of view of ∼14.7 arcsec in diameter. SAMI fibres are fed into the AAOmega dual-beam spectrograph, providing coverage of the 3650-5800 Å and 6240-7450 Å wavelength ranges with dispersions of 1.05 Å pixel−1 and 0.59 Å pixel−1, respectively.
In this paper, we extract our sample from the 1552 galaxies overlapping with the footprint of the Galaxy And Mass Assembly Survey (GAMA, Driver et al. 2011) included in the SAMI public data release 2 (Scott et al. 2018) and for which integrated current star formation rate (SFR) estimates are available (referred to as the parent sample). SFRs are taken from Davies et al. (2016) and have been derived by fitting the full 21-band photometric data available for GAMA galaxies, covering the ultraviolet to the far-infrared frequency range (Driver et al. 2016; Wright et al. 2016), with the spectral energy distribution fitting code MAGPHYS (da Cunha et al. 2008). In addition to the wealth of multi-wavelength data available, the GAMA regions are characterised by an exquisitely high spectroscopic completeness, providing us with a state-of-the-art group catalogue (Robotham et al. 2011), critical for distinguishing between central and satellite galaxies. We focus on galaxies with stellar mass greater than 10^10 M⊙ (768 galaxies), for which the signal-to-noise ratio (S/N) in the continuum is generally high enough to allow a proper reconstruction of the stellar velocity field. Stellar masses (M*) are estimated from g−i colours and i-band magnitudes following Taylor et al. (2011), as described in Bryant et al. (2015).
The procedure adopted to extract stellar kinematic parameters is extensively described in van de Sande et al. (2017b) and Scott et al. (2018). Here, we briefly summarise its key steps. Stellar line-of-sight velocity and intrinsic dispersion maps are obtained using the penalised pixel-fitting routine ppxf, developed by Cappellari & Emsellem (2004). SAMI blue and red spectra are combined by convolving the red spectra to match the instrumental resolution in the blue. We then use the 985 stellar template spectra from the MILES stellar library (Sánchez-Blázquez et al. 2006) to determine the best combination of model templates able to reproduce the galaxy spectrum extracted from annular binned spectra following the optical ellipticity and position angle of the target. We apply the following quality cuts to each spaxel to discriminate between good and bad fits (van de Sande et al. 2017b): S/N > 3 Å−1, σ > FWHM_instr/2 ∼ 35 km s−1, V_err < 30 km s−1 and σ_err < σ × 0.1 + 25 km s−1, where V, V_err, σ and σ_err are the line-of-sight and dispersion velocities and their respective uncertainties.
The ratio of ordered versus random motions V /σ within one effective radius is then determined as in Cappellari et al. (2007):
\left(\frac{V}{\sigma}\right)^{2} = \frac{\sum_{i} F_{i} V_{i}^{2}}{\sum_{i} F_{i} \sigma_{i}^{2}} \qquad (1)
where F_i is the flux in each spaxel. We sum only spaxels included within an ellipse with semi-major axis corresponding to one effective radius in the r-band, and position angle and ellipticity taken from v09 of the GAMA single Sérsic profile fitting catalogue (Kelvin et al. 2012). We require that at least 95% of the spaxels within the aperture fulfil our quality cuts to flag the estimate of V/σ as reliable. This reduces our sample to 726 galaxies. As SAMI galaxies cover a wide range of effective radii, we want to make sure that the one effective radius aperture provides a reasonable number of independent resolution elements to determine V/σ, and minimises the effect of beam smearing. Thus, we remove all galaxies with r_e < 2 arcsec or r_e smaller than 2.5 times the half-width at half-maximum of the point-spread-function of the secondary standard star observed with the same plate (121 galaxies). Conversely, we keep galaxies with effective radii larger than the SAMI bundle (154 objects) and apply the aperture correction described in van de Sande et al. (2017a) to recover the value of V/σ within one effective radius.
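To make the procedure concrete, the following Python sketch shows how Eq. 1 and the spaxel quality cuts can be combined; the array names and the aperture mask are illustrative assumptions and do not correspond to the actual SAMI pipeline code, while the 70 km s−1 default follows from the σ > FWHM_instr/2 ∼ 35 km s−1 cut quoted above.

import numpy as np

def v_over_sigma(flux, vel, vel_err, sig, sig_err, snr, in_aperture, fwhm_instr=70.0):
    # Flux-weighted V/sigma within the one-effective-radius aperture (Eq. 1).
    good = ((snr > 3.0) &
            (sig > fwhm_instr / 2.0) &          # ~35 km/s for SAMI
            (vel_err < 30.0) &
            (sig_err < 0.1 * sig + 25.0))
    use = in_aperture & good
    # Require that at least 95% of the aperture spaxels pass the quality cuts.
    if use.sum() < 0.95 * in_aperture.sum():
        return np.nan
    num = np.sum(flux[use] * vel[use] ** 2)
    den = np.sum(flux[use] * sig[use] ** 2)
    return np.sqrt(num / den)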
The selections described above reduce our sample from 768 to 605 galaxies. During the analysis described in Sec. 3, five satellite galaxies were further removed from the sample as visual inspection highlighted issues with their photometric ellipticity and/or position angles (e.g., contamination by foreground/background objects, structural parameters tracing the inner bar instead of the disk, etc.). In conclusion, the final sample used in this paper is composed of 600 galaxies, 431 of which are centrals and 169 of which are group satellites according to v09 of the group catalogue by Robotham et al. (2011). Our satellites occupy halo masses up to ∼10^14.5 M⊙, with an average value of ∼10^13.4 M⊙.
The stellar mass, r-band ellipticity (ε) and specific star formation rate distributions for our parent and final samples are shown in Fig. 1 as empty and filled histograms, respectively. All galaxies are shown in the top row, with only centrals/satellites included in the middle/bottom rows, respectively. It is clear that our quality cuts preferentially affect round, low-mass, passive objects. However, as the two samples cover the same parameter space in all three variables, we are confident that the matching procedure at the basis of our analysis in Sec. 3 is not biased by the strict criteria used to extract our final sample. Indeed, our main conclusions and average trends are not affected even if we relax the criteria used to exclude 'marginally resolved' galaxies, with the only noticeable change being an increase in scatter. The potential effect of beam smearing on our estimates of V/σ is discussed in Appendix A, where we show that correcting for beam smearing would even reinforce the main conclusions of this paper.
QUENCHING AND STRUCTURAL TRANSFORMATION AT Z ∼0
Our primary goal is to separately quantify the changes in SFR and V/σ (a proxy for the stellar spin parameter) experienced by galaxies after they have become satellites. As shown in Fig. 2, and consistently with previous works (e.g., van den Bosch et al. 2008; Weinmann et al. 2009; Peng et al. 2012), the fraction of satellite galaxies with low specific star formation rate in our parent sample is significantly larger than that of centrals. This supports the common assumption that environmental effects play a more active role in the evolution of satellites than of centrals. Our aim is to determine if satellites being quenched after infall also experience changes in their kinematic properties. Ideally, this would require a priori knowledge of the properties of satellite galaxies at the time of infall into their host halo. While this is currently possible in cosmological simulations, observationally we are not yet able to link progeny and progenitors at different redshifts. Thus, nearly all observational studies so far have used central galaxies at z ∼0 to 'guess' the properties of galaxies at the time when they became satellites (e.g., van den Bosch et al. 2008; Woo et al. 2017).
In this work, we first make a similar assumption to quantify the variation in stellar kinematics between SAMI satellites and centrals. We then compare our results with the predictions of the EAGLE hydrodynamical simulation (Schaye et al. 2015; Crain et al. 2015; McAlpine et al. 2016) at z ∼0. This is needed to validate the ability of the simulation to reproduce the observed difference between centrals and satellites. Lastly, we use EAGLE to quantify the effect of progenitor bias on the z ∼0 comparison. This last step is the most critical one for the interpretation of the results emerging from the SAMI data.
SAMI galaxies
In order to quantify the amount of transformation experienced by SAMI satellites, we compare their properties to those of rotationally-supported centrals in the star-forming main sequence. Of course, this is very conservative and would imply that all galaxies become satellites as rotating star-forming disks. As we have evidence that this is not always the case (e.g., Cortese et al. 2006;Mei et al. 2007), our findings must be interpreted as an upper limit for the real amount of transformation experienced by galaxies during their satellite phase. We will further discuss this point in the following sections.
We isolate star-forming centrals by selecting systems with SFR higher than the lower 1σ envelope of the z ∼0 main sequence obtained by Davies et al. (2016) for GAMA galaxies, namely log(SFR) > 0.7207 × (log(M*/M⊙) − 10) + 0.061 − 0.73. Similarly, rotationally-supported centrals are selected by imposing that log(V/σ) > 0.4 × ε − 0.5, where ε is the observed ellipticity in the r-band. Following the formalism in Cappellari (2016), this is nearly equivalent to selecting only axisymmetric galaxies with intrinsic ellipticity (ε_intr) smaller than ∼0.25 for anisotropy β_z = 0.6 × ε_intr, i.e., consistent with what is observed for disk-dominated galaxies (e.g., Giovanelli et al. 1994; Unterborn & Ryden 2008; Foster et al. 2017). We favour this empirical criterion over the analytical prescription as it provides a more conservative cut at low ellipticities, where the difference between the analytic prescriptions for different intrinsic shapes becomes significantly smaller than the measurement errors in both V/σ and ε. The combination of both criteria yields 167 star-forming, rotating centrals, including both isolated (i.e., with no detected companions: 90 objects) and group centrals, with the vast majority of centrals in groups (55 out of 77 objects) having just one or two satellites according to the GAMA group catalogue.
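For clarity, the two cuts defining the control sample can be written compactly as below; this is an illustrative Python sketch (column names are assumptions), not the code used to build the catalogue.

import numpy as np

def is_main_sequence(log_mstar, sfr):
    # Above the lower 1-sigma envelope of the GAMA z~0 main sequence.
    return np.log10(sfr) > 0.7207 * (log_mstar - 10.0) + 0.061 - 0.73

def is_rotation_supported(v_sigma, ellipticity):
    # Empirical cut used to select rotationally-supported systems.
    return np.log10(v_sigma) > 0.4 * ellipticity - 0.5

def is_control_central(log_mstar, sfr, v_sigma, ellipticity, central_flag):
    # Main-sequence, rotationally-supported centrals only.
    return (central_flag &
            is_main_sequence(log_mstar, sfr) &
            is_rotation_supported(v_sigma, ellipticity))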
The results of our selection are shown in Fig. 2, where we compare the distribution in the SFR-M* (top row) and V/σ − ε (bottom row) planes of all 431 centrals in our sample (left column) and of the 167 main-sequence, rotationally-supported centrals (middle column). For reference, we also show the distribution of the 169 group satellites (right column). Points are colour-coded by V/σ in the SFR-M* plane (top row) and by SFR in the V/σ − ε plane (bottom row) to highlight the tight apparent link between SFR and V/σ. Fig. 2 also shows that the V/σ and SFR cuts adopted to isolate our control sample of central galaxies are equivalent from a statistical point of view: i.e., applying only one of the two would result in a control sample sharing the same properties and, indeed, would lead us to the same results. This is also consistent with the tight correlation between V/σ and stellar age recently reported in the literature. The simultaneous use of the two cuts is preferred simply because it provides a more rigorous initial hypothesis for our exercise (i.e., it gives independent constraints on both the star formation activity and the structural properties of the control sample).
In order to quantify the difference in SFR and spin of satellites compared to main-sequence, rotationally-supported centrals, we follow a technique similar to that discussed in Ellison et al. (2015, 2018). We define ∆(SFR) and ∆(V/σ) as the difference (in log-space) between the SFR or V/σ of a satellite and the median value obtained for a control sample of main-sequence, rotation-dominated centrals matched in both stellar mass and ellipticity. During the matching procedure, we start by isolating all the control centrals within 0.15 dex in stellar mass and 0.1 in ellipticity of each satellite. If such a control sample includes fewer than 10 galaxies, we iteratively increase the range of stellar mass and ellipticity (in steps of 0.01) until the control includes at least 10 objects. The end result is that our average bins are ∼0.16 dex and 0.11 wide in stellar mass and ellipticity, respectively. We then compute the median SFR and V/σ for the control and use them to determine ∆(SFR) and ∆(V/σ) for each satellite.
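The matching procedure can be sketched in a few lines of Python, assuming plain NumPy arrays of log stellar mass, ellipticity, SFR and V/σ for the satellites and the control centrals; the function below is illustrative rather than the actual implementation.

import numpy as np

def delta_quantities(sat_logm, sat_eps, sat_sfr, sat_vs,
                     cen_logm, cen_eps, cen_sfr, cen_vs,
                     min_controls=10, dm0=0.15, de0=0.10, step=0.01):
    # Returns Delta(SFR) and Delta(V/sigma) in dex for each satellite,
    # assuming the control sample contains at least min_controls galaxies.
    d_sfr = np.empty(len(sat_logm))
    d_vs = np.empty(len(sat_logm))
    for i in range(len(sat_logm)):
        dm, de = dm0, de0
        while True:
            ok = (np.abs(cen_logm - sat_logm[i]) < dm) & (np.abs(cen_eps - sat_eps[i]) < de)
            if ok.sum() >= min_controls:
                break
            dm += step   # widen the stellar-mass window (dex)
            de += step   # widen the ellipticity window
        d_sfr[i] = np.log10(sat_sfr[i]) - np.log10(np.median(cen_sfr[ok]))
        d_vs[i] = np.log10(sat_vs[i]) - np.log10(np.median(cen_vs[ok]))
    return d_sfr, d_vs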
The additional matching by ellipticity is adopted mainly because the V/σ estimates do not include an inclination correction. This is also justified by the fact that the ellipticity distribution of centrals and satellites may not always be the same (e.g., see Fig. 2). The fact that, for SAMI galaxies, observed and intrinsic ellipticity do not correlate also suggests that this assumption is not introducing any significant bias. Indeed, matching only by stellar mass would not change our results. Our findings are also unchanged if we limit our control sample to isolated or group centrals only.
It is important to acknowledge that, despite some differences in the technique used here, our quantification of ∆(SFR) and ∆(V/σ) is deeply inspired by the definition of atomic gas (Hi) deficiency originally introduced by Haynes & Giovanelli (1984). By quantifying the difference in Hi content with respect to galaxies of the same morphology and size, Hi deficiency has become a key parameter for isolating the effect of environment on the cold gas content of galaxies (e.g., Giovanelli & Haynes 1985; Solanes et al. 2001; Cortese et al. 2011, 2016a).
In Fig. 3, we show the result of the matching procedure by plotting ∆(V/σ) vs. ∆(SFR), with points colour-coded by stellar mass. Dashed lines define 'normalcy' (i.e., no change) in SFR and/or V/σ, with cyan bands highlighting the 1σ variation for the control sample. If satellites were to first lose spin and then decrease their star formation with respect to centrals, they would move vertically downwards (i.e., negative ∆(V/σ) around ∆(SFR) ∼0) and then horizontally towards the left (negative ∆(SFR) and negative ∆(V/σ)). Similarly, if changes in SFR were followed by similar changes in stellar spin, satellites would form a diagonal sequence showing ∆(SFR) ∝ ∆(V/σ). Conversely, satellite galaxies occupy an L-shaped parameter space in the ∆(SFR)-∆(V/σ) plane, with large changes in V/σ only for the passive population. Main-sequence satellite galaxies show an average V/σ marginally lower than that of our control sample (∆(V/σ) ∼ −0.08, with standard deviation ∼0.13 dex). During the satellite quenching phase, ∆(V/σ) remains roughly constant until galaxies have reduced their current star formation rate by more than a factor of ten. Then, for the more passive population (∆(SFR) ∼ −1.8 dex), the scatter in ∆(V/σ) more than doubles and satellites span almost a dex in ∆(V/σ), although the median value never goes below −0.4 dex. This is qualitatively consistent with previous observational (e.g., Cortese & Hughes 2009; Woo et al. 2017) and theoretical works (e.g., Correa et al. 2017) suggesting the presence of a wide range of visual and/or photometric morphologies in the red sequence of satellite galaxies.
No significant dependence of the position of satellites in the ∆(SF R)-∆(V /σ) plane on stellar mass (or group halo mass, not shown here) is observed. Intriguingly, the three outliers in the bottom-right quadrant (i.e., positive ∆(SF R) and negative ∆(V /σ)) are all interacting systems (GAMA ID 301382, 485833, 618992), suggesting that our technique may also be able to identify boosts in SF R accompanied by kinematic perturbations.
It is tempting to interpret Fig. 3 in terms of galaxy transformation, and consider the variation of ∆(SFR) and ∆(V/σ) as the evolutionary paths followed by satellites after infall. As such, one would immediately conclude that satellites experience a two-phase transformation, with quenching of the star formation happening first and structural transformation, if any, taking place at later stages or on longer time-scales, and visibly affecting only galaxies already quenched. Unfortunately, Fig. 3 would directly show evolutionary tracks only if the vast majority of satellite galaxies at z ∼0 had become satellites in the last couple of billion years. As this is clearly not the case (e.g., De Lucia et al. 2012; Han et al. 2018), their properties at the time of infall could be significantly different from those of central galaxies in the local Universe. Not only was their stellar mass likely smaller, potentially undermining the basis of our matching procedure but, most importantly, their SFR was higher and their spin parameter lower than those of star-forming centrals at z ∼0 with the same mass. Thus, our results most likely provide just an upper limit to the change in the V/σ parameter and a lower limit to the change in SFR experienced by satellite galaxies. We will demonstrate this point in the next section.
Simulated galaxies in EAGLE
In order to quantify the potential effect of progenitor bias on the results presented in Fig. 3, we perform the same analysis presented in the previous section on galaxies extracted from the EAGLE simulation. We focus on the EAGLE reference model, denoted as Ref-L100N1504 and rescaled to the cosmology adopted in this paper, which corresponds to a cubic volume of 100 comoving Mpc per side, and use the stellar kinematic measurements presented in Lagos et al. (2018). Briefly, stellar kinematic maps are produced by projecting the stellar particle kinematic properties on a two-dimensional plane with a bin size of 1.5 comoving kpc. The line of sight is fixed along the z-axis of the simulated box, providing a random distribution for the orientation of galaxies, and line-of-sight and dispersion velocities are obtained by fitting a Gaussian to the line-of-sight velocity distribution of each pixel. The V/σ ratio is then estimated in the same way as in the observations, by integrating only pixels within one effective radius and using the r-band luminosity of each pixel as weight. Star formation rate is implemented following the prescription of Schaye & Dalla Vecchia (2008), and here we use total current star formation rates as described in Furlong et al. (2015). Central galaxies in the simulation are defined as those objects hosted by the main subhalo, while galaxies hosted in other subhaloes within the group are considered satellites. Across the stellar mass range of interest of this paper (10 < log(M*/M⊙) < 11.5), we find 2265 centrals and 1413 satellites. Satellite galaxies in EAGLE span a slightly wider range of halo masses than our SAMI final sample, extending up to ∼10^14.8 M⊙ with an average halo mass of ∼10^13.6 M⊙: i.e., ∼0.2 dex higher than our final sample.
Figure 4. Variations in stellar V/σ and SFR for satellite galaxies in the EAGLE simulation. Density distribution and Gaussian kernel contours are shown in grey and red, respectively. Matching is done following the same technique used for SAMI galaxies, overplotted as empty blue circles for comparison.

Because the main sequence of star-forming galaxies in EAGLE is slightly offset towards lower SFR with respect to the observed one (Furlong et al. 2015), we revise the cut used to isolate star-forming centrals for the matching procedure: i.e., log(SFR) > 0.7207 × (log(M*/M⊙) − 10) + 0.061 − 1.2. Similarly, EAGLE passive galaxies naturally have SFR equal to 0, whereas SAMI red-sequence objects have their star formation clustered around ∼10^−1.8 M⊙ yr−1. This is due to the inability of SED-fitting techniques to quantify very low levels of SFR. For consistency with observations, EAGLE galaxies with SFR < 10^−1.8 M⊙ yr−1 are assigned a random value of SFR following a log-normal distribution peaked at 10^−1.8 M⊙ yr−1, with 0.2 dex scatter. We note that the exact location and shape of the distribution used to re-scale passive galaxies do not affect our results. Our final sample used for the matching is thus composed of 1204 main-sequence, rotationally-supported centrals and 1413 satellites.
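The SFR floor applied to passive EAGLE galaxies can be illustrated with the short Python sketch below; the random seed and array names are arbitrary choices made for the example.

import numpy as np

rng = np.random.default_rng(42)

def apply_sfr_floor(sfr, log_floor=-1.8, scatter_dex=0.2):
    # Replace SFR < 10**log_floor with draws from a log-normal (in dex)
    # distribution peaked at 10**log_floor, mimicking the observational limit.
    sfr = np.asarray(sfr, dtype=float).copy()
    passive = sfr < 10.0 ** log_floor
    sfr[passive] = 10.0 ** rng.normal(log_floor, scatter_dex, passive.sum())
    return sfr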
We perform a matching procedure identical to the one used for the SAMI data. Namely, each satellite is matched with all main-sequence, rotationally-supported centrals within 0.15 dex in stellar mass and 0.1 in ellipticity. The median SFR and V/σ are then used to estimate ∆(SFR) and ∆(V/σ) for each satellite. The result is shown in Fig. 4. The density distribution of EAGLE galaxies is highlighted in grey, with Gaussian kernel density contours in red. SAMI galaxies are overplotted as empty blue circles for comparison. We find good agreement between the distributions of SAMI and EAGLE galaxies, with the values of ∆(V/σ) for EAGLE galaxies becoming large only for galaxies already in the passive population.
The good agreement between SAMI and EAGLE gives us confidence to use EAGLE to investigate the effect of progenitor bias in our analysis. To do so, in Fig. 5 we plot ∆(SFR)_true vs. ∆(V/σ)_true, estimated by comparing the satellites' properties at z ∼0 with those at the last simulation snapshot before infall, for satellites accreted between z ∼0 and 2 (i.e., ∼92% of the local satellite population). The picture that emerges is significantly different from before, with variations in V/σ becoming smaller and galaxies preferentially moving horizontally in the diagram. Interestingly, galaxies with small negative ∆(SFR) (i.e., satellites at the beginning of their quenching phase) show marginally positive ∆(V/σ). This is likely because, despite becoming satellites, galaxies keep acquiring additional angular momentum even after infall.

Figure 5. 'True' variations in stellar V/σ and SFR for satellite galaxies in the EAGLE simulation, determined by comparing the z ∼0 properties to those at the last snapshot during which the galaxy was a central. Density distribution and contours are as in Fig. 4.
DISCUSSION AND CONCLUSION
In this work we have quantified the difference in stellar spin parameter and star formation rate between satellites and main-sequence, rotationally-dominated centrals at z ∼0 (matched in stellar mass and ellipticity). Satellites in the main sequence and transition region show very similar stellar kinematic properties to star-forming centrals, and only satellites already in the red sequence have spin parameters significantly (i.e., at least a factor of two) lower than those typical of thin star-forming disks. As our control sample of central galaxies at z ∼0 includes only galaxies with stellar spin typical of disk-dominated galaxies, the lack of any major decrease in satellites' V/σ in the main sequence rules out significant structural transformation before quenching. If we use the same matching technique presented in Sec. 3 to estimate the variation of r-band Sérsic index (∆(n)) and stellar mass surface density (∆(µ*), where µ* = M*/(2π r_e^2)) for main-sequence satellite galaxies, we find that both ∆(n) and ∆(µ*) change very little (∼0.08 dex on average, with standard deviations of ∼0.21 dex and 0.18 dex, respectively), in line with what we obtained for ∆(V/σ). This is consistent with Bremer et al. (2018), who find no difference in bulge K-band luminosity between late-type blue sequence and green valley galaxies.
In recent years, several works have proposed a scenario in which a rapid increase in the central galaxy density truncates star formation: i.e., galaxies grow their inner core and then quench (a process sometimes referred to as 'compaction'; e.g., Cheung et al. 2012;Fang et al. 2013;Woo et al. 2015;Zolotov et al. 2015;Tacchella et al. 2016;Wang et al. 2018). While originally motivated by studies of central galaxies (e.g., Cheung et al. 2012;Fang et al. 2013), compaction has also been suggested as a viable quenching path for satellite galaxies (e.g., Wang et al. 2018).
Our findings would appear to rule out any forms of 'compaction' that significantly reduces the stellar spin (or increases the average stellar surface density) of main sequence satellites within one effective radius (with respect to star-forming, rotationally-dominated centrals). Given the limited spatial resolution of SAMI observations, we cannot exclude changes in the central kiloparsec of satellite galaxies (i.e., where stellar mass surface densities used to quantify compaction are generally estimated). However, if this is the case, 'compaction' does not seem to be affecting the global kinematic and/or photometric properties of group galaxies before quenching.
In other words, it seems unlikely that, after they have become satellites, galaxies grow prominent dispersion-dominated bulges while still on the main sequence. Of course, star-forming satellites could still harbour small photometric and/or kinematic bulge components, but their structural properties are not different from those of star-forming, rotationally-dominated centrals of similar stellar mass. Our interpretation is in line with Tacchella et al. (2017) and Abramson & Morishita (2018), who show that 'compaction' may not be needed to explain the properties of the local passive population.
While SAMI data alone allow us to determine what happens to z ∼0 star-forming satellites at the start of their quenching phase, cosmological simulations are invaluable to properly reconstruct the evolution of passive satellites (i.e., galaxies with ∆(SFR) < −1 dex). By comparing our findings with predictions from the EAGLE hydrodynamical simulation, we demonstrate that the difference in spin parameter between satellites and centrals must be interpreted as just an upper limit to the true structural transformation experienced by satellites after infall.
Indeed, at least within the framework of EAGLE, the difference in spin between centrals and satellites at z ∼0 (∆(V/σ)) is always larger than the actual loss experienced by satellites since infall (∆(V/σ)_true). This is because most of the observed difference at z ∼0 is due to the star-forming central population acquiring additional angular momentum in the last few billion years, rather than to satellites losing it via environmental effects during the quenching phase. This is summarised in the cartoon presented in Fig. 6, which illustrates why Figs. 4 and 5 are so different.
From theoretical models of structure formation (e.g., White 1984; Mo et al. 1998), hydrodynamical simulations (e.g., Pedrosa & Tissera 2015; Lagos et al. 2017), as well as recent observations (e.g., Simons et al. 2017; Swinbank et al. 2017), we see that star-forming central disk galaxies gradually increase their spin with time (solid green line in the top panel), due to the continuing accretion of gas which, on average, is expected to bring high specific angular momentum (e.g., Catelan & Theuns 1996; El-Badry et al. 2018). The typical increase expected in the stellar spin parameter from z ∼1 to 0 is ∼0.3 dex in our stellar mass range (Lagos et al. 2017), consistent with the observed decrease in gas velocity dispersion (Wisnioski et al. 2015; Simons et al. 2017) and increase in gas specific angular momentum (Swinbank et al. 2017). After infall, the spin of satellite galaxies either remains constant or slightly decreases (solid red line), whereas centrals keep acquiring angular momentum (dashed green line). Thus, the difference observed at z ∼0 between centrals and satellites is always larger than the real change in V/σ experienced by satellite galaxies. The situation is opposite in the case of the SFR. On average, a galaxy's star formation activity is decreasing over time (solid blue line, e.g., Madau et al. 1996). Thus, when centrals become satellites, the effect of environment is simply to accelerate this decrease. As such, the difference in SFR observed at z ∼0 is always lower than the decrease experienced by satellites since infall. In EAGLE we know the properties at infall, so we can relate z ∼0 satellites to their progenitors. In the observations, we are forced to compare satellites to z ∼0 centrals, missing the changes that centrals themselves have experienced since the time of infall of the satellites into their halos.

Figure 6. A cartoon summarising the evolutionary scenario emerging from this work and the potential effect of progenitor bias. The top panel shows the increase of V/σ with decreasing lookback time/redshift for galaxies while they are star-forming centrals (solid green line), and the change in V/σ once they become satellites (red line). The green dashed line shows the expected evolution of V/σ in case the galaxy had remained a star-forming central until z ∼0. The true ∆(V/σ) and the value obtained via our matching technique are shown by the black vertical arrows. The bottom panel shows the case of SFR, with the changes for centrals and satellites highlighted by the blue and pink lines, respectively. In this case, the observed ∆(SFR) at z ∼0 is always smaller than the real value.
Our results demonstrate that the first and most important phase in the transformation of satellites is quenching, i.e., a significant reduction in their star formation activity. Changes in stellar kinematic properties (i.e., structure) -if any -become evident at a later stage and are on average minor, such that satellites remain rotationally-dominated. This is consistent with a scenario in which multiple physical processes -acting on different time-scales -may play a significant role in altering the evolutionary history of galaxies in groups and clusters. Indeed, while many physical processes (e.g., ram-pressure, tidal stripping) are able to start actively removing the gas reservoir of galaxies and initiate the quenching phase soon after infall, it can take a significantly longer time for low-speed gravitational interactions and/or minor mergers to change the kinematic properties of satellites. However, detailed analysis and modeling of objects occupying different regions in the ∆(SF R) vs. ∆(V /σ) is required to properly isolate the physical processes acting on satellite galaxies. Moreover, it is important to remember that our results are valid for galaxies with stellar masses greater than 10 10 M and cannot be blindly extrapolated to lower stellar masses.
Thanks to the way ∆(SFR) and ∆(V/σ) are quantified for both SAMI and EAGLE galaxies, they automatically incorporate the effect of any pre-processing on galaxy transformation: i.e., environmental effects experienced by galaxies while they were satellites in a halo different from the one occupied at z ∼0. Thus, our results also suggest that in current numerical simulations pre-processing has a limited effect on the structural properties of satellite galaxies, contrary to what is commonly assumed (e.g., Zabludoff & Mulchaey 1998; Cortese et al. 2006).
ACKNOWLEDGMENTS
We thank the referee for useful comments and suggestions that improved the clarity of this manuscript.
LC is the recipient of an The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is http://sami-survey.org/ GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/.
Part of this work was performed on the gSTAR national facility at Swinburne University of Technology. gSTAR is funded by Swinburne and the Australian Government's Education Investment Fund.
We acknowledge the Virgo Consortium for making their simulation data available. The EAGLE simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyeres-le-Chatel.
APPENDIX A: THE EFFECT OF BEAM SMEARING
The typical seeing of SAMI observations is a significant fraction of the effective radius of the targeted galaxies. Thus, beam smearing could systematically bias our estimates of the V/σ ratio. In this paper, we adopted stringent quality cuts (re >2 and re >2.5×HWHM) to define our final sample, and minimise the effect of beam smearing. However, it is unquestionable that even for the final sample, our estimates of V/σ have been systematically lowered by the atmospheric conditions during the observations. In order to determine if this could affect our main conclusions, here we correct the V/σ estimates used in this paper for the effect of beam smearing following the empirical recipe recently presented by Graham et al. (2018) (see also Harborne et al. 2019 for an independent test of these corrections). They take advantage of kinematic galaxy models based on the Jeans Anisotropic Modeling method developed by Cappellari (2008) to derive the intrinsic λ_r parameter (λ_r^intr, Emsellem et al. 2007) of a galaxy from the observed one (λ_r^obs). This correction is a function of the galaxy's Sérsic index (n), effective radius (r_e) and the seeing of the observations (σ_PSF = FWHM_PSF/2.355):

λ_r^obs = λ_r^intr [1 + ((σ_PSF/r_e)/0.47)^1.76]^(−0.84) × [1 + (n − 2) 0.26 (σ_PSF/r_e)]^(−1)   (A1)

Since in this paper we focus on V/σ, we need to rewrite Eq. A1 as a function of V/σ. Following Emsellem et al. (2007), we assume

λ_r = κ(V/σ) / sqrt(1 + κ^2 (V/σ)^2)   (A2)

For SAMI galaxies, van de Sande et al. (2017a) find κ=0.97 when V/σ and λ_r are measured within one effective radius. Thus, we can rewrite Eq. A2 as

V/σ = λ_r / (0.97 sqrt(1 − (λ_r)^2))   (A3)
If we assume that Eq. A3 is valid for both observed and intrinsic values, the effect of beam smearing on V/σ is

(V/σ)^intr = (V/σ)^obs × [λ_r^intr sqrt(1 − (λ_r^obs)^2)] / [λ_r^obs sqrt(1 − (λ_r^intr)^2)]   (A4)
It is important to note that the last assumption is likely incorrect, as the relation between λ_r and V/σ depends on data quality as well as sample selection. In particular, van de Sande et al. (2017a) show that κ increases slightly with increasing seeing (∆κ=−0.02 with a ∆FWHM=0.5-3.0 arcsec seeing) and between different surveys, suggesting that κ for the intrinsic value could be higher than for the observed one. Thus, Eq. A4 must be considered as an upper limit to the real effect of beam smearing. This is why in the main paper we prefer to use observed values instead of the corrected ones. Fig. A1 shows the median and 20%-80% percentile ranges of ∆(V/σ) in bins of ∆(SFR) for the uncorrected (green line; used in the main paper) and corrected (using Eq. A3; red line) final sample, respectively. We find that beam smearing has a noticeable effect for galaxies with large negative ∆(SFR), and is almost negligible close to the main sequence. This mainly reflects the difference in apparent size and Sérsic index between passive satellites and star-forming centrals, which translates into a larger correction for passive systems (see Eq. A1). This shows that, if any, the effect of beam smearing would be to further reduce the change in ∆(V/σ) experienced by satellites during their quenching phase, thus reinforcing the main conclusions of this paper.
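For readers who want to reproduce the correction, the following Python sketch (added here for illustration; it is not part of the original appendix, and the example galaxy values are invented) inverts Eq. A1 to recover λ_r^intr from an observed λ_r, and then converts both values to V/σ with Eq. A3, which is equivalent to applying Eq. A4.

```python
import numpy as np

KAPPA = 0.97  # van de Sande et al. (2017a) calibration for SAMI

def lambda_obs_over_intr(sigma_psf, r_e, n):
    """Multiplicative beam-smearing factor of Eq. A1: lambda_obs = factor * lambda_intr."""
    x = sigma_psf / r_e
    return (1.0 + (x / 0.47) ** 1.76) ** (-0.84) * (1.0 + (n - 2.0) * 0.26 * x) ** (-1.0)

def v_over_sigma(lam, kappa=KAPPA):
    """Eq. A3: convert lambda_r into V/sigma."""
    return lam / (kappa * np.sqrt(1.0 - lam ** 2))

# Hypothetical example galaxy (values are illustrative only).
lam_obs, sigma_psf, r_e, n = 0.45, 1.0, 2.5, 2.5   # lambda_r, seeing [arcsec], R_e [arcsec], Sersic n
lam_intr = lam_obs / lambda_obs_over_intr(sigma_psf, r_e, n)   # invert Eq. A1
# Eq. A4 amounts to converting both lambda values with Eq. A3:
vs_obs, vs_intr = v_over_sigma(lam_obs), v_over_sigma(lam_intr)
print(f"lambda_r: {lam_obs:.2f} -> {lam_intr:.2f};  V/sigma: {vs_obs:.2f} -> {vs_intr:.2f}")
```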
Figure 1. The stellar mass (M*, left), ellipticity (ε, middle) and specific star formation rate (SFR/M* [yr^-1], right) distribution for our parent (empty histogram) and final (filled histogram) samples. The top row includes all galaxies, while the middle and bottom rows focus on central and satellite galaxies only. It is clear that our final sample covers the same parameter space as our initial parent sample.

Figure 2. The M*-SFR (top) and V/σ-ε (bottom) planes for all centrals in our sample (left), main-sequence, disky centrals (middle) and satellite galaxies (right). Points are colour-coded by V/σ and SFR in the top and bottom panels, respectively.

Figure 3. Variations in stellar V/σ and SFR for satellite galaxies with respect to our control sample of main-sequence, high V/σ centrals. Points are colour-coded by stellar mass. Dashed lines and cyan bands show the average and standard deviation for the control sample. The thick black and thin green lines show the running median and 20%-80% percentile ranges for ∆(V/σ) in bins of ∆(SFR). See § 3.1 for details on the matching procedure.

Figure A1. Variations in stellar V/σ and SFR for satellite galaxies with respect to our control sample of main-sequence, high V/σ centrals. Dashed lines and cyan bands show the average and standard deviation for the control sample. The green line and shaded region show the running median and 20%-80% percentile ranges for ∆(V/σ) in bins of ∆(SFR) for the final sample used in this paper. The red line and shaded regions show how our results would change if we were to apply a beam smearing correction based on the work by Graham et al. (2018).
. L E Abramson, T Morishita, 10.3847/1538-4357/aab61bApJ. 85840Abramson L. E., Morishita T., 2018, ApJ, 858, 40
. S P Bamford, 10.1111/j.1365-2966.2008.14252.xMNRAS. 3931324Bamford S. P., et al., 2009, MNRAS, 393, 1324
. J Bland-Hawthorn, 10.1364/OE.19.002649Optics Express. 192649Bland-Hawthorn J., et al., 2011, Optics Express, 19, 2649
. M R Blanton, A A Berlind, 10.1086/512478ApJ. 664791Blanton M. R., Berlind A. A., 2007, ApJ, 664, 791
. M R Blanton, D Eisenstein, D W Hogg, D J Schlegel, J Brinkmann, 10.1086/422897ApJ. 629143Blanton M. R., Eisenstein D., Hogg D. W., Schlegel D. J., Brinkmann J., 2005, ApJ, 629, 143
. A Boselli, G Gavazzi, 10.1086/500691PASP. 118517Boselli A., Gavazzi G., 2006, PASP, 118, 517
. A Boselli, S Boissier, L Cortese, G Gavazzi, 10.1086/525513ApJ. 674742Boselli A., Boissier S., Cortese L., Gavazzi G., 2008, ApJ, 674, 742
. M N Bremer, 10.1093/mnras/sty124MNRAS. 47612Bremer M. N., et al., 2018, MNRAS, 476, 12
. T Brown, 10.1093/mnras/stw2991MNRAS. 4661275Brown T., et al., 2017, MNRAS, 466, 1275
. J J Bryant, J Bland-Hawthorn, L M R Fogarty, J S Lawrence, S M Croom, 10.1093/mnras/stt2254MNRAS. 438869Bryant J. J., Bland-Hawthorn J., Fogarty L. M. R., Lawrence J. S., Croom S. M., 2014, MNRAS, 438, 869
. J J Bryant, 10.1093/mnras/stu2635MNRAS. 4472857Bryant J. J., et al., 2015, MNRAS, 447, 2857
. K Bundy, 10.1088/0004-637X/719/2/1969ApJ. 719Bundy K., et al., 2010, ApJ, 719, 1969
. M Cappellari, 10.1111/j.1365-2966.2008.13754.xMNRAS. 39071Cappellari M., 2008, MNRAS, 390, 71
. M Cappellari, 10.1088/2041-8205/778/1/L2ApJ. 7782Cappellari M., 2013, ApJ, 778, L2
. M Cappellari, 10.1146/annurev-astro-082214-122432Annual Review of Astronomy and Astrophysics. 54597Cappellari M., 2016, Annual Review of Astronomy and Astro- physics, 54, 597
. M Cappellari, E Emsellem, 10.1086/381875PASP. 116138Cappellari M., Emsellem E., 2004, PASP, 116, 138
. M Cappellari, 10.1111/j.1365-2966.2007.11963.xMNRAS. 379418Cappellari M., et al., 2007, MNRAS, 379, 418
. P Catelan, T Theuns, 10.1093/mnras/282.2.436MNRAS. 282436Catelan P., Theuns T., 1996, MNRAS, 282, 436
. E Cheung, 10.1088/0004-637X/760/2/131ApJ. 760131Cheung E., et al., 2012, ApJ, 760, 131
. D Christlein, A I Zabludoff, 10.1086/424909ApJ. 616192Christlein D., Zabludoff A. I., 2004, ApJ, 616, 192
. C A Correa, J Schaye, B Clauwens, R G Bower, R A Crain, M Schaller, T Theuns, A C R Thob, 10.1093/mnrasl/slx133MNRAS. 47245Correa C. A., Schaye J., Clauwens B., Bower R. G., Crain R. A., Schaller M., Theuns T., Thob A. C. R., 2017, MNRAS, 472, L45
. Cortese L Hughes, T M , 10.1111/j.1365-2966.2009.15548.xMNRAS. 4001225Cortese L., Hughes T. M., 2009, MNRAS, 400, 1225
. Cortese L Gavazzi, G Boselli, A Franzetti, P Kennicutt, R C O'neil, K Sakai, S , 10.1051/0004-6361:20064873A&A. 453847Cortese L., Gavazzi G., Boselli A., Franzetti P., Kennicutt R. C., O'Neil K., Sakai S., 2006, A&A, 453, 847
. Cortese L Catinella, B Boissier, S Boselli, A Heinis, S , 10.1111/j.1365-2966.2011.18822.xMNRAS. 4151797Cortese L., Catinella B., Boissier S., Boselli A., Heinis S., 2011, MNRAS, 415, 1797
. Cortese L , 10.1093/mnras/stw801MNRAS. 4593574Cortese L., et al., 2016a, MNRAS, 459, 3574
. Cortese L , 10.1093/mnras/stw1891MNRAS. 463170Cortese L., et al., 2016b, MNRAS, 463, 170
. R A Crain, 10.1093/mnras/stv725MNRAS. 4501937Crain R. A., et al., 2015, MNRAS, 450, 1937
. S M Croom, 10.1111/j.1365-2966.2011.20365.xMNRAS. 421872Croom S. M., et al., 2012, MNRAS, 421, 872
. L J M Davies, 10.1093/mnras/stw1342MNRAS. 461458Davies L. J. M., et al., 2016, MNRAS, 461, 458
. De Lucia, G Weinmann, S Poggianti, B M Aragón-Salamanca, A Zaritsky, D , 10.1111/j.1365-2966.2012.20983.xMNRAS. 4231277De Lucia G., Weinmann S., Poggianti B. M., Aragón-Salamanca A., Zaritsky D., 2012, MNRAS, 423, 1277
. A Dressler, ApJ. 236351Dressler A., 1980, ApJ, 236, 351
. S P Driver, 10.1111/j.1365-2966.2010.18188.xMNRAS. 413971Driver S. P., et al., 2011, MNRAS, 413, 971
. S P Driver, 10.1093/mnras/stv2505MNRAS. 4553911Driver S. P., et al., 2016, MNRAS, 455, 3911
. K El-Badry, 10.1093/mnras/stx2482MNRAS. 4731930El-Badry K., et al., 2018, MNRAS, 473, 1930
. S L Ellison, D Fertig, J L Rosenberg, P Nair, L Simard, P Torrey, D R Patton, 10.1093/mnras/stu2744MNRAS. 448221Ellison S. L., Fertig D., Rosenberg J. L., Nair P., Simard L., Tor- rey P., Patton D. R., 2015, MNRAS, 448, 221
. S L Ellison, B Catinella, L Cortese, 10.1093/mnras/sty1247MNRAS. 4783447Ellison S. L., Catinella B., Cortese L., 2018, MNRAS, 478, 3447
. E Emsellem, 10.1111/j.1365-2966.2007.11752.xMNRAS. 379401Emsellem E., et al., 2007, MNRAS, 379, 401
. E Emsellem, 10.1111/j.1365-2966.2011.18496.xMNRAS. 414888Emsellem E., et al., 2011, MNRAS, 414, 888
. J J Fang, S M Faber, D C Koo, A Dekel, 10.1088/0004-637X/776/1/63ApJ. 77663Fang J. J., Faber S. M., Koo D. C., Dekel A., 2013, ApJ, 776, 63
. M Fossati, 10.1093/mnras/stu2255MNRAS. 4462582Fossati M., et al., 2015, MNRAS, 446, 2582
. C Foster, 10.1093/mnras/stx1869MNRAS. 472966Foster C., et al., 2017, MNRAS, 472, 966
. M Furlong, 10.1093/mnras/stv852MNRAS. 4504486Furlong M., et al., 2015, MNRAS, 450, 4486
. G Gavazzi, A Boselli, L Cortese, I Arosio, A Gallazzi, P Pedotti, L Carrasco, 10.1051/0004-6361:20053843A&A. 446839Gavazzi G., Boselli A., Cortese L., Arosio I., Gallazzi A., Pedotti P., Carrasco L., 2006, A&A, 446, 839
. M R George, C.-P Ma, K Bundy, A Leauthaud, J Tinker, R H Wechsler, A Finoguenov, B Vulcani, 10.1088/0004-637X/770/2/113ApJ. 770113George M. R., Ma C.-P., Bundy K., Leauthaud A., Tinker J., Wechsler R. H., Finoguenov A., Vulcani B., 2013, ApJ, 770, 113
. R Giovanelli, M P Haynes, 10.1086/163170ApJ. 292404Giovanelli R., Haynes M. P., 1985, ApJ, 292, 404
. R Giovanelli, M P Haynes, J J Salzer, G Wegner, L N Da Costa, W Freudling, 10.1086/117014AJ. 1072036Giovanelli R., Haynes M. P., Salzer J. J., Wegner G., da Costa L. N., Freudling W., 1994, AJ, 107, 2036
. P L Gómez, 10.1086/345593ApJ. 584210Gómez P. L., et al., 2003, ApJ, 584, 210
. M T Graham, 10.1093/mnras/sty504MNRAS. 4774711Graham M. T., et al., 2018, MNRAS, 477, 4711
. S Han, R Smith, H Choi, L Cortese, B Catinella, E Contini, S K Yi, 10.3847/1538-4357/aadfe2ApJ. 86678Han S., Smith R., Choi H., Cortese L., Catinella B., Contini E., Yi S. K., 2018, ApJ, 866, 78
. K E Harborne, C Power, A S G Robotham, L Cortese, D S Taranu, 10.1093/mnras/sty3120MNRAS. 483249Harborne K. E., Power C., Robotham A. S. G., Cortese L., Taranu D. S., 2019, MNRAS, 483, 249
. M P Haynes, R Giovanelli, 10.1086/113573AJ. 89758Haynes M. P., Giovanelli R., 1984, AJ, 89, 758
. J A Hester, 10.1088/0004-637X/720/1/191ApJ. 720191Hester J. A., 2010, ApJ, 720, 191
. E Hubble, M L Humason, ApJ. 7443Hubble E., Humason M. L., 1931, ApJ, 74, 43
. L Kawinwanichakij, 10.3847/1538-4357/aa8b75ApJ. 847134Kawinwanichakij L., et al., 2017, ApJ, 847, 134
. L S Kelvin, 10.1111/j.1365-2966.2012.20355.xMNRAS. 4211007Kelvin L. S., et al., 2012, MNRAS, 421, 1007
. D Krajnović, 10.1093/mnras/sts315MNRAS. 4321768Krajnović D., et al., 2013, MNRAS, 432, 1768
. C D P Lagos, T Theuns, A R H Stevens, L Cortese, N D Padilla, T A Davis, S Contreras, D Croton, 10.1093/mnras/stw2610MNRAS. 4643850Lagos C. d. P., Theuns T., Stevens A. R. H., Cortese L., Padilla N. D., Davis T. A., Contreras S., Croton D., 2017, MNRAS, 464, 3850
. C D P Lagos, J Schaye, Y Bahé, J Van De Sande, S T Kay, D Barnes, T A Davis, 10.1093/mnras/sty489Dalla Vecchia C. 4764327MNRASLagos C. d. P., Schaye J., Bahé Y., Van de Sande J., Kay S. T., Barnes D., Davis T. A., Dalla Vecchia C., 2018, MNRAS, 476, 4327
. R B Larson, B M Tinsley, C N Caldwell, ApJ. 237692Larson R. B., Tinsley B. M., Caldwell C. N., 1980, ApJ, 237, 692
. I Lewis, 10.1046/j.1365-8711.2002.05558.xMNRAS. 334673Lewis I., et al., 2002, MNRAS, 334, 673
. T Lisker, E K Grebel, B Binggeli, 10.1086/505045AJ. 132497Lisker T., Grebel E. K., Binggeli B., 2006, AJ, 132, 497
. P Madau, H C Ferguson, M E Dickinson, M Giavalisco, C C Steidel, A Fruchter, MNRAS. 2831388Madau P., Ferguson H. C., Dickinson M. E., Giavalisco M., Steidel C. C., Fruchter A., 1996, MNRAS, 283, 1388
. S Mcalpine, 10.1016/j.ascom.2016.02.004Astronomy and Computing. 1572McAlpine S., et al., 2016, Astronomy and Computing, 15, 72
. S Mei, 10.1086/509598ApJ. 655144Mei S., et al., 2007, ApJ, 655, 144
. H J Mo, S Mao, S D M White, 10.1046/j.1365-8711.1998.01227.xMNRAS. 295319Mo H. J., Mao S., White S. D. M., 1998, MNRAS, 295, 319
. C Moss, M Whittle, MNRAS. 317667Moss C., Whittle M., 2000, MNRAS, 317, 667
. S I Muldrew, 10.1111/j.1365-2966.2011.19922.xMNRAS. 4192670Muldrew S. I., et al., 2012, MNRAS, 419, 2670
. K A Oman, M J Hudson, 10.1093/mnras/stw2195MNRAS. 4633083Oman K. A., Hudson M. J., 2016, MNRAS, 463, 3083
. C M B Omand, M L Balogh, B M Poggianti, 10.1093/mnras/stu331MNRAS. 440843Omand C. M. B., Balogh M. L., Poggianti B. M., 2014, MNRAS, 440, 843
. S E Pedrosa, P B Tissera, 10.1051/0004-6361/201526440A&A. 58443Pedrosa S. E., Tissera P. B., 2015, A&A, 584, A43
. Y.-J Peng, S J Lilly, A Renzini, M Carollo, 10.1088/0004-637X/757/1/4ApJ. 7574Peng Y.-j., Lilly S. J., Renzini A., Carollo M., 2012, ApJ, 757, 4
. B M Poggianti, I Smail, A Dressler, W J Couch, A J Barger, H Butcher, R S Ellis, A J Oemler, 10.1086/307322ApJ. 518576Poggianti B. M., Smail I., Dressler A., Couch W. J., Barger A. J., Butcher H., Ellis R. S., Oemler A. J., 1999, ApJ, 518, 576
. F Rizzo, F Fraternali, G Iorio, 10.1093/mnras/sty347MNRAS. 4762137Rizzo F., Fraternali F., Iorio G., 2018, MNRAS, 476, 2137
. A S G Robotham, 10.1111/j.1365-2966.2011.19217.xMNRAS. 4162640Robotham A. S. G., et al., 2011, MNRAS, 416, 2640
. P Sánchez-Blázquez, 10.1111/j.1365-2966.2006.10699.xMNRAS. 371703Sánchez-Blázquez P., et al., 2006, MNRAS, 371, 703
. J Schaye, Dalla Vecchia, C , 10.1111/j.1365-2966.2007.12639.xMNRAS. 3831210Schaye J., Dalla Vecchia C., 2008, MNRAS, 383, 1210
. J Schaye, 10.1093/mnras/stu2058MNRAS. 446521Schaye J., et al., 2015, MNRAS, 446, 521
. N Scott, 10.1093/mnras/sty2355MNRAS. 4812299Scott N., et al., 2018, MNRAS, 481, 2299
. R C Simons, 10.3847/1538-4357/aa740cApJ. 84346Simons R. C., et al., 2017, ApJ, 843, 46
. J M Solanes, A Manrique, C García-Gómez, G González-Casado, R Giovanelli, M P Haynes, 10.1086/318672ApJ. 54897Solanes J. M., Manrique A., García-Gómez C., González-Casado G., Giovanelli R., Haynes M. P., 2001, ApJ, 548, 97
. A M Swinbank, 10.1093/mnras/stx201MNRAS. 4673140Swinbank A. M., et al., 2017, MNRAS, 467, 3140
. S Tacchella, A Dekel, C M Carollo, D Ceverino, C Degraf, S Lapiner, N Mandelker, 10.1093/mnras/stw131Primack Joel R. 4572790MNRASTacchella S., Dekel A., Carollo C. M., Ceverino D., DeGraf C., Lapiner S., Mandelker N., Primack Joel R., 2016, MNRAS, 457, 2790
. S Tacchella, C M Carollo, S M Faber, A Cibinel, A Dekel, D C Koo, A Renzini, J Woo, 10.3847/2041-8213/aa7cfbApJ. 8441Tacchella S., Carollo C. M., Faber S. M., Cibinel A., Dekel A., Koo D. C., Renzini A., Woo J., 2017, ApJ, 844, L1
. E N Taylor, 10.1111/j.1365-2966.2011.19536.xMNRAS. 4181587Taylor E. N., et al., 2011, MNRAS, 418, 1587
. E Tempel, J Einasto, M Einasto, E Saar, E Tago, 10.1051/0004-6361:200810274A&A. 49537Tempel E., Einasto J., Einasto M., Saar E., Tago E., 2009, A&A, 495, 37
. E Toloba, 10.1088/0004-637X/707/1/L17ApJ. 70717Toloba E., et al., 2009, ApJ, 707, L17
. C T Unterborn, B S Ryden, 10.1086/591898ApJ. 687976Unterborn C. T., Ryden B. S., 2008, ApJ, 687, 976
. E Wang, X Kong, Z Pan, 10.3847/1538-4357/aadb9eApJ. 86549Wang E., Kong X., Pan Z., 2018, ApJ, 865, 49
. S M Weinmann, G Kauffmann, F C Van Den Bosch, A Pasquali, D H Mcintosh, H Mo, X Yang, Y Guo, 10.1111/j.1365-2966.2009.14412.xMNRAS. 3941213Weinmann S. M., Kauffmann G., van den Bosch F. C., Pasquali A., McIntosh D. H., Mo H., Yang X., Guo Y., 2009, MNRAS, 394, 1213
. S M Weinmann, G Kauffmann, A Von Der Linden, G De Lucia, 10.1111/j.1365-2966.2010.16855.xMNRAS. 4062249Weinmann S. M., Kauffmann G., von der Linden A., De Lucia G., 2010, MNRAS, 406, 2249
. A R Wetzel, J L Tinker, C Conroy, 10.1111/j.1365-2966.2012.21188.xMNRAS. 424232Wetzel A. R., Tinker J. L., Conroy C., 2012, MNRAS, 424, 232
. A R Wetzel, J L Tinker, C Conroy, F C Van Den Bosch, 10.1093/mnras/stt469MNRAS. 432336Wetzel A. R., Tinker J. L., Conroy C., van den Bosch F. C., 2013, MNRAS, 432, 336
. S D M White, 10.1086/162573ApJ. 28638White S. D. M., 1984, ApJ, 286, 38
. D J Wilman, S Zibetti, T Budavári, 10.1111/j.1365-2966.2010.16845.xMNRAS. 4061701Wilman D. J., Zibetti S., Budavári T., 2010, MNRAS, 406, 1701
. E Wisnioski, 10.1088/0004-637X/799/2/209ApJ. 799209Wisnioski E., et al., 2015, ApJ, 799, 209
. J Woo, A Dekel, S M Faber, D C Koo, 10.1093/mnras/stu2755MNRAS. 448237Woo J., Dekel A., Faber S. M., Koo D. C., 2015, MNRAS, 448, 237
. J Woo, C M Carollo, S M Faber, A Dekel, S Tacchella, 10.1093/mnras/stw2403MNRAS. 4641077Woo J., Carollo C. M., Faber S. M., Dekel A., Tacchella S., 2017, MNRAS, 464, 1077
. A H Wright, 10.1093/mnras/stw832MNRAS. 460765Wright A. H., et al., 2016, MNRAS, 460, 765
. X Yang, H J Mo, F C Van Den Bosch, 10.1088/0004-637X/695/2/900ApJ. 695900Yang X., Mo H. J., van den Bosch F. C., 2009, ApJ, 695, 900
. A I Zabludoff, J S Mulchaey, 10.1086/305355ApJ. 49639Zabludoff A. I., Mulchaey J. S., 1998, ApJ, 496, 39
. A Zolotov, 10.1093/mnras/stv740MNRAS. 4502327Zolotov A., et al., 2015, MNRAS, 450, 2327
. E Da Cunha, S Charlot, D Elbaz, P G Van Dokkum, M Franx, J Van De Sande, 10.1038/s41550-018-0436-xNature Astronomy. 388883ApJda Cunha E., Charlot S., Elbaz D., 2008, MNRAS, 388, 1595 van Dokkum P. G., Franx M., 2001, ApJ, 553, 90 van de Sande J., et al., 2017a, MNRAS, 472, 1272 van de Sande J., et al., 2017b, ApJ, 835, 104 van de Sande J., et al., 2018, Nature Astronomy, 2, 483 van den Bergh S., 1976, ApJ, 206, 883
. F C Van Den Bosch, D Aquino, X Yang, H J Mo, A Pasquali, D H Mcintosh, S M Weinmann, X Kang, A Van Der Wel, 10.1088/0004-637X/788/1/28MNRAS. 38728ApJvan den Bosch F. C., Aquino D., Yang X., Mo H. J., Pasquali A., McIntosh D. H., Weinmann S. M., Kang X., 2008, MNRAS, 387, 79 van der Wel A., et al., 2014, ApJ, 788, 28
| []
|
[
"Public Wi-Fi Monetization via Advertising",
"Public Wi-Fi Monetization via Advertising"
]
| [
"ManHaoran Yu ",
"Hon Cheung ",
"Senior Member, IEEELin Gao ",
"Fellow, IEEEJianwei Huang "
]
| []
| []
| The proliferation of public Wi-Fi hotspots has brought new business potentials for Wi-Fi networks, which carry a significant amount of global mobile data traffic today. In this paper, we propose a novel Wi-Fi monetization model for venue owners (VOs) deploying public Wi-Fi hotspots, where the VOs can generate revenue by providing two different Wi-Fi access schemes for mobile users (MUs): (i) the premium access, in which MUs directly pay VOs for their Wi-Fi usage, and (ii) the advertising sponsored access, in which MUs watch advertisements in exchange of the free usage of Wi-Fi. VOs sell their ad spaces to advertisers (ADs) via an ad platform, and share the ADs' payments with the ad platform. We formulate the economic interactions among the ad platform, VOs, MUs, and ADs as a three-stage Stackelberg game. In Stage I, the ad platform announces its advertising revenue sharing policy. In Stage II, VOs determine the Wi-Fi prices (for MUs) and advertising prices (for ADs). In Stage III, MUs make access choices and ADs purchase advertising spaces. We analyze the sub-game perfect equilibrium (SPE) of the proposed game systematically, and our analysis shows the following useful observations. First, the ad platform's advertising revenue sharing policy in Stage I will affect only the VOs' Wi-Fi prices but not the VOs' advertising prices in Stage II. Second, both the VOs' Wi-Fi prices and advertising prices are non-decreasing in the advertising concentration level and non-increasing in the MU visiting frequency. Numerical results further show that the VOs are capable of generating large revenues through mainly providing one type of Wi-Fi access (the premium access or advertising sponsored access), depending on their advertising concentration levels and MU visiting frequencies. | 10.1109/tnet.2017.2675944 | [
"https://arxiv.org/pdf/1609.01951v3.pdf"
]
| 4,433,547 | 1609.01951 | 1388144662109a9bfd4bc71ec4bd241874ed1ed2 |
Public Wi-Fi Monetization via Advertising
Haoran Yu
Man Hon Cheung
Lin Gao, Senior Member, IEEE
Jianwei Huang, Fellow, IEEE
Public Wi-Fi Monetization via Advertising
Index Terms-Wi-Fi pricing, Wi-Fi advertising, Stackelberg game, revenue sharing
The proliferation of public Wi-Fi hotspots has brought new business potentials for Wi-Fi networks, which carry a significant amount of global mobile data traffic today. In this paper, we propose a novel Wi-Fi monetization model for venue owners (VOs) deploying public Wi-Fi hotspots, where the VOs can generate revenue by providing two different Wi-Fi access schemes for mobile users (MUs): (i) the premium access, in which MUs directly pay VOs for their Wi-Fi usage, and (ii) the advertising sponsored access, in which MUs watch advertisements in exchange of the free usage of Wi-Fi. VOs sell their ad spaces to advertisers (ADs) via an ad platform, and share the ADs' payments with the ad platform. We formulate the economic interactions among the ad platform, VOs, MUs, and ADs as a three-stage Stackelberg game. In Stage I, the ad platform announces its advertising revenue sharing policy. In Stage II, VOs determine the Wi-Fi prices (for MUs) and advertising prices (for ADs). In Stage III, MUs make access choices and ADs purchase advertising spaces. We analyze the sub-game perfect equilibrium (SPE) of the proposed game systematically, and our analysis shows the following useful observations. First, the ad platform's advertising revenue sharing policy in Stage I will affect only the VOs' Wi-Fi prices but not the VOs' advertising prices in Stage II. Second, both the VOs' Wi-Fi prices and advertising prices are non-decreasing in the advertising concentration level and non-increasing in the MU visiting frequency. Numerical results further show that the VOs are capable of generating large revenues through mainly providing one type of Wi-Fi access (the premium access or advertising sponsored access), depending on their advertising concentration levels and MU visiting frequencies.
airports. 1 The venue owners (VOs) build public Wi-Fi for the access of mobile users (MUs), in order to enhance MUs' experiences and meanwhile provide location-based services (e.g., shopping guides, navigation, billing) to benefit the VOs' own business [4].
To compensate for the Wi-Fi deployment and operational costs, VOs have been actively considering monetizing their hotspots. One conventional business model is that VOs directly charge MUs for their Wi-Fi usage. However, as most MUs prefer free Wi-Fi access, it is suggested that VOs should come up with new business models to create extra revenue streams [3]. Wi-Fi advertising, where VOs obtain revenue from advertisers (ADs) by broadcasting ADs' advertisements on their hotspots, has emerged as a promising monetization approach. It is especially attractive to ADs, as the accurate localization of Wi-Fi allows ADs to make location-aware advertising. Furthermore, with MUs' basic information collected by the hotspots, 2 ADs can efficiently find their targeted customers and deliver the personalized contents to them.
Nowadays, several companies, including SOCIFI (collaborated with Cisco) [5] and Boingo [6], are providing the following types of technical supports for VOs and ADs on Wi-Fi advertising. First, they offer the devices and softwares which enable VOs to display selected advertisements on the Wi-Fi login pages and collect the statistics information (e.g., number of visitors and click-through rates). Second, they manage the ad platforms, where VOs and ADs trade the ad spaces. Once ADs purchase the ad spaces, VOs and ad platforms share ADs' payments based on the sharing policy designed by ad platforms. Although Wi-Fi advertising has been emerging in practice, its influence on entities like VOs and MUs, as well as the detailed pricing and revenue sharing policies, has not been carefully studied in the existing literature. This motivates our study in this work.
B. Contributions
We consider a general Wi-Fi monetization model, where VOs monetize their hotspots by providing two types of Wi-Fi access: premium access and advertising sponsored access. With the premium access, MUs pay VOs according to certain pricing schemes. With the advertising sponsored access, MUs are required to watch the advertisements, after which MUs use Wi-Fi for free during a certain period. 3 Depending on the VOs' pricing schemes, MUs with different valuations on Wi-Fi access will choose different types of access. When MUs choose the advertising sponsored access, VOs sell the corresponding ad spaces to ADs through participating in the ad platform. Based on the ad platform's sharing policy δ, the ad platform and VOs obtain δ and 1 − δ fractions of the ADs' payments, respectively. Fig. 1 illustrates the Wi-Fi monetization ecosystem.
1 Specifically, "Retails", and "Cafes & Restaurants" are the venues with the largest number of hotspots (4.5 and 3.3 million globally in 2015, respectively), followed by "Hotels", "Municipalities", and "Airports" [3]. 2 For instance, when MUs login the public hotspots with their social network accounts, SOCIFI collects customers' information, such as age and gender [5]. 3 As an example, SOCIFI technically supports the premium access as well as the advertising sponsored access for its subscribed VOs.
In this work, we will study such a Wi-Fi monetization system in two parts.
1) Modeling and Equilibrium Characterization: In the first part of our work, we model the economic interactions among different decision makers as a three-stage Stackelberg game, and study the game equilibrium systematically. Specifically, in Stage I, the ad platform designs an advertising revenue sharing policy for each VO, which indicates the fraction of advertising revenue that a VO needs to share with the ad platform. In Stage II, each VO decides and announces its Wi-Fi price to MUs for the premium access, and its advertising price to ADs. In Stage III, MUs choose the access types (premium or advertising sponsored access), and ADs decide the number of ad spaces to purchase from the VO.
We analyze the sub-game perfect equilibrium (SPE) of the proposed Stackelberg game systematically. Our analysis shows that: (a) the VO's advertising price (to ADs) in Stage II is independent of the ad platform's advertising revenue sharing policy in Stage I, as a VO always charges the advertising price to maximize the total advertising revenue; (b) the VO's Wi-Fi price (to MUs) in Stage II is set based on the ad platform's sharing policy in Stage I, since a VO will increase the Wi-Fi price to push more MUs to the advertising sponsored access if the VO can obtain more advertising revenue.
2) Sensitivity Analysis: In the second part of our work, we define an equilibrium indicator, the value of which determines the equilibrium outcomes, such as the ad platform's sharing policy and the VO's Wi-Fi price. Intuitively, the equilibrium indicator describes the VO's relative benefit in providing the premium access over the advertising sponsored access. We show that when the equilibrium indicator is small, the VO charges the highest Wi-Fi price to push all MUs to the advertising sponsored access. On the other hand, when the equilibrium is large, the ad platform sets the highest advertising revenue sharing ratio, and the VO mainly generates its revenue from the premium access.
Furthermore, we investigate the influences of (a) the advertising concentration level, which measures the degree of asymmetry in ADs' popularity, and (b) the visiting frequency, which reflects the average time that MUs visit the venue. Our analysis shows that these two parameters have the opposite impacts on a VO's pricing strategies: (a) both the VO's Wi-Fi price and advertising price are non-decreasing when the popularity among ADs becomes more asymmetric, and (b) both prices are non-increasing when MUs visit the VO more often.
The key contributions of this work are as follows:
• Novel Wi-Fi Monetization Model: To the best of our knowledge, this is the first work studying the advertising sponsored public Wi-Fi hotspots. We consider a general Wi-Fi monetization model with both the premium access and the advertising sponsored access, which enable a VO to segment the market based on MUs' valuations, and maximize the VO's revenue.
• Wi-Fi Monetization Ecosystem Analysis: We study a Wi-Fi monetization ecosystem consisting of the ad platform, VOs, MUs, and ADs, and analyze the equilibrium via a three-stage Stackelberg game. We show that a VO's advertising price is independent of the ad platform's sharing policy, and a single term called equilibrium indicator determines the VO's Wi-Fi price and the ad platform's sharing policy.
• Analysis of Parameters' Impacts: We study the impacts of the advertising concentration level and the visiting frequency, and show that they have the opposite impacts on the equilibrium outcomes. For example, a VO's Wi-Fi price and advertising price are non-decreasing in the advertising concentration level and non-increasing in the visiting frequency.
• Performance Evaluations: Numerical results show that the VOs are able to generate large total revenues by mainly offering one type of Wi-Fi access (the premium access or advertising sponsored access), depending on their advertising concentration levels and visiting frequencies.
C. Related Work
Several recent works have studied the business models related to Wi-Fi networks. Duan et al. in [7] and Musacchio et al. in [8] studied the pricing schemes of Wi-Fi owners. Yu et al. in [9] analyzed the optimal strategies for network operators and VOs to deploy public Wi-Fi networks cooperatively. Gao et al. in [10] and Iosifidis et al. in [11] investigated the Wi-Fi capacity trading problem, where cellular network operators lease third-party Wi-Fi to offload their traffic. Some other recent works [12]- [16] proposed and analyzed a novel crowdsourced Wi-Fi network, where Wi-Fi owners collaborate and share their Wi-Fi access points with each other. Different from these works, we study the monetization of public Wi-Fi through the Wi-Fi advertising, and focus on the economic interactions among different entities in the entire Wi-Fi ecosystem.
A closely related work on Wi-Fi advertising is [17], where Bergemann et al. considered an advertising market with ADs having different market shares. The differences between [17] and our work are as follows. First, in [17], an MU is only interested in one AD's product, while in our model, an MU can be interested in multiple ADs' products. Second, in [17], the authors analyzed the market with an infinite number of ADs, while in our work, we first analyze the problem with a finite number of ADs, and then consider the limiting asymptotic case with an infinite number of ADs. Moreover, in [18], [19], the authors explored the influence of targeting on the advertising market. However, none of the works [17]- [19] considered the ad platform and the associated advertising revenue sharing, which is a key focus of our study.
II. SYSTEM MODEL
In this section, we define the strategies of four types of decision makers in the Wi-Fi monetization ecosystem: the ad platform, VOs, ADs, and MUs. We formulate their interactions as a three-stage Stackelberg game.
A. Ad Platform
The ad platform plays two major roles in the ecosystem. First, it offers the platform for VOs to locate ADs and sell their ad spaces to ADs. Second, it offers the necessary technical supports for VOs to display advertisements on their Wi-Fi hotspots. 4 To compensate for its operational cost, the ad platform can share a fraction of the advertising revenue when the VOs sell ad spaces to ADs. 5 Revenue Sharing Ratio δ: We first consider the VO-specific revenue sharing case, where the ad platform can set different advertising revenue sharing ratios for different VOs. In this case, we can focus on the interaction between the ad platform and a particular VO without loss of generality, as different VOs are decoupled. Let δ denote the ad platform's revenue sharing policy for the VO, which corresponds to the fraction of the advertising revenue that the ad platform obtains when the VO sells the ad spaces to ADs. When the ad platform takes away all the advertising revenue (i.e., δ = 1), the VO will not be interested in providing the advertising sponsored access, and the ad platform cannot obtain any revenue. Hence, we assume that the ad platform can only choose δ from interval [0, 1 − ε], where ε is a positive number close to zero. Mathematically, all results in this paper hold for any ε ∈ (0, 1/3]. The corresponding analysis for this VO-specific revenue sharing case is given in Sections II to VII.
In Section VIII, we will further discuss the uniform revenue sharing case, where the ad platform sets a uniform δ_U ∈ [0, 1 − ε] for all VOs due to the fairness consideration. 4 For example, SOCIFI Media Network is the ad platform managed by SOCIFI [5]. SOCIFI Media Network collects visitors' data, provides the statistics, such as the click-through rates, and supports the ad display in different formats (e.g., website, video, message). 5 As stated in [5], there is no cost for VOs to register SOCIFI Media Network, which earns profits from sharing the advertising revenue with VOs.
B. VO's Pricing Decision
The VO provides two types of Wi-Fi access for MUs: the premium access and the advertising sponsored access.
Wi-Fi Price p f : We assume that the VO charges the premium access based on a time segment structure: Each time segment has a fixed length, and the VO charges p f per time segment. Fig. 2 illustrates such an example, where the length of one time segment is 30 minutes. If an MU chooses the premium access for two segments, it pays 2p f , and can use the Wi-Fi for 60 minutes.
Advertising Price p a : The MU can also use the Wi-Fi for free by choosing the advertising sponsored access. In this case, the MU has to watch an advertisement at the beginning of each time segment. 6 To guarantee the fairness among the MUs who choose the advertising sponsored access, we assume that all advertisements have the same displaying time. Let p a denote the advertising price for ADs (for showing one advertisement). In the example of Fig. 2, the ad display time is 1 minute. If an MU chooses the advertising sponsored access for two segments, it needs to watch 2 minutes of advertisements in total, and can use the Wi-Fi for the remaining 58 minutes free of charge. Meanwhile, the total payment of ADs is 2p a , which will be shared by the ad platform and the VO according to the revenue sharing ratio mentioned before.
In our model, we assume that the advertising price p a is set by the VO, and this setting has been adopted by the companies, such as SOCIFI [5]. It is important to note that in the case where the advertising price p a is set by the ad platform, our analysis and results will remain unchanged. This is because the VO and ad platform will choose the same advertising price, which maximizes the total advertising revenue. Therefore, our results and conclusions also apply to the scenario where the ad platform determines the advertising price and sells the ad spaces to the ADs on behalf of the VO.
C. MUs' Access Choices
MU's Payoff and Decision: We consider the operations in a fixed relatively long time period (e.g., one week). 7 Let N > 0 denote the number of MUs visiting the VO during the period. We use θ ∈ [0, θ max ] (θ max > 0) to describe a particular MU's valuation on the Wi-Fi connection. We assume that θ follows the uniform distribution. 8 Let d ∈ {0, 1} denote an MU's access choice, with d = 0 denoting the advertising sponsored access, and d = 1 denoting the premium access. We normalize the length of each segment to 1, and define the payoff of a type-θ MU in one time segment as
Π MU (θ, d, p f ) = θ (1 − β) , if d = 0, θ − p f , if d = 1,(1)
where β ∈ (0, 1] is the utility reduction factor, and term 1 − β describes the discount of the MU's utility due to the inconvenience of watching advertisements. 9 For simplicity, we assume that β is MU-independent. When d = 0, the MU's equivalent Wi-Fi usage time during each time segment is 1−β; when d = 1, the MU pays p f to use the Wi-Fi during the whole segment. Note that we model the inconvenience of watching advertisements as a multiplicative cost and the payment for the premium access as an additive cost. As we will show in Section III-A, the results obtained under our model are consistent with the reality, where the MUs with high valuations on the Wi-Fi connection choose the premium access and the MUs with low valuations choose the advertising sponsored access.
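To make Eq. (1) concrete, here is a minimal Python sketch (an added illustration, not part of the paper; β, p_f, and the example θ values are hypothetical) that evaluates the per-segment payoff under both access types and reports the preferred choice.

```python
def payoff(theta, d, p_f, beta):
    """Per-segment MU payoff of Eq. (1): d=0 sponsored access, d=1 premium access."""
    return theta * (1.0 - beta) if d == 0 else theta - p_f

beta, p_f = 0.3, 0.5                   # hypothetical utility reduction factor and Wi-Fi price
for theta in [0.5, 1.0, 2.0, 3.0]:     # example MU valuations
    sponsored, premium = payoff(theta, 0, p_f, beta), payoff(theta, 1, p_f, beta)
    choice = "premium (d=1)" if premium >= sponsored else "sponsored (d=0)"
    print(f"theta={theta:.1f}: sponsored={sponsored:.2f}, premium={premium:.2f} -> {choice}")
```

As expected, the MUs with high valuations prefer the premium access while the low-valuation MUs prefer the free, advertising sponsored access.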
Each MU will choose an access type that maximizes its payoff. Let ϕ f (p f ) , ϕ a (p f ) ∈ [0, 1] denote the fractions of MUs choosing the premium access and the advertising sponsored access under price p f , respectively.
MU Visiting Frequency λ: We further assume that the number of time segments that an MU demands at the venue within the considered period (say one week) is a random variable K, which takes the value from set {0, 1, 2, . . .} and follows the Poisson distribution with parameter λ > 0. 10 We assume that all MUs visiting the venue have a homogenous parameter λ. Since λ = E {K}, λ reflects MU visiting frequency at the venue, and a larger λ implies that MUs visit the venue more often.
Since the current Wi-Fi technology already achieves a large throughput, we assume that the capacity of the VO's Wi-Fi is not a bottleneck and can be considered as unlimited. 11
9 In Fig. 2's example, the time segment length is 30 minutes. If an MU chooses the advertising sponsored access and its utility is equivalent to the case where it directly uses Wi-Fi for 20 minutes (which we call equivalent Wi-Fi usage time) without watching advertisements, parameter β is computed as 1 − 20/30 = 1/3. 10 Poisson distribution has been widely used to model the distribution of the number of events that occur within a time period [21]. It is a good initial approximation before we get more measurement data that allow us to build a more elaborated model of MUs' behaviors. 11 A similar assumption on the unlimited Wi-Fi capacity has been made in reference [7]. Next we briefly discuss the problem with a limited Wi-Fi capacity. First, if the capacity is limited but MUs who choose d = 0 and d = 1 experience the same congestion level, then essentially it will not change our analysis. Second, if MUs with d = 0 and d = 1 experience different congestion levels but the difference in the congestion level is a constant, then the congestion difference can be easily factorized in our model, and also does not change the results. Third, if MUs with d = 0 and d = 1 experience different congestion levels and the difference is not a constant, the analysis will be more complicated, and we plan to investigate this in our future work.
D. ADs' Advertising Model
There are M ADs who seek to display advertisements at the venue. 12 We assume that MUs have intrinsic interests on different ADs' products. An MU will purchase a particular AD's product, if and only if it is interested in that AD's product, and has seen the AD's advertisement at least once. This assumption reflects the complementary perspective of advertising [22], and has been widely used in the advertising literature [17]- [19]. Intuitively, this assumption means that the advertising does not change the consumers' preferences, but becomes a necessary condition to generate a purchase. 13 AD's Popularity σ: We define the popularity of an AD as the percentage of MUs who are interested in the AD's product. Each AD's popularity at the venue is described by its type σ, which is uniformly distributed in [0, σ max ]. We assume the popularity of a type-σ AD is
s (σ) ≜ γ e^(−γσ), (2)
where γ ∈ (0, 1] is a system parameter. 14 We can show that s (σ) is decreasing in the type index σ and s (σ) ≤ 1. The parameter γ measures the advertising concentration level at the venue, which is defined as the asymmetry of the popularities of ADs with different type σ. A large γ implies a high advertising concentration level, since those ADs with small values of σ have much higher popularities than other ADs. In Fig. 3, we show different types of ADs' popularities at an electronics store and a cafe, respectively. Since the electronics store is more specialized and most visitors have interests on the electronics products, the phone AD and computer AD are much more popular than other types of ADs. Hence, the concentration level γ of the electronics store is high. On the contrary, the cafe is less specialized and has a lower concentration level than the electronics store.
12 In a more general situation, the ADs may simply send their requests of displaying advertisements to the ad platform without specifying the specific venues for the ad display. In this case, the ad platform needs to determine the distribution of the ADs' advertisements over different venues by jointly considering the VOs' characteristics. We leave the study of this general situation as our future work. 13 Besides the complementary perspective, reference [22] also mentioned the persuasive perspective, where the advertising alters consumers' preferences. We will study the persuasive perspective in our future work. 14 Reference [17] used a similar exponential function to model the market share of a particular AD. However, reference [17] considered a model with an infinite number of ADs, and directly made assumptions on an AD's market share. In our work, we model a finite number of ADs, and use a randomly distributed parameter σ to describe an AD's popularity. In Section IV, we first analyze the VO's optimal pricing for a finite number of ADs, and then focus on the limiting asymptotic case with an infinite number of ADs. Therefore, compared with [17], our model and analysis are different and more reasonable.
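As an added illustration of how γ shapes the popularity profile in Eq. (2) (the two γ values below are made up, and are only meant to contrast a specialized venue with a generic one):

```python
import numpy as np

def popularity(sigma, gamma):
    """Eq. (2): fraction of MUs interested in a type-sigma AD's product."""
    return gamma * np.exp(-gamma * sigma)

sigmas = np.array([0.0, 1.0, 5.0, 10.0])   # example AD types
for label, gamma in [("specialized venue (high gamma)", 0.9), ("generic venue (low gamma)", 0.2)]:
    print(label, np.round(popularity(sigmas, gamma), 3))
# With high gamma, the low-sigma ADs are far more popular than the rest (high concentration);
# with low gamma, popularity is spread more evenly across AD types.
```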
Advertisement Display: Next we introduce the advertisement displaying setting. Recall that the number of time segments demanded by an MU is Poisson distributed with an average of λ (segments/MU), and the proportion of MUs choosing the advertising sponsored access is ϕ a (p f ). Hence, the expected number of ad spaces that the VO has during the entire time period is λN ϕ a (p f ). Let m be the number of advertisements that an AD decides to display at the venue during the entire time period. If an MU chooses the advertising sponsored access, then the VO shows an advertisement from this particular AD to the MU with the following probability at the beginning of every time segment: 15
χ (m, p f ) ≜ m / (λN ϕ a (p f )). (3)
Note that if the VO does not sell out all the ad spaces, the VO will fill the unsold ad spaces with the VO's own business promotions. This is to guarantee the fairness among the MUs choosing the advertising sponsored access. Specifically, if the VO does not fill the unsold ad spaces with its own business promotions, some MUs choosing the advertising sponsored access may not watch any advertisements or promotions, which leads to fairness issues among the MUs choosing the advertising sponsored access. AD's Payoff: Next we study a type-σ AD's payoff. We name the considered type-σ AD as the tagged AD. We use ν (m, p f ) to denote the probability of seeing the tagged AD's advertisements at least once for an MU choosing the advertising sponsored access. Next we compute ν (m, p f ).
Recall that the number of time segments that an MU demands is the discrete random variable K, which follows the Poisson distribution with parameter λ. Hence, the probability for an MU choosing the advertising sponsored access to demand K = k time segments is e −λ λ k k! . Assuming that the MU demands k time segments, hence the conditional probability that the MU does not see the tagged AD's advertisements during these k time segments is (1 − χ (m, p f )) k . Therefore, considering all possibilities of the discrete random variable K, we have
ν (m, p f ) = 1 − Σ_{k=0}^{∞} (e^(−λ) λ^k / k!) (1 − χ (m, p f ))^k. (4)
Based on the Maclaurin expansion of the exponential function, we can simplify (4) as
ν (m, p f ) = 1 − e^(−m / (N ϕ a (p f ))), (5)
which is an increasing and concave function of m. We can see that ν (m, p f ) in (5) is independent of λ. This is because λ has two opposite influences on ν (m, p f ). First, when λ increases, the probability that an MU demands a large number of time segments increases, which potentially increases ν (m, p f ). Second, when λ increases, the total number of ad spaces λN ϕ a (p f ) increases. Based on (3), this leads to the decrease of χ (m, p f ), which reduces ν (m, p f ). Because the two opposite influences cancel out, ν (m, p f ) is independent of λ.
15 As shown in the later analysis, the VO will set p a large enough so that the total number of displayed advertisements does not exceed λN ϕ a (p f ). Hence, the summation of (3) over all ADs will not be greater than 1.
Fig. 4. The three-stage Stackelberg game:
Stage I: The ad platform specifies the revenue sharing policy δ.
⇓
Stage II: The VO specifies the Wi-Fi price p f and ad price p a .
⇓
Stage III: Each MU with type θ ∈ [0, θ max ] makes access choice d; each AD with type σ ∈ [0, σ max ] purchases m ad spaces.
Recall that an MU will purchase the AD's product, if and only if the MU is interested in the AD's product and has seen the AD's advertisement at least once. We use Π AD (σ, m, p f , p a ) to denote a type-σ AD's payoff (i.e., revenue minus payment):
Π AD (σ, m, p f , p a ) = aN ϕ a (p f )s (σ)ν (m, p f )−p a m. (6)
The parameter a > 0 is the profit that an AD generates when an MU purchases its product, 16 N ϕ a (p f ) is the expected number of MUs choosing the advertising sponsored access, s (σ) is the type-σ AD's popularity, ν (m, p f ) is the probability of seeing the AD's advertisements at least once for an MU choosing the advertising sponsored access, and p a is the VO's advertising price.
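The following added sketch (with hypothetical values for N ϕ_a(p_f), λ, m, a, γ, σ, and p_a) checks numerically that the Poisson mixture of Eq. (4) reduces to the closed form of Eq. (5), and then evaluates the AD payoff of Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(0)
N_a = 200.0          # N * phi_a(p_f): expected MUs on the sponsored access (hypothetical)
lam, m = 3.0, 150.0  # visiting frequency and number of purchased ad displays

# Closed form of Eq. (5): probability an MU sees the tagged AD at least once.
nu_closed = 1.0 - np.exp(-m / N_a)

# Monte Carlo version of Eq. (4): mix over K ~ Poisson(lam) segments,
# each showing the tagged AD with probability chi = m / (lam * N_a), as in Eq. (3).
chi = m / (lam * N_a)
K = rng.poisson(lam, size=200_000)
nu_mc = np.mean(1.0 - (1.0 - chi) ** K)
print(f"nu: closed form {nu_closed:.4f}, Monte Carlo {nu_mc:.4f}")   # the two should match

# Payoff of a type-sigma AD, Eq. (6), for hypothetical a, gamma, sigma, p_a.
a, gamma, sigma, p_a = 2.0, 0.5, 1.0, 0.01
s_sigma = gamma * np.exp(-gamma * sigma)
payoff = a * N_a * s_sigma * nu_closed - p_a * m
print(f"AD payoff for m={m:.0f}: {payoff:.2f}")
```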
E. Three-Stage Stackelberg Game
We formulate the interactions among the ad platform, the VO, MUs, and ADs by a three-stage Stackelberg game, as illustrated in Fig. 4. From Section III to Section V, we analyze the three-stage game by backward induction.
For convenience, we summarize the key notations in Table I, including some notations to be discussed in Sections III, IV, and V.
III. STAGE III: MUS' ACCESS AND ADS' ADVERTISING
In this section, we analyze MUs' optimal access strategies and ADs' optimal advertising strategies in Stage III. The MUs and ADs make their decisions by responding to the ad platform's revenue sharing policy δ in Stage I, and to the VO's pricing decisions p f and p a in Stage II.
A. MUs' Optimal Access
TABLE I. Key notations.
Decision variables: δ ∈ [0, 1 − ε] - ad platform's revenue sharing ratio; p f ∈ [0, ∞) - VO's Wi-Fi price; p a ∈ [0, ∞) - VO's advertising price; d ∈ {0, 1} - an MU's access choice; m ∈ [0, ∞) - an AD's ad display choice.
Parameters: N ∈ (0, ∞) - expected number of MUs; θ ∈ [0, θ max ] - an MU's Wi-Fi valuation (MU type); β ∈ (0, 1] - utility reduction due to ads; λ ∈ (0, ∞) - MU visiting frequency; M ∈ (0, ∞) - expected number of ADs; σ ∈ [0, σ max ] - an AD's popularity index (AD type); γ ∈ (0, 1] - advertising concentration level; a ∈ (0, ∞) - ADs' unit profit of MUs' purchasing; η ∈ [0, ∞) - popularity of the advertising market; Ω ∈ (0, ∞) - equilibrium indicator.
Functions: Π MU (θ, d, p f ) - a type-θ MU's payoff (one segment); Π AD (σ, m, p f , p a ) - a type-σ AD's payoff; Π VO a (p f , p a , δ) - VO's revenue from sponsored access; Π VO f (p f ) - VO's revenue from premium access; Π APL (δ) - ad platform's revenue; ϕ f (p f ) - fraction of MUs in premium access; ϕ a (p f ) - fraction of MUs in sponsored access; s (σ) - a type-σ AD's popularity; ν (m, p f ) - probability of seeing the tagged AD's ads at least once for an MU choosing the advertising sponsored access; θ T (p f ) - threshold MU type; σ T (p a ) - threshold AD type; Q (p a , p f ) - total number of the sold ad spaces.
Equation (1) characterizes an MU's payoff for one time segment. Since an MU's payoff for multiple time segments is simply the summation of its payoff from each time segment, an MU's access choice only depends on its type θ and is independent of the number of time segments it demands. Equation (1) suggests that a type-θ MU will choose d = 1 if θ − p f ≥ θ (1 − β), and d = 0 otherwise. Therefore, the optimal access choice of a type-θ MU is
d * (θ, p f ) = 1, if θ ≥ θ T (p f ) , 0, if θ < θ T (p f ) ,(7)
where θ T (p f ) ≜ min{p f /β, θ max } is the threshold MU type. Intuitively, MUs with high valuations on the Wi-Fi connection will pay for the premium access and use the Wi-Fi for the whole time segment, while MUs with low valuations will watch advertisements in order to access Wi-Fi for free.
Since θ follows the uniform distribution, under a price p f , the fractions of MUs choosing different types of access are
ϕ a (p f ) = θ T (p f ) / θ max and ϕ f (p f ) = 1 − θ T (p f ) / θ max . (8)
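A short added sketch (with hypothetical β, θ_max, and p_f) that computes the threshold of Eq. (7) and the market split of Eq. (8), and cross-checks the split against a brute-force comparison of the two payoffs in Eq. (1):

```python
import numpy as np

def split(p_f, beta, theta_max):
    """Eqs. (7)-(8): threshold MU type and the fractions on each access type."""
    theta_T = min(p_f / beta, theta_max)
    phi_a = theta_T / theta_max          # advertising sponsored access
    phi_f = 1.0 - phi_a                  # premium access
    return theta_T, phi_a, phi_f

beta, theta_max, p_f = 0.3, 4.0, 0.6     # hypothetical values
theta_T, phi_a, phi_f = split(p_f, beta, theta_max)

# Brute-force check: uniformly distributed MU valuations, compare the two payoffs of Eq. (1).
thetas = np.linspace(0.0, theta_max, 100_001)
prefers_premium = (thetas - p_f) >= thetas * (1.0 - beta)
print(f"theta_T={theta_T:.2f}, phi_a={phi_a:.3f}, phi_f={phi_f:.3f}, "
      f"empirical phi_f={prefers_premium.mean():.3f}")
```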
B. ADs' Optimal Advertising
According to (6), a type-σ AD's optimal advertising problem is as follows.
Problem 1. The type-σ AD determines the optimal number of ad displays that maximizes its payoff in (6):
max aN ϕ a (p f ) s (σ) ν (m, p f ) − p a m (9)
var. m ≥ 0, (10)
where s (σ) is the type-σ AD's popularity defined in (2).
The type-σ AD's optimal advertising strategy solving Problem 1 is:
m * (σ, p a , p f ) = N ϕ a (p f ) [ln(aγ/p a ) − γσ], if 0 ≤ σ ≤ σ T (p a ); 0, if σ T (p a ) < σ ≤ σ max . (11)
Here, σ T (p a ) is the threshold AD type, indicating whether an AD places advertisements. It is defined as
σ T (p a ) ≜ min{(1/γ) ln(aγ/p a ), σ max }. (12)
We can show that m * (σ, p a , p f ) is non-increasing in the AD's type σ. The reason is that an AD's popularity s (σ) decreases with its type σ. Only for ADs with high popularities, the benefit of advertising can compensate for the cost of purchasing ad spaces from the VO.
Moreover, m * (σ, p a , p f ) increases with the number of MUs choosing the advertising sponsored access, N ϕ a (p f ). It is somewhat counter-intuitive to notice that the threshold σ T (p a ) is independent of N ϕ a (p f ). When N ϕ a (p f ) increases, the number of MUs that both choose the advertising sponsored access and like the product from an AD with type σ = σ T (p a ) indeed increases. While expression (5) implies that since there are more MUs, the probability for an MU to see the advertisements from the AD with type σ = σ T (p a ) decreases. As a result, the change of N ϕ a (p f ) does not affect the number of ADs who choose to display advertisements at the venue.
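The closed form of Eqs. (11)-(12) can be checked against a direct numerical maximization of the payoff in Eq. (6); the added sketch below does so for one made-up parameter set.

```python
import numpy as np

a, gamma, p_a = 2.0, 0.5, 0.05           # hypothetical profit, concentration, ad price
N_phi_a = 200.0                          # N * phi_a(p_f), hypothetical
sigma = 0.8                              # the tagged AD's type

# Closed form, Eqs. (11)-(12).
sigma_T = np.log(a * gamma / p_a) / gamma
m_star = N_phi_a * (np.log(a * gamma / p_a) - gamma * sigma) if sigma <= sigma_T else 0.0

# Numerical maximization of Eq. (6) over a fine grid of m.
m_grid = np.linspace(0.0, 2_000.0, 200_001)
s_sigma = gamma * np.exp(-gamma * sigma)
payoff = a * N_phi_a * s_sigma * (1.0 - np.exp(-m_grid / N_phi_a)) - p_a * m_grid
print(f"closed-form m* = {m_star:.1f}, grid-search m* = {m_grid[np.argmax(payoff)]:.1f}")
```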
When m * (σ, p a , p f ) is not an integer, the type-σ AD can purchase the ad spaces in a randomized manner, and ensure that the expected number of purchased ad spaces equals m * (σ, p a , p f ). The randomized implementation does not affect the ad platform's, VO's, and MUs' equilibrium strategies. It only reduces some ADs' payoffs, and our numerical results show that such an influence is minor. We provide the details about the randomized implementation and numerical experiments in the appendix.
IV. STAGE II: VO'S WI-FI AND ADVERTISING PRICING
In this section, we study the VO's advertising pricing p a and Wi-Fi pricing p f in Stage II. The VO determines its pricing by responding to the ad platform's revenue sharing policy δ in Stage I, and anticipating the MUs' and ADs' strategies in Stage III.
A. VO's Optimal Advertising Price
We first fix the VO's Wi-Fi price p f and optimize the VO's advertising price p a . We will show that the VO's optimal advertising price p * a is independent of p f . In the next subsection, we will further optimize p f . Let Q (p a , p f ) denote the expected total number of ad spaces sold to all ADs. According to (11), if p a > aγ, no AD will purchase the ad spaces and Q (p a , p f ) = 0; if 0 ≤ p a ≤ aγ, we compute Q (p a , p f ) as follows:
Q (p a , p f ) = M ∫_0^{σ T (p a )} (1/σ max ) m * (σ, p a , p f ) dσ = (M N ϕ a (p f ) / σ max ) [ln(aγ/p a ) σ T (p a ) − (γ/2) σ T (p a )^2], (13)
where M is the number of ADs, and 1/σ max is the probability density function for an AD's type σ.
We define Π VO a (p f , p a , δ) as the VO's expected advertising revenue, which can be computed as
Π VO a (p f , p a , δ) = (1−δ)p a Q(p a , p f ), if 0 ≤ p a ≤ aγ, 0, if p a > aγ,(14)
where 1−δ denotes the fraction of advertising revenue received by the VO under the ad platform's policy. Based on (14), we formulate the VO's advertising pricing problem as follows.
Problem 2. The VO determines the optimal advertising price by solving
max (1 − δ) p a Q (p a , p f ) (15)
s.t. Q (p a , p f ) ≤ λN ϕ a (p f ), (16)
var. 0 ≤ p a ≤ aγ, (17)
where constraint (16) means that the VO can sell at most λN ϕ a (p f ) ad spaces as discussed in Section II-D.
The solution to Problem 2 is summarized in the following proposition (the proofs of all propositions can be found in the appendix).
Proposition 1 (Advertising price). The VO's unique optimal advertising price p * a is independent of the VO's Wi-Fi price p f and the ad platform's advertising revenue sharing policy δ, and is given by
p * a = aγ e^(−sqrt(2λγσ max /M)), if λ/M ≤ min{γσ max /2, 1, 2/(γσ max )};
aγ e^(−(γσ max /2 + λ/M)), if γσ max /2 < λ/M ≤ 1;
aγ e^(−(γσ max /2 + 1)), if γσ max /2 < 1 < λ/M;
aγ e^(−2), in other cases. (18)
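As an added numerical sanity check of Proposition 1 (all parameter values below are hypothetical), the following sketch solves Problem 2 by brute force, using Q(p_a, p_f) from Eq. (13) and the capacity constraint (16), and compares the result with the first case of Eq. (18).

```python
import numpy as np

a, gamma, sigma_max, M = 2.0, 0.5, 8.0, 40.0     # hypothetical market parameters
lam, N_phi_a, delta = 1.5, 200.0, 0.2            # lam = visiting frequency

def Q(p_a):
    """Eq. (13): expected number of ad spaces sold at advertising price p_a."""
    if p_a > a * gamma:
        return 0.0
    sigma_T = min(np.log(a * gamma / p_a) / gamma, sigma_max)
    return (M * N_phi_a / sigma_max) * (np.log(a * gamma / p_a) * sigma_T
                                        - 0.5 * gamma * sigma_T ** 2)

# Brute-force solution of Problem 2 over a grid of candidate prices.
grid = np.linspace(1e-4, a * gamma, 100_000)
revenue = np.array([(1 - delta) * p * Q(p) if Q(p) <= lam * N_phi_a else -np.inf
                    for p in grid])
p_num = grid[np.argmax(revenue)]

# Closed form of Proposition 1; for these parameters lam/M falls in the first case of Eq. (18).
assert lam / M <= min(gamma * sigma_max / 2, 1.0, 2.0 / (gamma * sigma_max))
p_closed = a * gamma * np.exp(-np.sqrt(2 * lam * gamma * sigma_max / M))
print(f"numerical p_a* = {p_num:.4f}, closed form (Prop. 1) = {p_closed:.4f}")
```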
We observe that the expression of p * a is sensitive to the number of ADs M and the parameter of ADs' popularity distribution σ max . To reduce the cases to be considered and have cleaner engineering insights, we will focus on a large advertising market asymptotics with the following assumption in the rest of the paper. 17
17 Assumption 1 is for the sake of presentations. Without Assumption 1, there will be seven different regimes (which are divided based on the relations among λ/M, γσ max /2, 2/(γσ max ), and 1) that we need to discuss (and we can solve), and we will not be able to include the full analysis here due to the space limit. The consideration of finite systems without Assumption 1 does not change the main results in the later sections. In reference [17], the authors directly modeled and analyzed the advertising market with an infinite number of ADs.
Assumption 1. There are infinitely many ADs in the advertising market, i.e., M → ∞, and the lowest popularity among all types of ADs is zero, i.e., σ max → ∞. 18
We define p ∞ a as the VO's optimal advertising price under Assumption 1. According to Proposition 1, we show p ∞ a in the following proposition. 19
Proposition 2 (Advertising price under Assumption 1). Under Assumption 1, the VO's unique optimal advertising price p ∞ a is independent of the VO's Wi-Fi price p f and the ad platform's advertising revenue sharing policy δ, and is given by
p ∞ a = aγ e^(−sqrt(2λγ/η)), if 0 < λ ≤ 2η/γ; aγ e^(−2), if λ > 2η/γ, (19)
where η ≜ lim_{M,σ_max→∞} M/σ_max and takes a value in [0, ∞). Next we explain the physical meaning of η. Under Assumption 1, if we randomly pick an MU, the expected number of ADs that the MU likes is computed as
$$\lim_{M,\sigma_{\max}\to\infty} M \int_0^{\sigma_{\max}} \frac{s(\sigma)}{\sigma_{\max}}\, d\sigma = \lim_{M,\sigma_{\max}\to\infty} \frac{M}{\sigma_{\max}} = \eta. \tag{20}$$
Hence, η describes the popularity of the advertising market.
Next we discuss how the VO's advertising price p_a^∞ changes with λ ∈ (0, 2η/γ] and λ ∈ (2η/γ, ∞), respectively.
1) Small λ ∈ (0, 2η/γ]: In this case, the advertising price p_a^∞ decreases with λ. This is because the MUs' small demand rate λ leads to a limited number of ad spaces. When λ increases, the VO has more ad spaces to sell, and will decrease p_a^∞ to attract more ADs. We can verify that the number of sold ad spaces is λNϕ_a(p_f), i.e., the VO always sells out all of the spaces. We refer to the ADs that purchase ad spaces as active ADs, and use ρ(p_a^∞) to denote the expected number of active ADs. We can compute ρ(p_a^∞) as
$$\rho(p_a^\infty) = \frac{M\, \sigma_T(p_a^\infty)}{\sigma_{\max}} = \sqrt{\frac{2\lambda\eta}{\gamma}}, \tag{21}$$
where σ T (p ∞ a ) is defined in (12). Moreover, the VO's expected advertising revenue is
$$\Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta) = (1-\delta)\, a N \phi_a(p_f)\, \lambda\gamma\, e^{-\sqrt{2\lambda\gamma/\eta}}. \tag{22}$$
Both (21) and (22) increase with λ when λ is small.
2) Large λ ∈ (2η/γ, ∞): In this case, the advertising price p_a^∞ is independent of λ. The reason is that the VO has sufficient ad spaces to sell, so it can directly set p_a^∞ to maximize the objective function (15) while keeping the capacity constraint (16) satisfied. We can verify that the number of sold ad spaces Q(p_a^∞, p_f) is (2η/γ)Nϕ_a(p_f), which is smaller than the capacity λNϕ_a(p_f). (In this case, the VO can fill the unsold ad spaces with its own business promotions to guarantee fairness among the MUs choosing the advertising sponsored access.) Furthermore, the expected number of active ADs ρ(p_a^∞) is
$$\rho(p_a^\infty) = \frac{M\, \sigma_T(p_a^\infty)}{\sigma_{\max}} = \frac{2\eta}{\gamma}, \tag{23}$$
and the VO's expected advertising revenue is
$$\Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta) = 2\,(1-\delta)\, a N \phi_a(p_f)\, \eta\, e^{-2}. \tag{24}$$
Both (23) and (24) are independent of λ. Based on (22) and (24), we summarize the VO's expected advertising revenue as
$$\Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta) = (1-\delta)\, a N \phi_a(p_f)\, g(\lambda, \gamma, \eta), \tag{25}$$
where
$$g(\lambda, \gamma, \eta) \triangleq \begin{cases} \lambda\gamma\, e^{-\sqrt{2\lambda\gamma/\eta}}, & \text{if } \lambda \in (0, \tfrac{2\eta}{\gamma}], \\[4pt] 2\eta\, e^{-2}, & \text{if } \lambda \in (\tfrac{2\eta}{\gamma}, \infty). \end{cases} \tag{26}$$
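The closed forms above are straightforward to evaluate numerically. The following Python sketch is ours and purely illustrative (all function names are hypothetical); it computes p_a^∞ from (19), the expected number of active ADs ρ(p_a^∞) from (21) and (23), and the factor g(λ, γ, η) from (26), using the square-root form of the exponent reconstructed above.

```python
import math

def adv_price_inf(a, gamma, lam, eta):
    """Optimal advertising price p_a^infinity under Assumption 1, eq. (19)."""
    if lam <= 2.0 * eta / gamma:
        return a * gamma * math.exp(-math.sqrt(2.0 * lam * gamma / eta))
    return a * gamma * math.exp(-2.0)

def active_ads(gamma, lam, eta):
    """Expected number of active ADs rho(p_a^infinity), eqs. (21) and (23)."""
    if lam <= 2.0 * eta / gamma:
        return math.sqrt(2.0 * lam * eta / gamma)
    return 2.0 * eta / gamma

def g_factor(lam, gamma, eta):
    """g(lambda, gamma, eta) of eq. (26): the VO's expected advertising revenue
    is (1 - delta) * a * N * phi_a(p_f) * g(lam, gamma, eta), eq. (25)."""
    if lam <= 2.0 * eta / gamma:
        return lam * gamma * math.exp(-math.sqrt(2.0 * lam * gamma / eta))
    return 2.0 * eta * math.exp(-2.0)

# Example venue: a = 4, gamma = 0.5, eta = 1, lambda = 2 (values are illustrative only).
print(adv_price_inf(4.0, 0.5, 2.0, 1.0), active_ads(0.5, 2.0, 1.0), g_factor(2.0, 0.5, 1.0))
```

At λ = 2η/γ the two branches of each function coincide, which is consistent with the continuity of the expressions across the two regimes.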
B. VO's Optimal Wi-Fi Price
Now we analyze the VO's optimal choice of the Wi-Fi price p_f. We define Π^VO_f(p_f) as the VO's revenue in providing the premium access with a given p_f. (Notice that the VO's revenue in providing the premium access is collected from the MUs, and is hence independent of the VO's advertising price.) Since there are Nϕ_f(p_f) MUs choosing the premium access and the expected number of time segments demanded by an MU is λ, we have
$$\Pi^{\mathrm{VO}}_f(p_f) = \lambda\, p_f\, N \phi_f(p_f). \tag{27}$$
Based on (25) and (27), we find that p f affects the VO's revenue in providing both types of access. The VO's total revenue is computed as
$$\Pi^{\mathrm{VO}}(p_f, \delta) = \Pi^{\mathrm{VO}}_f(p_f) + \Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta) = \lambda\, p_f\, N \phi_f(p_f) + (1-\delta)\, a N\, g(\lambda, \gamma, \eta)\, \phi_a(p_f). \tag{28}$$
By checking ϕ_f(p_f) and ϕ_a(p_f) in (8), we can show that Π^VO(p_f, δ) does not change with p_f when p_f ∈ [βθ_max, ∞). This is because all MUs will choose the advertising sponsored access if p_f ≥ βθ_max, and increasing p_f in this range will no longer have an impact on Π^VO(p_f, δ). Therefore, we only need to consider optimizing Π^VO(p_f, δ) over p_f ∈ [0, βθ_max]. This leads to the following optimal Wi-Fi pricing problem.

Problem 3. The VO determines the optimal Wi-Fi price to maximize its total revenue in (28):
$$\begin{aligned} \max \quad & \lambda\, p_f\, N \phi_f(p_f) + (1-\delta)\, a N\, g(\lambda, \gamma, \eta)\, \phi_a(p_f) && (29)\\ \text{var.} \quad & 0 \le p_f \le \beta\theta_{\max}. && (30) \end{aligned}$$
Solving Problem 3, we obtain the VO's optimal Wi-Fi pricing in the following proposition.
Proposition 3 (Optimal Wi-Fi price under δ). Given the ad platform's fixed sharing policy δ, the VO's unique optimal Wi-Fi price p * f (δ) is given by
$$p_f^*(\delta) = \frac{\beta\theta_{\max}}{2} + \min\left\{ \frac{(1-\delta)\, a}{2\lambda}\, g(\lambda, \gamma, \eta),\; \frac{\beta\theta_{\max}}{2} \right\}. \tag{31}$$
We can show that p_f^*(δ) is non-increasing in δ. When δ increases, i.e., the fraction of advertising revenue left to the VO decreases, the VO decreases its Wi-Fi price p_f^*(δ) to attract more MUs to choose the premium access.
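As a small illustration of Proposition 3 (not part of the original analysis), the Wi-Fi price in (31) can be evaluated as follows; the sketch reuses the hypothetical g_factor helper defined in the earlier sketch.

```python
def wifi_price(delta, a, lam, gamma, eta, beta, theta_max):
    """VO's optimal Wi-Fi price p_f^*(delta), eq. (31)."""
    return beta * theta_max / 2.0 + min(
        (1.0 - delta) * a * g_factor(lam, gamma, eta) / (2.0 * lam),
        beta * theta_max / 2.0,
    )

# Example: the Wi-Fi price weakly falls as the platform keeps a larger share delta.
for delta in (0.0, 0.5, 0.9):
    print(delta, wifi_price(delta, a=4.0, lam=2.0, gamma=0.5, eta=1.0, beta=0.1, theta_max=1.0))
```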
V. STAGE I: AD PLATFORM'S REVENUE SHARING
In this section, we study the ad platform's sharing policy δ in Stage I. The ad platform decides its sharing policy by anticipating the VO's pricing strategies in Stage II and MUs' and ADs' strategies in Stage III.
Based on the ad platform's sharing policy δ ∈ [0, 1 − ε], the ad platform and the VO obtain δ and 1 − δ fractions of the total advertising revenue, respectively. Since the VO's advertising revenue is given in (25), we compute the ad platform's revenue Π^APL(δ) as
$$\Pi^{\mathrm{APL}}(\delta) = \delta\, a N \phi_a\!\left(p_f^*(\delta)\right) g(\lambda, \gamma, \eta), \tag{32}$$
where p_f^*(δ) is the VO's optimal Wi-Fi price under policy δ, as computed in Proposition 3. We formulate the ad platform's optimization problem as follows.

Problem 4. The ad platform determines δ* to maximize its revenue in (32):
$$\begin{aligned} \max \quad & \Pi^{\mathrm{APL}}(\delta) && (33)\\ \text{var.} \quad & 0 \le \delta \le 1 - \epsilon. && (34) \end{aligned}$$
In order to compute the optimal δ*, we introduce an equilibrium indicator Ω, which affects the functional form of δ*. We define Ω as
$$\Omega \triangleq \frac{\lambda\beta\theta_{\max}}{a\, g(\lambda, \gamma, \eta)}. \tag{35}$$
The intuition of Ω can be interpreted as follows. Based on (8) and (27), the VO's revenue in providing the premium access can be written as
$$\Pi^{\mathrm{VO}}_f(p_f) = \lambda\beta\theta_{\max} N\, \phi_f(p_f)\, \phi_a(p_f). \tag{36}$$
Based on (25), the VO's revenue in providing the advertising sponsored access is
$$\Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta) = a\, g(\lambda, \gamma, \eta)\, N\, (1-\delta)\, \phi_a(p_f). \tag{37}$$
Next we focus on the system parameters in (36) and (37). We observe that the terms λβθ max N and ag (λ, γ, η) N act as the coefficients for (36) and (37), respectively. Therefore, intuitively, the indicator Ω in (35) describes the VO's relative benefit in providing the premium access over the advertising sponsored access. Based on the indicator Ω, we summarize the solution to Problem 4 as follows.
Proposition 4 (Revenue sharing policy). The ad platform's unique optimal advertising revenue sharing policy δ * is given by
$$\delta^* = \begin{cases} 1 - \epsilon, & \text{if } \Omega \in (0, \epsilon], \\ 1 - \Omega, & \text{if } \Omega \in (\epsilon, \tfrac{1}{3}], \\ \tfrac{1}{2} + \tfrac{\Omega}{2}, & \text{if } \Omega \in (\tfrac{1}{3}, 1 - 2\epsilon), \\ 1 - \epsilon, & \text{if } \Omega \in [1 - 2\epsilon, \infty). \end{cases} \tag{38}$$
We can see that δ* ≥ 2/3 for all Ω ∈ (0, ∞). That is to say, the ad platform always takes away at least two thirds of the total advertising revenue. In particular, when Ω ∈ (0, ε] or Ω ∈ [1 − 2ε, ∞), the ad platform chooses the highest sharing ratio, i.e., δ* = 1 − ε. Based on our earlier discussion of Ω, the VO's relative benefit in providing the premium access over the advertising sponsored access in these cases is either very small or very large. Therefore, even if the ad platform decreases its sharing ratio δ*, the VO's interest in providing the advertising sponsored access will not significantly increase in these two cases. As a result, the ad platform chooses the highest sharing ratio δ* to extract most of the advertising revenue.
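The following sketch (ours, with hypothetical names) evaluates Ω from (35) and δ* from (38). It assumes that ε is the small positive constant that bounds δ away from 1, as restored in the text above, and it reuses g_factor from the earlier sketch.

```python
def indicator(a, lam, gamma, eta, beta, theta_max):
    """Equilibrium indicator Omega, eq. (35)."""
    return lam * beta * theta_max / (a * g_factor(lam, gamma, eta))

def sharing_ratio(a, lam, gamma, eta, beta, theta_max, eps):
    """Ad platform's optimal revenue sharing ratio delta^*, eq. (38)."""
    omega = indicator(a, lam, gamma, eta, beta, theta_max)
    if omega <= eps or omega >= 1.0 - 2.0 * eps:
        return 1.0 - eps
    if omega <= 1.0 / 3.0:
        return 1.0 - omega
    return 0.5 + omega / 2.0
```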
Based on Proposition 4, we obtain the VO's Wi-Fi price at the equilibrium by plugging δ * into the expression of p * f (δ) in (31), and summarize it in the following proposition.
Proposition 5 (Wi-Fi price at the equilibrium). The VO's unique Wi-Fi price at the equilibrium is given by
$$p_f^*(\delta^*) = \begin{cases} \beta\theta_{\max}, & \text{if } \Omega \in (0, \tfrac{1}{3}], \\[4pt] \frac{\beta\theta_{\max}}{4} + \frac{a\, g(\lambda,\gamma,\eta)}{4\lambda}, & \text{if } \Omega \in (\tfrac{1}{3}, 1 - 2\epsilon), \\[4pt] \frac{\beta\theta_{\max}}{2} + \frac{\epsilon\, a\, g(\lambda,\gamma,\eta)}{2\lambda}, & \text{if } \Omega \in [1 - 2\epsilon, \infty). \end{cases} \tag{39}$$
According to (8) and Proposition 5, we can compute ϕ_a(p_f^*(δ*)), i.e., the proportion of MUs choosing the advertising sponsored access at the equilibrium. We can show that ϕ_a(p_f^*(δ*)) ≥ 1/2 for all Ω ∈ (0, ∞). Hence, at least half of the MUs choose the advertising sponsored access. In particular, when Ω ∈ (0, 1/3], we have ϕ_a(p_f^*(δ*)) = 1, i.e., all MUs choose the advertising sponsored access. In this case, the VO has a very small relative benefit in providing the premium access, and hence it charges the highest Wi-Fi price p_f^*(δ*) = βθ_max to push all MUs to choose the advertising sponsored access.

Footnote 22: We would like to emphasize that, in order to derive clean engineering insights, our model unavoidably involves some simplifications of the much more complicated reality. It is hence most useful to focus on the engineering insights behind results such as δ* ≥ 2/3 and ϕ_a(p_f^*(δ*)) ≥ 1/2, instead of taking the numbers 2/3 and 1/2 literally.

VI. SOCIAL WELFARE ANALYSIS

In this section, we study the social welfare (SW) of the whole system at the equilibrium, which consists of the ad platform's revenue, the VO's total revenue, the MUs' total payoff, and the ADs' total payoff. The social welfare analysis is important for understanding how much the entire system benefits from the Wi-Fi monetization framework, and how it is affected by different system parameters. Specifically, we compute SW as:
$$\begin{aligned} SW = {} & \Pi^{\mathrm{APL}}(\delta^*) + \Pi^{\mathrm{VO}}\!\left(p_f^*(\delta^*), \delta^*\right) \\ & + \lambda N \int_0^{\theta_{\max}} \frac{1}{\theta_{\max}}\, \Pi^{\mathrm{MU}}\!\left(\theta,\, d^*\!\left(\theta, p_f^*(\delta^*)\right),\, p_f^*(\delta^*)\right) d\theta \\ & + M \int_0^{\sigma_{\max}} \frac{1}{\sigma_{\max}}\, \Pi^{\mathrm{AD}}\!\left(\sigma,\, m^*\!\left(\sigma, p_a^\infty, p_f^*(\delta^*)\right),\, p_f^*(\delta^*),\, p_a^\infty\right) d\sigma. \end{aligned} \tag{40}$$
Here, (i) the first term is the ad platform's revenue at the equilibrium, where Π^APL(δ) is given in (32) and δ* is given
in Proposition 4; (ii) the second term is the VO's total revenue at the equilibrium, where Π VO (p f , δ) is given in (28) and p * f (δ * ) is given in Proposition 5; (iii) the third term is the MUs' total payoff at the equilibrium, where Π MU (θ, d, p f ) is a type-θ MU's payoff for one time segment given in (1) and d * (θ, p f ) is given in (7); (iv) the last term is the ADs' total payoff at the equilibrium, where Π AD (σ, m, p f , p a ) is given in (6), m * (σ, p a , p f ) is given in (11), and p ∞ a is given in Proposition 2.
Note that the ADs' payments for displaying advertisements are transferred to the ad platform and the VO, and the MUs' payments for the premium access are collected by the VO. Therefore, these payments cancel out in (40). As a result, SW equals the total utility of all MUs and ADs. We show the value of SW in the following proposition.
Proposition 6 (Social welfare). The social welfare at the equilibrium is
$$SW = \frac{1}{2}\lambda N \theta_{\max} - \frac{1}{2}\lambda N\, p_f^*(\delta^*)\, \phi_a\!\left(p_f^*(\delta^*)\right) + \eta N \phi_a\!\left(p_f^*(\delta^*)\right)\left[a - \frac{p_a^\infty}{\gamma}\left(1 + \ln\frac{a\gamma}{p_a^\infty}\right)\right], \tag{41}$$
where p * f (δ * ) and p ∞ a are the VO's Wi-Fi price given in Proposition 5 and the VO's advertising price given in Proposition 2, respectively.
In (41), the first two terms correspond to the total utility of MUs, and the last term corresponds to the total utility of ADs. In Section VIII-E, we will investigate the impacts of parameters γ and λ on the social welfare through numerical experiments. The numerical results show that the social welfare is always non-decreasing in γ, and is increasing in λ for most parameter settings.
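One possible way to evaluate the equilibrium social welfare numerically is sketched below. It follows the reconstructed (41), assumes the linear split ϕ_a(p_f) = p_f/(βθ_max) implied by (8) for p_f ∈ [0, βθ_max], and reuses the hypothetical helpers adv_price_inf, wifi_price, and sharing_ratio from the earlier sketches; it is an illustration only, not the authors' code.

```python
import math

def social_welfare(a, lam, gamma, eta, beta, theta_max, eps, N):
    """Equilibrium social welfare following the reconstructed eq. (41)."""
    delta_star = sharing_ratio(a, lam, gamma, eta, beta, theta_max, eps)
    pf = wifi_price(delta_star, a, lam, gamma, eta, beta, theta_max)   # Proposition 5 via (31)
    pa = adv_price_inf(a, gamma, lam, eta)                             # Proposition 2
    phi_a = min(pf / (beta * theta_max), 1.0)                          # MU split, assumed from (8)
    mu_part = 0.5 * lam * N * theta_max - 0.5 * lam * N * pf * phi_a   # MUs' total utility
    ad_part = eta * N * phi_a * (a - (pa / gamma) * (1.0 + math.log(a * gamma / pa)))  # ADs' total utility
    return mu_part + ad_part

# Example with the parameter values used in Section VIII (illustrative).
print(social_welfare(a=4.0, lam=2.0, gamma=0.5, eta=1.0, beta=0.1, theta_max=1.0, eps=0.01, N=200))
```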
VII. IMPACT OF SYSTEM PARAMETERS
To understand the Wi-Fi monetization at venues with different features, we analyze the impacts of the advertising concentration level γ and the visiting frequency λ on the equilibrium outcomes. Compared with other parameters, these two parameters can be dramatically different across venues and hence better reflect the features of the venues.

Proposition 7 (Advertising concentration level γ). We show the following results regarding the influence of γ:
(i) The VO's advertising price p ∞ a in (19) is increasing in γ;
(ii) The expected number of active ADs ρ (p ∞ a ) in (21) and (23) is decreasing in γ;
(iii) The VO's Wi-Fi price p * f (δ * ) in (39) is non-decreasing in γ;
(iv) The proportion of MUs that choose the premium access ϕ f p * f (δ * ) is non-increasing in γ.
Items (i) and (ii) of Proposition 7 describe the advertising sponsored access. A high concentration level γ implies that the ADs with small σ have much higher popularities than other ADs. Hence, when γ increases, the ADs with small σ have larger demand in displaying their advertisements. As a result, the VO increases p ∞ a to obtain more advertising revenue. On the other hand, the ADs with large σ have smaller demand in advertising, so the expected number of active ADs decreases.
Items (iii) and (iv) of Proposition 7 describe the premium access. A larger γ corresponds to a smaller equilibrium indicator Ω. Based on the previous discussion in Section V, this means providing the advertising sponsored access is more beneficial to the VO. Hence, under a larger γ, the VO charges a higher p * f (δ * ) to push MUs to choose the advertising sponsored access, which reduces the proportion of MUs choosing the premium access.
Proposition 8 (Visiting frequency λ). We show the following results regarding the influence of λ:
(i) The VO's advertising price p ∞ a in (19) is non-increasing in λ;
(ii) The expected number of active ADs ρ (p ∞ a ) in (21) and (23) is non-decreasing in λ;
(iii) The VO's Wi-Fi price p * f (δ * ) in (39) is non-increasing in λ;
(iv) The proportion of MUs that choose the premium access ϕ_f(p_f^*(δ*)) is non-decreasing in λ.

Items (i) and (ii) of Proposition 8 are related to the advertising sponsored access. According to the discussion in Section IV, a larger λ means the VO has more ad spaces to sell. Hence, when λ is larger, the VO chooses a smaller p_a^∞ to attract more ADs.
Items (iii) and (iv) of Proposition 8 are related to the premium access. We can show that the equilibrium indicator Ω increases with λ. Based on the previous discussion in Section V, a larger indicator means providing the premium access is more beneficial to the VO. Therefore, with a larger λ, the VO charges a lower p * f (δ * ) to attract MUs to choose the premium access, which increases the proportion of MUs choosing the premium access.
According to Propositions 7 and 8, we can observe that parameters γ and λ have exactly the opposite impacts on the equilibrium outcomes.
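As an illustrative, non-authoritative sanity check of Propositions 7(i) and 7(iii), one can sweep γ at a fixed λ and print the resulting prices using the helpers defined in the earlier sketches; the parameter values below mirror those used later in Section VIII and are otherwise arbitrary. A symmetric sweep over λ can be used to check Proposition 8.

```python
# Sweep gamma at fixed lambda and observe the trends predicted by Proposition 7.
a, eta, beta, theta_max, eps, lam = 4.0, 1.0, 0.1, 1.0, 0.01, 2.0
for gamma in (0.2, 0.4, 0.6, 0.8, 1.0):
    d_star = sharing_ratio(a, lam, gamma, eta, beta, theta_max, eps)
    print(gamma,
          adv_price_inf(a, gamma, lam, eta),                        # should increase with gamma
          wifi_price(d_star, a, lam, gamma, eta, beta, theta_max))  # should be non-decreasing in gamma
```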
VIII. NUMERICAL RESULTS
In this section, we provide numerical results. First, we study the optimality of advertising price p ∞ a in (19) without Assumption 1. Then we compare the ad platform's revenue, the VO's revenue, the ADs' payoffs, and the social welfare at venues with different values of γ and λ. Finally, since the ad platform may set a uniform sharing policy for multiple VOs in the practical implementation due to the fairness consideration, we investigate the uniform revenue sharing case and compare it with the VO-specific revenue sharing case studied above.
A. Optimality of p_a^∞ without Assumption 1

In Proposition 2, we have shown that p_a^∞ in (19) is the optimal solution of Problem 2, assuming both M and σ_max go to ∞ (Assumption 1). Now we numerically demonstrate that the price p_a^∞ in (19) generates a close-to-optimal advertising revenue for Problem 2 for most finite values of M and σ_max.
For a particular (M, σ max )-pair, we can compute p ∞ a by (19), 23 and obtain the corresponding advertising revenue Π VO a (p f , p ∞ a , δ) by (14). Moreover, we can compute p * a by (18), and obtain the optimal advertising revenue Π VO a (p f , p * a , δ) by (14). We define
$$\zeta \triangleq \frac{\Pi^{\mathrm{VO}}_a(p_f, p_a^\infty, \delta)}{\Pi^{\mathrm{VO}}_a(p_f, p_a^*, \delta)}. \tag{42}$$
We can show that ζ ∈ [0, 1], 24 and ζ characterizes the optimality of p ∞ a without Assumption 1. In particular, ζ = 1 implies that price p ∞ a generates the optimal advertising revenue. We choose γ ∼ U [0.01, 1], λ ∼ U [0.1, 5], and a ∼ U [1,3], where U denotes the uniform distribution. 25 We change M and σ max from 1 to 15. For each (M, σ max )-pair, we run the experiment 10, 000 times, and obtain the average ζ.
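One way to reproduce this experiment is sketched below (our assumptions, not the authors' code). It computes the revenue under p_a^∞ from (19) with η = M/σ_max and compares it against a brute-force grid search over p_a, used here in place of the closed form (18). The threshold type σ_T(p_a) from (12) is assumed to be min{(1/γ) ln(aγ/p_a), σ_max}, as suggested by the proofs in the appendix, and the factor Nϕ_a(p_f) is normalized to 1 since ζ does not depend on it; sigma_threshold, Q, and zeta are our names.

```python
import math
import random

def sigma_threshold(p_a, a, gamma, sigma_max):
    # Threshold AD type sigma_T(p_a) from (12): assumed min{(1/gamma) ln(a*gamma/p_a), sigma_max}.
    return min(math.log(a * gamma / p_a) / gamma, sigma_max)

def Q(p_a, a, gamma, sigma_max, M):
    # Expected number of sold ad spaces, eq. (13), with N * phi_a(p_f) normalized to 1.
    s = sigma_threshold(p_a, a, gamma, sigma_max)
    return M / sigma_max * (math.log(a * gamma / p_a) * s - 0.5 * gamma * s * s)

def zeta(M, sigma_max, a, gamma, lam, grid=2000):
    """Revenue under p_a^infinity (eq. (19) with eta = M/sigma_max) divided by the
    revenue of a brute-force search for p_a^*, used in place of the closed form (18)."""
    pa_inf = adv_price_inf(a, gamma, lam, M / sigma_max)
    rev_inf = pa_inf * Q(pa_inf, a, gamma, sigma_max, M)
    best = 0.0
    for k in range(1, grid):
        p_a = a * gamma * k / grid
        if Q(p_a, a, gamma, sigma_max, M) <= lam:          # capacity constraint (16)
            best = max(best, p_a * Q(p_a, a, gamma, sigma_max, M))
    return rev_inf / best if best > 0 else 1.0

# Average zeta at, e.g., M = sigma_max = 6, with (a, gamma, lambda) drawn as described above.
random.seed(0)
vals = [zeta(6, 6, random.uniform(1, 3), random.uniform(0.01, 1), random.uniform(0.1, 5))
        for _ in range(1000)]
print(sum(vals) / len(vals))
```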
In Fig. 5, we plot the average ζ against M and σ_max. We observe that the average ζ is always above 0.99 when M ≥ 6 and σ_max ≥ 6. That is to say, we have Π^VO_a(p_f, p_a^∞, δ) ≥ 0.99 Π^VO_a(p_f, p_a^*, δ) when M ≥ 6 and σ_max ≥ 6. Hence, we summarize the following observation.

Observation 1. Without Assumption 1, the advertising price computed based on p_a^∞ in (19) can still generate a close-to-optimal advertising revenue for the VO.

Footnote 23: For p_a^∞ in (19), η is defined as lim_{M,σ_max→∞} M/σ_max. We can simply choose η = M/σ_max to compute p_a^∞ for the finite M and σ_max situation.

Footnote 24: Specifically, we can show that p_a^∞ in (19) is feasible for Problem 2. Hence, the advertising revenue under price p_a^∞, Π^VO_a(p_f, p_a^∞, δ), is no greater than that under the optimal price p_a^*, Π^VO_a(p_f, p_a^*, δ).

Footnote 25: Since we can show that ζ is independent of p_f, δ, and N, we do not specify the numerical settings for their values.
B. Ad Platform's δ * and Revenue with Different (γ, λ)
Next we compare the ad platform's revenue sharing policy and revenue for venues with different values of the advertising concentration level γ and the MU visiting frequency λ. We choose N = 200, θ_max = 1, β = 0.1, η = 1, a = 4, and ε = 0.01. We will apply the same settings for the remaining experiments in Section VIII. Fig. 6 is a contour plot illustrating the ad platform's revenue sharing ratio. The horizontal axis corresponds to parameter γ, and the vertical axis corresponds to parameter λ. The values on the contour curves are the ad platform's revenue sharing ratios, δ*, computed for venues with different (γ, λ) pairs. The ad platform needs to strike a proper balance when choosing δ to maximize its revenue: (a) reducing δ can motivate the VO to push more MUs towards the advertising sponsored access, at the expense of a smaller ad platform's revenue per ad display; (b) increasing δ can improve the ad platform's revenue per ad display, at the expense of making the advertising sponsored access less attractive to the VO. In Fig. 6, the revenue sharing ratio δ* first decreases with λ, then increases with λ, which means approach (a) is more effective when λ is small and approach (b) is more effective when λ is large. This is because a large λ leads to a large indicator Ω, which means that the VO prefers the premium access, even if the ad platform leaves a large proportion of the advertising revenue to the VO. Hence, when λ is large, it is optimal for the ad platform to set a large δ* to take a large fraction of the advertising revenue.

Fig. 7 is a contour plot illustrating the ad platform's revenue. We observe that the ad platform obtains a large Π^APL from the venue when γ is large (γ > 0.9) and λ is small (1.2 < λ < 1.8). This parameter combination corresponds to a venue with a small equilibrium indicator Ω. According to Proposition 5, in this case, the VO chooses the highest Wi-Fi price, i.e., p_f^*(δ*) = βθ_max, and hence all MUs choose the advertising sponsored access. As a result, the total advertising revenue is large. Furthermore, based on Fig. 6, the ad platform sets a large sharing ratio (δ* > 0.8) in this case to extract most of the advertising revenue.
We summarize the observations in Fig. 6 and 7 as follows.
Observation 2. The ad platform's optimal revenue sharing ratio δ* first decreases and then increases with λ. Furthermore, it obtains a large Π^APL at the venue with both a large γ and a small λ.

C. VO's Revenue with Different (γ, λ)
We investigate the VO's revenue from the advertising sponsored access Π VO a , its revenue from the premium access Π VO f , and its total revenue Π VO = Π VO a + Π VO f at venues with different (γ, λ) pairs.
In Fig. 8, we show the contour plot of Π VO a . We observe that a VO with γ > 0.4 and 3.5 < λ < 3.7 has a large Π VO a . Based on (25) and Proposition 5, the total advertising revenue at the equilibrium is aN ϕ a p * f (δ * ) g (λ, γ, η). From Proposition 7 (iv) and (26), we can show that both ϕ a p * f (δ * ) and g (λ, γ, η) are non-decreasing in γ. Therefore, the total advertising revenue at the equilibrium is non-decreasing in γ. Moreover, from Fig. 6, the ad platform chooses a relatively small δ * (i.e., δ * < 0.7) at the venue with γ > 0.4 and 3.5 < λ < 3.7, and hence the VO obtains a large proportion of the total advertising revenue.
In Fig. 9, we show the contour plot of Π VO f . We find that Π VO f is non-decreasing in λ. The reasons are twofold. First, as λ increases, the MUs visit the venue more frequently, and the expected number of time segments requested by the MUs increases. Second, according to Proposition 8 (iv), the proportion of MUs choosing the premium access is nondecreasing in λ.
In Fig. 10, we show the contour plot of Π VO , which is the summation of Π VO a in Fig. 8 and Π VO f in Fig. 9. We find that the VO with both a large γ (γ > 0.4) and a medium λ (3.5 < λ < 3.9) and the VO with a large λ have large Π VO . According to Fig. 8, the former VO mainly generates its revenue from the advertising sponsored access. According to Fig. 9, the latter VO mainly generates its revenue from the premium access.
We summarize the key observations in Fig. 8, 9, and 10 as follows.
Observation 3. The VO with both a large γ and a medium λ has a large total revenue, which is mainly generated from the advertising sponsored access. The VO with a large λ also has a large total revenue, which is mainly generated from the premium access.

D. ADs' Payoffs with Different (γ, λ)

We investigate the ADs' payoffs at venues with different (γ, λ) pairs. In Fig. 11, we plot the ADs' payoffs Π^AD against the AD type σ under different values of γ and λ. We can observe that the ADs with higher popularities (i.e., smaller σ) have higher payoffs. When comparing curves with the same λ = 1 and different values of γ (0.5 and 1), we find that the increase of the concentration level γ makes ADs with small values of σ even more popular, and hence increases their payoffs. ADs with large values of σ will have smaller payoffs accordingly. When comparing curves with the same γ = 0.5 and different values of λ (1, 4, and 7), we observe that ADs' payoffs first increase and then decrease with λ. According to (6), the increase of visiting frequency λ affects Π^AD in two aspects: (a) from (19), the advertising price p_a^∞ becomes cheaper, which encourages the ADs to buy more advertising spaces and hence potentially increases Π^AD; (b) the VO decreases the Wi-Fi price p_f^*(δ*) to attract MUs to the premium access, hence the proportion of MUs choosing the advertising sponsored access, i.e., ϕ_a(p_f^*(δ*)), becomes smaller, which potentially decreases Π^AD. In Fig. 11, impact (a) dominates when λ increases from 1 to 4, and impact (b) dominates when λ increases from 4 to 7.
We summarize the key observation in Fig. 11 as follows.
Observation 4. ADs obtain large payoffs Π AD at the venue with a medium λ, and their payoffs decrease with the index σ.
E. Social Welfare with Different (γ, λ)
We study the impacts of parameters γ and λ on the social welfare, and show the contour plot of the social welfare in Fig. 12.
First, we observe that the social welfare is non-decreasing in the advertising concentration level γ. From (41), we can prove that the social welfare is independent of γ for γ ≥ 2η λ , which is consistent with the observation here.
Second, we discuss the influence of the MU visiting frequency λ. According to (41), the increase of parameter λ has the following three impacts on the social welfare. First, each MU requires more time segments for the Wi-Fi connection, which increases the MUs' total utility. Second, as shown in Proposition 8 (iv), more MUs choose the premium access. In this case, fewer MUs need to watch the advertisements (i.e., ϕ_a(p_f^*(δ*)) decreases), which increases the MUs' total utility. Third, since more MUs choose the premium access instead of the advertising sponsored access, the ADs' total utility decreases. For most parameter settings, the first two impacts play the dominant roles. In Fig. 12, we can observe that the social welfare always increases with λ. However, under a few extreme parameter settings (e.g., large unit advertising profit a and utility reduction factor β), the third impact plays the dominant role, and the social welfare may decrease with λ in the medium λ regime. We provide a related example in our technical report [23].
We summarize the key observation in Fig. 12 as follows.
Observation 5. The social welfare is always non-decreasing in γ. Moreover, the social welfare is increasing in λ, excluding the medium λ regime.
F. Uniform Advertising Revenue Sharing Policy δ U
In Section II-A, we assumed that the ad platform can set different advertising revenue sharing ratios for different VOs. This, however, may not be desirable in practice due to the fairness consideration. In Fig. 13, we consider a more practical case, where the ad platform chooses a uniform advertising revenue sharing ratio δ_U ∈ [0, 1 − ε] for all VOs.
We assume that VOs have uniformly distributed γ and λ (γ ∼ U [0.01, 1], λ ∼ U [0.1, 15]), and are identical in other parameters. We formulate the ad platform's problem as follows.
Problem 5. The ad platform decides δ * U by solving 26
$$\begin{aligned} \max \quad & \mathbb{E}_{\gamma,\lambda}\!\left[ \delta_U\, a N \phi_a\!\left(p_f^*(\delta_U)\right) g(\lambda, \gamma, \eta) \right] && (43)\\ \text{var.} \quad & 0 \le \delta_U \le 1 - \epsilon, && (44) \end{aligned}$$
where p * f (δ U ) is the VO's optimal Wi-Fi pricing response under revenue sharing ratio δ U , and is given in (31).
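Problem 5 can be solved numerically, for example, by a simple grid search over δ_U with the expectation approximated by Monte Carlo sampling of (γ, λ). The sketch below (ours, with hypothetical names) reuses wifi_price and g_factor from the earlier sketches, again assumes the linear split ϕ_a(p_f) = p_f/(βθ_max) from (8), and drops the common factor N, which only scales the objective.

```python
import random

def uniform_sharing_ratio(a, eta, beta, theta_max, eps, n_vo=10000, grid=99):
    """Grid search for delta_U^* of Problem 5; the expectation over (gamma, lambda)
    is approximated by Monte Carlo sampling of the venues."""
    venues = [(random.uniform(0.01, 1.0), random.uniform(0.1, 15.0)) for _ in range(n_vo)]
    best_delta, best_rev = 0.0, -1.0
    for k in range(grid + 1):
        delta_u = (1.0 - eps) * k / grid
        rev = 0.0
        for gamma, lam in venues:
            pf = wifi_price(delta_u, a, lam, gamma, eta, beta, theta_max)
            phi_a = min(pf / (beta * theta_max), 1.0)       # MU split, assumed from (8)
            rev += delta_u * a * phi_a * g_factor(lam, gamma, eta)
        if rev > best_rev:
            best_delta, best_rev = delta_u, rev
    return best_delta

print(uniform_sharing_ratio(a=4.0, eta=1.0, beta=0.1, theta_max=1.0, eps=0.01))
```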
We consider 10,000 VOs. By solving Problem 5 numerically, we obtain the optimal δ*_U = 0.81. (We obtain the objective function in (43) by taking the expectation of the ad platform's revenue Π^APL(δ) in (32) with respect to γ and λ.) Fig. 13 is a contour figure illustrating the VO's total revenue Π^VO with different values of γ and λ under δ*_U = 0.81. Next we compare the results in Fig. 13 (the uniform revenue sharing case) with those in Fig. 10 (the VO-specific revenue sharing case).

Footnote 27: Here, we only show the impact of δ*_U on the VO's revenue. This is because it is obvious that the ad platform's revenue under δ*_U = 0.81 is not greater than its revenue under δ* in the VO-specific revenue sharing case. Furthermore, as shown in Section IV-A, the advertising price is independent of the ad platform's sharing policy. Therefore, the uniform advertising revenue sharing policy does not affect the ADs' payoffs.
First, we find that the VO's total revenue in Fig. 13 always increases with λ, while the VO's total revenue in Fig. 10 decreases with λ in some cases (e.g., when γ > 0.4 and 3.9 < λ < 5.6). This is because a larger λ implies that the MUs request more time segments of Wi-Fi connection and there are more advertising spaces. In the uniform revenue sharing case, the ad platform chooses the same sharing ratio, δ * U , for all venues. Hence, in Fig. 13, the VO's total revenue always increases with λ. In the VO-specific revenue sharing case, the ad platform's sharing ratio δ * increases with λ for some λ (as shown in Fig. 6). In this situation, the proportion of advertising revenue received by the VO decreases with λ, and hence the VO's total revenue in Fig. 10 may decrease with λ.
Second, a VO with a medium λ in Fig. 13 has a smaller total revenue than that in Fig. 10. Moreover, a VO with a large λ in Fig. 13 has a larger total revenue than that in Fig. 10. These are consistent with the comparison between δ*_U here (the uniform revenue sharing case) and δ* in Fig. 6 (the VO-specific revenue sharing case). The VOs with δ*_U > δ* obtain smaller proportions of the advertising revenue in the uniform revenue sharing case, so their revenues decrease. Otherwise, they obtain larger proportions of the advertising revenue in the uniform revenue sharing case, which increases their revenue.
We summarize the key observations in Fig. 13 as follows.
Observation 6. The VO's revenue under the uniform revenue sharing policy increases with λ. Compared with the VOspecific revenue sharing policy, the uniform revenue sharing policy increases the revenue of the VO with a large λ, and decreases the revenue of the VO with a medium λ.
IX. CONCLUSION
In this work, we studied the public Wi-Fi monetization problem, and analyzed the economic interactions among the ad platform, VOs, MUs, and ADs through a three-stage Stackelberg game. Our analysis led to several important observations: (a) the ad platform's advertising revenue sharing policy affects the VOs' Wi-Fi prices but not the VOs' advertising prices; (b) the advertising concentration level γ and the MU visiting frequency λ have the opposite impacts on equilibrium outcomes; (c) the ad platform obtains large revenues at the venues with both large γ and small λ; and (d) the VOs with both large concentration level and medium MU visiting frequency and the VOs with large MU visiting frequency obtain large revenues.
In our future work, we plan to relax the assumptions of the uniformly distributed MU types and AD types, and also consider MUs and ADs with multi-dimensional heterogeneity. For example, the MUs can have heterogeneous utility reduction factors β, besides the heterogeneous Wi-Fi access valuations θ. The ADs can have heterogeneous unit advertising profits a, besides the heterogeneous popularity indexes σ. According to [24], the optimal pricing problem for multi-dimensional heterogeneous buyers is generally much more challenging than that for single-dimensional heterogeneous buyers. Moreover, the VOs can organize auctions and let the ADs bid for the ad spaces. In this situation, the VOs should allocate the ad spaces to the ADs based on the ADs' bids and advertising budgets. We are interested in applying the auction-based framework (instead of the pricing-based framework in this paper) to study the trading of the ad spaces, and investigating the corresponding influence on the equilibrium outcomes.
APPENDIX
A. Randomized Implementation of m*(σ, p_a, p_f)

First, we explain the implementation of a non-integer m*(σ, p_a, p_f). When m*(σ, p_a, p_f) is not an integer, the type-σ AD purchases the ad spaces in a randomized manner such that the expected number of purchased ad spaces equals m*(σ, p_a, p_f). Specifically, we define
$$m_{\mathrm{floor}} \triangleq \left\lfloor m^*(\sigma, p_a, p_f) \right\rfloor, \tag{45}$$
$$m_{\mathrm{ceil}} \triangleq \left\lceil m^*(\sigma, p_a, p_f) \right\rceil, \tag{46}$$
$$\kappa \triangleq m^*(\sigma, p_a, p_f) - \left\lfloor m^*(\sigma, p_a, p_f) \right\rfloor. \tag{47}$$
Here, m_floor is the largest integer no greater than m*(σ, p_a, p_f), m_ceil is the smallest integer no smaller than m*(σ, p_a, p_f), and κ ∈ [0, 1) is the fractional part of m*(σ, p_a, p_f). Under the randomized purchasing strategy, the AD purchases m_floor and m_ceil ad spaces with probabilities 1 − κ and κ, respectively. In this case, the number of purchased ad spaces is always an integer (either m_floor or m_ceil), and the expected number of purchased ad spaces equals m*(σ, p_a, p_f).

Second, we show that letting the ADs implement the non-integer m*(σ, p_a, p_f) in a randomized manner does not affect our analysis for the ad platform's, VO's, and MUs' equilibrium strategies. Under the randomized implementation, a type-σ AD's expected number of purchased ad spaces is always m*(σ, p_a, p_f). According to equation (13), the randomized implementation does not affect the expected total number of ad spaces sold to all ADs, Q(p_a, p_f). Therefore, the randomized implementation does not change Problem 2's analysis, and hence it does not change any of the later analysis for Stage II and Stage I either.

Third, we show that the randomized implementation may reduce the ADs' payoffs, but such an influence is minor. According to (6), a type-σ AD's payoff function Π^AD(σ, m, p_f, p_a) is concave in m. We use Π^AD_Rand(σ, m*(σ, p_a, p_f), p_f, p_a) to denote the AD's expected payoff under the randomized implementation, which can be computed as follows:
$$\Pi^{\mathrm{AD}}_{\mathrm{Rand}}(\sigma, m^*(\sigma, p_a, p_f), p_f, p_a) = (1-\kappa)\, \Pi^{\mathrm{AD}}(\sigma, m_{\mathrm{floor}}, p_f, p_a) + \kappa\, \Pi^{\mathrm{AD}}(\sigma, m_{\mathrm{ceil}}, p_f, p_a). \tag{48}$$
Due to the concavity of Π AD (σ, m, p f , p a ), we have Π AD Rand (σ, m * (σ, p a , p f ) , p f , p a ) ≤ Π AD (σ, m * (σ, p a , p f ) , p f , p a ) . (49)
That is to say, the randomized implementation may reduce the AD's payoff. However, we show that such an influence is minor. We define
$$\tau(\sigma) \triangleq 1 - \frac{\Pi^{\mathrm{AD}}_{\mathrm{Rand}}\!\left(\sigma,\, m^*\!\left(\sigma, p_a^\infty, p_f^*(\delta^*)\right),\, p_f^*(\delta^*),\, p_a^\infty\right)}{\Pi^{\mathrm{AD}}\!\left(\sigma,\, m^*\!\left(\sigma, p_a^\infty, p_f^*(\delta^*)\right),\, p_f^*(\delta^*),\, p_a^\infty\right)}, \tag{50}$$
which characterizes the relative reduction of a type-σ AD's payoff due to the randomized implementation. Next we show the value of τ(σ) through numerical experiments. We consider a period of one week, and assume that N = 1000 and λ = 4. Hence, there are 1000 MUs, and on average each MU visits the venue four times during the week. For the remaining parameters, we choose θ_max = 1, β = 0.1, γ = 0.5, η = 1, a = 4, and ε = 0.01. We first compute the equilibrium advertising price p_a^∞ from (19) and the equilibrium Wi-Fi price p_f^*(δ*) from (39). Based on (12), the threshold AD type is σ_T(p_a^∞) = 4, which implies that only the ADs with σ ∈ [0, 4) obtain positive payoffs.
In Fig. 14, we plot τ (σ) against σ ∈ [0, 4]. We observe that τ (σ) is very small for most values of σ (except when σ is very close to 4). For example, we have τ (σ) < 10 −4 for σ ∈ [0, 3.9] and τ (σ) < 1.1 × 10 −2 for σ ∈ [0, 3.99]. Therefore, the influence of the randomized implementation on the ADs' payoffs is minor. To understand this, we plot m * σ, p ∞ a , p * f (δ * ) against σ ∈ [0, 4] in Fig. 15. We can observe that m * σ, p ∞ a , p * f (δ * ) is a large number for most values of σ (except when σ is very close to 4). In this case, the randomized implementation of a non-integer m * σ, p ∞ a , p * f (δ * ) does not significantly change the AD's payoff, and hence the value of τ (σ) is small.
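The randomized purchasing rule described above amounts to the following one-line sampler (an illustrative sketch, not the authors' implementation); by construction, its expectation equals m_star whether or not m_star is an integer.

```python
import math
import random

def randomized_purchase(m_star):
    """Draw an integer number of ad spaces whose expectation equals m_star (eqs. (45)-(47))."""
    m_floor, m_ceil = math.floor(m_star), math.ceil(m_star)
    kappa = m_star - m_floor
    return m_ceil if random.random() < kappa else m_floor
```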
Fig. 1: Public Wi-Fi monetization: (a) the premium access: VOs directly charge MUs; (b) the advertising sponsored access: VOs sell the ad spaces to ADs via the ad platform, and ADs broadcast their advertisements to MUs.
Fig. 2: Illustration of Wi-Fi Access.
Fig. 3: Comparison of Venues with Different γ.
Fig. 4: Three-Stage Stackelberg Game.
Fig. 7: Ad Platform's Revenue Π^APL.
Fig. 10: VO's Total Revenue Π^VO.
Fig. 13: Uniform Ad Revenue Sharing Policy: VO's Total Revenue Π^VO.
Fig. 14: ADs' Relative Payoff Reduction.
Fig. 15: ADs' Optimal Purchasing Strategies.
TABLE I: Key Notations.
Since our work focuses on studying the heterogeneity of ADs' popularities, we assume a is homogeneous for all ADs at the venue.
(12), σ T (p a ) = σ max . Hence, we can rewrite Q (p a , p f ) in(13)asBy using the assumption 1 γ ln aγ pa > σ max , we have the following relation:In Case A, we have the relation σmaxγ 2 ≥ λ M . Hence, we can derive the following relation from (52):This contradicts with constraint(16)in Problem 2. Therefore, we have proved that 1 γ ln aγ pa ≤ σ max .Based on 1 γ ln aγ pa ≤ σ max and σ T (p a )'s definition in(12), we have σ T (p a ) = 1 γ ln aγ pa . By plugging σ T (p a ) =We can prove that the objective function (54) is unimodal: it increases with p a for p a ∈ 0, aγe −2 and decreases with p a for p a ∈ aγe −2 , aγ . By considering constraint (55), we can show the solution to Problem 2 as follows:2) Case B: σmaxγ 2 < λ M : In this case, σ T (p a ) has different expressions for the regime p a ∈ [0, aγe −γσmax ] and regime p a ∈ (aγe −γσmax , aγ]. Next we discuss the VO's optimal pricing in these two regimens separately.Regime 1: p a ∈ [0, aγe −γσmax ]: In this regime, we can show that 1 γ ln aγ pa ≥ σ max . From(12), σ T (p a ) = σ max . By plugging σ T (p a ) = σ max into the expression of Q (p a , p f ) in16(13), we can rearrange (15)-(17) as follows:We can prove that the objective function (59) is unimodal: it increases for p a ∈ 0, aγe −( γσmax 2 +1) and decreases for p a ∈ aγe −( γσmax 2 +1) , aγ −γσmax . By considering (60) and(61), we show the optimal p * a and the corresponding revenue Π VO a (p f , p * a , δ) in Regime 1 as follows:Regime 2: p a ∈ (aγe −γσmax , aγ]: In this regime, we have 1 γ ln aγ pa < σ max . From(12), σ T (p a ) = 1 γ ln aγ pa . By plugging it into the expression of Q (p a , p f ) in (13), we can rearrange (15)-(17) as follows:The objective function (62) is the same as (54), which increases with p a for p a ∈ 0, aγe −2 and decreases with p a for p a ∈ aγe −2 , aγ . By considering (63) and (64), we show the optimal p * a and the corresponding revenue Π VO a (p f , p * a , δ) in Regime 2 as follows:. Combination of Regime 1 and Regime 2: Next we combine the optimal solutions in Regime 1 and Regime 2. Based on the comparison of Π VO a (p f , p * a , δ) in Regime 1 and Regime 2, we can show that:3) Combination of Case A and Case B: Finally, we show the solution based on the analysis in Case A and Case B. From (58) and (65), we can obtainwhich completes the proof of Proposition 1.C. Proof of Proposition 2Proof. The VO's optimal advertising price under general σ max and M is given in(18). With Assumption 1, we have σ max → ∞. Hence, the conditions γσmaxcan be simplified as λ M ≤ 2 γσmax . Using η = lim M,σmax→∞ M σmax , we can further rewrite this condition as λ ≤ 2η γ . Therefore, the optimal advertising price under Assumption 1 is simplyThis completes the proof of Proposition 2.D. Proof of Proposition 3Proof. The objective function(29)is a quadratic function of p f . We can show that the objective function increases withand decreases withHence, we can compute the optimal Wi-Fi price under the sharing policy δ asWe can rearrange(68)aswhich completes the proof of Proposition 3.E. Proofs of Proposition 4 and Proposition 5Proof. We discuss the ad platform's sharing policy and the VO's Wi-Fi price at the equilibrium in the following four cases: Ω ∈ (0, ], Ω ∈ ,1 3,We can easily show that at the equilibrium the optimal sharing policy is δ * = 1 − , and the VO's Wi-Fi price is p * f (δ * ) = βθ max .2) Case B: Ω ∈ , 1 3 : In this case, we discuss δ ∈ [0, 1 − Ω] and δ ∈ (1 − Ω, 1 − ] separately.(a) When δ ∈ [0, 1 − Ω], we can show p * f (δ) = βθ max based on (31). 
From (32), the ad platform's revenue is Π APL (δ) = δaN g (λ, γ, η). Hence, we can easily obtain the17optimal sharing policy δ * = 1 − Ω, and compute the corresponding revenue asbased on (31). According to (32), we can compute the ad platform's revenue asThis is a quadratic function of δ, and we can prove that the ad platform's revenue is always below ag (λ, γ, η) N −λβθ max N for all δ ∈ (1 − Ω, 1 − ]. Summarizing (a) and (b), we show that in Case B, the optimal sharing policy is δ * = 1 − Ω, and the VO's Wi-FiIn this case, we discuss δ ∈ [0, 1 − Ω] and δ ∈ (1 − Ω, 1 − ] separately.(a) When δ ∈ [0, 1 − Ω], the analysis is the same as item (a) of Case B. The ad platform's optimal sharing policy is δ * = 1 − Ω, and the corresponding revenue is Π APL (δ * ) = ag (λ, γ, η) N − λβθ max N .(b) When δ ∈ (1 − Ω, 1 − ], the ad platform's revenue function Π APL (δ) is the same as (72). We can easily show that function Π APL (δ) achieves the maximum value at point δ = 1+Ω 2 . Furthermore, from Ω ∈ 1 3 , 1 − 2 (condition of Case C), we can prove that 1+Ω 2 ∈ (1 − Ω, 1 − ). Therefore, the ad platform's optimal sharing policy is δ * = 1+Ω 2 , and the corresponding revenue is Π APL (δ * ) = ag(λ,γ,η)N 2 (Ω+1) 2 4Ω . Next we summarize (a) and (b). We can show that the value of Π APL (δ * ) in (b) is greater than that in (a). As a result, in Case C, the optimal sharing policy is δ * = 1+Ω 2 , and the corresponding Wi-Fi price can be computed as p * f (δ * ) = (a) When δ ∈ [0, 1 − Ω], the analysis is the same as item (a) of Case B. The ad platform's optimal sharing policy is δ * = 1 − Ω, and the corresponding revenue is Π APL (δ * ) = ag (λ, γ, η) N − λβθ max N .(b) When δ ∈ (1 − Ω, 1 − ], the ad platform's revenue function Π APL (δ) is the same as (72). We can easily prove that Π APL (δ) increases with δ for δ ∈ (1 − Ω, 1 − ]. Hence, the optimal sharing policy is δ * = 1− , and the corresponding revenue is Π APL (δ * ) = aN g(λ,γ,η) 2 (1 − ) 1 + Ω . Next we summarize (a) and (b). We can show that the value of Π APL (δ * ) in (b) is greater than that in (a). As a result, in Case D, the ad platform's optimal sharing policy is δ * = 1 − , and the corresponding Wi-Fi price can be computed as. Summarizing Case A, Case B, Case C, and Case D, we complete the proofs of Proposition 4 and Proposition 5.F. Proof of Proposition 6Proof. First, we compute the total utility of MUs. If a type-θ MU chooses the premium access, its utility for connecting Wi-Fi for one time segment is θ; otherwise, its utility for connecting Wi-Fi for one time segment is (1 − β) θ. Under the VO's Wi-Fi price p * f (δ * ), the MUs with types in 0, θ T p * f (δ * ) choose the advertising sponsored access, and the MUs with types in θ T p * f (δ * ) , θ max choose the premium access. Therefore, we can compute the MUs' total utility asSecond, we compute the total utility of ADs. Under the VO's advertising price p ∞ a , only the ADs with types in [0, σ T (p ∞ a )] purchase the ad spaces, and the amounts of purchased ad spaces are given in(11). Hence, we can compute the ADs' total utility asSince the social welfare equals the total utility of MUs and ADs, we obtain the social welfare by combining the MUs' total utility in (73) and the AD's total utility in (74). This completes the proof.G. Proof of Proposition 7 1) Proof of Item (i):The VO's advertising price p ∞ a is given in(19). We find that p ∞ a is continuous in γ for γ ∈ (0, ∞). Furthermore, by checking ∂p ∞ a ∂γ , we can show that p ∞ a is increasing in γ for γ ∈ 0, 2η λ and γ ∈ 2η λ , ∞ . 
Hence, the VO's advertising price p ∞ a is increasing in γ for γ ∈ (0, ∞). 2) Proof of Item (ii): According to(21)and(23), ρ (p ∞ a ) is continuous at point γ = 2η λ . Furthermore, we can prove that ρ (p ∞ a ) is decreasing in γ for γ ∈ 0, 2η λ and γ ∈ 2η λ , ∞ . Hence, we can show that ρ (p ∞ a ) is decreasing in γ for γ ∈ (0, ∞).3) Proof of Item (iii): First, we show some properties for λβθmax ag(λ,γ,η) . Based on (26), we can show that λβθmax ag(λ,γ,η) is continuous in γ for γ ∈ (0, ∞). Furthermore, we can show that λβθmax ag(λ,γ,η) is strictly decreasing in γ for γ ∈ 0, 2η λ , and does not change with γ for γ ∈ 2η λ , ∞ . We can also prove that lim γ→0 + λβθmax ag(λ,γ,η) = ∞ (i.e., for any V > 0, there exists a ξ > 0 such that λβθmax ag(λ,γ,η) > V for all γ ∈ (0, ξ)) and λβθmax ag(λ,γ,η) = λβθmax a2ηe −2 for all γ ∈ 2η λ , ∞ . Therefore, for any value W ∈ λβθmax a2ηe −2 , ∞ , we can always find a unique γ 0 ∈ (0, ∞) such that λβθmax ag(λ,γ0,η) = W . We can prove Item (iii) by considering the following three situations separately: λβθmax a2ηe −2 < 1 3 , 1 3 ≤ λβθmax a2ηe −2 < 1 − 2 , and λβθmax a2ηe −2 ≥ 1 − 2 . Next we discuss the situation where λβθmax a2ηe −2 < 1 3 . The situations where 1 3 ≤ λβθmax a2ηe −2 < 1 − 2 and λβθmax a2ηe −2 ≥ 1 − 2 can be analyzed in similar approaches. Since ∈ 0, 1 3 , we have 1 3 < 1 − 2 . When λβθmax a2ηe −2 < 1 3 , we have λβθmax a2ηe −2 < 1 3 < 1 − 2 . Based on the analysis of λβθmax ag(λ,γ,η) above, we can always find unique γ 1 and γ 2 such that λβθmax ag(λ,γ1,η) = 1 − 2 and λβθmax ag(λ,γ2,η) = 1 3 . Moreover, we have γ 1 < γ 2 .Based on the monotonicity of λβθmax ag(λ,γ,η) , we can show that λβθmax ag(λ,γ,η) ∈ [1 − 2 , ∞) for γ ∈ (0, γ 1 ], λβθmax ag(λ,γ,η) ∈ 1 3 , 1 − 2 for γ ∈ (γ 1 , γ 2 ), and λβθmax ag(λ,γ,η) ∈ λβθmax a2ηe −2 , 1 3 for γ ∈ [γ 2 , ∞). Notice that Ω = λβθmax ag(λ,γ,η) . From Proposition 5, we can easily derive thatAccording to (26), g (λ, γ, η) is non-decreasing in γ for γ ∈ (0, ∞). Hence, from (75), we can easily show that p * f (δ * ) is non-decreasing in γ for γ ∈ (0, γ 1 ], (γ 1 , γ 2 ), or [γ 2 , ∞). Moreover, from (75), p * f (δ * ) is continuous at point γ = γ 1 and point γ = γ 2 . Therefore, we show that p * f (δ * ) is nondecreasing in γ for γ ∈ (0, ∞).For situations 1 3 ≤ λβθmax a2ηe −2 < 1 − 2 and λβθmax a2ηe −2 ≥ 1 − 2 , we can apply similar approaches and show that p * f (δ * ) is also non-decreasing in γ for γ ∈ (0, ∞).4) Proof of Item (iv): Based on(8),(19), we can show that p ∞ a is continuous in λ for λ ∈ (0, ∞). Moreover, it is decreasing in λ for λ ∈ 0, 2η γ , and is independent of λ for λ ∈ 2η γ , ∞ . Therefore, the VO's advertising price p ∞ a is non-increasing in λ for λ ∈ (0, ∞).2) Proof of Item (ii): Based on(21)and(23), ρ (p ∞ a ) is continuous at point λ = 2η γ . Furthermore, ρ (p ∞ a ) is increasing in λ for λ ∈ 0, 2η γ , and is independent of λ for λ ∈ 2η γ , ∞ . Hence, we can show that ρ (p ∞ a ) is nondecreasing in λ for λ ∈ (0, ∞).3) Proof of Item (iii): Based on (26), we can rewrite λβθmax ag(λ,γ,η) asFrom (76), we can easily show that λβθmax ag(λ,γ,η) is continuous and strictly increasing in λ for λ ∈ (0, ∞). Furthermore, we can find that lim λ→0 + λβθmax ag(λ,γ,η) = βθmax aγ and lim λ→∞ λβθmax ag(λ,γ,η) = ∞. Therefore, for any value W ∈ βθmax aγ , ∞ , we can find a unique λ ∈ (0, ∞) such that λβθmax ag(λ,γ,η) = W . 
We can prove Item (iii) by considering the following three situations separately: βθmax aγ <Based on the analysis above, we can always find unique λ 1 and λ 2 such that λβθmax g(λ1,γ,η) = 1 3 and λβθmax g(λ2,γ,η) = 1 − 2 . Also, we can show that λ 1 < λ 2 . Based on the monotonicity of λβθmax ag(λ,γ,η) , we can show that λβθmax ag(λ,γ,η) ∈ βθmax aγ , 1 3 for λ ∈ (0, λ 1 ], λβθmax ag(λ,γ,η) ∈ 1 3 , 1 − 2 for λ ∈ (λ 1 , λ 2 ), and λβθmax ag(λ,γ,η) ∈ [1 − 2 , ∞) for λ ∈ [λ 2 , ∞). From Proposition 5, we can easily derive thatFrom (26), we can easily show that g(λ,γ,η) λ is decreasing in λ. Based on (77), we can prove that p * f (δ * ) is non-increasing in λ for λ ∈ (0, λ 1 ], (λ 1 , λ 2 ), or [λ 2 , ∞). Moreover, we can prove that p * f (δ * ) is continuous at point λ = λ 1 and point λ = λ 2 . Therefore, we show that p * f (δ * ) is non-increasing in λ for λ ∈ (0, ∞). 4) Proof of Item (iv): According to(8), we have ϕ f (p f ) = 1 − p f βθmax for p f ∈ [0, βθ max ]. Since we have shown that p * f (δ * ) is non-increasing in λ, we can show that ϕ f p * f (δ * ) is non-decreasing in λ for λ ∈ (0, ∞).I. Example of λ's Impact on SWIn this section, we show an example where the social welfare may decrease with λ for some medium λ. We choose N = 200, θ max = 1, β = 0.8, η = 1, a = 20, = 0.01, and γ = 0.8. We change λ from 0.01 to 15, and plot the social welfare against λ inFig. 16. We can observe that the social welfare decreases with λ for 2.3 < λ < 6.6.
REFERENCES

H. Yu, M. H. Cheung, L. Gao, and J. Huang, "Economics of public Wi-Fi monetization and advertising," in Proc. of IEEE INFOCOM, San Francisco, CA, April 2016.
Cisco, "Cisco visual networking index: Global mobile data traffic forecast update, 2015-2020," Tech. Rep., February 2016.
Wireless Broadband Alliance, "Carrier Wi-Fi: State of the market 2014," Tech. Rep., November 2014.
--, "Location based services (LBS) over Wi-Fi," March 2015.
L. Duan, J. Huang, and B. Shou, "Pricing for local and global Wi-Fi markets," IEEE Transactions on Mobile Computing, vol. 14, no. 5, pp. 1056-1070, May 2015.
J. Musacchio and J. Walrand, "WiFi access point pricing as a dynamic game," IEEE/ACM Transactions on Networking, vol. 14, no. 2, pp. 289-301, April 2006.
H. Yu, M. Cheung, and J. Huang, "Cooperative Wi-Fi deployment: A one-to-many bargaining framework," IEEE Transactions on Mobile Computing, August 2016.
L. Gao, G. Iosifidis, J. Huang, L. Tassiulas, and D. Li, "Bargaining-based mobile data offloading," IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1114-1125, June 2014.
G. Iosifidis, L. Gao, J. Huang, and L. Tassiulas, "A double-auction mechanism for mobile data-offloading markets," IEEE/ACM Transactions on Networking, vol. 23, no. 5, pp. 1634-1647, October 2015.
M. H. Manshaei, J. Freudiger, M. Félegyházi, P. Marbach, and J. P. Hubaux, "On wireless social community networks," in Proc. of IEEE INFOCOM, Phoenix, AZ, April 2008, pp. 2225-2233.
M. Afrasiabi and R. Guérin, "Choice-based pricing for user-provided connectivity?" ACM SIGMETRICS Performance Evaluation Review, vol. 43, no. 3, pp. 63-66, December 2015.
Q. Ma, L. Gao, Y.-F. Liu, and J. Huang, "Economic analysis of crowdsourced wireless community networks," IEEE Transactions on Mobile Computing, September 2016.
--, "A contract-based incentive mechanism for crowdsourced wireless community networks," in Proc. of IEEE WiOpt, Tempe, AZ, May 2016.
Y. Gao, X. Zhang, X. Mo, and L. Gao, "An evolutionary game theoretic analysis for crowdsourced WiFi networks," in Proc. of IEEE ICC, Paris, France, May 2017.
D. Bergemann and A. Bonatti, "Targeting in advertising markets: Implications for offline versus online media," The RAND Journal of Economics, vol. 42, no. 3, pp. 417-443, 2011.
J. P. Johnson, "Targeted advertising and advertising avoidance," The RAND Journal of Economics, vol. 44, no. 1, pp. 128-144, 2013.
A. Ghosh, M. Mahdian, R. P. McAfee, and S. Vassilvitskii, "To match or not to match: Economics of cookie matching in online advertising," ACM Transactions on Economics and Computation, vol. 3, no. 2, p. 12, 2015.
N. Shetty, S. Parekh, and J. Walrand, "Economics of femtocells," in Proc. of IEEE GLOBECOM, Honolulu, HI, November 2009, pp. 1-6.
J. F. C. Kingman, Poisson processes. New York, NY, USA: Oxford University Press, 2010.
K. Bagwell, "The economic analysis of advertising," Handbook of Industrial Organization, vol. 3, pp. 1701-1844, 2007.
H. Yu, M. H. Cheung, L. Gao, and J. Huang, "Public Wi-Fi monetization via advertising," Technical Report, 2016.
J. J. Laffont, E. Maskin, and J. C. Rochet, "Optimal nonlinear pricing with two-dimensional characteristics," Information, Incentives and Economic Mechanisms, pp. 256-266, 1987.
|
[
"AN ONLINE MANIFOLD LEARNING APPROACH FOR MODEL REDUCTION OF DYNAMICAL SYSTEMS *",
"AN ONLINE MANIFOLD LEARNING APPROACH FOR MODEL REDUCTION OF DYNAMICAL SYSTEMS *"
]
| [
"Liqian Peng ",
"Kamran Mohseni "
]
| []
| []
| This article discusses a newly developed online manifold learning method, subspace iteration using reduced models (SIRM), for the dimensionality reduction of dynamical systems. This method may be viewed as subspace iteration combined with a model reduction procedure. Specifically, starting with a test solution, the method solves a reduced model to obtain a more precise solution, and it repeats this process until sufficient accuracy is achieved. The reduced model is obtained by projecting the full model onto a subspace that is spanned by the dominant modes of an extended data ensemble. The extended data ensemble in this article contains not only the state vectors of some snapshots of the approximate solution from the previous iteration but also the associated tangent vectors. Therefore, the proposed manifold learning method takes advantage of the information of the original dynamical system to reduce the dynamics. Moreover, the learning procedure is computed in the online stage, as opposed to being computed offline, which is used in many projection-based model reduction techniques that require prior calculations or experiments. After providing an error bound of the classical POD-Galerkin method in terms of the projection error and the initial condition error, we prove that the sequence of approximate solutions converge to the actual solution of the original system as long as the vector field of the full model is locally Lipschitz on an open set that contains the solution trajectory. Good accuracy of the proposed method has been demonstrated in two numerical examples, from a linear advection-diffusion equation to a nonlinear Burgers equation. In order to save computational cost, the SIRM method is extended to a local model reduction approach by partitioning the entire time domain into several subintervals and obtaining a series of local reduced models of much lower dimensionality. The accuracy and efficiency of the local SIRM are shown through the numerical simulation of the Navier-Stokes equation in a lid-driven cavity flow problem. | 10.1137/130927723 | [
"https://arxiv.org/pdf/1210.2975v2.pdf"
]
| 14,511,792 | 1210.2975 | 418ebf87dea83c922735e9083fe533efc8a04b7f |
AN ONLINE MANIFOLD LEARNING APPROACH FOR MODEL REDUCTION OF DYNAMICAL SYSTEMS *

Liqian Peng
Kamran Mohseni

22 Jul 2014

Key words. online manifold learning, subspace iteration, model reduction, local model reduction

AMS subject classifications. 78M34, 37M99, 65L99, 34C40, 74H15, 37N10

This article discusses a newly developed online manifold learning method, subspace iteration using reduced models (SIRM), for the dimensionality reduction of dynamical systems. This method may be viewed as subspace iteration combined with a model reduction procedure. Specifically, starting with a test solution, the method solves a reduced model to obtain a more precise solution, and it repeats this process until sufficient accuracy is achieved. The reduced model is obtained by projecting the full model onto a subspace that is spanned by the dominant modes of an extended data ensemble. The extended data ensemble in this article contains not only the state vectors of some snapshots of the approximate solution from the previous iteration but also the associated tangent vectors. Therefore, the proposed manifold learning method takes advantage of the information of the original dynamical system to reduce the dynamics. Moreover, the learning procedure is computed in the online stage, as opposed to being computed offline, which is used in many projection-based model reduction techniques that require prior calculations or experiments. After providing an error bound of the classical POD-Galerkin method in terms of the projection error and the initial condition error, we prove that the sequence of approximate solutions converge to the actual solution of the original system as long as the vector field of the full model is locally Lipschitz on an open set that contains the solution trajectory. Good accuracy of the proposed method has been demonstrated in two numerical examples, from a linear advection-diffusion equation to a nonlinear Burgers equation. In order to save computational cost, the SIRM method is extended to a local model reduction approach by partitioning the entire time domain into several subintervals and obtaining a series of local reduced models of much lower dimensionality. The accuracy and efficiency of the local SIRM are shown through the numerical simulation of the Navier-Stokes equation in a lid-driven cavity flow problem.
1. Introduction. The simulation, control, design, and analysis of many large-scale dynamical systems are often computationally intensive, if feasible at all, and require massive computing resources. The idea of model reduction is to provide an efficient computational prototyping tool that replaces a high-order system of differential equations with a system of substantially lower dimension, whereby only the most dominant properties of the full system are preserved. During the past several decades, several model reduction methods have been studied, such as Krylov subspace methods [4], balanced truncation [16,23,12], and proper orthogonal decomposition (POD) [16,13]. More techniques can be found in [3] and [2]. These model reduction methods are usually based on offline computations that build the empirical eigenfunctions of the reduced model before the reduced state variables are computed, and most of the time these offline computations are as complex as the original simulation. For these reasons, an efficient reduced model with high fidelity based on online manifold learning is preferable. However, much less effort has been expended in the field of model reduction via online manifold learning. In [22], an incremental algorithm involving adaptive periods was proposed; during these adaptive periods the incremental computation is restarted until a quality criterion is satisfied. In [14] and [17], state vectors from previous time steps are extracted to span a linear subspace in order to construct the reduced model for the next step. In [18], dynamic iteration using reduced-order models (DIRM) combines the waveform relaxation technique with model reduction: each subsystem is simulated while coupled to reduced versions of the other subsystems.
A new framework of iterative manifold learning, subspace iteration using reduced models (SIRM), is proposed in this article for the reduced modeling of high-order nonlinear dynamical systems. Similar to the well-known Picard iteration for solving ODEs, a trial solution is set at the very beginning. Using POD, a set of updated empirical eigenfunctions are constructed in each iteration by extracting dominant modes from an extended data ensemble; then, a more accurate solution is obtained by solving the reduced equation in a new subspace spanned by these empirical eigenfunctions. The extended data ensemble contains not only the state vectors of some snapshots of the trajectory in the previous iteration but also the associated tangent vectors. Therefore, the manifold learning process essentially takes advantage of the information from the original dynamical system. Both analytical results and numerical simulations indicate that a sequence of functions asymptotically converges to the solution of the full system. Moreover, the aforementioned method can be used to test (and improve) the accuracy of a trial solution of other techniques. A posterior error estimation can be estimated by the difference between the trial solution and a more precise solution obtained by SIRM.
The remainder of this article is organized as follows. Since algorithms in this article fall in the category of projection methods, the classic POD-Galerkin method and its ability to minimize truncation error are briefly reviewed in section 2. After presenting the SIRM algorithm in section 3, we provide convergence analysis, complexity analysis, and two numerical examples. Then SIRM is combined with the time domain partition in section 4, and a local SIRM method is proposed to decrease redundant dimensions. The performance of this technique is evaluated in a lid-driven cavity flow problem. Finally, conclusions are offered.
2. Background of Model Reduction. Let J = [0, T ] denote the time domain, x : J → R n denote the state variable, and f : J × R n → R n denote the discretized vector field. A dynamical system in R n can be described by an initial value problem
(2.1) $\dot{x} = f(t, x); \quad x(0) = x_0.$
By definition, x(t) is a flow that gives an orbit in R n as t varies over J for a fixed x 0 . The orbit contains a sequence of states (or state vectors) that follow from x 0 .
2.1. Galerkin projection.
For a k-dimensional linear subspace S in R n , there exists an n × k orthonormal matrix Φ = [φ 1 , ..., φ k ], the columns of which form a complete basis of S. The orthonormality of the column matrix requires that Φ T Φ = I, where I is an identity matrix. Any state x ∈ R n can be projected onto S by a linear projection. The projected state is given by Φ T x ∈ R k in the subspace coordinate system, where superscript T denotes the matrix transpose. Let P := ΦΦ T denote the projection matrix in R n . Then, the same projection in the original coordinate system is represented byx(t) := P x(t) ∈ R n .
Let $\Phi^T f(t, \Phi z)$ denote the reduced-order vector field formed by Galerkin projection. The corresponding reduced model for $z(t) \in R^k$ is
(2.2) $\dot{z} = \Phi^T f(t, \Phi z); \quad z_0 = \Phi^T x_0.$
An approximate solution in the original coordinate system, $\hat{x}(t) = \Phi z(t) \in R^n$, is equivalent to the solution of the following ODE:
(2.3) $\dot{\hat{x}} = P f(t, \hat{x}); \quad \hat{x}_0 = P x_0.$
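To make the projection step concrete, the following minimal sketch (Python/NumPy/SciPy; the function names, the random linear system, and the randomly chosen basis are illustrative assumptions, not material from the paper) builds the reduced vector field $\Phi^T f(t, \Phi z)$ of (2.2), integrates it, and maps the result back to $R^n$ as in (2.3).

```python
import numpy as np
from scipy.integrate import solve_ivp

def galerkin_reduced_rhs(Phi, f):
    """Reduced vector field z -> Phi^T f(t, Phi z), cf. (2.2).

    Phi : (n, k) matrix with orthonormal columns spanning the subspace S.
    f   : callable f(t, x) returning the full vector field in R^n.
    """
    def rhs(t, z):
        return Phi.T @ f(t, Phi @ z)
    return rhs

# Illustrative use on a stable linear system x' = A x.
rng = np.random.default_rng(0)
n, k = 200, 10
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
f = lambda t, x: A @ x

Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))   # some orthonormal basis of S
x0 = rng.standard_normal(n)
z0 = Phi.T @ x0                                      # z_0 = Phi^T x_0
sol = solve_ivp(galerkin_reduced_rhs(Phi, f), (0.0, 1.0), z0, rtol=1e-8)
x_hat = Phi @ sol.y                                  # approximate solution Phi z(t) in R^n
```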
It is well-known that the existence and uniqueness of a solution for system (2.1) can be proved by the Picard iteration.
Lemma 2.1 (Picard-Lindelöf existence and uniqueness [15]). Suppose there is a closed ball $B_b(x_0)$ of radius $b$ around a point $x_0 \in R^n$ such that $f : J_a \times B_b(x_0) \to R^n$ is a uniformly Lipschitz function of $x \in B_b(x_0)$ with constant $K$, and a continuous function of $t$ on $J_a = [0, a]$. Then the initial value problem (2.1) has a unique solution $x(t) \in B_b(x_0)$ for $t \in J_a$, provided that $a = b/M$, where
(2.4) $M = \max_{(t,x) \in J_a \times B_b(x_0)} \|f(t, x)\|.$
Similarly, a reduced model formed by Galerkin projection also has a unique local solution if the original vector field is Lipschitz. Moreover, the existence and uniqueness of solutions do not depend on the projection operator.
Lemma 2.2 (local existence and uniqueness of reduced models). With $a$, $J_a$, $b$, $B_b(x_0)$, $M$, and $f(t, x)$ defined as in Lemma 2.1, the reduced model (2.3) has a unique solution $\hat{x}(t) \in B_b(x_0)$ on the interval $t \in J_0 = [0, a/2]$ for a given initial condition $\hat{x}(0) = \hat{x}_0$, provided that $a = b/M$ and $\|\hat{x}_0 - x_0\| < b/2$.
Proof. Since $f(t, x)$ is a uniformly Lipschitz function of $x$ with constant $K$ for all $(t, x) \in J_a \times B_b(x_0)$, we have $\|f(t, x_1) - f(t, x_2)\| \le K \|x_1 - x_2\|$ for $x_1, x_2 \in B_b(x_0)$ with $t \in J_a$. Since $P$ is a projection matrix, $\|P\| = 1$. As a consequence,
$\|P f(t, x_1) - P f(t, x_2)\| \le \|P\| \, \|f(t, x_1) - f(t, x_2)\| \le K \|x_1 - x_2\|,$
which justifies that the projected vector field $P f(t, x)$ is also Lipschitz with constant $K$ on the same domain. Since
$\|\hat{x}_0 - x_0\| < b/2$, we have $B_{b/2}(\hat{x}_0) \subset B_b(x_0)$. By Lemma 2.1, $\dot{\hat{x}} = P f(t, \hat{x})$ has a unique local solution $\hat{x}(t) \in B_{b/2}(\hat{x}_0)$ for $t \in [0, a_1] \cap J_a$, where $a_1$ is given by
$a_1 = \frac{b/2}{\max_{(t,x) \in J_0 \times B_b(x_0)} \|\Phi^T f(t, x)\|}.$
Since $\|\Phi^T f(t, x)\| \le \|f(t, x)\|$, we have $a_1 \ge b/2M = a/2$. Therefore, $J_0 \subset [0, a_1] \cap J_a$, and there exists a unique solution $\hat{x}(t) \in B_b(x_0)$ on the interval $J_0$.
Figure 2.1. Illustration of the actual solution x(t) for the original system (2.1), the projected solution $\bar{x}(t)$ on S, and the approximate solution $\hat{x}(t)$ computed by the reduced model (2.3). The component of error orthogonal to S is given by $e_o(t) = \bar{x}(t) - x(t)$, and the component of error parallel to S is given by $e_i(t) = \hat{x}(t) - \bar{x}(t)$. This figure is reproduced from [19].

The error of the reduced model formed by the Galerkin projection can be defined as $e(t) := \hat{x}(t) - x(t)$. Let $e_o(t) := (I - P)e(t)$, which denotes the error component orthogonal to S, and $e_i(t) := P e(t)$, which denotes the component of error parallel to S (see Figure 2.1). Thus, we have
(2.5) $e_o(t) = \bar{x}(t) - x(t),$
which directly comes from the projection. However, since the system is evolutionary with time, further approximations of the projection-based reduced model result in an additional error e i (t), and we have
(2.6) $e_i(t) = \hat{x}(t) - \bar{x}(t).$
Although e i (t) and e o (t) are orthogonal to each other, they are not independent [19].
Lemma 2.3. Consider the initial value problem (2.1) over the interval $J_0 = [0, a/2]$, with $a$, $J_a$, $b$, $B_b(x_0)$, $M$, $P$, $x(t)$, $\bar{x}(t)$, $\hat{x}(t)$, $e(t)$, $e_o(t)$, and $e_i(t)$ defined as above. Suppose $f(t, x)$ is a uniformly Lipschitz function of $x$ with constant $K$ and a continuous function of $t$ for all $(t, x) \in J_a \times B_b(x_0)$. Then the error $e(t) = \hat{x}(t) - x(t)$ in the infinity norm over the interval $J_0$ is bounded by
(2.7) $\|e\|_\infty \le e^{Ka/2} \|e_o\|_\infty + e^{Ka/2} \|e_i(0)\|.$
Proof. Since $f(t, x)$ is a uniformly Lipschitz function for all $(t, x) \in J_a \times B_b(x_0)$, Lemmas 2.1 and 2.2 respectively imply the unique existence of $x(t) \in B_b(x_0)$ and $\hat{x}(t) \in B_b(x_0)$. Moreover, we can uniquely determine $\bar{x}(t) \in B_b(x_0)$ by $\bar{x}(t) = P x(t)$. Therefore, $x(t)$, $\bar{x}(t)$, and $\hat{x}(t)$ are all well-defined for any $t \in J_0$. Substituting (2.1) and (2.3) into the differentiation of $e_o(t) + e_i(t) = \hat{x}(t) - x(t)$ yields
(2.8) $\dot{e}_o + \dot{e}_i = P f(t, \hat{x}) - f(t, x).$
Left multiplying (2.8) by $P$, expanding $\hat{x}$, and recognizing that $P^2 = P$ gives
$\dot{e}_i(t) = P\big(f(t, x + e_o + e_i) - f(t, x)\big).$
Using this equation to expand $\|e_i(t + h)\|$ and applying the triangle inequality yields
$\|e_i(t + h)\| = \|e_i(t) + h P f(t, x + e_o + e_i) - h P f(t, x) + O(h^2)\| \le \|e_i(t)\| + h \|P f(t, x + e_o + e_i) - P f(t, x + e_o)\| + h \|P f(t, x + e_o) - P f(t, x)\| + O(h^2).$
Rearranging this inequality and applying the Lipschitz condition gives
$\frac{\|e_i(t + h)\| - \|e_i(t)\|}{h} \le K \|e_i(t)\| + K \|e_o(t)\| + O(h).$
Since the $O(h)$ term can be uniformly bounded independent of $e_i(t)$, using the mean value theorem and letting $h \to 0$ gives
$\frac{d}{dt} \|e_i(t)\| \le K \|e_i(t)\| + K \|e_o(t)\|.$
Rewriting the above inequality in integral form,
$\|e_i(t)\| \le \alpha(t) + K \int_0^t \|e_i(\tau)\| \, d\tau, \quad \text{where } \alpha(t) := \|e_i(0)\| + K \int_0^t \|e_o(\tau)\| \, d\tau,$
and using Gronwall's lemma, we obtain
$\|e_i(t)\| \le \alpha(t) + \int_0^t \alpha(s)\, K \exp\!\Big(\int_s^t K \, d\tau\Big) ds.$
By definition, $\|e_o\|_\infty \ge \|e_o(t)\|$ for any $t \in J_0$. It follows that $\alpha(t) \le \|e_i(0)\| + K t \|e_o\|_\infty$. Simplifying the integral on the right-hand side of the above inequality gives
$\|e_i(t)\| \le (e^{Ka/2} - 1) \|e_o\|_\infty + e^{Ka/2} \|e_i(0)\|$
for any $t \in J_0$. Combining the above inequality with $\|e\|_\infty \le \|e_i\|_\infty + \|e_o\|_\infty$, one can obtain (2.7).
Remark:
The above lemma provides a bound for $\|e_i(t)\|$ in terms of $\|e_o\|_\infty$ and $\|e_i(0)\|$. We have $e_i(0) = 0$ when the initial condition of the reduced model is given by $\hat{x}_0 = P x_0$ in (2.3). In this situation, (2.7) becomes $\|e\|_\infty \le e^{Ka/2} \|e_o\|_\infty$. Considering $\|e\|_\infty \ge \|e_o\|_\infty$, $\|e\|_\infty = 0$ holds if and only if $\|e_o\|_\infty = 0$.
Obviously, J 0 is not the maximal time interval of the existence and uniqueness of x(t) andx(t). For convenience, we simply assume that x(t) andx(t) globally exist on J = [0, T ] throughout the rest of this article. Otherwise, we can shrink J to a smaller interval, which starts from 0, such that both x(t) andx(t) are well-defined on J. Let D be an open set that contains x(t),x(t), andx(t) for all t ∈ J. Under this assumption, Lemma 2.3 is still valid if J 0 and B b (x 0 ) are substituted by J and D, respectively.
2.2. POD.
In order to provide an accurate description of the original system, the POD method can be used to deliver a set of empirical eigenfunctions such that the error of representing the given data in the spanned subspace is optimal in a least-squares sense [11]. Assume that m precomputed snapshots form a matrix $X := [x(t_1), ..., x(t_m)]$. Then, the truncated singular value decomposition (SVD)
(2.9) $X \approx \Phi \Lambda \Psi^T$
provides the POD basis matrix $\Phi \in R^{n \times k}$, where $\Lambda \in R^{k \times k}$ is a diagonal matrix that consists of the first k nonnegative singular values arranged in decreasing order. P is then obtained by $\Phi \Phi^T$. Let E denote the energy of the full system, which is approximated by the square of the Frobenius norm of the snapshot matrix X,
$E = \int_0^T \|x(t)\|^2 \, dt \approx \|X\|_F^2 = \sum_{\alpha=1}^{r} \lambda_\alpha^2,$
where $r = \min(n, m)$. Let $E'$ denote the energy in the optimal k-dimensional subspace,
$E' = \int_0^T \|P x(t)\|^2 \, dt \approx \|P X\|_F^2 = \sum_{\alpha=1}^{k} \lambda_\alpha^2.$
A criterion can be set to limit the approximation error in the energy by a certain fraction η. Then, we seek k ≪ r so that
(2.10) E ′ /E > η.
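As an illustration of (2.9)-(2.10), a small NumPy helper along the following lines could extract the POD basis from a snapshot matrix; the function name and the default threshold are our choices rather than anything prescribed by the paper.

```python
import numpy as np

def pod_basis(X, eta=1.0 - 1e-8):
    """POD modes of a snapshot matrix X (n x m) via the truncated SVD (2.9).

    Returns the first k left singular vectors and singular values, where k is
    the smallest integer such that the retained energy fraction E'/E exceeds
    eta, cf. (2.10).
    """
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, eta)) + 1
    k = min(k, U.shape[1])          # guard against eta = 1 and round-off
    return U[:, :k], s[:k]
```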
The key for POD and other projection-based reduced models is to find a subspace where all the state vectors approximately reside. Although these methods can significantly increase the computational speed during the online stage, the cost of data ensemble construction in the offline stage is often very expensive. For these reasons, developing an inexpensive online manifold learning technique is a desirable objective.
3. SIRM. The SIRM method is introduced by combining subspace iteration with a model reduction procedure in this section. The idea of subspace construction is to enhance the POD method by feeding it with information drawn from the observed state of the system and its time derivation. Then, a more precise solution is solved by projecting the original system onto this subspace. The subspace construction is carried out iteratively until a convergent solution is achieved.
3.1. Algorithm of SIRM. In this article, a k-dimensional subspace S is called invariant of x(t) (or invariant for short) if x(t) ∈ S for all t ∈ J. In this case, P x(t) = x(t), which means that P is an invariant projection operator on the trajectory and thatx(t) = x(t). As mentioned above, e o (t) = 0 holds if and only if e(t) = 0. Then, x(t) = x(t). Inserting (2.1) and (2.3) intoẋ(t) =ẋ(t), one can achieve P f (t, x) = f (t, x), which is equivalent to f (t, x) ∈ S. In fact, (x(t), f (t, x)) can be considered a point in the tangent bundle T S, which coincides with the Cartesian product of S with itself. As an invariant projection, P preserves not only the state vectors along the solution orbit but also the associated tangent vectors, i.e., the dynamics.
On the jth iteration, the aim is to construct a subspace S j such that bothx j−1 and f (t,x j−1 ) are invariant under the associated projection operator P j , i.e.,
(3.1) $P^j(\hat{x}^{j-1}) = \hat{x}^{j-1},$
(3.2) $P^j(f(t, \hat{x}^{j-1})) = f(t, \hat{x}^{j-1}).$
Thus, both $\hat{x}^{j-1}(t)$ and $f(t, \hat{x}^{j-1}(t))$ reside in $S^j$ for all $t \in J = [0, T]$.
If the solution orbit is given at discrete times t 1 , ..., t m , then we have an n × m state matrix
(3.3)X j := [x j (t 1 ), ...,x j (t m )].
Accordingly, the samples of tangent vectors along the approximating orbit can form another n × m matrix,
(3.4)F j := [f (t 1 ,x j (t 1 )), ..., f (t m ,x j (t m ))].
A combination ofX j andF j gives an information matrix, which is used to represent an extended data ensemble
(3.5)Ŷ j := [X j , γF j ],
where γ is a weighting coefficient. The basis vectors of S j can be obtained by using SVD ofŶ j−1 . γ = 1 is a typical value that is used to balance the truncation error ofX j andF j . It can be noted that a large m value will lead to intensive computation, but the selected snapshots should reflect the main dimensions of states and tangent vectors along the solution trajectory. When the width of each time subinterval (partitioned by t i ) approaches zero, S j can be given by the column space ofŶ j−1 .
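In code, assembling the information matrix (3.5) is straightforward once the snapshots are available; the sketch below (illustrative names, Python/NumPy) evaluates the full vector field at each sampled snapshot and appends the weighted tangent vectors.

```python
import numpy as np

def extended_data_ensemble(X_snap, f, times, gamma=1.0):
    """Information matrix Y = [X, gamma * F] of (3.5).

    X_snap : (n, m) snapshots of the previous iterate at the sampling times.
    f      : callable f(t, x), the full vector field.
    times  : the m sampling times t_1, ..., t_m.
    """
    F_snap = np.column_stack([f(t, X_snap[:, i]) for i, t in enumerate(times)])
    return np.hstack([X_snap, gamma * F_snap])
```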
Algorithm 1 SIRM
Require: The initial value problem (2.1).
Ensure: An approximate solution $\hat{x}(t)$.
Set a test function $\hat{x}^0(t)$ as the trial solution. Initialize the iteration number j = 0.
repeat
1: Update the iteration number j = j + 1.
2: Assemble snapshots of the approximate solution $\hat{x}^{j-1}(t)$ into matrix form $\hat{X}^{j-1}$.
3: Compute the vector field matrix $\hat{F}^{j-1}$ associated with the snapshots in $\hat{X}^{j-1}$.
4: Form an information matrix for the extended data ensemble $\hat{Y}^{j-1} = [\hat{X}^{j-1}, \gamma \hat{F}^{j-1}]$.
5: Based on $\hat{Y}^{j-1}$, compute the empirical eigenfunctions $\Phi^j$ through POD.
6: Project the original equation onto the linear subspace spanned by $\Phi^j$ and form a reduced model.
7: Solve the reduced model and obtain an approximate solution $z^j(t)$ in the subspace coordinate system.
8: Express the updated solution in the original coordinate system, $\hat{x}^j(t) = \Phi^j z^j(t)$.
until $\|\hat{x}^j - \hat{x}^{j-1}\|_\infty < \epsilon$, where $\epsilon$ is the error tolerance.
Obtain the final approximate solution $\hat{x}(t) = \hat{x}^j(t)$.
Algorithm 1 lists the comprehensive procedures of the SIRM method. A new subspace S j is constructed in each iteration, followed by an approximate solutionx j−1 (t). Asx j (t) → x(t), S j approaches an invariant subspace. For this reason, SIRM is an iterative manifold learning procedure, which approximates an invariant subspace by a sequence of subspaces. A complete iteration cycle begins with a collection of snapshots from the previous iteration (or an initial test function). Then, a subspace spanned by an information matrix is constructed. Empirical eigenfunctions are generated by POD, and finally a reduced-order equation obtained by Galerkin projection (2.2) is solved.
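Putting the pieces together, one possible sketch of Algorithm 1 is given below (Python/SciPy). It reuses pod_basis and extended_data_ensemble from the earlier sketches, starts from a constant trial solution, and uses a black-box ODE integrator in place of the paper's specific time-stepping scheme, so it should be read as an illustration rather than a faithful reimplementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sirm(f, x0, times, eta=1.0 - 1e-8, gamma=1.0, eps=1e-6, max_iter=20):
    """Sketch of Algorithm 1 (SIRM).  'times' are the sampling instants
    t_1, ..., t_m used both to build the data ensemble and to compare iterates."""
    X = np.tile(np.asarray(x0, float)[:, None], (1, len(times)))   # trial solution x^0(t) = x_0
    for _ in range(max_iter):
        Y = extended_data_ensemble(X, f, times, gamma)             # step 4
        Phi, _ = pod_basis(Y, eta)                                 # step 5 (POD)
        reduced_rhs = lambda t, z: Phi.T @ f(t, Phi @ z)           # step 6 (Galerkin)
        sol = solve_ivp(reduced_rhs, (times[0], times[-1]), Phi.T @ np.asarray(x0, float),
                        t_eval=times, rtol=1e-8, atol=1e-10)       # step 7
        X_new = Phi @ sol.y                                        # step 8: x^j = Phi z^j
        if np.max(np.abs(X_new - X)) < eps:                        # ||x^j - x^{j-1}||_inf < eps
            return X_new, Phi
        X = X_new
    return X, Phi
```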
3.2. Convergence Analysis.
In this subsection, we first provide a local error bound for the sequence of approximate solutions {x j (t)} obtained by SIRM, which paves the way for the proof of local and global convergence of the sequence.
It can be noted that both x j−1 (t) ∈ S j and f (t, x j−1 ) ∈ S j hold for all t ∈ J 0 only in an ideal situation. If S j is formed by extracting the first few dominant modes from the information matrix of the extended data ensemble (3.5), neither (3.1) nor (3.2) can be exactly satisfied. Let ε j quantify the projection error,
(3.6) $\varepsilon_j := \int_0^{a/2} \|(I - P^j)\hat{x}^{j-1}(\tau)\|^2 \, d\tau + \gamma^2 \int_0^{a/2} \|(I - P^j)f(\tau, \hat{x}^{j-1})\|^2 \, d\tau.$
If SVD is used to construct the empirical eigenfunctions, ε j can be estimated by
(3.7) $\varepsilon_j \approx \sum_{\alpha = k_j + 1}^{r} (\lambda_\alpha^j)^2,$
where λ j α is the αth singular value of the information matrixŶ j−1 , and k j is the adaptive dimension of S j such that the truncation error produced by SVD is bounded by ε j ≤ ε. The following lemma gives an error bound for the limit of the sequence {x j (t)}.
Lemma 3.1. Consider solving the initial value problem (2.1) over the interval J 0 = [0, a/2] by the SIRM method. a, J a , b, B b (x 0 ), M , P , x(t),x(t),x(t), e(t), e o (t)
, and e i (t) are defined as above. The superscript j denotes the jth iteration. Suppose f (t, x) is a uniformly Lipschitz function of x with constant K and a contin-
uous function of t for all (t, x) ∈ J a × B b (x 0 ). Then {x j (t)} approaches x(t)
with an upper bound of $\|e\|_\infty$ given by
(3.8) $\chi = \frac{\sqrt{a\varepsilon}\, e^{Ka/2}}{\sqrt{2}\, \gamma\, (1 - K a e^{Ka/2}/2)} + \frac{2\theta e^{Ka/2}}{1 - K a e^{Ka/2}/2}$
for all $t \in J_0$, provided that
(3.9) $a < \min\left\{ \frac{b}{M},\ \frac{2 e^{-Kb/2M}}{K} \right\},$
where θ is the maximal error of the initial states in reduced models, and θ < b/2.
Proof. As proved in Lemma 2.3, x(t),x(t), andx(t) are well-defined over the interval J 0 . Moreover, x(t),x(t), andx(t) ∈ B b (x 0 ) for all t ∈ J 0 . Multiplying (2.8)
on the left by I − P j , we obtain the evolution equation for e j o (t),
e j o = −(I − P j )f (t, x),
which is equivalent to
(3.10)ė j o = (I − P j )[f (t, x + e j−1 ) − f (t, x)] − (I − P j )f (t, x + e j−1 ).
Considering that
x(t) ∈ B b (x 0 ),x j−1 (t) ∈ B b (x 0 ) for all t ∈ J 0 and f (t, x) is a uniformly Lipschitz function for all (t, x) ∈ J a × B b (x 0 ) with constant K, it follows that (3.11) f (t, x + e j−1 ) − f (t, x) ≤ K e j−1 (t) .
Since P j is a projection matrix, we have I − P j = 1. This equation together with (3.10) and (3.11) yields
(3.12) ė j o (t) ≤ K e j−1 (t) + (I − P j )f (t, x + e j−1 ) . For h > 0, the expansion of e j o (t + h) gives (3.13) e j o (t + h) ≤ e j o (t) + h ė j o (t) + O(h 2 ).
Rearranging (3.13) and applying (3.12) results in
(3.14) e j o (t + h) − e j o (t) h ≤ K e j−1 (t) + (I − P j )f (t, x + e j−1 ) + O(h),
where the O(h) term may be uniformly bounded independent of e j o (t). Integrating (3.14) with respect to t yields
(3.15) e j o (t) ≤ K t 0 e j−1 (τ ) dτ + t 0 (I − P j )f (τ, x + e j−1 ) dτ + e j o (0) .
For t ∈ J 0 , the first term on the right-hand side is bounded by Ka e j−1 ∞ /2. Using the definition of ε j in (3.6) and the fact that ε j ≤ ε for each j, we obtain
ε ≥ γ 2 a/2 0 I − P j f (τ,x j−1 ) 2 dτ .
By the Cauchy-Schwarz inequality, the second term on the right-hand side of (3.15) is bounded by aε/2γ 2 when t ≤ a/2. It follows that
e j o (t) ≤ Ka e j−1 ∞ 2 + aε 2γ 2 + e j o (0) .
Using (2.7) in Lemma 2.3, this inequality yields
(3.16) $\|e^j\|_\infty \le \frac{K a e^{Ka/2} \|e^{j-1}\|_\infty}{2} + \frac{\sqrt{a\varepsilon}\, e^{Ka/2}}{\sqrt{2}\, \gamma} + e^{Ka/2} \|e_o^j(0)\| + e^{Ka/2} \|e_i^j(0)\|.$
If the error of the initial condition is bounded by $\|e^j(0)\| \le \theta$ for each iteration, then $\|e_o^j(0)\| \le \theta$ and $\|e_i^j(0)\| \le \theta$. As a result,
(3.17) $\|e^j\|_\infty \le \frac{K a e^{Ka/2} \|e^{j-1}\|_\infty}{2} + \frac{\sqrt{a\varepsilon}\, e^{Ka/2}}{\sqrt{2}\, \gamma} + 2\theta e^{Ka/2}.$
By (3.9), $K a e^{Ka/2}/2 < K a e^{Kb/2M}/2 < 1$. Using the definition of $\chi$ in (3.8), (3.17) can be rewritten as
(3.18) $\|e^j\|_\infty - \chi \le \frac{K a e^{Ka/2}}{2} \left( \|e^{j-1}\|_\infty - \chi \right).$
It follows that if e j ∞ − χ > 0 for all j, it converges to 0 linearly. Otherwise, once e j0−1 ∞ − χ ≤ 0 for some j 0 , then e j0 ∞ ≤ χ, and so does e j ∞ for all j > j 0 . Therefore, we have
(3.19) $\limsup_{j \to +\infty} \|e^j\|_\infty \le \chi,$
which means e j (t) is bounded by χ as j → +∞ for all t ∈ J 0 .
The first term of χ is introduced by the truncation error. By decreasing the width of time intervals among neighboring snapshots and increasing the number of POD modes, we can limit the value of ε. The second term of χ is the magnified error caused by e j (0). If both χ and e j (0) approach 0, we have the following theorem.
Theorem 3.2 (local convergence of SIRM). Consider solving the initial value problem (2.1) over the interval $J_0 = [0, a/2]$ by the SIRM method, with $a$, $J_a$, $b$, $B_b(x_0)$, $M$, $P$, $x(t)$, $\bar{x}(t)$, $\hat{x}(t)$, $e(t)$, $e_o(t)$, and $e_i(t)$ defined as above. The superscript j denotes the jth iteration. Suppose $f(t, x)$ is a uniformly Lipschitz function of $x$ with constant $K$ and a continuous function of $t$ for all $(t, x) \in J_0 \times B_b(x_0)$. For each iteration, the reduced subspace $S^j$ contains $x_0$ and the initial state for the reduced model is given by $\hat{x}_0^j = P^j x_0$. Moreover, (3.2) is satisfied. Then the sequence $\{\hat{x}^j(t)\}$ uniformly converges to $x(t)$ for all $t \in J_0$, provided that
(3.20) $a < \min\left\{ \frac{b}{M},\ \frac{2 e^{-Kb/2M}}{K} \right\}.$
Proof. Since the initial state $\hat{x}_0^j$ is the projection of $x_0$ onto $S^j$, we have $e_i^j(0) = 0$. Meanwhile, $x_0 \in S^j$ results in $e_o^j(0) = 0$.
Then the initial error satisfies e j (0) = 0. On the other hand, (3.2) requires that f (t,x j−1 ) is invariant under the projection operator P j , i.e., (I − P j )f (t,x j−1 ) = 0, which leads to ε j = 0. Therefore, in Lemma 3.1, both ε and θ approach 0, and so does χ. As a consequence, {x j (t)} converges to the fixed point x(t) for all t ∈ J 0 .
It can be noted that the error bound χ of the SIRM method is completely determined by θ and ε. As an alternative to (3.5), a more straightforward form of the information matrix for the extended data ensemble can be written as
(3.21)Ỹ j := [x 0 , γF j ],
and the SIRM method can still converge to x(t) by Theorem 3.2. However, as a Picard-type iteration, SIRM can only be guaranteed to reduce local error within one iteration. If ε and θ approach 0, (3.17) can be rewritten as
(3.22) $\frac{\|e^j\|_\infty}{\|e^{j-1}\|_\infty} \le \frac{K a e^{Ka/2}}{2}.$
When the interval J 0 is large, for example, a > 2/K, the left-hand side might be greater than 1. Thus, althoughx j (t) has less local error thanx j−1 (x), it might be less accurate in a global sense.
On the other hand, if the information matrix (3.5) is applied to the SIRM method, $\hat{x}^{j-1}(t) \in S^j$ is satisfied for each iteration. For any t, $\|e_o^j(t)\|$ denotes the distance from $x(t)$ to $S^j$, while $\|e^{j-1}(t)\|$ denotes the distance from $x(t)$ to $\hat{x}^{j-1}(t)$. Recognizing that $\hat{x}^{j-1}(t) \in S^j$, we have
(3.23) $\|e_o^j(t)\| \le \|e^{j-1}(t)\|.$
If θ approaches 0, so does $\|e_i^j(0)\|$. Using (2.7) in Lemma 2.3, one obtains
(3.24) $\frac{\|e^j\|_\infty}{\|e^{j-1}\|_\infty} \le e^{Ka/2}.$
This inequality still cannot guarantee that e j−1 (t) < e j (t) for all t ∈ J. However, when a > 2/K it provides a stronger bound than (3.22) does, which can effectively reduce the global error. So far, we have proved convergence of SIRM for a local time interval J 0 = [0, a/2]. Since the estimates used to obtain J 0 are certainly not optimal, the true convergence time interval is usually much larger. Supposing J 0 ⊂ J, we will next prove that the convergence region J 0 can be extended to J under certain conditions. Theorem 3.3 (global convergence of SIRM). Consider solving the initial value problem (2.1) over the interval J = [0, T ] by the SIRM method. P , x(t),x(t),x(t), e(t), e o (t), and e i (t) are defined as above. The superscript j denotes the jth iteration. Suppose f (t, x) is a locally Lipschitz function of x and a continuous function of t for all (t, x) ∈ J × D ′ , where D ′ is an open set that contains x(t) for all t ∈ J. For each iteration, the reduced subspace S j contains x 0 and the initial state for the reduced model is given byx j
0 = P j x 0 . Moreover, (3.2) is satisfied. The sequence {x j (t)} then uniformly converges to x(t) for all t ∈ J. Proof. Since D ′ is open, there exists a constant b such that b > 0, and E := ∪ tBb (x(t)) ⊂ D ′ . Since f (t, x) is locally Lipschitz on J × D ′ and E is compact, f (t, x) is Lipschitz on J × E. Let K denote the Lipschitz constant for (t, x) ∈ J × E.
In addition, we can choose the value of a, which is bounded by (3.20). Let J m be the maximal interval in J such that for all t ∈ J m ,x j (t) → x(t) uniformly as j → ∞. Theorem 3.2 indicates that SIRM will generate a sequence of functions {x j (t)} that uniformly converges to
x(t) for all t ∈ J 0 = [0, a/2]. For this reason, we have J 0 ⊂ J m . Now assume J m = J. Then, there exists a t i ∈ J m such that t i + a/2 ≤ T , but t i + a/2 / ∈ J m . t i ∈ J m means for every κ > 0 there exists an integer M 1 (κ) > 0 such that for all j with j > M 1 (κ),x j (t i ) uniquely exists and x j (t i ) − x(t i ) < κ.
Consider the initial value problem
(3.25)ẏ = f (t, y); y(0) = y 0 = x(t i ).
The corresponding reduced model of SIRM at iteration l is given by
(3.26)ẏ l = P l f (t,ŷ l );ŷ l (0) = y 0 + e l (t i ), where e l (t i ) =x l (t i ) − x(t i ).
For an arbitrary small positive number κ ′ , Lemma 3.1 implies that there exists a positive integer M 2 (κ ′ ) such that whenever l > M 2 (κ ′ ) and
t ∈ J 0 = [0, a/2], ŷ l (t) − y(t) < χ + κ ′ .
Plugging χ from (3.8) into this inequality and replacing θ by κ, we have
(3.27) $\|\hat{y}^l(t) - y(t)\| \le \frac{\sqrt{a\varepsilon}\, e^{Ka/2}}{\sqrt{2}\, \gamma\, (1 - K a e^{Ka/2}/2)} + \frac{2\kappa e^{Ka/2}}{1 - K a e^{Ka/2}/2} + \kappa'.$
In an ideal case, the truncation error is zero, i.e., ε = 0. Then, the right-hand side of (3.27) can be arbitrarily small. The uniqueness lemma for ODE's (Lemma 2.1) yields
y(t) = x(t i + t) andŷ j (t) =x j (t i + t)
. Therefore, for every ǫ > 0, there exists an integer
(3.28) $N(\epsilon) = M_1\!\left( \frac{(1 - K a e^{Ka/2}/2)\,\epsilon}{4 e^{Ka/2}} \right) + M_2\!\left( \frac{\epsilon}{2} \right)$
such that, as long as $j > N(\epsilon)$, $\|\hat{x}^j(t) - x(t)\| \le \epsilon$ holds for all $t \in [0, t_i + a/2]$.
Moreover, t i + a/2 ≤ T . However, this contradicts our assumption that t i + a/2 / ∈ J m . Therefore, J m = J, i.e.,x j (t) uniformly converges to x t (t) for all t ∈ J.
3.3. Computational complexity. The computational complexity of the SIRM method for solving an initial value problem is discussed in this subsection. We follow [19] when we estimate the computational cost of the procedures related to the standard POD-Galerkin approach.
Let γ(n) be the cost of computing the original vector field f (t, x), and letγ(k, n) be the cost of computing the reduced vector field Φ T f (t, Φz) based on the POD-Galerkin approach. In the full model, the cost of one-step time integration using Newton iteration is given by bγ(n)/5 + b 2 n/20 if all n × n matrices are assumed to be banded and have b + 1 entries around the diagonal [19]. Thus, the total cost of the full model for N T steps is given by (3.29) N T · bγ(n)/5 + b 2 n/20 .
Next, we analyze the complexity of Algorithm 1. Assuming a trial solution is given initially, the computational cost for each iteration mainly involves the following procedures. In procedure 3, an n × m vector field matrix,F j−1 , is computed based on m snapshots. In procedure 5, from an n × 2m information matrix,Ŷ j−1 , the empirical eigenfunctions Φ j can be obtained in 4m 2 n operations by SVD [25]. In procedure 6, the original system is projected onto a subspace spanned by Φ j , and this cost is denoted by β(k, n). For a linear time-invariant system,ẋ = Ax, β(k, n) represents the cost to compute (Φ j ) T AΦ j , which is given by bnk for sparse A. For a general system, β(k, n) is a nonlinear function of n. In procedure 7, the reduced model is evolved for N T steps by an implicit scheme to obtain z j (t). If the reduced model inherits the same scheme from the full model, then one-step time integration needs kγ(k, n)/5 + k 3 /15 operations [19]. In procedure 8, an n × m snapshot matrixX j (t) is constructed throughx(t i ) = Φz(t i ). Table 3.1 shows the asymptotic complexity for each basic procedure mentioned above. Let N I denote the number of iterations; then, the total cost of Algorithm 1 is given by (3.30) N I · 4m 2 n + β(k, n) + N T · kγ(k, n)/5 + k 3 /15 + mγ(n) + mnk .
Notice that the first three terms in (3.30) represent the cost for the classic POD-Galerkin method, while construction of the extended data ensemble needs extra computational overhead, mγ(n) + mnk, for each iteration. On one hand, the subspace dimension k is no greater than the number of sampling points m, which means mnk < 4m 2 n. On the other hand, we can always choose an optimal m value such that m ≪ N T . Thus, the extra computational overhead plays a secondary role in (3.30), and the computational complexity of Algorithm 1 is approximately equal to the number of iterations, N I , multiplied by the cost of the standard POD-Galerkin approach.
Algorithm 1 does not explicitly specify the trial solutionx 0 (t). In fact, the convergence of SIRM does not depend onx 0 (t), as previously shown. Thus, we can simply setx 0 (t) as a constant, i.e.,x 0 (t) = x 0 . However, a "good" trial solution could lead to a convergent solution in fewer iterations and could thus decrease the total computational cost. For example, if the full model is obtained by a finite difference method with n grid points and time step δt, a trial solution could be obtained by a coarse model, using the same scheme but n/10 grid points with time step 10 × δt. Thus, the coarse model can cost less than 1% of the required operations in the full model.
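For the one-dimensional examples below, one way to realize such a trial solution is to run the same scheme on a coarse grid and interpolate its snapshots to the fine grid; the helper below is a hypothetical sketch (the name, the use of linear interpolation, and the omission of the Fourier filtering used in section 3.4 are our choices).

```python
import numpy as np

def coarse_trial_solution(coarse_snapshots, x_coarse, x_fine):
    """Interpolate snapshots computed on a coarse 1-D grid onto the fine grid
    to serve as the trial solution x^0(t) for SIRM.

    coarse_snapshots : (n_coarse, m) array of coarse-model snapshots.
    x_coarse, x_fine : coordinates of the coarse and fine grids.
    """
    return np.column_stack([np.interp(x_fine, x_coarse, coarse_snapshots[:, i])
                            for i in range(coarse_snapshots.shape[1])])
```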
Table 3.1. Asymptotic complexity of the basic procedures of Algorithm 1.
Compute a vector field matrix $\hat{F}^{j-1}$: $m\gamma(n)$
SVD: empirical eigenfunctions $\Phi^j$: $4m^2 n$
Construct a reduced model: $\beta(k, n)$
Evolve the reduced model: $N_T (k\tilde{\gamma}(k, n)/5 + k^3/15)$
Obtain an approximate solution $\hat{X}^j$: $mnk$

Notice that $\tilde{\gamma}(k, n) \ll \gamma(n)$ is achieved only when the analytical formula of $\Phi^T f(t, \Phi z)$ can be significantly simplified, especially when $f(t, x)$ is a low-degree polynomial of $x$ [19]. Otherwise, it is entirely possible that the reduced model could be more expensive than the original one. Because of this effect, there is no guarantee that Algorithm 1 can speed up a general nonlinear system. However, it should be emphasized that the POD-Galerkin approach is not the only method that can be used to construct a reduced model in the framework of SIRM; in principle, it can be substituted by a more efficient model reduction technique when f(t, x) contains a nonlinear term, such as trajectory piecewise linear and quadratic approximations [20,21,26,6], the empirical interpolation method [10], or its variant, the discrete empirical interpolation method [5]. This article, however, focuses on using SIRM to obtain an accurate solution without a precomputed database. Therefore, the numerical simulations in the next subsection are still based on the classic POD-Galerkin approach.
3.4. Numerical Results. The proposed algorithm, SIRM, is illustrated in this subsection by a linear advection-diffusion equation and a nonlinear Burgers equation.
These examples focus on demonstrating the capability of SIRM to deliver accurate results using reduced models. We also show an application of SIRM for a posteriori error estimation of a coarse model.
3.4.1. Advection-Diffusion Equation.
Let u = u(t, x). Consider the one-dimensional advection-diffusion equation with constant advection speed c and diffusion coefficient ν,
(3.31) u t = −cu x + νu xx ,
on space x ∈ [0, 1]. Without loss of generality, periodic boundary conditions are applied,
(3.32) u(t, 0) = u(t, 1), u x (t, 0) = u x (t, 1).
The initial condition is provided by a cubic spline function,
(3.33) $u(0, x) = \begin{cases} 1 - \frac{3}{2}s^2 + \frac{3}{4}s^3 & \text{if } 0 \le s \le 1, \\ \frac{1}{4}(2 - s)^3 & \text{if } 1 < s \le 2, \\ 0 & \text{if } s > 2, \end{cases}$
where s = 10 × |x − 1/3|. The fully resolved model is obtained through a high-resolution finite difference simulation with spatial discretization by n equally spaced grid points. The advection term is discretized by the first-order upwind difference scheme with the explicit two-step Adams-Bashforth method for time integration, while the diffusion term is discretized by the second-order central difference scheme with the Crank-Nicolson method for time integration. For our numerical experiments, we consider a system with c = 0.5 and ν = 10^{-3}, which gives rise to diffusion as well as advection propagating to the right. This can be seen in Figure 3.1(a), where the initial state and the final state (at t = 0.5) are shown. The full model (the reference benchmark solver) is computed with n = 500 grid points. Thus, the unit step can be set as δt = 10^{-3} such that the Courant-Friedrichs-Lewy (CFL) condition is satisfied for stability, i.e., cδt/δx ≤ 1.

In order to initialize SIRM, a smaller simulation is carried out by the finite difference method with a coarse grid of k_0 = 20 and a larger time step of 2.5 × 10^{-2}. In order to obtain a smooth function for the trial solution, the coarse solution is filtered by extracting the first 10 Fourier modes. When η = 10^{-8}, the full-order equation is projected onto a subspace spanned by k = 12 dominant modes during the first iteration, and a better approximation is obtained. For different η and k_0 values, Figure 3.1(b) compares the maximal L_2 error between the benchmark solution u(t) and the iterative solution $\hat{u}^j(t)$ for t ∈ [0, 0.5] over the first 10 iterations. Each subspace dimension is adaptively determined by (2.10). If k_0 = 20, the first three iterations of SIRM use 9, 9, and 11 dominant modes when η = 10^{-6}; 12, 14, and 14 dominant modes when η = 10^{-8}; 14, 17, and 18 dominant modes when η = 10^{-10}; and 17, 19, and 20 dominant modes when η = 10^{-12}. As expected, a smaller η value results in a smaller truncation error produced by SVD and a smaller total error $\|e^j\|_\infty$ of the approximate solution. Meanwhile, a trial solution with a higher initial dimension k_0 can also significantly decrease the error over the first 10 iterations. It is also noted that $\|e^j\|_\infty$ is not a monotonically decreasing function of j, especially when η = 10^{-6} and k_0 = 20. This does not contradict the convergence analysis in the previous subsection: as a variant of the Picard iteration, the SIRM method achieves a better local solution in each iteration, and, as (3.24) indicates, we can only guarantee $\|e^j\|_\infty \le e^{Ka/2} \|e^{j-1}\|_\infty$ in a global sense.

Before switching to the next numerical example, we compare the performance of SIRM with another online manifold learning technique, DIRM [18]. The DIRM method splits the whole system into $m_n$ subsystems. Starting with a trial solution, DIRM simulates each subsystem in turn and repeats this process until a globally convergent solution is obtained. For iteration j, DIRM connects the unreduced subsystem i with the reduced versions of all other subsystems and simulates the resulting system
(3.34) $\dot{x}_i^j = f_i(t, X_i^j), \qquad \dot{z}_l^j = (\Phi_l^j)^T f_l(t, X_i^j), \quad l = 1, ..., i-1, i+1, ..., m_n,$
where $X_i^j = [\Phi_1^j z_1^j; \ ...;\ \Phi_{i-1}^j z_{i-1}^j;\ x_i^j;\ \Phi_{i+1}^j z_{i+1}^j;\ ...;\ \Phi_{m_n}^j z_{m_n}^j]$. If $x_i^j \in R^{n/m_n}$ and $z_l^j \in R^k$, the reduced model of DIRM has a dimension of $m_n k + n/m_n$. Since DIRM reduces the dimension of each subsystem, rather than of the original system, it inevitably keeps some redundant dimensions. Table 3.2 compares the minimal subspace dimension of DIRM and SIRM that is required for solving (3.31) when the error of the first iteration is smaller than $10^{-3}$. We use all the aforementioned parameters except that ν is scanned from $10^{-1}$ to $10^{-4}$. For the DIRM application, the whole system with n = 500 is divided into 25 subsystems, and the dimension of each subsystem is 20. When ν is greater than $10^{-2}$, the DIRM method uses three modes for each subsystem, and therefore the dimension of DIRM is 3 × 24 + 20 = 92. When ν decreases to $10^{-3}$ and below, DIRM requires four modes for each subsystem in order to maintain high accuracy, and the subspace dimension grows to 4 × 24 + 20 = 116. Since SIRM requires fewer modes and simulates only one reduced system rather than 25 subsystems, it is much more efficient than DIRM for solving (3.31).

Table 3.2. The minimal subspace dimension of DIRM and SIRM required for solving the one-dimensional advection-diffusion equation when the error of the first iteration is smaller than $10^{-3}$, i.e., $\|e^1\|_\infty < 10^{-3}$. Parameter values are n = 500, c = 0.5, δt = $10^{-3}$; the time domain is [0, 0.5]; the trial solution is obtained by extracting the first 10 Fourier modes from a coarse simulation based on 20 grid points. (Columns: ν = $10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$.)
3.4.2. Viscous Burgers Equation. The viscous Burgers equation is similar to the advection-diffusion equation except that the advection velocity is no longer constant. The general form of the one-dimensional Burgers equation is given by
(3.35) u t = −uu x + νu xx ,
where ν is the diffusion coefficient. Let Ω = [0, 1] denote the computational domain. Periodic boundary conditions (3.32) are applied. The cubic spline function (3.33) is used for the initial condition.
In the numerical simulation, the diffusion coefficient is given by ν = 10 −3 . The full model is obtained using n = 2000 grid points, while the trial solution is obtained by extracting the first 10 Fourier modes from a coarse simulation with k 0 = 100 grid points. Because the one-dimensional Burgers equation has a positive velocity, a wave will propagate to the right with the higher velocities overcoming the lower velocities and creating steep gradients. This steepening continues until a balance with the dissipation is reached, as shown by the velocity profile at t = 1 in Figure 3.2(a). Because states of the Burgers equation have high variability with time evolution, more modes are necessary in order to present the whole solution trajectory with high accuracy. Meanwhile, the SIRM method requires more iterations to obtain convergence.
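A sketch of a full model in this spirit is given below (Python/NumPy/SciPy). It assumes the same treatment as the advection-diffusion example: first-order upwind differences for the advection term -u u_x advanced with the two-step Adams-Bashforth method, Crank-Nicolson for the diffusion term, and periodic boundary conditions. The time step is our own choice (small enough for the CFL condition), and the dense LU factorization stands in for the banded or sparse solver one would use in practice.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, nu = 2000, 1e-3
dx, dt = 1.0 / n, 2.5e-4                      # dt chosen so that max(u)*dt/dx <= 1
x = np.arange(n) * dx
I = np.eye(n)
D = nu / dx**2 * (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1))
cn_lu = lu_factor(I - 0.5 * dt * D)           # factor the Crank-Nicolson matrix once

def u_init(x):
    """Cubic spline initial condition (3.33)."""
    s = 10.0 * np.abs(x - 1.0 / 3.0)
    return np.where(s <= 1, 1 - 1.5 * s**2 + 0.75 * s**3,
                    np.where(s <= 2, 0.25 * (2 - s)**3, 0.0))

def advection(u):
    """-u u_x with first-order upwind differences (u >= 0 for this problem)."""
    return -u * (u - np.roll(u, 1)) / dx

def step(u, u_prev):
    """One IMEX step: Adams-Bashforth 2 for advection, Crank-Nicolson for diffusion."""
    rhs = u + dt * (1.5 * advection(u) - 0.5 * advection(u_prev)) + 0.5 * dt * (D @ u)
    return lu_solve(cn_lu, rhs)

u_prev = u_init(x)
u = step(u_prev, u_prev)                      # bootstrap the two-step method
for _ in range(int(round(1.0 / dt)) - 1):     # march to t = 1
    u, u_prev = step(u, u_prev), u
```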
The convergence plot for SIRM is shown in Figure 3.2(b). Equation (2.10) gives an adaptive dimension, k, in each iteration: their values are 21, 38, and 60 for the first three iterations when η = 10 −6 ; are 26, 49, and 85 when η = 10 −8 ; are 30, 62, and 105 when η = 10 −10 ; and are 34, 76, and 129 when η = 10 −12 . When η ≤ 10 −8 , the error of the approximate solution decreases in the first few iterations and then converges to a fixed value, which is mainly determined by the truncation error produced by SVD. In order to achieve higher resolution, for each iteration, more snapshots are needed for each iteration to construct the information matrix and include more modes in the associated reduced model.
What is more, the SIRM method can be used to estimate errors of other approximate models as well. The Euclidean distance between the actual solution u(t) and the approximate solutionû 0 (t) as a function of t can indicate the accuracy of a coarse model (or a reduced model),
(3.36) e 0 (t) = u(t) −û 0 (t) .
However, in many applications, the actual solution u(t) is unknown or very expensive to obtain. In this case, the SIRM method can be used to obtain a more precise solutionû 1 (t), and the Euclidean distance betweenû 1 (t) andû 0 (t) can be used as an error estimator,
(3.37) ∆ 0 (t) = û 1 (t) −û 0 (t) .
Althoughû 1 (t) is only guaranteed to have higher accuracy thanû 0 (t) locally, (3.37) can be applied to identify whether and when the trial solution has a significant discrepancy from the actual solution. More generally, the error of the iterative solution u j (t) computed by SIRM, can (at least locally) be approximated by the difference betweenû j+1 (t) andû j (t), as follows:
(3.38) e j (t) = u(t) −û j (t) ,t e 0 (t) e 1 (t) e 2 (t) e 3 (t) ∆ 0 (t) ∆ 1 (t) ∆ 2 (t) ∆ 3 (t)(3.39) ∆ j (t) = û j+1 (t) −û j (t) .
For this reason, the criterion x j −x j−1 ∞ < ǫ is used in Algorithm 1 to indicate convergence of SIRM.
Revisiting the one-dimensional Burgers equation, Figure 3.3 shows that ∆ j (t) is a good approximation for the actual error e j (t) for t ∈ [0, 1].
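The indicator is trivial to evaluate once two consecutive iterates are stored as snapshot matrices; a minimal sketch with illustrative names:

```python
import numpy as np

def error_indicator(U_next, U_curr):
    """Delta_j(t) of (3.39), evaluated column-wise: the Euclidean distance
    between consecutive SIRM iterates at each sampling time."""
    return np.linalg.norm(U_next - U_curr, axis=0)
```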
4. Local SIRM.
In the previous section, the presented analysis and simulations illustrate that under certain conditions the SIRM method is able to obtain a convergent solution in the global time domain. However, the SIRM method still has existing redundancy with respect to both dimensionality and computation, as described in the following, that could be improved.
First, the reduced subspace formed by POD in SIRM keeps some redundant dimensions of the original system in each iteration. To explain this, consider a largescale dynamical system whose solution exhibits vastly different states as it evolves over a large time horizon. In order to obtain a highly accurate representation for the entire trajectory, we need a subspace with relatively high dimensionality to form a surrogate model. However, projection-based model reduction techniques usually generate small but full matrices from large (but sparse) matrices. Thus, unless the reduced model uses significantly fewer modes, computing the reduced model could potentially be more expensive than computing the original one. Notice that the orbit of a dynamical system (2.1) is a one-dimensional curve; thus, it is desired that a local section of curve be embedded into a linear subspace of much lower dimensionality.
Second, for each iteration, SIRM requires that the entire trajectory be computed from the initial time to the final time of interest, T , which causes computational redundancy. As a variant of the Picard iteration, the rate of convergence of SIRM could be very slow for a nonlinear system with a large time domain. Under certain conditions, SIRM has a locally linear convergence. As inequality (3.18) indicates, when t ∈ J 0 the rate of convergence of e j ∞ − χ is given by Ka exp(Ka/2)/2. However, as (3.24) suggests, we cannot guarantee that SIRM could obtain a better global solution in each iteration. Meanwhile, if we have already obtained convergence at t = a for some 0 < a ≤ T , it would be a waste of computation to return to t = 0 for the next iteration.
Thus, it is preferable to partition the entire time domain into several smaller subintervals, obtain a convergent solution for one subinterval, and then move forward to the next. A simple concept of time-domain partition was already introduced in [7] in the context of the standard POD-Galerkin method. As opposed to time domain partition, space domain partition [1] and parameter domain partition [8] also have be devolved to construct local reduced models using partial snapshots from a precomputed database. In this section, we combine the idea of time domain partition with SIRM and propose a local SIRM algorithm for model reduction. For each subinterval, the resulting method constructs a convergent sequence of approximating trajectories solved in subspaces of significantly lower dimensionality. Convergence analysis and the relation of the presented method with some other numerical schemes are also discussed. Then, we demonstrate its effectiveness in an example of the Navier-Stokes simulation of a lid-driven cavity flow problem.
Partition the whole time domain J = [0, T] into M subintervals $J_1, ..., J_M$ with $J_i = [t_{i-1}, t_i]$. On each subinterval, the full model is projected onto a local subspace spanned by the columns of $\Phi_i$, giving the local reduced model
(4.1) $\dot{z} = \Phi_i^T f(t, \Phi_i z) \quad \text{for } t \in J_i.$
The SIRM method can be applied to approach a locally invariant subspace and obtain a convergent solution x(t) for this subinterval. Specifically, we initially set a trial solution,x 0 (t) for t ∈ J i . During iteration j, an extended data ensemble,Ŷ j−1 i , which contains a small number of snapshots within the subinterval, is constructed and then served to generate the empirical eigenfunctions of the subspace to be used in the next iteration cycle. After locally projecting the full model onto this subspace and constructing a reduced model through (4.1), the time integration is carried out in a low-dimensional space to obtain an updated approximate solution. Once sufficient accuracy is achieved, one can move forward to the next subinterval.
Suppose a convergent solutionx(t) for subinterval J i−1 is obtained by the SIRM method. Then, the ending state of J i−1 ,x(t i−1 ), is the starting state of the next subinterval J i . There are several options to estimate the trial solutionx 0 (t) for t ∈ J i , and we just list a few here. One can simply set the trial solution as a constant, which meansx 0 (t) =x(t i−1 ) for t ∈ J i (although this is inaccurate). Alternatively, a coarse model can be used to obtain a rough estimation ofx 0 (t). These two methods can also be used for SIRM, as discussed in the previous section. The third option is to use the time history of the solution trajectory to obtain an initial estimation of the invariant subspace. Similar to [14,17], one can assume that the solution for subinterval J i approximately resides in the invariant subspace of the previous subinterval. Thus, a set of empirical eigenfunctions can be generated by SVD of the state matrix or the information matrix formed by snapshots in J i−1 . Especially, if only the starting snapshot and the ending snapshot are used to construct the initial information matrix, we have
(4.2)Ŷ 0 i = [x(t i−2 ),x(t i−1 ), γf (t i−2 ,x(t i−2 )), γf (t i−1 ,x(t i−1 ))].
After projecting the full model onto this subspace, we can calculate the trial solution for t ∈ J i . Since we do not have snapshots for t < 0, the time-history-based initialization cannot be used for the first subinterval. After obtaining a trial solution for a subinterval, SIRM is used to obtain a better approximation of the actual solution. When the width of a subinterval is small enough, the reduced equation has a significantly lower dimension. Let m denote the number of sampling snapshots in the whole trajectory and m ′ denote the number of sampling snapshots within one time interval. For each i, bothx(t i−1 ) andx(t i ) are sampled for the extended data ensemble. Thus, m = (m ′ − 1) × M + 1. If m ′ = 2, the information matrixŶ j i can be constructed from snapshots at t i−1 and t i ,
(4.3)Ŷ j i = [x(t i−1 ),x j (t i ), γf (t i−1 ,x(t i−1 )), γf (t i ,x j (t i ))].
Then, Φ j i can be constructed by the SVD. When m ′ is small enough, say, m ′ ≤ 5, there is no need to further reduce dimensions fromŶ j i . Instead, Φ j i can be computed more efficiently by the Gram-Schmidt process. Algorithm 2 represents the complete process of the local SIRM method.
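Before turning to Algorithm 2, the local basis construction for m' = 2 can be sketched as follows (a QR factorization stands in for the explicit Gram-Schmidt process; the names are illustrative).

```python
import numpy as np

def local_basis(x_start, x_end, f, t_start, t_end, gamma=1.0):
    """Orthonormal basis of the column space of the local information
    matrix (4.3) for a subinterval [t_start, t_end] with m' = 2 snapshots."""
    Y = np.column_stack([x_start, x_end,
                         gamma * f(t_start, x_start), gamma * f(t_end, x_end)])
    Phi, _ = np.linalg.qr(Y)      # n x 4 basis of the local subspace
    return Phi
```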
Algorithm 2 Local SIRM
Require: The initial value problem (2.1).
Ensure: An approximate solution $\hat{x}(t)$.
Divide the whole time domain into smaller subintervals $J_1, ..., J_M$.
for subinterval i do
    Set a test function $\hat{x}^0(t)$ as the trial solution.
    Obtain a local solution by SIRM.
end for

Using the formula (3.30), we can obtain the computational complexity of Algorithm 2. Table 4.1 illustrates the complexity of the full model, the (global) SIRM method, and the local SIRM method. Compared with the full model, the SIRM and local SIRM methods are more efficient only when the following conditions are satisfied: (1) the standard POD-Galerkin approach is significantly faster than the original model, and (2) the number of sampling points m is much smaller than the total number of time steps $N_T$.
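Returning to Algorithm 2, a compact sketch of the whole procedure might look as follows (Python/SciPy; it uses a constant trial solution on each subinterval, the local_basis helper from above, and a black-box integrator for the local reduced model, so it is illustrative rather than a faithful reimplementation).

```python
import numpy as np
from scipy.integrate import solve_ivp

def local_sirm(f, x0, t_grid, gamma=1.0, eps=1e-8, max_iter=10):
    """Sketch of Algorithm 2 with m' = 2 snapshots per subinterval.

    t_grid : increasing array of subinterval endpoints t_0 < t_1 < ... < t_M.
    Returns the approximate states at the grid times as columns of an array."""
    states = [np.asarray(x0, float)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        x_start = states[-1]
        x_end = x_start.copy()                          # constant trial solution
        for _ in range(max_iter):
            Phi = local_basis(x_start, x_end, f, t0, t1, gamma)
            rhs = lambda t, z: Phi.T @ f(t, Phi @ z)    # local reduced model (4.1)
            sol = solve_ivp(rhs, (t0, t1), Phi.T @ x_start, rtol=1e-10)
            x_new = Phi @ sol.y[:, -1]
            converged = np.linalg.norm(x_new - x_end) < eps
            x_end = x_new
            if converged:
                break
        states.append(x_end)
    return np.column_stack(states)
```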
Next, we compare the computational complexity of SIRM and its variant, local SIRM. In order to achieve the same level of accuracy for the same problem, the number of iterations needed, N I , for Algorithm 1 is usually much greater than the average number of iterations needed, N ′ I , for one subinterval of Algorithm 2. In addition, since m ≃ M m ′ , we can assume k ≃ M k ′ . Although there is no general formula for β(k, n), we may expect that it is at least a linear function of k, and therefore β(k, n) ≥ M β(k ′ , n) holds. In fact, SIRM can be considered a special case of the local SIRM method where M = 1, and the local reduced model offers more flexibility to choose a subspace dimension. Furthermore, the unit step of (4.1) could be the same as the unit step of the full model δt when computing the time integration. Suppose SVD or the time integration plays a dominant role in determining the complexity of Algorithm 1; a local reduced model can obtain at least M times speedups.
Table 4.1. Computational complexity of the full model, the SIRM method, and the local SIRM method.
Full model: $N_T \cdot (b\gamma(n)/5 + b^2 n/20)$
SIRM: $N_I \cdot [4m^2 n + \beta(k, n) + N_T \cdot (k\tilde{\gamma}(k, n)/5 + k^3/15) + m\gamma(n) + mnk]$
Local SIRM: $N'_I \cdot [4mm'n + M\beta(k', n) + N_T \cdot (k'\tilde{\gamma}(k', n)/5 + k'^3/15) + m\gamma(n) + mnk']$

Based on the aforementioned complexity analysis, we now discuss heuristics for choosing the parameters of the local SIRM method. Although the selection of m has some flexibility, a good choice of m should balance accuracy and computational speed.
Once m is determined, in order to generate maximal speedups for one iteration, each subinterval contains a small number of sampling snapshots, say, m ′ = 2 or m ′ = 3. Numerical study in section 4.3 indicates that if m remains constant, a large m ′ value cannot significantly increase the accuracy for the lid-driven cavity flow problem.
If the Gram-Schmidt process is used to form a set of orthonormal eigenvectors, the dimension of each local subspace, k ′ , can be directly determined by m ′ . Usually, k ′ = 2m ′ if the solution trajectory is represented by one curve. The multiplier 2 stems from the fact that the information matrix contains both state vectors and their corresponding tangent vectors. For the lid-driven cavity flow problem, since the solution involves both ψ and ω, we have k ′ = 4m ′ .
4.2. Convergence Analysis.
We have already shown the capability of SIRM to effectively approach a globally invariant subspace for a dynamical system, and thus generate a sequence of functions that converges to the actual solution. As an extension of SIRM, local SIRM generates a set of local invariant subspaces and obtains corresponding local solutions. The union of all these local solutions forms a full trajectory for the original system.
We begin here with the first subinterval. In an ideal situation, x 0 ∈ S j 1 , while the state and the vector field satisfyx j−1 (t) ⊂ S j 1 and f (t,x j−1 (t)) ⊂ S j 1 for all t ∈ J 1 , respectively. As Theorem 3.3 indicates, the sequence {x j (t)} generated by the local SIRM method approaches x(t) for t ∈ J 1 . If the vector field is Lipschitz, then the local solution to (2.1) on subinterval J 2 continually depends on the initial condition x(t 1 ). For this reason, starting fromx(t 1 ), we can obtain a sequence of functions that converges to x(t) for t ∈ J 2 . We can then move forward to the rest of the subintervals and achieve the following theorem. Finally, it is interesting to consider the local SIRM method as a generalization of many current time integration schemes. We can again consider m ′ = 2 as an example. The time-history initialization provides a linear subspace spanned byŶ 0 i to estimate the trial solution for t ∈ J i . Especially, when t = t i , the initial estimation of the state vector is given by
(4.4)x 0 (t i ) =Ŷ 0 i · ς 0 i ,
whereŶ 0 i is given by (4.2) and ς 0 i is a vector that contains four elements. Suppose the width of each subinterval equals δT and the width of one time step of integration equals δt. As δT → δt, local SIRM degenerates to the two-step Adams-Bashforth scheme if
$\varsigma_i^0 = \left[ 0,\ 1,\ -\frac{\delta t}{2\gamma},\ \frac{3\,\delta t}{2\gamma} \right]^T.$
On the other hand, if one uses the SIRM method to obtain a better estimation at t = t i , the approximate solution is given by
(4.5)x 1 (t i ) =Ŷ 1 i · ς 1 i ,
where it is assumed that only two snapshots are used to construct the information matrixŶ 1 i , as expressed by (4.3). As δT → δt, local SIRM degenerates to the Crank-Nicolson scheme if
$\varsigma_i^1 = \left[ 1,\ 0,\ \frac{\delta t}{2\gamma},\ \frac{\delta t}{2\gamma} \right]^T.$
More generally, suppose m ′ snapshots are sampled from the previous subinterval. Then, as δT → δt, the time-history initialization can degenerate to the m ′ -step Adams-Bashforth method if proper coefficients are set for ς 0 i . In addition, if the Y j i has m ′ + 1 snapshots from J i , and the first m ′ snapshots are overlapping with J i−1 , then each iteration defined by SIRM can degenerate to the m ′ -step Adams-Moulton method. Furthermore, if δT = m ′ δt, then each iteration defined by SIRM is a generalized form of the m ′ -order Runge-Kutta method with variable coefficients.
However, as a manifold learning approach, local SIRM applies reduced models to determine the coefficient values for each subinterval. This is more flexible than a common time-integration scheme, because the latter uses predesigned coefficients for each column of the information matrix. Therefore, the local SIRM method has the ability to provide more stable results for a fixed time interval: even if δT ≫ δt, local SIRM can still generate stable results with high accuracy. In the next subsection, the local SIRM approach is applied to a lid-driven cavity flow problem.

The flow in the square cavity is governed by the two-dimensional incompressible Navier-Stokes equations in stream function-vorticity form,
(4.6) $\psi_{xx} + \psi_{yy} = -\omega,$
(4.7) $\omega_t = -\psi_y \omega_x + \psi_x \omega_y + \frac{1}{Re} \left( \omega_{xx} + \omega_{yy} \right),$
where Re is the Reynolds number and x and y are the Cartesian coordinates. The velocity field is given by u = ∂ψ/∂y, v = −∂ψ/∂x. No-slip boundary conditions are applied on all nonporous walls including the top wall moving at speed U = 1. Using Thom's formula [24], these conditions are, then, written in terms of stream function and vorticity. For example on the top wall one might have
(4.9) $\omega_B = -\frac{2\psi_{B-1}}{h^2} - \frac{2U}{h},$
where subscript B denotes points on the moving wall, subscript B − 1 denotes points adjacent to the moving wall, and h denotes grid spacing. Expressions for ψ and ω at remaining walls with U = 0 can be obtained in an analogous manner. The initial condition is set as u(x, y) = v(x, y) = 0. The discretization is performed on a uniform mesh with finite difference approximations. For the time integration of (4.7), the implicit Crank-Nicolson scheme is applied for the diffusion term, and the explicit two-step Adams-Bashforth method is employed for the advection term.
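For concreteness, a hypothetical helper that imposes these wall values on a square grid is sketched below; it assumes that ψ vanishes on all walls, that the moving lid occupies the last row of the arrays, and it uses the reconstruction of (4.9) given above (the function name and array layout are our choices).

```python
import numpy as np

def thom_vorticity_bc(psi, omega, h, U=1.0):
    """Impose Thom-type wall vorticity values on the four walls of a square cavity.

    psi, omega : (N, N) arrays; row -1 is the moving lid, row 0 the bottom wall.
    """
    omega = omega.copy()
    omega[-1, :] = -2.0 * psi[-2, :] / h**2 - 2.0 * U / h   # moving top wall
    omega[0, :]  = -2.0 * psi[1, :]  / h**2                 # bottom wall (U = 0)
    omega[:, 0]  = -2.0 * psi[:, 1]  / h**2                 # left wall
    omega[:, -1] = -2.0 * psi[:, -2] / h**2                 # right wall
    return omega
```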
In the numerical simulation, the Reynolds number is given by Re= 1000. The full model uses 129 × 129 grid points and δt = 5 × 10 −3 as a unit time step. The whole time domain, [0, 50], is divided into 250 subintervals. For each subinterval, the trial solution is obtained through a simulation based on 33 × 33 coarse grid points with a unit time step of 4δt. The same discretization scheme is applied for the coarse model. Thus, the coarse model can cost less than 1/64 of the required operations in the full model. A sequence of functions defined by local SIRM is used to approach the local solution.
The streamline contours for the lid-driven cavity flow are shown in Figure 4.1. In 4.1(a), the full model matches well with the numerical results from [9], and the values of ψ of the contours are the same as shown in Table III of [9]. Local SIRM provides an approximate solution; the main error occurs around the vortex center, where the contour of ψ = −0.1175 is missing in 4.1(b).

Figure 4.3. Comparison of the computational time of the full model and the local SIRM method for the lid-driven cavity flow problem. As the resolution increases from 65 × 65 to 257 × 257, the dimension of the full system, n, increases from 2 × 65² to 2 × 257². Using a log-log plot, the asymptotic complexity can be determined by the linear regression coefficient.

Since each local information matrix contains snapshots of both ψ and ω as well as the associated tangent vectors, the subspace dimension is 12. An average of five iterations is carried out to obtain a locally convergent solution for each subinterval.
Since an explicit scheme is used for the advection term, the CFL condition, uδt/δx + vδt/δy ≤ 1 is a necessary condition for stability. Therefore, if the number of grid points increases from 65 × 65 to 257 × 257, the unit time step decreases from 10 −2 to 2.5 × 10 −3 accordingly. Accounting for this, the asymptotic computational complexity of the full model for the entire time domain is no less than O(n 1.5 ). The above analysis only focuses on the advection term. Since the diffusion term uses an implicit scheme, there is no extra limit to the unit time step for the stability requirement. However, a large n will lead to a slower convergence for many iterative methods, such as the successive over-relaxation method or the conjugate gradient method. Thus, O(n 1.5 ) provides only a low bound estimation for the full model.
Since the Navier-Stokes equation contains only linear and quadratic terms, the complexity of the reduced model constructed by the Galerkin projection for one-step integration does not explicitly depend on n. Moreover, the computational complexity of all the other terms of local SIRM in Table 4.1 depends at most linearly on n. Thus, we may roughly estimate that the overall complexity of local SIRM is O(n). Figure 4.3 compares the running time of the full model and the running time of SIRM for different resolutions in (4.6) and (4.7). Except n and δt, all the parameters remain the same. The linear regression indicates that the asymptotic complexity of the full model is O(n 1.74 ), and the asymptotic complexity of the reduced model is O(n 1.07 ) using the same scheme. Finally, Table 4.2 shows the maximal L 2 error for the local SIRM method using different m and m ′ values. If each local reduced equation is solved in a larger subinterval with more modes while the total number of sampling snapshots remains the same, there is no significant improvement in accuracy. On the other hand, if the length of each subinterval remains the same but we sample more snapshots, a more accurate solution can be achieved. Thus, a good m value should balance accuracy and cost of the reduced model, while a small m ′ is desired for the lid-driven cavity flow problem.
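The regression mentioned above is a one-line least-squares fit in log-log coordinates; a minimal sketch:

```python
import numpy as np

def empirical_order(ns, runtimes):
    """Slope of the least-squares fit of log(runtime) against log(n); the
    slope is read off as the empirical complexity exponent in O(n^slope)."""
    slope, _intercept = np.polyfit(np.log(ns), np.log(runtimes), 1)
    return slope
```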
5. Conclusion.
In this article, a new online manifold learning framework, subspace iteration using reduced models (SIRM), was proposed for the reduced-order modeling of large-scale nonlinear problems where both the data sets and the dynamics are systematically reduced. This framework does not require prior simulations or experiments to obtain state vectors. During each iteration cycle, an approximate solution is calculated in a low-dimensional subspace, providing many snapshots to construct an information matrix. The POD (SVD) method could be applied to generate a set of empirical eigenfunctions that span a new subspace. In an ideal case, a sequence of functions defined by SIRM uniformly converges to the actual solution of the original problem. This article also discussed the truncation error produced by SIRM and provided an error bound. The capability of SIRM to solve a high-dimensional system with high accuracy was demonstrated in several linear and nonlinear equations. Moreover, SIRM could also be used as a posterior error estimator for other coarse or reduced models.
In addition, the local SIRM method was developed as an extension that can reduce the cost of SIRM. The SIRM method is used to obtain a better approximate solution on each subinterval of a partitioned time domain. Because each subinterval involves less state variation, the associated reduced model can be kept small. Numerical results for the nonlinear Navier-Stokes equations, obtained on a lid-driven cavity flow problem, showed that the local SIRM method can achieve significant speedups for a large-scale problem while maintaining good accuracy.
There are some interesting open questions to study in the future. For example, since the choice of the extended data ensemble is not unique, there might be other methods that can be used to form an information matrix resulting in a more efficient reduced model. It should be noted that the POD-Galerkin approach is not the only technique that can be used to extract the dominant modes from an information matrix and to construct a reduced model. How to combine SIRM with other model reduction techniques that exhibit higher efficiency remains a topic for future research.
Figure 2.1. Illustration of the actual solution x(t) for the original system (2.1), the projected solution x̃(t) on S, and the approximate solution x̂(t) computed by the reduced model (2.3). The component of error orthogonal to S is given by e_o(t) = x̃(t) − x(t) and the component of error parallel to S is given by e_i(t) = x̂(t) − x̃(t). This figure is reproduced from [19].
repeat
1: Update the iteration number j = j + 1.
2: Assemble snapshots of an approximate solution x̂^{j−1}(t) into matrix form X̂^{j−1}.
3: Compute the vector field matrix F̂^{j−1} associated with the snapshots in X̂^{j−1}.
4: Form an information matrix for the extended data ensemble Ŷ^{j−1} = [X̂^{j−1}, γ F̂^{j−1}].
5: Based on Ŷ^{j−1}, compute the empirical eigenfunctions Φ^j through POD.
6: Project the original equation onto the linear subspace spanned by Φ^j and form a reduced model.
7: Solve the reduced model and obtain an approximate solution z^j(t) in the subspace coordinate system.
8: Express the updated solution in the original coordinate system, x̂^j(t) = Φ^j z^j(t).
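As a rough illustration of the iteration cycle listed above, the sketch below implements one SIRM cycle in Python/NumPy under simplifying assumptions: the right-hand side f, the time grid ts, and the previous iterate's snapshot and vector-field matrices are supplied by the caller, and the reduced model is advanced with a plain explicit Euler step rather than the implicit schemes analyzed in the paper.

```python
# A minimal sketch of one SIRM iteration cycle for x' = f(t, x).
import numpy as np

def sirm_iteration(f, x0, ts, X_prev, F_prev, m, gamma=1.0):
    # Steps 4-5: extended data ensemble and empirical eigenfunctions via POD/SVD.
    Y = np.hstack([X_prev, gamma * F_prev])
    Phi, _, _ = np.linalg.svd(Y, full_matrices=False)
    Phi = Phi[:, :m]                                  # keep the m dominant modes

    # Steps 6-7: Galerkin projection and integration of the reduced model.
    z = Phi.T @ x0
    Z = [z]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        z = z + (t1 - t0) * (Phi.T @ f(t0, Phi @ z))  # explicit Euler (illustrative only)
        Z.append(z)

    # Step 8: express the updated snapshots in the original coordinates.
    X_new = Phi @ np.array(Z).T
    F_new = np.column_stack([f(t, x) for t, x in zip(ts, X_new.T)])
    return X_new, F_new, Phi
```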
Theorem 3.2 (local convergence of SIRM). Consider solving the initial value problem (2.1) over the interval J_0 = [0, a/2] by the SIRM method. Here a, J_a, b, B_b(x_0), M, P, x(t), x̃(t), x̂(t), e(t), e_o(t), and e_i(t) are defined as above. The superscript j denotes the j-th iteration. Suppose f(t, x) is a uniformly Lipschitz function of x with constant K and a continuous function of t for all
Figure 3.1. (a) The velocity profiles at t = 0 and t = 0.5 of the one-dimensional advection-diffusion equation with constant speed c = 0.5 and diffusion coefficient ν = 10^{−3}. n = 500 grid points are used to obtain the full model for the fixed space domain [0, 1]. The trial solution is obtained by extracting the first 10 Fourier modes from a coarse model based on k_0 = 20 grid points. When η = 10^{−8}, it takes one iteration for SIRM to obtain an accurate solution by 12 modes. (b) Convergence of SIRM for different η and k_0 values. Plot of the maximal L_2 error, ||e^j||_∞ = sup{||û^j(t) − u(t)|| : t ∈ [0, 0.5]}, between the benchmark solution u(t) and the iterative solution û^j(t) for t ∈ [0, 0.5].
Parameter values are n = 500, c = 0.5, δt = 10^{−3}. The time domain is [0, 0.5]. The trial solution is obtained by extracting the first 10 Fourier modes from a coarse simulation based on 20 grid points. Results are reported for several values of ν (10^{−1}, 10^{−2}, 10^{−3}, ...).
Figure 3.2. (a) The velocity profiles at t = 0 and t = 1 of the one-dimensional Burgers equation with constant diffusion coefficient ν = 10^{−3}. n = 2000 grid points are used to obtain the full model for the fixed space domain [0, 1], while k_0 = 100 grid points are used to obtain a coarse model. The first 10 Fourier modes are extracted to construct the trial solution. When η = 10^{−10}, it takes three iterations for SIRM to obtain an accurate solution. (b) Convergence of SIRM for different η and k_0 values. Plot of the maximal L_2 error, ||e^j||_∞ = sup{||û^j(t) − u(t)|| : t ∈ [0, 1]}, between the benchmark solution u(t) and the iterative solution û^j(t).
Figure 3.3. Comparison of the actual error e^j(t) = u(t) − û^j(t) with the estimated error ∆^j(t) = û^{j+1}(t) − û^j(t) for t ∈ [0, 1], where u(t) is the actual solution of the one-dimensional Burgers equation computed with 2000 grid points, û^0(t) is the trial solution obtained by extracting the first 10 Fourier modes from a coarse simulation based on 100 grid points, and û^j(t) (j ≠ 0) are the iterative solutions computed by the SIRM method.
4.1. Algorithm of Local SIRM. Suppose the entire time domain J := [0, T] is partitioned into M smaller subintervals J_1, ..., J_M with J_i := [t_{i−1}, t_i]. We slightly abuse the notation and denote the subinterval index by the subscript i. Let t_0 = 0 and t_M = T, so that J = ∪_{i=1}^M J_i. On subinterval J_i, the local solution trajectory approximately resides in a linear subspace S_i spanned by the column vectors in Φ_i. Let Φ_i be orthonormal; then the reduced equation formed by the Galerkin projection is given by
Theorem 4.1 (convergence of local SIRM). Consider solving the initial value problem (2.1) by local SIRM for the time domain J = [0, T], which is partitioned into M smaller subintervals J_1, ..., J_M with J_i := [t_{i−1}, t_i]. Suppose f(t, x) is a locally Lipschitz function of x and a continuous function of t for all (t, x) ∈ J × D′, where D′ is an open set that contains x(t) for all t ∈ J. For subinterval J_i, the SIRM method is applied to obtain an approximation of the local solution. Let x(t) be the local solution of the full model, and let x̂^j(t) be the solution of the reduced model at iteration j. For each iteration, the reduced subspace S_i^j contains x̂(t_{i−1}). Furthermore, the vector field satisfies f(t, x̂^{j−1}) ⊂ S_i^j for all t ∈ J_i. Then, for all i ∈ {1, ..., M} and t ∈ J_i, the sequence of functions {x̂^j(t)} uniformly converges to x(t).
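A sketch of the corresponding outer loop over subintervals follows; the helper coarse_guess, which produces the trial solution on each piece, and the reuse of sirm_iteration from the previous sketch are assumptions made only to keep the example short.

```python
# A rough sketch of the local SIRM outer loop of Section 4.1: the time domain is
# split into subintervals and the SIRM cycle is repeated on each, restarting from
# the final state of the previous subinterval. The stopping test on consecutive
# iterates plays the role of the error indicator discussed earlier.
import numpy as np

def local_sirm(f, x0, t_grid_per_subinterval, coarse_guess, m, tol=1e-8, max_iter=20):
    x_start = x0
    solution_pieces = []
    for ts in t_grid_per_subinterval:
        X, F = coarse_guess(f, x_start, ts)            # trial solution on this piece (user-supplied)
        for _ in range(max_iter):
            X_new, F_new, _ = sirm_iteration(f, x_start, ts, X, F, m)
            converged = np.max(np.linalg.norm(X_new - X, axis=0)) < tol
            X, F = X_new, F_new
            if converged:
                break
        solution_pieces.append(X)
        x_start = X[:, -1]                             # hand off to the next subinterval
    return solution_pieces
```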
4.3. Cavity Flow Problem. Consider a lid-driven cavity flow problem in a rectangular domain Ω = [0, 1] × [0, 1]. The space domain is fixed in time. Mathematically, the problem can be represented in terms of the stream function ψ and vorticity ω formulation of the incompressible Navier-Stokes equation. In nondimensional form, the governing equations are given as (4.6)
Figure 4.1. Streamline pattern for the driven cavity problem with Re = 1000. (a) The full model uses 129 × 129 grid points. (b) The approximating result obtained through local SIRM. The whole time domain, [0, 50], is partitioned into 250 subintervals. For each subinterval, a trial solution is calculated from 33 × 33 grid points. An average of five iterations are used to achieve a better approximation. We plot the contours of ψ whose values are −1 × 10^{−10}, −1 × 10^{−7}, −1 × 10^{−5}, −1 × 10^{−4}, −0.01, −0.03, −0.05, −0.07, −0.09, −0.1, −0.11, −0.115, −0.1175, 1 × 10^{−8}, 1 × 10^{−7}, 1 × 10^{−6}, 1 × 10^{−5}, 5 × 10^{−5}, 1 × 10^{−4}, 2.5 × 10^{−4}, 1 × 10^{−3}, 1.3 × 10^{−3}, and 3 × 10^{−3}.
Figure 4.2 shows the velocity profiles for u along the vertical line and v along the horizontal line passing through the geometric center of the cavity. The coarse model provides a trial solution, which significantly deviates from the actual one. Then, local SIRM is used to obtain much more accurate results. For each iteration, three snapshots and their corresponding tangent vectors are used to form the information matrix. Instead of POD, the Gram-Schmidt process is applied here to form a set of orthonormal empirical eigenfunctions. Since the local data ensemble contains both ψ and ω as well as the associated tangent vectors, the subspace dimension is 12.
Figure 4.2. (a) Comparison of the velocity component u(x = 0.5, y) along the y-direction passing through the geometric center between the full model, the coarse model, and the local SIRM method at t = 50. (b) Comparison of the velocity component v(x, y = 0.5) along the x-direction passing through the geometric center between the full model, the coarse model, and the local SIRM method at t = 50.
Figure 4.3. Comparison of the computational time of the full model and the local SIRM method for the lid-driven cavity flow problem. As the resolution increases from 65 × 65 to 257 × 257, the dimension of the full system, n, increases from 2 × 65² to 2 × 257². Using a log-log plot, the asymptotic complexity can be determined by the linear regression coefficient.
Table 4.2. The maximal L_2 error between the benchmark solution and the approximate solutions computed by the local SIRM method for different m and m′ values. Columns correspond to m′ = 2, 3, 5, 6; rows correspond to different values of m.
Table 3.1. Complexity of Algorithm 1 for one iteration using an implicit scheme for time integration.
Table 4.1. Complexity of the full model, SIRM, and local SIRM using implicit schemes for time integration.
[1] D. Amsallem, M. J. Zahr, and C. Farhat, Nonlinear model order reduction based on local reduced-order bases, Int. J. Numer. Meth. Engng, 92 (2012), pp. 891-916.
[2] A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, PA, 2005.
[3] A. C. Antoulas, D. C. Sorensen, and S. Gugercin, A survey of model reduction methods for large-scale systems, Contemp. Math., 280 (2001), pp. 193-219.
[4] Z. Bai, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Appl. Numer. Math., 43 (2002), pp. 9-44.
[5] S. Chaturantabut and D. C. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM J. Sci. Comput., 32 (2010), pp. 2737-2764.
[6] Y. Chen and J. White, A quadratic method for nonlinear model order reduction, in Proc. Int. Conf. Modeling and Simulation of Microsystems, 2000, pp. 477-480.
[7] M. Dihlmann, M. Drohmann, and B. Haasdonk, Model reduction of parametrized evolution problems using the reduced basis method with adaptive time partitioning, Technical Report 2011-13, Stuttgart Research Centre for Simulation Technology, Stuttgart, Germany, May 2011.
[8] J. L. Eftang, A. T. Patera, and E. M. Rønquist, An "hp" certified reduced basis method for parametrized elliptic partial differential equations, SIAM J. Sci. Comput., 32 (2010), pp. 3170-3200.
[9] U. Ghia, K. N. Ghia, and C. T. Shin, High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method, J. Comput. Phys., 48 (1982), pp. 387-411.
[10] M. A. Grepl, Y. Maday, N. C. Nguyen, and A. T. Patera, Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations, M2AN Math. Model. Numer. Anal., 41 (2007), pp. 575-605.
[11] P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley, Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd ed., Cambridge Univ. Press, Cambridge, UK, 2002.
[12] S. Lall, J. E. Marsden, and S. Glavaski, A subspace approach to balanced truncation for model reduction of nonlinear control systems, Int. J. Robust Nonlinear Control, 12 (2002), pp. 519-535.
[13] M. Loève, Probability Theory, Van Nostrand, Princeton, N.J., 1955.
[14] R. Markovinović and J. D. Jansen, Accelerating iterative solution methods using reduced-order models as solution predictors, Int. J. Numer. Meth. Engng, 68 (2006), pp. 525-541.
[15] J. D. Meiss, Differential Dynamical Systems, SIAM, Philadelphia, PA, 2007.
[16] B. C. Moore, Principal component analysis in linear systems: Controllability, observability, and model reduction, IEEE Trans. Automat. Contr., 26 (1981), pp. 17-32.
[17] M. L. Rapún and J. M. Vega, Reduced order models based on local POD plus Galerkin projection, J. Comput. Phys., 229 (2010), pp. 3046-3063.
[18] M. Rathinam and L. R. Petzold, Dynamic iteration using reduced order models: A method for simulation of large scale modular systems, SIAM J. Numer. Anal., 40 (2002), pp. 1446-1474.
[19] M. Rathinam and L. R. Petzold, A new look at proper orthogonal decomposition, SIAM J. Numer. Anal., 41 (2003), pp. 1893-1925.
[20] M. Rewieński and J. White, A trajectory piecewise-linear approach to model order reduction and fast simulation of nonlinear circuits and micromachined devices, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 22 (2003), pp. 155-170.
[21] M. Rewieński and J. White, Model order reduction for nonlinear dynamical systems based on trajectory piecewise-linear approximations, Linear Algebra Appl., 415 (2006), pp. 426-454.
[22] D. Ryckelynck, A priori hyperreduction method: An adaptive approach, J. Comput. Phys., 202 (2005), pp. 346-366.
[23] J. M. A. Scherpen, Balancing for nonlinear systems, Systems & Control Letters, 21 (1993), pp. 143-153.
[24] A. Thom, The flow past circular cylinders at low speed, Proc. Roy. Soc. Lond. A, 141 (1933), pp. 651-669.
[25] L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, PA, 1997.
[26] D. S. Weile, E. Michielssen, and K. Gallivan, Reduced-order modeling of multiscreen frequency-selective surfaces using Krylov-based rational interpolation, IEEE Trans. Antennas Propag., 49 (2001), pp. 801-813.
| []
|
[
"A method to rigorously enclose eigendecompositions of interval matrices",
"A method to rigorously enclose eigendecompositions of interval matrices"
]
| [
"Roberto Castelli ",
"Jean-Philippe Lessard "
]
| []
| []
| In this paper, a rigorous computational method to enclose eigendecompositions of complex interval matrices is proposed. Each eigenpair x = (λ, v) is found by solving a nonlinear equation of the form f (x) = 0 via a contraction argument. The set-up of the method relies on the notion of radii polynomials, which provide an efficient mean of determining a domain on which the contraction mapping theorem is applicable. | null | [
"https://arxiv.org/pdf/1112.5052v1.pdf"
]
| 119,146,764 | 1112.5052 | 00464940006bc53c5ca98c2d92ce4cd157aa5fce |
A method to rigorously enclose eigendecompositions of interval matrices
Roberto Castelli
Jean-Philippe Lessard
A method to rigorously enclose eigendecompositions of interval matrices
Algebraic eigenvalue problem · Rigorous computations · Contraction mapping theorem. Mathematics Subject Classification (2010): 15A18, 65G40
In this paper, a rigorous computational method to enclose eigendecompositions of complex interval matrices is proposed. Each eigenpair x = (λ, v) is found by solving a nonlinear equation of the form f (x) = 0 via a contraction argument. The set-up of the method relies on the notion of radii polynomials, which provide an efficient mean of determining a domain on which the contraction mapping theorem is applicable.
Introduction
Computing eigenvalues and eigenvectors of matrices is a central problem in many fields of applied sciences involving mathematical modelling. When applied to real-life phenomena, any model needs to consider the occurrence of diverse errors in the data, due for instance to inaccuracy of measurements or noise effects. Such uncertainty in the data can be represented by intervals. In the context of studying a matrix with uncertain entries, one can consider an interval matrix A whose entries consist of intervals containing all possible errors. Although the entries of A are intervals, many classical questions of linear algebra can be raised. For instance, can we demonstrate, given an interval matrix A, that for any matrix A ∈ A, A is invertible, diagonalizable or can we enclose rigorously its eigendecomposition, that is its set of eigenvalues and eigenvectors? In order to address explicitly one of these question, this paper aims at computing rigorous enclosure of eigendecompositions of interval n × n complex valued matrices A. More precisely, we propose a method to construct (if possible) a list of balls {B i : i = 1, . . . , n} such that, for every A ∈ A and for each i = 1, . . . , n, there exists x i = (λ i , v i ) ∈ B i such that Av i = λ i v i , where the solution x i is unique up to a scaling factor of v i .
Before proceeding further, we hasten to mention that several different methods have been developed to find error bounds for computed eigenvalues and eigenvectors of standard (non interval) matrices (e.g. see [1,2,3]). Also, while the problem of computing rigorous bounds for the eigenvalue set of interval matrices is well studied, see for instance [4,5] and the references therein, a not so large literature has been produced regarding the simultaneous enclosure of the eigenvalues and eigenvectors of interval matrices. In this direction we refer to [1,6], where different techniques have been developed to enclose simple eigenvalues and corresponding eigenvectors, while for double or nearly double eigenvalues a method has been introduced in [7]. For the rigorous enclosure of multiple or nearly multiple eigenvalues of general complex matrices, a significant contribution has been made by S. Rump in [3,8].
In this paper, we propose the new idea of enclosing rigorously the eigendecomposition of complex interval matrices by using the notion of radii polynomials, which provide a computationally efficient way of determining a domain on which the contraction mapping theorem is applicable. The radii polynomials approach, which is very similar to the Krawczyk operator approach, aims at demonstrating existence and local uniqueness of solutions of nonlinear equations. The approach involving the Krawczyk operator consists of applying directly the operator to interval vectors (in the form of small neighbourhoods around a numerical approximation) and then attempt to verify a posteriori the hypotheses of a contraction mapping argument [9,10]. On the other hand, the radii polynomials are a priori conditions that are derived using analytic estimates, and once they are theoretically constructed, they are used to solve for the sets (also in the form of small neighbourhoods of a numerical solution) on which a Newton-like operator is a contraction. The advantage of this approach is that most of the estimates are done analytically and generally, and that costly interval arithmetic computations are postponed to the very end of the proofs. It is worth mentioning that the radii polynomials were originally introduced in [11] to compute equilibria of PDEs with the goal of minimizing the extra computational cost required to prove existence of solutions of infinite dimensional PDEs [12].
In this paper, it is demonstrated that, in the context of computing rigorous enclosures of eigendecompositions of complex interval matrices, the method based on the radii polynomials is faster than the algorithm introduced in [8], with the extra advantage of providing local uniqueness (see Section 3.3). Also, it is demonstrated that an approach based on the Krawczyk operator can be significantly slower than the method involving the radii polynomials (see also Section 3.3). As in the case of the Krawczyk operator, the radii polynomials verify existence and uniqueness of a zero of a nonlinear function within the inclusion interval. This also implies multiplicity 1 of the solution as well as non-singularity of the Jacobian at the solution. Therefore, our new proposed method will necessarily fail to enclose multiple or nearly multiple eigenvalues, a constraint that the method proposed in [8] does not have.
The paper is organized as follows. In Section 2, we introduce the computational method, where we first present the method for non interval matrices. In Section 2.1, we demonstrate how to generalize the idea to rigorously enclose eigendecompositions of interval matrices. Finally, in Section 3, we present applications of our method. In Section 3.1, we use the method to rigorously compute the Floquet exponents of a periodic orbit of the Lorenz system of ordinary differential equations. In Section 3.2, we study the applicability of our approach to matrices with interval entries of large radius. Finally, in Section 3.3, we evaluate the cost of our method and compare it to the cost of the algorithm of S. Rump introduced in [8] and to a method based on the Krawczyk operator.
The computational method
To begin with, let us fix some notation: throughout this paper we denote by IC^{n×n} the set of complex matrices with interval entries, by A ∈ C^{n×n} an n × n complex matrix, and by A ∈ IC^{n×n} an n × n interval complex matrix, meaning that any entry of A is a complex interval of the form

A_{k,j} = [Re(Â_{k,j}) ± rad^{(1)}_{k,j}] + i [Im(Â_{k,j}) ± rad^{(2)}_{k,j}],   rad^{(1)}_{k,j}, rad^{(2)}_{k,j} ∈ R^+.

The matrix Â ∈ C^{n×n} is called the center of A, while rad^{(1)}_{k,j}, rad^{(2)}_{k,j} are called the radii of the real and imaginary parts of A_{k,j}, respectively. A matrix A is said to belong to A, denoted A ∈ A, if A_{k,j} ∈ A_{k,j} for any 1 ≤ k, j ≤ n. Bold face letters will always denote interval quantities. Moreover, unless differently specified,
• |·| is the complex absolute value and, in the case of matrices M ∈ C^{n×m}, it acts component-wise, i.e. |M|_{i,j} = |M_{i,j}|;
• given two real matrices M, N, any relation <, >, ≤, etc., is assumed component-wise;
• I_n denotes the n-dimensional identity matrix, and 1_n is the column vector of length n with all entries equal to 1;
• given any matrix M ∈ C^{n×m}, the object (M)_k stands for the n × (m − 1) matrix obtained by deleting the k-th column of M.
As already mentioned in the Introduction, given A ∈ IC^{n×n}, the goal of this paper is to develop a computational method to construct a list of sets {B_i : i = 1, ..., n} such that, for any A ∈ A and for any i = 1, ..., n, there exists (λ, v) ∈ B_i solving Av = λv, with v unique up to a scaling factor.
To simplify the exposition, we first present the method in the context of non-interval matrices A ∈ C^{n×n}; that is, we introduce a method to enclose the solutions (λ, v) of the equation Av = λv.
As one shall see in Section 2.1, only minor modifications are necessary for the extension to the interval case. Suppose that an approximate eigenpair of A has been computed, that is, (λ̄, v̄) such that Av̄ ≈ λ̄v̄, and let f be the function f : C^n → C^n that maps a point x = (λ, v_1, v_2, ..., v_{k−1}, v_{k+1}, ..., v_n) to

f(x) = A (v_1, ..., v̄_k, ..., v_n)^T − λ (v_1, ..., v̄_k, ..., v_n)^T,   (2)

where v̄_k is the largest component of v̄. By definition, a solution x of f(x) = 0 corresponds to an eigenpair (λ, v) of A, with the eigenvalue λ given by the first component of x and the eigenvector v = (v_1, ..., v_{k−1}, v̄_k, v_{k+1}, ..., v_n). Thus, the target of the following analysis is to prove the existence of, and to provide rigorous bounds for, the zeros of f(x). Note that the unknowns in the equation f(x) = 0 are λ and n − 1 components of v, while the remaining component v̄_k is a fixed parameter of the problem. Since the eigenvectors are invariant under rescaling, the solutions of (1) come in continuous families. However, fixing one of the components of v (in our case letting v_k = v̄_k) removes such arbitrariness and therefore isolates the zeros of f. Denoting by x̄ = (λ̄, v̄_1, v̄_2, ..., v̄_{k−1}, v̄_{k+1}, ..., v̄_n) and by Df(x̄) the Jacobian matrix of f at x̄, one has that

Df(x̄) = [ −v̄  (A − λ̄ I_n)_k ],   (3)

where the first column −v̄ = −(v̄_1, ..., v̄_k, ..., v̄_n)^T is the derivative of (2) with respect to λ.
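For a plain (non-interval) matrix, the map f of (2) and the Jacobian (3) can be assembled as in the following sketch; the function and variable names are ours, and the minus sign on the first column of the Jacobian reflects differentiating (2) with respect to λ.

```python
# A small sketch of the map f in (2) and its Jacobian at the numerical approximation.
# The unknown vector is x = (lambda, v_1, ..., v_{k-1}, v_{k+1}, ..., v_n),
# with the k-th eigenvector entry frozen at vbar_k.
import numpy as np

def build_eig_problem(A, lam_bar, v_bar):
    n = A.shape[0]
    k = int(np.argmax(np.abs(v_bar)))          # index of the largest component of v_bar

    def embed(x):
        lam, v = x[0], np.empty(n, dtype=complex)
        v[:k], v[k], v[k + 1:] = x[1:k + 1], v_bar[k], x[k + 1:]
        return lam, v

    def f(x):
        lam, v = embed(x)
        return A @ v - lam * v

    # Jacobian at x_bar: first column is -v_bar, the rest is (A - lam_bar*I)
    # with its k-th column deleted.
    Df = np.hstack([(-v_bar).reshape(-1, 1),
                    np.delete(A - lam_bar * np.eye(n), k, axis=1)])
    return f, Df, k
```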
The problem of finding the zeros of the function f (x) is addressed by introducing an operator T on a Banach space whose fixed points correspond to solutions of f (x) = 0.
Let Ω be the Banach space

Ω = {x ∈ C^n : ||x||_Ω < ∞},   ||x||_Ω = max_i |x_i|,

and define the operator T : Ω → Ω by

T(x) = x − Rf(x),   (4)

where R is a numerical inverse of Df(x̄), i.e. R · Df(x̄) ≈ I_n. Since fixed points of T correspond to zeros of f(x), the idea is to construct a small set B ⊂ Ω such that T : B → B is a contraction, and then to apply the contraction mapping theorem to conclude about the existence of a unique fixed point of T in B. Note that x̄ is an approximate zero of f and the operator T has been defined as a Newton-like operator around the point x̄; thus it is advantageous to test the contractibility of T on neighbourhoods of x̄ in Ω. More precisely, denote by

B(r) = {x ∈ Ω : ||x||_Ω ≤ r}

the closed ball of radius r around the origin, and let B_x̄(r) = x̄ + B(r) be the ball with the same radius centered at x̄. Treating r as a variable, we choose the balls B_x̄(r) as the candidate sets on which to check whether T is a contraction. The question of whether T is a contraction will be formulated in terms of the verification of a set of computable conditions, called the radii polynomials, whose construction is based on efficiently verifying the hypotheses of the next result.
Theorem 2.1. Suppose that Y, Z(r) ∈ Ω are such that

|T(x̄) − x̄| ≤ Y,   sup_{b,c∈B(r)} |DT(x̄ + b)c| ≤ Z(r),   (5)

and satisfy ||Y + Z(r)||_Ω < r. Then there exists a unique x ∈ B_x̄(r) such that T(x) = x.
Proof. The mean value theorem applied component-wise to T implies that for any x, y ∈ B_x̄(r) and for any k = 1, ..., n,

T_k(x) − T_k(y) = DT_k(z)(x − y),   for z ∈ {tx + (1 − t)y : t ∈ [0, 1]} ⊂ B_x̄(r).

Then,

|T_k(x) − T_k(y)| = |DT_k(z) [r(x − y)/||x − y||_Ω]| · (1/r)||x − y||_Ω ≤ (Z_k(r)/r) ||x − y||_Ω.   (6)

Choosing y = x̄ and using the triangle inequality, one has that

|T_k(x) − x̄_k| ≤ |T_k(x) − T_k(x̄)| + |T_k(x̄) − x̄_k| ≤ Y_k + Z_k(r) ≤ r,

where the last inequality follows from the fact that Y, Z(r) ∈ R^n_+, and thus ||Y + Z(r)||_Ω = max_i {Y_i + Z_i(r)} < r. That proves that T(B_x̄(r)) ⊆ B_x̄(r). From (6), it follows that

||T(x) − T(y)||_Ω = max_k {|T_k(x) − T_k(y)|} ≤ (||Z(r)||_Ω / r) ||x − y||_Ω.

Since ||Z(r)||_Ω < r, T is a contraction on B_x̄(r). Thus, from the contraction mapping theorem, there exists a unique fixed point of T in B_x̄(r).
Definition 2.2. Given the vectors Y, Z(r) ∈ Ω satisfying (5), we define the radii polynomials p_k(r), k = 1, ..., n, by

p_k(r) = (Y + Z(r))_k − r.
The following result holds.
Lemma 2.3. Consider the radii polynomials p_k(r), k = 1, ..., n, introduced in Definition 2.2. Then, for any r̄ > 0 such that p_k(r̄) < 0 for all k = 1, ..., n, there exists a unique x ∈ B_x̄(r̄) such that f(x) = 0.

Proof. Suppose r̄ > 0 is such that p_k(r̄) < 0 for all k = 1, ..., n. By (5), all the entries of Y and Z(r̄) are real and positive, thus max_i {(Y + Z(r̄))_i} = ||Y + Z(r̄)||_Ω < r̄. From Theorem 2.1 there exists a unique x ∈ B_x̄(r̄) such that T(x) = x and therefore f(x) = 0.

We proceed with the explicit construction of the bounds Y and Z. Since T(x̄) − x̄ = −Rf(x̄), let

Y := |Rf(x̄)|.   (7)
The bound Z(r) satisfying (5) is constructed as a polynomial in r.
First rewrite DT(x̄ + b)c as DT(x̄ + b)c = (I_n − R · Df(x̄))c + R[(Df(x̄) − Df(x̄ + b))c], so that

|DT(x̄ + b)c| ≤ |(I_n − R · Df(x̄))c| + |R[(Df(x̄) − Df(x̄ + b))c]|.   (8)

Define

Z(r) = rZ_0 + r²Z_1,   (9)

where

Z_0 := |I_n − R · Df(x̄)| 1_n,   Z_1 := 2|R| 1̃_n,   (10)

and 1̃_n is the same as 1_n with a zero, instead of a one, in the k-th component.

Lemma 2.4. Consider the polynomial vector Z(r) defined by (9). Then Z(r) ≥ sup_{b,c∈B(r)} |DT(x̄ + b)c|.
Proof. From (8), the statement follows by proving that

i) sup_{c∈B(r)} |(I_n − R · Df(x̄))c| ≤ rZ_0,
ii) sup_{b,c∈B(r)} |R[(Df(x̄) − Df(x̄ + b))c]| ≤ r²Z_1.

Since c ∈ B(r), |c| ≤ r1_n, it follows that sup_{c∈B(r)} |(I_n − R · Df(x̄))c| ≤ rZ_0. That proves i). For any b = (b_λ, b_1, ..., b_{k−1}, b_{k+1}, ..., b_n),

Df(x̄) − Df(x̄ + b) = − [ (b_1, ..., b_{k−1}, 0, b_{k+1}, ..., b_n)^T  (b_λ I_n)_k ].

Note that the k-th row of the above matrix is null. Since |b_i| ≤ r, we have |(Df(x̄) − Df(x̄ + b))c| ≤ 2r² 1̃_n and therefore sup_{b,c∈B(r)} |R[(Df(x̄) − Df(x̄ + b))c]| ≤ 2r² |R| 1̃_n = r²Z_1.
In summary, given an approximate eigenpair (λ̄, v̄), the method consists of rigorously computing the bounds Y and Z(r) given by (7) and (9), and then checking whether there exists an interval I where all the polynomials p_k(r) are negative. If I ≠ ∅, we select r = inf I and conclude that f(x) = 0 has a unique solution within the ball B_x̄(r). In practice, we get the existence of an eigenpair (λ, v) of A, with |λ − λ̄| ≤ r, |v_j − v̄_j| ≤ r for j ≠ k, and v_k = v̄_k. To prove the existence of a second eigenpair of A, it is necessary to provide an approximate solution (λ̄, v̄), different from the previous one, and to repeat the computation.
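The sketch below mirrors this verification loop in ordinary floating point, reusing build_eig_problem from the previous sketch; it is only a structural illustration, since a genuine proof requires the bounds Y, Z_0, Z_1 to be evaluated with interval arithmetic (as done with INTLAB in Section 3).

```python
# A rough floating-point sketch of the radii polynomial verification step.
import numpy as np

def radii_polynomial_check(A, lam_bar, v_bar, r_max=1e-2, samples=200):
    n = A.shape[0]
    f, Df, k = build_eig_problem(A, lam_bar, v_bar)

    x_bar = np.concatenate(([lam_bar], np.delete(v_bar, k)))
    R = np.linalg.inv(Df)                       # numerical inverse of Df(x_bar)

    ones, ones_k = np.ones(n), np.ones(n)
    ones_k[k] = 0.0                             # the vector denoted 1~_n above

    Y  = np.abs(R @ f(x_bar))                   # bound (7)
    Z0 = np.abs(np.eye(n) - R @ Df) @ ones      # first part of bound (10)
    Z1 = 2.0 * np.abs(R) @ ones_k               # second part of bound (10)

    # p_k(r) = Y_k + r*Z0_k + r^2*Z1_k - r ; look for an r making all of them negative.
    for r in np.linspace(r_max / samples, r_max, samples):
        if np.all(Y + r * Z0 + r**2 * Z1 - r < 0):
            return r                            # radius of the enclosing ball
    return None                                 # verification failed
```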
Extension to the interval case
Besides a few modifications necessary to deal with interval quantities, the procedure to compute rigorous bounds for the eigendecomposition of an interval matrix A ∈ IC^{n×n} is basically the same as for the scalar case. However, a fundamental difference consists in the fact that all the computations are done in the interval arithmetic regime [4], in which any of the basic operations • ∈ {+, −, ·, /} is extended to the interval case in order to satisfy the general assumption

∀P ∈ P, ∀Q ∈ Q:  P • Q ∈ P • Q.   (11)
Given an interval complex valued matrix A, we now address the problem stated at the beginning of the paper, that is, how to construct sets {B_i}_{i=1}^n so that each A ∈ A admits one and only one eigenpair (λ, v) in any of the B_i's. Recall that Â is the center of the interval matrix A. We first compute (λ̄, v̄), an approximate eigenpair of Â, and, as before, define x̄ = (λ̄, v̄_1, v̄_2, ..., v̄_{k−1}, v̄_{k+1}, ..., v̄_n), where the missing component is chosen so that v̄_k = max_j {v̄_j}. Then, replacing the scalar matrix A in (2) by the interval matrix A, the function f(x) and the Jacobian matrix Df(x̄) defined in (2) and (3) are replaced respectively by f : C^n → IC^n and by an interval matrix Df(x̄) that represents a linear operator from C^n to IC^n. We choose R to be a numerical inverse of the center of Df(x̄), and we proceed to the definition of the operator T(x) = x − Rf(x) and to the bounds Y and Z(r), as done before, with the interval quantities in place of the previous ones. Clearly the quantities on the left-hand side of relations (5) are now intervals, thus we define component-wise Y, Z_0, Z_1 as the suprema of the intervals appearing on the right-hand sides of (7) and (10). That will yield the uniform bounds

|T(x̄) − x̄| ≤ Y,   sup_{b,c∈B(r)} |DT(x̄ + b)c| ≤ Z(r),

where DT(x) = I_n − R · Df(x). Suppose that r > 0 is such that the radii polynomials satisfy p_k(r) < 0 for all k. Then interval arithmetic ensures that for all A ∈ A, there exists a unique (λ, v) such that |λ − λ̄| ≤ r, |v_j − v̄_j| ≤ r, v_k = v̄_k, and Av = λv. In other words, r is a uniform bound in A for the existence of an eigenpair of any A ∈ A. Indeed, having fixed (λ̄, v̄) and R (a numerical inverse of the center of Df(x̄)), for any A ∈ A define f_A(x) and Df_A(x) as in (2) and (3), and the fixed point operator T_A(x) = x − Rf_A(x). The fundamental inclusion (11) implies that f_A(x) ∈ f(x), Df_A(x) ∈ Df(x) and T_A(x) ∈ T(x), for any A ∈ A and x ∈ C^n. Thus, as A varies in A, the bounds (5), with T_A in place of T, are satisfied for the same Y, Z, r, proving the existence of a fixed point in B_x̄(r) for any T_A and consequently an eigenpair for any A ∈ A.
Results
In this section we report some computational results. All the computations have been done in Matlab supported by the package Intlab [13], where the interval arithmetic routines have been implemented. The approximate eigenpairs (λ̄, v̄) of Â have been computed by running the standard eig.m function in Matlab. In order to avoid rounding errors and to obtain rigorous results, we emphasize that the computational algorithm treats any matrix as an interval matrix. Thus, even if one wishes to deal with a scalar matrix A, the method first constructs a (narrow) interval matrix around A and performs all the computations with interval arithmetic.
Example 1: rigorous computations of Floquet exponents
The first example concerns the rigorous enclosure of the Floquet exponents and related eigenvectors associated to a periodic orbit of the Lorenz system
u̇_1 = σ(u_2 − u_1),
u̇_2 = ρu_1 − u_2 − u_1 u_3,
u̇_3 = u_1 u_2 − βu_3.   (12)
For a choice of the parameters σ = 10, β = 8/3, ρ = 20.8815 the existence of a periodic orbit γ(t) in a neighborhood of an approximate solution has been proved [14]. It is known that the stability character of the orbit γ(t) is encoded by the Floquet exponents, which are the eigenvalues of a particular real matrix, denoted by A, resulting by integrating the linearized system around γ(t). Without going into the details (we refer to [15] for an exhaustive explanation), we only mention that one of the Floquet exponents is zero, due to the time shift, while the number of the other eigenvalues of A with negative (positive) real part gives the dimension of the stable (unstable) manifold. Moreover the associated eigenvectors provide the directions tangent to the invariant manifolds on γ(t) (tangent bundles).
By means of a computational method based on the radii polynomials, in [15] we proved that the matrix A associated to the solution γ(t) lies within an interval matrix A with radius rad = 9.66146973 · 10^{−7}, meaning that each entry A(i, j) consists of the interval [Â(i, j) − rad, Â(i, j) + rad]. Following the computational method discussed in Section 2, we compute the enclosure of the eigenpairs of A: it results that any A ∈ A admits three eigenpairs (λ_i, v_i), i = 1, 2, 3, each one lying in the ball of radius r_i around the approximate values (λ̄_i, v̄_i) given in Table 1.
Table 1: Data associated to the rigorous computation of the Floquet exponents of the periodic orbit γ.
We remark that in the general situation the genuine solution (λ, v) of the eigenproblem is proved to exist in a complex neighborhood of the approximate solution (λ̄, v̄). Therefore, even if one or both of λ̄ and v̄ are real, the same cannot be concluded for λ or v. However, if the matrix A and the approximate solutions λ̄ and v̄ are real and the computation is successful, then the genuine solution obtained by solving the radii polynomials is also real. Indeed, suppose the contrary, that is, the exact solution λ and v are complex. Since A is real, the complex conjugate couple (C(λ), C(v)) is also a solution of the eigenproblem, A C(v) = C(λ) C(v). But both the solutions (λ, v) and (C(λ), C(v)) belong to the same ball in Ω around x̄, and this violates the uniqueness result stated in Lemma 2.3. The same argument extends to the case of interval matrices.
Coming back to the previous example, we conclude that the Floquet multipliers of γ(t) are real, one is negative and one is positive. Note that the zero Floquet multiplier is indeed contained in the ball around (λ̄_2, v̄_2).
Example 2: matrices with interval entries of large radius
In the next example we compute the eigendecomposition of an interval matrix A constructed as follows: consider the complex numbers λ_0 = 0 and λ_j = e^{i 2πj/5}, j = 1, ..., 5, define D as the diagonal matrix with entries λ_k, k = 0, ..., 5, let Â = XDX^{−1} for a random matrix X with values in the complex square [−1, 1] + i[−1, 1], and finally let A be the interval complex matrix centered at Â with component-wise radius rad in both the real and imaginary parts. The approximate eigenvalues of Â are
λ̄_0 = 0.00000 + 0.00000i, λ̄_1 = 0.30901 + 0.95105i, λ̄_2 = −0.80901 + 0.58778i, λ̄_3 = −0.80901 − 0.58778i, λ̄_4 = 0.30901 − 0.95105i, λ̄_5 = 1.00000 − 0.00000i.
For different values of rad we compute the enclosure of the eigenvalues of A: given λ̄_k, denote by r_k, k = 0, ..., 5, the radius of the ball in the complex plane centered at λ̄_k inside which, for any A ∈ A, a unique eigenvalue of A has been proved to exist. The results are presented in Table 2, where the different values of rad are given in the first column, while each row collects the results for the radii r_k. For values of rad ≤ 1.3 · 10^{−3} the method succeeded in computing the enclosure of the entire eigendecomposition of A, while for larger values of rad the method starts failing in enclosing some of the eigenpairs, up to the value rad = 3.2 · 10^{−3}, where no computation is successful. Figure 1 shows the radii of the disks in the complex plane enclosing the six eigenvalues for the case rad = 1.3 · 10^{−3}.
Example 3: evaluating the performance
In this last section we compare the performance of our new method, which we denote here by radiipol, with two different algorithms developed by S. Rump. The first one, here denoted by verifyeig, was introduced in [8] with the primary goal of computing enclosures of multiple or nearly multiple eigenvalues (and related eigenvectors) of interval matrices. It consists of a verification method and provides rigorous bounds around an approximate solution within which the existence, but not the uniqueness, of the exact solution of the eigenproblem is proved. The second one, here denoted by verifynlss, is based on a Krawczyk operator [9,10] and is a general routine to rigorously compute well separated zeros of nonlinear functions. In fact, in the code verifyeig.m (available in the library Intlab [13]), where the method verifyeig has been implemented, the author suggests using verifynlss to compute simple and well separated eigenpairs. This method is implemented in the code verifynlss.m in the library Intlab [13]. We summarize the obtained results in Table 3 and Table 4. In Table 3, the computational time necessary to compute the entire eigendecomposition of a test matrix (measured in seconds on a 2.4 GHz computer using the tic-toc Matlab function) is reported, independently of whether the methods were successful or not. Table 4 presents the output of the algorithms: for each computation of the eigendecomposition, it provides the average of the radii of the balls enclosing the exact eigenpairs. The entry − means that the method fails in the enclosure of at least one of the eigenpairs.
Rows of Table 2 for the largest radii (values of r_0, ..., r_5): rad = 2.5 · 10^{−3}: −, 0.0759, −, 0.0485, −, −; rad = 3.1 · 10^{−3}: −, −, −, 0.0828, −, −; rad = 3.2 · 10^{−3}: −, −, −, −, −, −.
For both experiments the test matrices A have been constructed as in the previous section: given N, we define D ∈ C^{(N+1)×(N+1)} as a diagonal matrix with entries given by N equispaced values on the unit circle in the complex plane and 0, i.e., diag(D) = [0, e^{i 2πj/N}], j = 1, ..., N. Then let Â = XDX^{−1}, where X is a complex random matrix with entries in the complex square [−1, 1] + i[−1, 1], and finally define A as the interval complex matrix centered in Â and of radius rad.
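The construction of the test matrices can be sketched as follows; the random seed and the use of a plain float for rad (instead of a true interval type) are simplifications for illustration, since the paper carries out the actual computations with INTLAB.

```python
# A sketch of the test-matrix construction: N equispaced eigenvalues on the unit
# circle plus 0, similarity-transformed by a random complex matrix.
import numpy as np

def make_test_matrix(N, seed=0):
    rng = np.random.default_rng(seed)
    eigs = np.concatenate(([0.0], np.exp(2j * np.pi * np.arange(1, N + 1) / N)))
    D = np.diag(eigs)
    X = rng.uniform(-1, 1, (N + 1, N + 1)) + 1j * rng.uniform(-1, 1, (N + 1, N + 1))
    A_hat = X @ D @ np.linalg.inv(X)          # center of the interval matrix A
    return A_hat

A_hat = make_test_matrix(N=5)
rad = 1e-3                                     # component-wise radius of A
```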
From the results of Table 3, we conclude that our newly proposed method radiipol is much faster than the method verifynlss based on the Krawczyk operator, while also proving existence and uniqueness of the solutions. Moreover, our algorithm is faster than verifyeig, especially for small N, and the computational times of the two methods are almost equivalent for large N. This achievement has been possible thanks to the analytical estimates (in the form of the radii polynomials) introduced in the method, which allow minimizing the number of computations done with interval quantities. Hence, the radiipol method provides a computationally efficient way to compute eigendecompositions of complex interval matrices.
The results presented in Table 4 confirm that the new approach radiipol is satisfactory also from the point of view of the accuracy of the results. Indeed, while the algorithm verifynlss fails quite soon as N and rad increase (it fails for rad = 0 and for all N ≥ 15), the new algorithm is successful also for large entries of A. Moreover, as one can see in Table 4, the performance of radiipol is very close to the performance of the algorithm verifyeig which, we underline one last time, does not prove uniqueness of the solution.
Table 4: Comparison of the accuracy of the three methods as the dimension N and the radius rad of the test matrix A change. The entry − means that the method fails in the enclosure of at least one of the eigenpairs.
By (5), all the entries of Y and Z(r̄) are real and positive, thus max_i {(Y + Z(r̄))_i} = ||Y + Z(r̄)||_Ω < r̄. From Theorem 2.1 there exists a unique x ∈ B_x̄(r̄) such that T(x) = x and therefore f(x) = 0.
Lemma 2.4. Consider the polynomial vector Z(r) defined by (9). Then Z(r) ≥ sup_{b,c∈B(r)} |DT(x̄ + b)c|.
Figure 1: Balls in the complex plane enclosing the 6 eigenvalues of any A ∈ A for rad = 1.3 · 10^{−3}. Here Â = XDX^{−1} for a random matrix X with values in the complex square [−1, 1] + i[−1, 1], D is the diagonal matrix with entries λ_k, k = 0, ..., 5, and A is the interval complex matrix centered at Â with component-wise radius rad in both the real and imaginary parts.
Table 3: Comparison of the running time necessary to enclose the entire eigendecomposition of A using radiipol and the two other methods verifyeig and verifynlss. A is a complex interval matrix of dimension N + 1 and rad = 10^{−15}. The entries of the form (·)* correspond to unsuccessful computations. The figure on the right depicts the ratio between the computational times of the algorithms verifyeig and radiipol.
Table 2: Enclosures of the eigenvalues of a complex interval matrix A, as the radius rad of the entries of A increases.

rad           | r_0           | r_1           | r_2           | r_3           | r_4           | r_5
1 · 10^{−5}   | 2.0 · 10^{−4} | 1.5 · 10^{−4} | 1.6 · 10^{−4} | 1.3 · 10^{−4} | 1.8 · 10^{−4} | 1.6 · 10^{−4}
1 · 10^{−4}   | 0.0021        | 0.0016        | 0.0018        | 0.0014        | 0.0019        | 0.0017
1 · 10^{−3}   | 0.0281        | 0.0181        | 0.0202        | 0.0149        | 0.0219        | 0.0197
1.3 · 10^{−3} | 0.0492        | 0.0247        | 0.0284        | 0.0201        | 0.0306        | 0.0276
1.9 · 10^{−3} | −             | 0.0416        | 0.0563        | 0.0323        | 0.0515        | 0.0581
2 · 10^{−3}   | −             | 0.0452        | −             | 0.0346        | 0.0678        | 0.0591
Computational time (s)

N   | radiipol | verifyeig | verifynlss
5   | 0.0148   | 0.0372    | 0.3423
10  | 0.0282   | 0.0650    | 0.6432
50  | 0.2481   | 0.4170    | 7.4476*
100 | 1.4580   | 1.8814    | 66.119*
200 | 14.512   | 16.739    | 792.76*
500 | 377.3    | 423.6     |

[Figure: ratio of the computational times of verifyeig and radiipol as a function of N (0 to 500); the ratio ranges roughly from 1 to 2.6.]
[1] T. Yamamoto, Error bounds for computed eigenvalues and eigenvectors, Numer. Math., 34(2):189-199, 1980.
[2] T. Yamamoto, Error bounds for computed eigenvalues and eigenvectors. II, Numer. Math., 40(2):201-206, 1982.
[3] S. M. Rump and J.-P. M. Zemke, On eigenvector bounds, BIT, 43(4):823-837, 2003.
[4] G. Alefeld and G. Mayer, Interval analysis: theory and applications, J. Comput. Appl. Math., 121(1-2):421-464, 2000.
[5] M. Hladík, D. Daney, and E. Tsigaridas, Bounds on real eigenvalues and singular values of interval matrices, SIAM J. Matrix Anal. Appl., 31(4):2116-2129, 2009/10.
[6] G. Mayer, Result verification for eigenvectors and eigenvalues, in Topics in Validated Computations (Oldenburg, 1993), Stud. Comput. Math. 5, North-Holland, Amsterdam, 1994, pp. 209-276.
[7] G. Alefeld and H. Spreuer, Iterative improvement of componentwise error bounds for invariant subspaces belonging to a double or nearly double eigenvalue, Computing, 36(4):321-334, 1986.
[8] S. M. Rump, Computational error bounds for multiple or nearly multiple eigenvalues, Linear Algebra Appl., 324(1-3):209-226, 2001.
[9] R. Krawczyk, Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlerschranken, Computing (Arch. Elektron. Rechnen), 4:187-201, 1969.
[10] R. E. Moore, A test for existence of solutions to nonlinear systems, SIAM J. Numer. Anal., 14(4):611-615, 1977.
[11] S. Day, J.-P. Lessard, and K. Mischaikow, Validated continuation for equilibria of PDEs, SIAM J. Numer. Anal., 45(4):1398-1424, 2007.
[12] M. Gameiro and J.-P. Lessard, Rigorous computation of smooth branches of equilibria for the three dimensional Cahn-Hilliard equation, Numer. Math., 117(4):753-778, 2011.
[13] S. M. Rump, INTLAB - INTerval LABoratory, in T. Csendes (ed.), Developments in Reliable Computing, Kluwer Academic Publishers, Dordrecht, 1999, pp. 77-104. http://www.ti3.tu-harburg.de/rump/.
[14] R. Castelli, M. Gameiro, and J.-P. Lessard, The radii polynomials: a rigorous computational tool to study differential equations, in preparation.
[15] R. Castelli and J.-P. Lessard, Rigorous numerics in Floquet theory: computing stable and unstable bundles of periodic orbits, submitted, 2011.
| []
|
[
"National research assessment exercises: a comparison of peer review and bibliometrics rankings 1",
"National research assessment exercises: a comparison of peer review and bibliometrics rankings 1"
]
| [
"Giovanni Abramo [email protected] \nDept of Management\nNational Research Council of Italy, IASI-CNR b Laboratory for Studies of Research and Technology Transfer School of Engineering\nUniversity of Rome \"Tor Vergata\"\n\n",
"Ciriaco Andrea D'angelo ",
"Flavia Di Costa ",
"\nDipartimento di Ingegneria dell'Impresa\nUniversità degli Studi di Roma \"Tor Vergata\"\nVia del Politecnico 100133, 72597362Rome -ITALY, tel. +39 06\n",
"\nAbramo, G., D'Angelo, C.A., Di Costa, F\n"
]
| [
"Dept of Management\nNational Research Council of Italy, IASI-CNR b Laboratory for Studies of Research and Technology Transfer School of Engineering\nUniversity of Rome \"Tor Vergata\"\n",
"Dipartimento di Ingegneria dell'Impresa\nUniversità degli Studi di Roma \"Tor Vergata\"\nVia del Politecnico 100133, 72597362Rome -ITALY, tel. +39 06",
"Abramo, G., D'Angelo, C.A., Di Costa, F"
]
| []
| Development of bibliometric techniques has reached such a level as to suggest their integration or total substitution for classic peer review in the national research assessment exercises, as far as the hard sciences are concerned. In this work we compare rankings lists of universities captured by the first Italian evaluation exercise, through peer review, with the results of bibliometric simulations. The comparison shows the great differences between peer review and bibliometric rankings for excellence and productivity. | 10.1007/s11192-011-0459-x | [
"https://arxiv.org/pdf/1811.01703v1.pdf"
]
| 40,980,644 | 1811.01703 | 9e8d742110222c0e3d340fe7c4918d5783fe4f7a |
National research assessment exercises: a comparison of peer review and bibliometrics rankings 1
Giovanni Abramo [email protected]
Dept of Management
National Research Council of Italy, IASI-CNR b Laboratory for Studies of Research and Technology Transfer School of Engineering
University of Rome "Tor Vergata"
Ciriaco Andrea D'angelo
Flavia Di Costa
Dipartimento di Ingegneria dell'Impresa
Università degli Studi di Roma "Tor Vergata"
Via del Politecnico 100133, 72597362Rome -ITALY, tel. +39 06
Abramo, G., D'Angelo, C.A., Di Costa, F
National research assessment exercises: a comparison of peer review and bibliometrics rankings 1
10.1007/s11192-011-0459-x. * Corresponding author. (2011). National research assessment exercises: a comparison of peer review and bibliometrics rankings. Scientometrics, 89(3), 929-941. Keywords: research assessment; bibliometrics; peer review; research productivity; university; Italy.
Development of bibliometric techniques has reached such a level as to suggest their integration or total substitution for classic peer review in the national research assessment exercises, as far as the hard sciences are concerned. In this work we compare rankings lists of universities captured by the first Italian evaluation exercise, through peer review, with the results of bibliometric simulations. The comparison shows the great differences between peer review and bibliometric rankings for excellence and productivity.
Introduction
In recent years there has been unanimous agreement that governments should assign resources for scientific development according to rigorous evaluation criteria. This responds to the needs of the knowledge economy, which demands development of efficient scientific infrastructure capable of supporting the competitiveness of the national production system. The rising costs of research and tight restrictions on budgets add to the tendency for evaluation. Governments thus resort to such exercises, for the following purposes: i) to stimulate greater efficiency in research activity; ii) to allocate resources in function of merit; iii) to reduce information asymmetry between supply and demand for new knowledge; iv) to inform research policies and institutional strategies; and v) to demonstrate that investment in research is effective and delivers public benefits.
The need for evaluation is fully agreed upon at the theoretical level, but matters become more problematic when it comes to which methods to apply. The recent development of bibliometric techniques has led various governments to introduce bibliometrics, where applicable, in support of or substitution for more traditional peer review. In the United Kingdom the Research Excellence Framework (REF), taking place in 2014, is an informed peer-review exercise, where the assessment outcomes will be a product of expert review informed by citation information and other quantitative indicators. It replaces the previous Research Assessment Exercise series, which were pure peer-review exercises. In Italy, the Quality of Research Assessment (VQR), expected in 2012, replaces the previous pure peer-review Triennial Evaluation Exercise (VTR). It can be considered a hybrid, as the panels of experts can choose one or both of two methodologies for evaluating any particular output: i) citation analysis; and/or ii) peer review by external experts. The Excellence in Research for Australia initiative (ERA), launched in 2010, is conducted through a pure bibliometric approach for the hard sciences. Single research outputs are evaluated by a citation index referring to world and Australian benchmarks.
The pros and cons of peer-review and bibliometric methods have been thoroughly dissected in the literature (Horrobin, 1990; Moxham and Anderson, 1992; MacRoberts and MacRoberts, 1996; Moed, 2002; van Raan, 2005; Pendlebury, 2009; Abramo and D'Angelo, 2011). For the evaluation of individual scientific products, the literature fails to decisively indicate whether one method is better than the other, but demonstrates that there is certainly a correlation between the results from peer-review evaluation and those from purely bibliometric exercises. This has been demonstrated for the Italian system based on a broad-scale study conducted by Abramo et al. (2009), with metrics based on the impact factor of journals, and by Franceschet and Costantini (2011) using citation analysis of publications. Preceding studies concerning other nations have also demonstrated a positive correlation between peer quality esteem and citation indicators (Aksnes and Taxt 2004; Oppenheim and Norris 2003; Rinia et al. 1998; Oppenheim 1997).
The severe limits of peer review emerge when it is applied to comparative evaluation, whether of individuals, research groups or entire institutions. Abramo and D'Angelo (2011) have contrasted the peer-review and bibliometrics approaches in national research assessments and conclude that the bibliometric methodology is by far preferable to peer review in terms of robustness, validity, functionality, time and costs. This is due to the intrinsic limits of all peer-review exercises, in which restrictions on budget and time force the review to focus the evaluation on a limited share of total output from each research organization. One of the consequences is that comparative peer review is limited to the dimension of excellence and is unable to deal with average quality or productivity of the subjects evaluated. A second limitation is that the final rankings are strongly dependent on the share of product evaluated (lack of robustness). A third is that the selection of products to submit to evaluation can be inefficient, due to both technical and social factors (parochialism, the real difficulty of comparing articles from various disciplines, etc.). This can impact negatively on the final rankings and their capacity to represent the true value (or lack of same) for the single organizations evaluated. A fourth consequence is that peer-review evaluations do not offer any assistance to universities in allocating resources to their best individual researchers, since they do not consistently penetrate to precise and comparable levels of information (lack of functionality). Finally, the time and costs of execution involved prevent peerreview evaluations from being sufficiently frequent for effective stimulation of improvement in research systems.
The limitations indicated, particularly those related to the selection and the share of products, lead to legitimate doubts about the accuracy of rankings of organizations as obtained from peer-review national assessment exercises. The aim of this work is to measure the amplitude of shift in rankings of organizations compared to the rankings from bibliometric-type evaluations. Bibliometric simulation is legitimated by the abovenoted correlation between peer review and bibliometrics concerning individual research products. The comparison refers to the first Italian research assessment exercise (VTR, 2006), for the scientific production from the period 2001-2003.
The next section of the work describes the dataset used and the methodology for the analysis. Sections 3 and 4 present and comment on the results obtained from the study, conducted at the aggregate level of disciplines. The last section provides a summary of the main results and some further considerations of the authors.
Methodology
Before showing the comparison between the Italian VTR rankings list 2 and those derived from the bibliometric simulation, we describe the dataset and the specific methodologies applied.
The VTR peer evaluation
In December 2003, the Italian Ministry for Universities and Research (MIUR) launched its first-ever Triennial Research Evaluation (VTR), which for the opening occasion referred to the period 2001-2003. The national Directory Committee for the Evaluation of Research (CIVR) was made responsible for conducting the VTR (2006). The assessment system was designed to evaluate research and development carried out by public research organizations (102 in total), including both universities and research organizations with MIUR funding. However, the remainder of the current work pertains only to universities.
In Italy each university scientist belongs to one specific disciplinary sector (SDS), 370 in all 3, grouped in 14 University Disciplinary Areas (UDAs). As a first step, the CIVR selected experts for 14 panels, one for each UDA 4. Universities were then asked to autonomously submit research outputs to the panels 5: outputs were to be in the proportion of one for every four researchers working in the university in the period under observation. Acceptable outputs were limited to articles, books, and book chapters; proceedings of national and international congresses; patents and designs; performances, exhibitions and art works. Thus the VTR was designed as an ex-post evaluation exercise focused on the best outputs produced by Italian research institutions.
In the next step, the panels assessed the research outputs and attributed a final judgment to each product, giving ratings of either "excellent", "good", "acceptable" or "limited". The panels were composed of 183 high level peers appointed by the CIVR, and called on additional support from outside experts. The judgments were made on the basis of various criteria, such as quality, relevance and originality, international scope, and potential to support competition at an international level. To this purpose, the following quality index $R_{i,u}$ was used for ranking research institution $i$ in UDA $u$:

$$R_{i,u} = \frac{1}{N_{i,u}} \left( E_{i,u} + 0.8\,G_{i,u} + 0.6\,A_{i,u} + 0.2\,L_{i,u} \right) \qquad [1]$$

where:
$E_{i,u}$, $G_{i,u}$, $A_{i,u}$, $L_{i,u}$ = numbers of "excellent", "good", "acceptable" and "limited" outputs submitted by the i-th university in UDA u;
$N_{i,u}$ = total number of outputs submitted by the i-th university in UDA u.

A final report ranks universities based on their results under the quality assessment index. The rankings were realized at the level of single UDAs. Within each UDA the universities were subdivided by size into four classes: very large, large, medium, and small. As an example, Table 1 shows the ranking list of Italian "large" universities based on $R_{i,u}$ in the UDA "Mathematics and computer science". In addition to the dimensional ranking, Table 1 gives the excellence ranking within the universe of institutions active in the UDA under examination. Table 2 presents the example of the specific ratings obtained by the University of Rome "Tor Vergata", in the 11 disciplinary UDAs for which it submitted outputs.
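To make the mechanics of equation [1] concrete, the following Python sketch (our own illustration; the function name and dictionary layout are not part of the VTR specification) computes the quality index and the resulting ordering for three of the "large" universities of Table 1, using the output counts reported there.

```python
# A minimal sketch of the VTR quality index of equation [1].
# Weights: excellent 1.0, good 0.8, acceptable 0.6, limited 0.2.

def vtr_rating(E, G, A, L):
    """Quality index: weighted share of submitted outputs, eq. [1]."""
    N = E + G + A + L                      # total outputs submitted
    return (E + 0.8 * G + 0.6 * A + 0.2 * L) / N

submissions = {
    "Milan":             (17, 10, 1, 0),   # -> 0.914 in Table 1
    "Milan Polytechnic": (16, 7, 2, 0),    # -> 0.912
    "Pisa":              (22, 18, 2, 0),   # -> 0.895
}

ranking = sorted(submissions, key=lambda u: vtr_rating(*submissions[u]), reverse=True)
for university in ranking:
    print(f"{university:20s} {vtr_rating(*submissions[university]):.3f}")
```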
The magnitude of the VTR effort can be suggested by a few pertinent facts: the evaluation included 102 research institutions (77 universities and 25 public research organizations) and examined about 18,000 outputs, drawing on 20 peer panels, 183 panelists and 6,661 reviewers, with the work taking almost two years and direct costs amounting to 3.5 million euros.
The bibliometric dataset
The dataset of scientific products examined in the study is based on the Observatory of Public Research (ORP), derived under license from the Thomson Reuters Web of Science (WoS). ORP provides a census of scientific production dating back to 2001, from all Italian public research organizations (95 universities, 76 research institutions and 192 hospitals and health care research organizations). For this particular study the analysis is limited to universities. Beginning from the raw data of the WoS, and applying a complex algorithm for reconciliation of affiliations and disambiguation of the true identity of the authors, each publication (article, review and conference proceeding) is attributed to the university scientist or scientists that produced it . Every publication is assigned to a UDA on the basis of the SDS to which the author belongs. A research product co-authored by scientists working in different UDAs is assigned to all these UDAs, and a research product co-authored by scientists working in different universities is assigned to all these universities. Overall, in the triennium examined, the research staff of these UDAs achieved 84,289 publications 7 . The products submitted for evaluation in the VTR represented less than 9% of the total portfolio.
Evaluation of scientific excellence in universities: VTR versus bibliometric assessment
The VTR provided for evaluation of a number of products from each university proportionate to the number of researchers belonging to each UDA 8. The underlying objective was clearly to identify and reward the universities on the basis of excellence. However, the resulting rankings present distortions due to two factors. The first is the inefficiency in selection of the best products on the part of the university, which we have already noted. Because of this, the ranking lists do not reflect true excellence, but rather that suggested by the products submitted, with the distance from reality depending on the inefficiency of the selection. Abramo et al. (2009) have already quantified the inefficiency related to this problem 9. The second factor concerns the method of identifying excellence. If an exercise is conceived to measure (and reward) excellence, then the ranking lists that it produces should indicate first place for those universities that produce, under equal availability of resources, a greater quantity of excellent research results (top-down approach). However the VTR, in a pattern that is unavoidable under peer review, evaluated a fixed number of products per university, independently of their real excellence (bottom-up approach). Given the assumption, backed by the literature, that peer review and bibliometrics are equivalent in their evaluations of individual research products, the bibliometric approach can overcome these limits. Through indicators of impact, it is possible to adopt a top-down approach and at the same time eliminate the inefficiency in selection by the universities.
Using the bibliometric method for the evaluation of excellence, the position of university i in the national ranking list of UDA u derives from the indicator of excellence $I_{i,u}$, defined by equation [2] below. But how can we qualify the excellence of a research product? From a bibliometric point of view, the excellence of a publication is indicated by the citations that it receives from the scientific community of reference. For the aims of the present work we consider an indicator, named the Article impact index (AII), equal to the standardized citations of a publication, i.e. the ratio of the citations received by a publication to the median of citations 10 for all Italian publications of the same year and WoS subject category 11. The distribution of the AII of the national publications of a given UDA permits identification of the excellent products on the basis of a given threshold level. We have simulated two scenarios, one in line with international practice and the other in line with the Italian VTR exercise. Consequently the two reference datasets differ as a function of the selection method for excellent publications: i) the top 10% of the national publications per AII in each UDA (analogous to international practice); and ii) the best publications from a UDA in numbers equal to 25% of the total national members of the UDA (analogous to the VTR guidelines).

Footnote 9: They found that the average percentage of publications selected by universities for the VTR with a bibliometric quality value lower than the median of the national distribution for all of the university's outputs in a UDA varies from a minimum of 3.7% in biology to a maximum of 29.6% for agricultural and veterinary sciences. Other than this last discipline, notable figures also emerge for industrial and information engineering (26.5%) and mathematics and computer science (24.8%) as disciplines in which the selection process results as particularly ineffective. In six out of eight UDAs there were actually universities that submitted all publications with a bibliometric quality indicator lower than the median for the UDA.
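To illustrate how the AII and the scenario i) threshold could be computed in practice, here is a small Python sketch; the toy publication records and all names in it are ours, purely for illustration.

```python
# Illustrative sketch (toy data): Article impact index (AII) as citations
# standardized to the median of the same year / subject category, then
# selection of "excellent" papers as the national top 10% by AII.
from statistics import median

# (citations, year, subject_category) for a handful of fictitious papers
papers = [
    (12, 2001, "Mathematics"), (3, 2001, "Mathematics"), (7, 2001, "Mathematics"),
    (25, 2002, "Physics"), (5, 2002, "Physics"), (9, 2002, "Physics"), (1, 2002, "Physics"),
]

# median citations per (year, subject category)
groups = {}
for cit, year, cat in papers:
    groups.setdefault((year, cat), []).append(cit)
medians = {key: median(vals) for key, vals in groups.items()}

# AII = citations / median of its (year, category) group
aii = [cit / medians[(year, cat)] for cit, year, cat in papers]

# scenario i): top 10% of publications by AII (at least one paper here)
n_top = max(1, round(0.10 * len(papers)))
excellent = sorted(range(len(papers)), key=lambda j: aii[j], reverse=True)[:n_top]
print("AII values:", [round(a, 2) for a in aii])
print("indices of 'excellent' papers:", excellent)
```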
For each of these two scenarios, national ranking lists were prepared in each UDA on the basis of indicator Ii,u. For comparison with the rankings from the VTR, Spearman coefficients of correlation were calculated (Table 5): these result as significant for five UDAs out of eight for scenario A and six out of eight for scenario B. Between the two scenarios, five UDAs are the same: Mathematics and computer sciences, Chemistry, Biology, Medicine, Agricultural and veterinary sciences. For scenario B, the coefficient also results as significant for Industrial and information engineering. Amongst these areas, the coefficients show a non-weak correlation only in Biology. Thus we certainly cannot affirm that the bibliometric evaluation of excellence provides a framework that thoroughly coincides with the results of the evaluation exercise.
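The correlation analysis itself is straightforward to reproduce. The sketch below computes a Spearman rank correlation between two ranking lists with scipy; the numbers are invented and only show the mechanics.

```python
# Spearman correlation between a peer-review ranking and a bibliometric
# ranking of the same universities (toy numbers, not the VTR data).
from scipy.stats import spearmanr

vtr_rank          = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
bibliometric_rank = [3, 1, 5, 2, 8, 4, 10, 6, 9, 7]

rho, p_value = spearmanr(vtr_rank, bibliometric_rank)
print(f"Spearman rho = {rho:.2f}, p-value = {p_value:.3f}")
```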
Table 5: Spearman correlation between VTR ranking lists and bibliometric ranking lists
Given the correlation analysis, it is useful to analyze the shifts between the ranking lists in terms of variation of percentile and quartile. The results for scenario A are seen in Table 6. The variations are very substantial: in terms of percentiles, the shifts always involve at least 89% of the universities, with average values falling in the range of 20-31 percentile points and medians in the range of 11-25. Maximum shifts are notable, always greater than 67 percentiles; in four UDAs the maximum shift is actually over 90 percentiles; in Earth sciences and in Medicine there is the extreme circumstance of the university that places first in the VTR rankings coming last in the rankings based on the bibliometric indicator of excellence. The variations by quartile are also very substantial. At least 45% of the universities active in Agricultural and veterinary sciences shift by at least one quartile, and 80% of those active in Biology register such shifts; in the other UDAs, the percentages of universities that shift fall between these two extremes. The values for average and median shift are uniform (equal to one quartile), except for Agricultural and veterinary sciences (median nil), as is the value for maximum shift (3 quartiles) for all eight UDAs examined.
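The shift statistics reported in Tables 6, 7 and 10 can be computed along the following lines; the percentile and quartile conventions and the toy data below are our own assumptions for illustration.

```python
# Shift of each university between two rankings, expressed in percentiles
# and quartiles (illustrative implementation, toy data).
def percentile(position, n):
    """Map rank position 1..n to a 0-100 percentile (1 = best = 100)."""
    return 100.0 * (n - position) / (n - 1)

def quartile(position, n):
    """Quartile 1..4 of a rank position (1 = top quartile)."""
    return min(4, 1 + int(4 * (position - 1) / n))

vtr_rank = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8}
bib_rank = {"A": 5, "B": 1, "C": 2, "D": 8, "E": 3, "F": 4, "G": 6, "H": 7}
n = len(vtr_rank)

perc_shifts = [abs(percentile(vtr_rank[u], n) - percentile(bib_rank[u], n)) for u in vtr_rank]
quart_shifts = [abs(quartile(vtr_rank[u], n) - quartile(bib_rank[u], n)) for u in vtr_rank]

print("max / mean percentile shift:", max(perc_shifts), sum(perc_shifts) / n)
print("share of universities changing quartile:",
      sum(1 for s in quart_shifts if s > 0) / n)
```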
The comparison between VTR and bibliometric rankings for scenario B is presented in Table 7: the results are almost a complete match to those from the comparison for scenario A. Table 8 provides an examination in more detail concerning the distribution of universities for extent of shift, in quartiles. We provide this examination for the example of scenario B. The most striking cases (shifts of 3 quartiles) are seen in three UDAs: Physics (5 universities out of 52), Earth sciences (5 out of 41) and Industrial and information engineering (5 out of 44). In Physics, of the five universities, three drop from first to last quartile and two rise in the opposite direction, with respect to the CIVR evaluation. In Earth sciences and in Industrial and information engineering these numbers are equal to, respectively, two and three.

Table 8: Numerosity of universities for extent of shift (in quartiles) for each UDA (scenario B)

UDA | 0 | 1 | 2 | 3 | Total
Mathematics and computer sciences | 16 | 25 | 12 | 0 | 53
Physics | 17 | 14 | 16 | 5 | 52
Chemistry | 25 | 17 | 6 | 3 | 51
Earth sciences | 13 | 16 | 7 | 5 | 41
Biology | 21 | 27 | 6 | 1 | 55
Medicine | 22 | 13 | 10 | 1 | 46
Agricultural and veterinary sciences | 12 | 9 | 5 | 3 | 29
Industrial and information engineering | 15 | 19 | 5 | 5 | 44
We now imagine a division of the rankings into four classes, as in the four research profile classes of universities applied by the last UK Research Assessment Exercise. The Higher Education Funding Council for England (HEFCE) has adopted a performance-based research funding scheme 12 which does not assign any funds to universities that placed in the lowest of these four classes. Universities with an evaluation of their research profile as first class receive (under equal numbers of research staff) three times more funds than universities in the second class, which in turn receive three times as much as those in the third class. If the resource attribution mechanism for Italian universities were the same as that of the UK HEFCE, then in Industrial and information engineering (as an example) three universities would not have received any funds, even though they place first in national rankings according to reliable bibliometric criteria. On the other hand, two universities that place very low in the national bibliometric classification would receive very large quantities of funds on the basis of the VTR, with very evident distortion of the reward system.
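To see how such a class-based formula redistributes money, consider this small sketch of an HEFCE-style allocation; the per-capita weights 9:3:1:0 follow from the "three times as much per class" rule described above, while the universities and staff numbers are invented.

```python
# HEFCE-style selective funding: per-capita weights 9 : 3 : 1 : 0 for the
# four research-profile classes (invented universities and staff numbers).
WEIGHTS = {1: 9, 2: 3, 3: 1, 4: 0}
TOTAL_FUNDS = 100.0  # millions, say

universities = {          # name: (class, research staff)
    "Univ A": (1, 200),
    "Univ B": (2, 300),
    "Univ C": (3, 250),
    "Univ D": (4, 150),
}

shares = {name: WEIGHTS[cls] * staff for name, (cls, staff) in universities.items()}
total_share = sum(shares.values())
for name, share in shares.items():
    print(f"{name}: {TOTAL_FUNDS * share / total_share:.1f} M euro")
```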
VTR versus bibliometric productivity assessment
The main limit of the peer-review evaluation method remains that of not being able to compare the research productivity of organizations without excessive costs and times. The consequence of containing costs is the extreme volatility of rankings with variation of the share of product evaluated, as stated above and as measured in a preceding study by Abramo et al. (2010). The authors' opinion is that a system of evaluation and consequent selective funding should embed productivity measurements, which makes evaluation of the total output necessary. In the hard sciences the publications indexed in such bibliometric data bases as WoS or Scopus, represent a meaningful proxy of total output (Moed, 2005), meaning that the bibliometric method permits comparative measurement of productivity. However, if rankings by quality evaluation based on peer review agree with rankings of universities based on productivity, it is evident that no conflict occurs. In this section we test for this occurrence, meaning we verify whether the research institutions evaluated as excellent in terms of quality are also necessarily those that are most efficient in research activities.
As previously, we first do a correlation analysis and then an analysis of the shifts in rankings. We apply a bibliometric indicator of productivity, defining the research productivity $RP_{i,s}$ of university i in SDS s as:

$$RP_{i,s} = \frac{1}{RS_{i,s}} \sum_{j=1}^{N_{i,s}} AII_j \cdot f_{i,s,j} \qquad [3]$$

with:
$AII_j$ = article impact index of publication j;
$f_{i,s,j}$ = fraction of authors of university i and SDS s to total co-authors of publication j (considering, if publication j falls in life science subject categories, the position of each author and the character of the co-authorship, either intra-mural or extramural 13);
$N_{i,s}$ = total number of publications authored by research staff in SDS s of university i;
$RS_{i,s}$ = research staff of university i in SDS s.

Once the productivity indicator has been measured at the level of SDS we proceed to aggregation at the UDA level, through standardization and weighting of the data for its SDSs. This method limits the distortion typical of aggregate analyses that do not take account of the varying fertility of the SDSs and their varying representation in terms of members in each UDA (Abramo et al., 2008). The research productivity $RP_{i,u}$ in a general UDA u of a general university i is thus calculated as:

$$RP_{i,u} = \sum_{s=1}^{n_u} \left( \frac{RP_{i,s}}{\overline{RP}_s} \cdot \frac{RS_{i,s}}{RS_{i,u}} \right) \qquad [4]$$

with:
$\overline{RP}_s$ = average research productivity of national universities in SDS s;
$RS_{i,u}$ = research staff of university i in UDA u;
$n_u$ = number of SDSs in UDA u.

Table 9 presents the Spearman coefficients of correlation for the ranking lists obtained from the VTR and from application of this bibliometric indicator of productivity. The coefficients are statistically significant in only five UDAs out of eight (Mathematics and computer sciences, Chemistry, Biology, Medicine and Industrial and information engineering), but the values indicate a weak correlation between the two rankings. Once again, the results clearly show that the research institutions evaluated through peer review as excellent in terms of quality are not necessarily those that are most efficient in research activities.
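A compact sketch of how the productivity indicators of equations [3] and [4] could be computed from a publication list follows; the toy data and all variable names are ours.

```python
# Sketch of equations [3]-[4]: SDS-level productivity, then UDA aggregation
# standardized by the national SDS average (toy data, our own field names).
from collections import defaultdict

# publications of one university: (SDS, AII, author_fraction)
pubs = [("MAT/05", 1.8, 0.5), ("MAT/05", 0.6, 1.0), ("INF/01", 2.2, 0.25)]
research_staff = {"MAT/05": 10, "INF/01": 8}          # RS_{i,s}
national_avg_rp = {"MAT/05": 0.20, "INF/01": 0.15}    # national mean RP_s

# equation [3]: RP_{i,s} = (1/RS_{i,s}) * sum_j AII_j * f_{i,s,j}
rp_sds = defaultdict(float)
for sds, aii, frac in pubs:
    rp_sds[sds] += aii * frac
for sds in rp_sds:
    rp_sds[sds] /= research_staff[sds]

# equation [4]: weight the standardized SDS productivities by staff shares
total_staff = sum(research_staff.values())
rp_uda = sum((rp_sds[s] / national_avg_rp[s]) * (research_staff[s] / total_staff)
             for s in rp_sds)
print({s: round(v, 3) for s, v in rp_sds.items()}, "RP_uda =", round(rp_uda, 3))
```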
Table 9: Spearman correlation between VTR ranking lists and bibliometric rankings for productivity, by UDA (coefficient of correlation).
The analysis of the rankings shifts between the two lists (Table 10) shows obvious differences. For quartile rankings, the percentages of universities with shifts vary from a minimum of 53% in Chemistry to a maximum of 77% in Physics 14 . Just as in the analysis for the preceding section, the results are uniform for values of average and median shift (equal to one quartile) and for maximum shift (equal to three quartiles). It should be noted that a shift equal to 3 quartiles means that a university in the top group of rankings by VTR would thus result in the last, or vice versa.
Conclusions
Both within the scientific community and beyond, there is unanimous agreement that resources for science should be assigned according to rigorous evaluation criteria. However there is a lively debate on which methods should be adopted to carry out such evaluations. The peer-review methodology has long been the most common. This was the approach for the first large-scale evaluation in Italy (VTR), dealing with the 2001-2003 triennium and concluded in 2006. Recently, the agency responsible prepared the guidelines for an updated national evaluation (the VQR), this time on the basis of a seven-year period and a more ample set of products, but still a peer-review type exercise.
For whatever evaluation is intended to inform a research funding scheme, the design must be such as to achieve the strategic objectives the policy-maker is proposing. For the Italian VTR, the objective was to identify and reward excellence: in this work we have attempted to verify the achievement of that objective. To do this we compared the ranking lists from the VTR with those obtained from evaluation simulations conducted with analogous bibliometric indicators. The analyses have highlighted notable shifts, the causes of which the authors have amply examined in previous works. The results justify very strong doubts about the reliability of the VTR rankings in representing the real excellence of Italian universities, and raise a consequent worry about the choice to distribute part of the ordinary funding for university functioning on the basis of these rankings. A further analysis by the authors shows that the VTR rankings are not even correlated with the average productivity of the universities. Everything seems to suggest a re-examination of the choices made for the first VTR and of the proposals for the new VQR. The time seems ripe for adoption of an approach different from peer review, at least for the hard sciences, areas where publication in international journals represents a robust proxy of research output and where bibliometric techniques offer advantages that are difficult to dispute when compared to peer review.
Equation [2], referenced in the text above, defines the indicator of excellence as the share of national excellent products normalized by the share of national research staff:

$$I_{i,u} = \frac{Ne_{i,u}/Ne_u}{RS_{i,u}/RS_u} \qquad [2]$$

where:
$Ne_{i,u}$ = number of excellent research products in UDA u authored by scientists of university i;
$Ne_u$ = total number of national excellent research products in UDA u;
$RS_{i,u}$ = research staff of university i in UDA u;
$RS_u$ = total national research staff in UDA u.
Table 1: VTR rank list of Italian "large" universities for Mathematics and computer science: E, G, A and L indicate numbers of outputs rated by VTR as excellent, good, acceptable, limited

University | Selected outputs | E | G | A | L | Rating | Category rank | Absolute rank | Absolute rank (percentile)
Milan | 28 | 17 | 10 | 1 | 0 | 0.914 | 1 | 4 | 92
Milan Polytechnic | 25 | 16 | 7 | 2 | 0 | 0.912 | 2 | 6 | 90
Pisa | 42 | 22 | 18 | 2 | 0 | 0.895 | 3 | 9 | 85
Rome "La Sapienza" | 61 | 31 | 26 | 4 | 0 | 0.889 | 4 | 13 | 77
Bologna | 35 | 17 | 15 | 3 | 0 | 0.880 | 5 | 16 | 67
Padua | 31 | 11 | 17 | 3 | 0 | 0.852 | 6 | 23 | 58
Florence | 31 | 12 | 15 | 3 | 1 | 0.839 | 7 | 25 | 54
Palermo | 31 | 9 | 14 | 7 | 1 | 0.794 | 8 | 39 | 27
Turin | 30 | 7 | 15 | 7 | 1 | 0.780 | 9 | 41 | 19
Genoa | 30 | 7 | 17 | 4 | 2 | 0.780 | 9 | 41 | 19
Naples "Federico II" | 43 | 7 | 26 | 8 | 2 | 0.767 | 11 | 44 | 17
Table 2: VTR ratings for the University of Rome "Tor Vergata": E, G, A and L indicate numbers of outputs rated by VTR as excellent, good, acceptable, limited

UDA | Selected outputs | E | G | A | L | Rating | Category rank (class)
Mathematics and computer science | 23 | 17 | 5 | 1 | 0 | 0.939 | 1 out of 15 (medium)
Physics | 19 | 10
The field of observation covers the 2001-2003 triennium and is limited to the hard sciences, meaning eight out of the total 14 UDAs: Mathematics and computer science, Physics, Chemistry, Earth science, Biology, Medicine, Agricultural and veterinary sciences and Industrial and information engineering 6. In the UDAs thus examined, over the 2001-2003 period, there were an average of 31,924 scientists distributed in 69 universities (Table 3).

Table 3: Universities and research staff in the Italian academic system, by UDA; data 2001-2003

UDA | N. of SDSs | Universities | Research staff
Mathematics and computer sciences | 10 | 59 | 3,006
Physics | 8 | 57 | 2,484
Chemistry | 12 | 58 | 3,057
Earth sciences | 12 | 48 | 1,253
Biology | 19 | 63 | 4,752
Medicine | 50 | 57 | 10,301
Agricultural and veterinary sciences | 30 | 49 | 2,867
Industrial and information engineering | 42 | 60 | 4,204
Total | 183 | 69 | 31,924
Table 4 shows the representativity of publications submitted, by UDA.

Table 4: Number of publications selected for the VTR by universities in each UDA, and their representativity (period 2001-2003)

UDA | VTR products | VTR ORP-listed publications (a) | Total ORP-listed publications (b) | a/b
Mathematics and computer science | 751 | 711 (94.7%) | 6,722 | 10.6%
Physics | 626 | 596 (95.2%) | 12,919 | 4.6%
Chemistry | 758 | 712 (93.9%) | 8,991 | 7.9%
Earth science | 323 | 303 (93.8%) | 3,827 | 7.9%
Biology | 1,279 | 1,239 (96.9%) | 8,103 | 15.3%
Medicine | 2,644 | 2,574 (97.4%) | 27,577 | 9.3%
Agriculture and veterinary science | 617 | 571 (92.5%) | 2,650 | 21.5%
Industrial and information engineering | 909 | 807 (88.8%) | 13,500 | 6.0%
Total | 7,907 | 7,513 (95.0%) | 84,289 | 8.9%
Table 6: Statistics for shifts between VTR and bibliometric ranking lists (scenario A: excellent publications = top national 10% per UDA)

UDA | Univ. | Percentile variations: Var / Max / Aver. / Median | Quartile variations: Var / Max / Aver. / Median
Mathematics and computer sciences | 53 | 98% / 81 / 26 / 23 | 68% / 3 / 1 / 1
Physics | 52 | 96% / 96 / 30 / 23 | 69% / 3 / 1 / 1
Chemistry | 51 | 94% / 90 / 22 / 18 | 57% / 3 / 1 / 1
Earth sciences | 41 | 98% / 100 / 31 / 25 | 80% / 3 / 1 / 1
Biology | 55 | 96% / 91 / 20 / 15 | 56% / 3 / 1 / 1
Medicine | 46 | 89% / 100 / 22 / 18 | 52% / 3 / 1 / 1
Agricultural and veterinary sciences | 29 | 93% / 89 / 22 / 11 | 45% / 3 / 1 / 0
Industrial and information engineering | 44 | 95% / 67 / 28 / 22 | 61% / 3 / 1 / 1
Table 10: Statistics for shifts in rankings between VTR ranking lists and bibliometric rankings for productivity

UDA | Univ. | Percentile variations: Var / Max / Aver. / Median | Quartile variations: Var / Max / Aver. / Median
Mathematics and computer sciences | 53 | 96% / 81 / 24 / 21 | 68% / 3 / 1 / 1
Physics | 52 | 100% / 88 / 33 / 28 | 75% / 3 / 1 / 1
Chemistry | 51 | 98% / 86 / 23 / 20 | 53% / 3 / 1 / 1
Earth sciences | 41 | 98% / 100 / 33 / 25 | 71% / 3 / 1 / 1
Biology | 55 | 95% / 67 / 24 / 20 | 67% / 3 / 1 / 1
Medicine | 46 | 100% / 96 / 24 / 16 | 57% / 3 / 1 / 1
Agricultural and veterinary sciences | 29 | 100% / 93 / 30 / 25 | 69% / 3 / 1 / 1
Industrial and information engineering | 44 | 98% / 79 / 23 / 16 | 57% / 3 / 1 / 1
Abramo, G., D'Angelo, C.A., Di Costa, F. (2011). National research assessment exercises: a comparison of peer review and bibliometrics rankings. Scientometrics, 89(3), 929-941. DOI: 10.1007/s11192-011-0459-x
http://vtr2006.cineca.it/index_EN.html, last accessed on July 5, 2011.
Complete list accessible at http://cercauniversita.cineca.it/php5/settori/index.php, last accessed on July 5, 2011. 4 The CIVR also organized six additional panels for "interdisciplinary sectors": Science and technology (ST) for communications and an information society; ST for food quality security, ST for nano-systems and micro-systems; aerospace ST, and ST for the sustainable development and governance. 5 Each university was also asked to provide the CIVR with sets of input and output data for the institution and its individual UDAs.
The analysis does not consider Civil engineering and architecture because WoS does not cover the full range of research output for this UDA. 7 This value includes double counts for publications co-authored by researchers from more than one UDA. 8 In theory, a university could have submitted products for only one researcher from each UDA.
Observed as of 30/06/2009, meaning a citation time window between six and eight years, certainly sufficient for the purposes of this work. 11 A possible alternative would be to standardize to the world average, as frequently observed in the literature. Standardizing citations to the median value rather than to the average, is justified by the fact that distributions of citations are highly skewed(Lundberg, 2007).
For detail: http://www.hefce.ac.uk/research/funding/qrfunding/, last accessed on July 5, 2011.
If first and last authors belong to the same university, 40% of citations are attributed to each of them; the remaining 20% are divided among all other authors. If the first two and last two authors belong to different universities, 30% of citations are attributed to first and last authors; 15% of citations are attributed to second and last author but one; the remaining 10% are divided among all others.
InTable 1it is also these two areas that are at extreme opposites in terms of differences between bibliometric rating and CIVR rating.
Abramo, G., D'Angelo, C.A. (2011). Evaluating research: from informed peer review to bibliometrics. Scientometrics, 87(3), 499-514.
Abramo, G., D'Angelo, C.A., Caprasecca, A. (2009). Allocative efficiency in public research funding: Can bibliometrics help? Research Policy, 38(1), 206-215.
Abramo, G., D'Angelo, C.A., Di Costa, F. (2008). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111-121.
Abramo, G., D'Angelo, C.A., Viel, F. (2010). Peer review research assessment: a sensitivity analysis of performance rankings to the share of research product evaluated. Scientometrics, 85(3), 705-720.
Aksnes, D.W., Taxt, R.E. (2004). Peer reviews and bibliometric indicators: a comparative study at a Norwegian university. Research Evaluation, 13(1), 33-41.
D'Angelo, C.A., Giuffrida, C., Abramo, G. (2011). A heuristic approach to author name disambiguation in large-scale bibliometric databases. Journal of the American Society for Information Science and Technology, 62(2), 257-269.
ERA (2010). The Excellence in Research for Australia Initiative. http://www.arc.gov.au/era/
Franceschet, M., Costantini, A. (2011). The first Italian research assessment exercise: A bibliometric perspective. Journal of Informetrics, 5(2), 275-291.
Horrobin, D.F. (1990). The philosophical basis of peer review and the suppression of innovation. Journal of the American Medical Association, 263(10), 1438-1441.
Lundberg, J. (2007). Lifting the crown-citation z-score. Journal of Informetrics, 1(2), 145-154.
MacRoberts, M.H., MacRoberts, B.R. (1996). Problems of citation analysis. Scientometrics, 36(3), 435-444.
Moed, H.F. (2002). The impact-factors debate: the ISI's uses and limits. Nature, 415, 731-732.
Moed, H.F. (2005). Citation analysis in research evaluation. Springer, Dordrecht, The Netherlands.
Moxham, H., Anderson, J. (1992). Peer review. A view from the inside. Science and Technology Policy, 7-15.
Oppenheim, C. (1997). The correlation between citation counts and the 1992 research assessment exercise ratings for British research in genetics, anatomy and archaeology. Journal of Documentation, 53, 477-487.
Oppenheim, C., Norris, M. (2003). Citation counts and the research assessment exercise V: archaeology and the 2001 RAE. Journal of Documentation, 56(6), 709-730.
Pendlebury, D.A. (2009). The use and misuse of journal metrics and other citation indicators. Scientometrics, 57(1), 1-11.
REF (2009). Report on the pilot exercise to develop bibliometric indicators for the Research Excellence Framework. http://www.hefce.ac.uk/pubs/hefce/2009/09_39/
Rinia, E.J., van Leeuwen, Th.N., van Vuren, H.G., van Raan, A.F.J. (1998). Comparative analysis of a set of bibliometric indicators and central peer review criteria: Evaluation of condensed matter physics in the Netherlands. Research Policy, 27(1), 95-107.
van Raan, A.F.J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133-143.
VTR (2006). Valutazione Triennale (2001-2003) della Ricerca italiana (Italian Triennial Research Assessment Exercise). Risultati delle valutazioni dei Panel di Area. http://vtr2006.cineca.it/php5/vtr_rel_civr_rapp_panel_menu.php?info=-&&sel_lingua=EN&decritta=1&versione=2&aggiorna=0&info= (last accessed on July 5, 2010).
| []
|
[
"Surface Critical Behavior in Systems with Absorbing States",
"Surface Critical Behavior in Systems with Absorbing States"
]
| [
"Kent Baekgaard Lauritsen \nCenter for Chaos and Turbulence Studies\nNiels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark\n",
"Per Fröjdh \nDepartment of Physics\nStockholm University\nBox 6730S-113 85StockholmSweden\n\nNORDITA\nBlegdamsvej 17DK-2100CopenhagenDenmark\n",
"Martin Howard \nCenter for Chaos and Turbulence Studies\nNiels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark\n"
]
| [
"Center for Chaos and Turbulence Studies\nNiels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark",
"Department of Physics\nStockholm University\nBox 6730S-113 85StockholmSweden",
"NORDITA\nBlegdamsvej 17DK-2100CopenhagenDenmark",
"Center for Chaos and Turbulence Studies\nNiels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark"
]
| []
| We present a general scaling theory for the surface critical behavior of non-equilibrium systems with phase transitions into absorbing states. The theory allows for two independent surface exponents which satisfy generalized hyperscaling relations. As an application we study a generalized version of directed percolation with two absorbing states. We find two distinct surface universality classes associated with inactive and reflective walls. Our results indicate that the exponents associated with these two surface universality classes are closely connected. | 10.1103/physrevlett.81.2104 | [
"https://arxiv.org/pdf/cond-mat/9808335v1.pdf"
]
| 56,024,335 | cond-mat/9808335 | 32f5879b701682d060cc20f9a7505997719fbb8b |
Surface Critical Behavior in Systems with Absorbing States
31 Aug 1998 (May 22, 2018)
Kent Baekgaard Lauritsen
Center for Chaos and Turbulence Studies
Niels Bohr Institute
Blegdamsvej 17DK-2100CopenhagenDenmark
Per Fröjdh
Department of Physics
Stockholm University
Box 6730S-113 85StockholmSweden
NORDITA
Blegdamsvej 17DK-2100CopenhagenDenmark
Martin Howard
Center for Chaos and Turbulence Studies
Niels Bohr Institute
Blegdamsvej 17DK-2100CopenhagenDenmark
Surface Critical Behavior in Systems with Absorbing States
31 Aug 1998 (May 22, 2018)
We present a general scaling theory for the surface critical behavior of non-equilibrium systems with phase transitions into absorbing states. The theory allows for two independent surface exponents which satisfy generalized hyperscaling relations. As an application we study a generalized version of directed percolation with two absorbing states. We find two distinct surface universality classes associated with inactive and reflective walls. Our results indicate that the exponents associated with these two surface universality classes are closely connected.
The critical behavior of systems with boundaries has been the focus of much research in recent years [1]. So far most work on surface critical behavior and on the analysis of surface universality classes has been within the framework of equilibrium statistical mechanics. However, the same ideas and principles also apply to non-equilibrium systems. A prominent example of such a non-equilibrium process is directed percolation (DP), which is the generic model for systems with a non-equilibrium phase transition from a state with activity (e.g., with a nonzero density of particles) into a so-called absorbing state (with zero activity). An understanding of DP is important for a wide variety of different systems encompassing epidemics, chemical reactions, interface pinning/depinning, spatiotemporal intermittency, the contact process, and certain cellular automata [2]. Recently, however, studies have been made of a number of systems with absorbing states which do not belong to the DP class. One prominent example is a particular reaction-diffusion model called branching and annihilating random walks with an even number of offspring (or BAW for short, where in this paper BAW refers exclusively to the even offspring case) [3]. Other systems in the BAW class (at least in 1 + 1 dimensions) include certain probabilistic cellular automata [4], monomer-dimer models [5], non-equilibrium kinetic Ising models [6], and generalized DP with two absorbing states (DP2) [7].
In this paper we address the impact of walls on systems with phase transitions into absorbing states. We have developed a general scaling theory which allows for two independent surface exponents, which satisfy generalized hyperscaling relations. As an application, we have investigated the surface critical behavior of DP2. Our numerics indicate that DP2 exhibits a far richer surface structure than DP: we find two different surface universality classes for DP2 with inactive and reflective walls, and our numerical results indicate that the exponents associated with these two classes are closely connected. These results can be successfully contained within our scaling theory. However, we emphasize that the theory is much more general than this and should also apply to other types of systems with walls and absorbing states, e.g., to surface effects in catalytic reactions and systems exhibiting self-organized criticality [8].
Before turning to the surface critical behavior of DP2 (in 1 + 1 dimensions) and BAW, we begin by discussing the main features of the corresponding bulk systems and then identify some differences and similarities with DP. Many models in the BAW class [3][4][5][6] conserve particle number modulo 2, but this appears not to be the fundamental requirement for the emergence of the new universality class. Instead the key underlying feature seems to be the presence of a symmetry relating the various absorbing states [9]. This has been further demonstrated by Hinrichsen who introduced a generalized version of the Domany-Kinzel model with n absorbing states [7]. This model, which we will refer to as DPn, is defined on a d-dimensional lattice (in space). At time t, the state $s_i^t$ of the i-th site can be either active (A) or in one of n inactive states ($I_1, \dots, I_n$). In 1+1 dimensions, the update probabilities $P(s_i^{t+1} \mid s_{i-1}^t, s_{i+1}^t)$ are given by $P(I_k|I_k,I_k) = 1$, $P(A|A,A) = 1 - nP(I_k|A,A) = q$, $P(A|I_k,A) = P(A|A,I_k) = p$, $P(I_k|I_k,A) = P(I_k|A,I_k) = 1-p$, $P(A|I_k,I_l) = 1$, where $k,l = 1,\dots,n$; $k \neq l$ (see also [7] for a more complete explanation of the model). For n = 1 these rules are equivalent to the Domany-Kinzel model which belongs to the DP universality class (apart from one special point which belongs to the compact DP universality class) [10,11]. For n ≥ 2, the distinction between regions of different inactive states is preserved by demanding that they are separated by active ones. Monte Carlo simulations show that bulk DP2 belongs to the bulk BAW class in 1 + 1 dimensions [7], whereas this probably does not hold in higher dimensions.
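To make the update rules concrete, here is a minimal Python sketch of one possible implementation (our own; it ignores the sublattice structure of the original Domany-Kinzel geometry and any walls). State 0 encodes A and 1..n encode I_1,...,I_n.

```python
# Sketch of one update sweep of the DPn model with the rules quoted above
# (our own simplified implementation on a ring of sites).
import random

def update(state, p, q, n):
    """One parallel update of a ring of sites using the DPn rules."""
    L = len(state)
    new = [0] * L
    for i in range(L):
        left, right = state[i - 1], state[(i + 1) % L]
        if left == 0 and right == 0:          # both neighbors active
            new[i] = 0 if random.random() < q else random.randint(1, n)
        elif left == 0 or right == 0:         # one active, one in state I_k
            k = left if left != 0 else right
            new[i] = 0 if random.random() < p else k
        elif left == right:                   # both inactive in the same I_k
            new[i] = left
        else:                                 # I_k and I_l with k != l
            new[i] = 0
    return new

# single active seed in an I_1 background, as in the cluster simulations
state = [1] * 101
state[50] = 0
for _ in range(100):
    state = update(state, p=0.65, q=0.8, n=2)
print("active sites after 100 steps:", state.count(0))
```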
The growth of both BAW and DP clusters in the bulk close to criticality can be summarized by a set of independent exponents. A natural choice is to consider ν ⊥ and ν which describe the divergence of the correlation lengths in space, ξ ⊥ ∼ |∆| −ν ⊥ , and time ξ ∼ |∆| −ν , where ∆ ≡ p − p c describes the deviation from criticality. We also need the order parameter exponent β, which can be defined in two a priori different ways: it is either governed by the percolation probability (the probability that a cluster grown from a finite seed never dies),
$$P(\Delta) \sim \Delta^{\beta_{\rm seed}}, \quad \Delta > 0, \qquad (1)$$
or by the density of active sites in the steady state,
$$n(\Delta) \sim \Delta^{\beta_{\rm dens}}, \quad \Delta > 0. \qquad (2)$$
For the case of DP, it is known that β is unique: β seed = β dens in any dimension. This follows from theoretical considerations [12,13] and has been verified by extensive numerical calculations. The relation also holds for BAW in 1 + 1 dimensions, a result first suggested by numerics and now backed up by an exact duality mapping [14]. However, this exponent equality is certainly not always true-for example it breaks down for certain systems with infinitely many absorbing states [15,16]. Furthermore, β seed = β dens for BAW in high enough dimension: if we consider the mean-field regime valid for spatial dimensions d > d c = 2, then the system is in an inactive state only for a zero branching rate, whereas any non-zero branching rate results in an active state. The steady-state density (2) approaches zero continuously (as the branching rate is reduced towards zero) with the mean-field exponent β dens = 1. Nevertheless, for d > 2 the survival probability (1) of a particle cluster will be finite for any value of the branching rate, implying that β seed = 0 in mean-field theory. This result follows from the non-recurrence of random walks in d > 2.
From the perspective of formulating field theories for BAW, the 1 + 1 dimensional case poses considerable difficulties [17]. These stem from the presence of two critical dimensions: d c = 2 (above which mean-field theory applies) and d ′ c ≈ 4/3 (where for d > d ′ c the branching reaction is a relevant process at the pure annihilation fixed point, whereas for d < d ′ c it is irrelevant there [17]). This means that the (physically interesting) spatial dimension d = 1 cannot be accessed using controlled expansions down from the upper critical dimension d c = 2. However if we assume that a (bulk) scaling theory can be properly justified (as it can be for DP, and BAW for d > d ′ c ), then it is straightforward to relate the above set of exponents to those of other quantities. Keeping the distinction between β seed and β dens , the average lifetime of finite clusters, t ∼ |∆| −τ , satisfies τ = ν − β seed , and the average mass of finite clusters,
$$\langle s \rangle \sim |\Delta|^{-\gamma}, \qquad (3)$$
leads to the following hyperscaling relation:
$$\nu_\parallel + d\nu_\perp = \beta_{\rm seed} + \beta_{\rm dens} + \gamma. \qquad (4)$$
Note that (4) is consistent with the distinct upper critical dimensions for BAW and DP. Using the above mean-field values for BAW and ν ⊥ = 1/2, ν = 1, and γ = 1, we verify d c = 2. In contrast, for DP one has the mean-field exponent β seed = 1 and d c = 4.
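That consistency check can be restated numerically; the snippet below (our own illustration) solves the hyperscaling relation (4) for d using the mean-field exponents quoted in the text, recovering d_c = 2 for BAW and d_c = 4 for DP.

```python
# Solve relation (4), nu_par + d*nu_perp = beta_seed + beta_dens + gamma,
# for d at the mean-field level (our illustration of the check in the text).
def d_c(nu_par, nu_perp, beta_seed, beta_dens, gamma):
    return (beta_seed + beta_dens + gamma - nu_par) / nu_perp

print("BAW mean field:", d_c(1, 0.5, 0, 1, 1))   # -> 2.0
print("DP  mean field:", d_c(1, 0.5, 1, 1, 1))   # -> 4.0
```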
We now turn to the surface critical behavior of DP2 and show how the above relations and exponents are modified in a semi-infinite geometry where we place a wall at x ⊥ = 0 [x = (x , x ⊥ ), with the ⊥ and directions being relative to the wall]. In the simulations we start from an absorbing state, where all sites are in the state I 1 . We then initiate a cluster by placing a seed (site in state A) next to the wall. However, the analogy with DP is no longer immediate, as our numerical measurements in 1 + 1 dimensions indicate that DP2 supports an additional surface exponent as well as an additional surface universality class. The type of surface universality class is governed by the choice of boundary condition (BC). We have studied two types of BC: the inactive BC (IBC) where the wall sites are always in the inactive state I 1 , and the reflective BC (RBC), where the wall acts like a "mirror" by letting imaginary sites next to the outer side of the wall be the mirror images of those on the inside.
By growing a DP cluster near an IBC wall, it has been observed numerically in d = 1, 2 that certain exponents are altered [18,19]. This behavior has been explained by a scaling theory [20] that explicitly takes surface critical phenomena into account and connects IBC with the ordinary transition [21]. Apart from the above (three) independent bulk exponents, an additional universal surface exponent must be included, which satisfies a generalized hyperscaling relation [20]. The survival probability (1) for a cluster started close to the wall has the form
$$P_1(t, \Delta) = \Delta^{\beta_{1,\rm seed}}\, \psi_1(t/\xi_\parallel), \quad \Delta > 0, \qquad (5)$$
where the subscript '1' refers to the wall. However, in analogy with the bulk case, an order parameter can also be defined by the density of active sites on the wall in the steady state:
$$n_1(\Delta) \sim \Delta^{\beta_{1,\rm dens}}, \quad \Delta > 0. \qquad (6)$$
More generally the steady-state density (2) is now given by $n(\Delta, x_\perp) = \Delta^{\beta_{\rm dens}} \varphi(x_\perp/\xi_\perp)$, where the scaling function $\varphi$ behaves in such a way that $n(\Delta, x_\perp)$ for $x_\perp/\xi_\perp \ll 1$ crosses over to the surface behavior (6). For the case of DP, the surface exponents fulfill $\beta_{1,\rm seed} = \beta_{1,\rm dens}$, as can be shown by a field-theoretic derivation of an appropriate correlation function [20]. However, for DP2 this exponent equality is no longer true. Our numerical results in 1 + 1 dimensions yield two distinct surface exponents, $\beta_{1,\rm seed} \neq \beta_{1,\rm dens}$, although the corresponding bulk exponents coincide, as expected. The values of these surface exponents depend on the boundary conditions, and by changing from IBC to RBC or vice versa, we observe that the assignment of the exponents is interchanged (see below). Further investigations are needed in order to determine whether the wall may have broken a (duality) symmetry present in the bulk (which forces the bulk exponents to coincide) and whether the operation of this symmetry relates IBC to RBC and vice versa. In contrast, for surface DP we note that IBC and RBC belong to the same surface universality class.
By keeping β 1,seed and β 1,dens distinct, we can now set up a general scaling theory for the surface critical behavior in systems with absorbing states. An ansatz for the coarse-grained density of active sites ρ 1 at the point (x, t) of a cluster grown from a single seed located next to the wall, has the form ρ 1 (x, t, ∆) = ∆ β 1,seed +β dens f 1 x/ξ ⊥ , t/ξ .
The ∆-prefactor comes from (5) for the probability that an infinite cluster can be grown from the seed, and from (2) for the (conditional) probability that the point (x, t) belongs to this cluster. The shape of the cluster is governed by the scaling function f 1 and we assume that the density is measured at a finite angle away from the wall. If the density is measured along the wall, we have instead
$$\rho_{11}(x, t, \Delta) = \Delta^{\beta_{1,\rm seed} + \beta_{1,\rm dens}}\, f_{11}\!\left(x/\xi_\perp,\, t/\xi_\parallel\right), \qquad (8)$$
as we pick up a factor ∆ β 1,dens rather than ∆ β dens for the probability that (x, t) at the wall belongs to the infinite cluster. In 1 + 1 dimensions, (8) reduces to ρ 11 (t, ∆) = ∆ β 1,seed +β 1,dens f 11 (t/ξ ). Starting from a seed on the wall, the average lifetime of finite clusters, t ∼ |∆| −τ1 , satisfies τ 1 = ν − β 1,seed . The average size of finite clusters follows from integrating the cluster density (7) over space and time:
$$\langle s \rangle \sim |\Delta|^{-\gamma_1}, \qquad (9)$$
where the surface (susceptibility) exponent $\gamma_1$ is related to the previously defined exponents via
$$\nu_\parallel + d\nu_\perp = \beta_{1,\rm seed} + \beta_{\rm dens} + \gamma_1. \qquad (10)$$
The only difference from (4) is that we have now included a wall. Analogously, by integrating the cluster wall density (8) over the (d − 1)-dimensional wall and time, we obtain the average (finite) cluster size on the wall,
$$\langle s \rangle_{\rm wall} \sim |\Delta|^{-\gamma_{1,1}}, \qquad (11)$$
where
$$\nu_\parallel + (d-1)\nu_\perp = \beta_{1,\rm seed} + \beta_{1,\rm dens} + \gamma_{1,1}. \qquad (12)$$
Note that if the γ susceptibility exponents obtained from (10) and (12) are negative, then they should be replaced by zero in (9) and (11). The above scaling theory is generic since it allows for β 1,seed and β 1,dens to be independent surface exponents. When the scaling theory is applied to DP, it can be fully justified (with β 1,seed = β 1,dens ) [20]. However, if we apply the theory to BAW [22], it would again be desirable to obtain a secure renormalization-group justification for the scaling behavior. In particular it would be important to determine from the field-theory whether two independent surface exponents are present. However given the fundamental difficulties encountered already in the bulk fieldtheoretic analysis of BAW in 1 + 1 dimensions [17], this kind of analysis for the surface is unlikely to give a complete justification of the scaling theory. In order to confirm our scaling theory we have performed numerical simulations for DP2 in 1 + 1 dimensions with walls constrained by IBC or RBC (see [23] for details). We have also performed simulations for DP2 without a wall and obtained results for the exponents in complete agreement with [7]. There are several estimates for β dens (= β seed ) available [24]: we have used β dens = 0.922(5) [25].
We extract the critical exponents from several measured quantities. Using (5), we find that the survival probability for the cluster to be alive at time t has the following behavior at criticality (∆ = 0)
$$P_1(t) \sim t^{-\delta_{1,\rm seed}}, \qquad \delta_{1,\rm seed} = \beta_{1,\rm seed}/\nu_\parallel. \qquad (13)$$
Integrating the densities (7) and (8) gives expressions for the activity at criticality as function of time [23], e.g.,
$$N_1(t) \sim t^{\kappa_1}, \qquad \kappa_1 = d\chi - \delta_{\rm dens} - \delta_{1,\rm seed}, \qquad (14)$$
where we have introduced the envelope (or "roughness") exponent χ = ν ⊥ /ν , and δ dens = β dens /ν . Note that (14) corresponds to the hyperscaling relation (10) at criticality with γ 1 = ν (1 + κ 1 ), since s ∼ dt N 1 (t). For further confirmations of our numerical data we also considered the cluster size distributions at criticality. The cluster size s scales as s ∼ ξ d ⊥ ξ n(∆) ∼ ∆ −1/σ , with 1/σ = dν ⊥ + ν − β dens . The probability to have a cluster of size s then reads [23] p 1 (s) ∼ s −µ1 ,
$$\mu_1 = 1 + \frac{\beta_{1,\rm seed}}{d\nu_\perp + \nu_\parallel - \beta_{\rm dens}}. \qquad (15)$$
In Table I we list our estimates for the critical exponents for DP2, where $\delta_{1,\rm dens} = \beta_{1,\rm dens}/\nu_\parallel$ is obtained from (8) by measuring the activity at the wall [23], and $\mu = 1 + \beta_{\rm seed}/(d\nu_\perp + \nu_\parallel - \beta_{\rm dens})$ corresponds to (15) in the absence of a wall. The results are in complete accordance with our theoretical analysis: bulk exponents are unaltered whereas the wall introduces two separate surface exponents. We have also carried out bulk and surface simulations for $\Delta > 0$ and confirmed that our data could be collapsed according to an appropriate survival probability scaling function [see (5) for the surface case], using our exponent estimates. This numerically confirms the validity of the relation $\delta = \beta/\nu_\parallel$ for the bulk as well as for both sets of corresponding surface exponents [27]. We further observe that the IBC, RBC boundary conditions lead to different exponents, thus showing the existence of two distinct surface universality classes. Furthermore, $\beta_{1,\rm seed} \neq \beta_{1,\rm dens}$, although by changing BCs we observe to good accuracy that $\beta^{\rm (IBC)}_{1,\rm seed} = \beta^{\rm (RBC)}_{1,\rm dens}$ and $\beta^{\rm (IBC)}_{1,\rm dens} = \beta^{\rm (RBC)}_{1,\rm seed}$.
By using the explicit definitions of IBC, RBC we can argue that β 1,seed and β 1,dens should indeed depend on the BCs. There will be more activity next to the wall for IBC than for RBC, since the latter can have regions of I 2 located at the wall. Once created, these I 2 regions will survive until the activity returns to the wall. Thus, from the wall density (6), it follows that β (IBC) 1,dens ≤ β (RBC) 1,dens . On the other hand, the existence of these I 2 regions implies that the survival probability (5) for IBC will be smaller than for RBC, leading to β (IBC) 1,seed ≥ β (RBC) 1,seed . However, from the observation that β 1,seed + β 1,dens is independent of the BC, it follows that the average mass on the wall (11) is the same for IBC and RBC. We have also studied several other BCs and found that these give the same scaling behavior as either RBC or IBC depending on whether or not the above-mentioned I 2 regions can disappear only at the wall or also in the bulk [23]. In terms of the BAW model, however, the distinction between the two BCs is slightly different: IBC respects the "parity" symmetry of the bulk, whereas RBC breaks it.
For DP it has been customary to investigate whether the critical exponents can be fitted by (simple) rational numbers [26]. Such a fitting has also been tried for bulk BAW with the following guesses in 1+1 dimensions: κ = χ− 2δ = 0 and χ = 4/7 [3]. These estimates lead immediately to δ = 2/7 (and β/ν ⊥ = 1/2). It is intriguing to note that our numerical results for DP2 in addition suggest that µ 1 = 3/2 for IBC and 4/3 for RBC. From Eq. (15), it then follows that δ 1,seed = 9/14 for IBC and 3/7 for RBC. We would need one more relation in order to obtain the last independent exponent. In fact, we observe numerically that 2ν − β 1,seed − β 1,dens = 3 [28], is valid to within one percent [29].
In conclusion, we have presented a generic scaling theory of surface critical behavior in systems with absorbing states. In particular we have for the first time studied the surface critical behavior of DP2, a model belonging to the BAW universality class in 1 + 1 dimensions. Numerical simulations of the DP2 model with two different types of boundary conditions have uncovered two surface universality classes. Our most important result is that two surface exponents are required to describe the surface critical behavior. The results also indicate that the exponents associated with these two surface universality classes are closely connected. We emphasize that our theory is generic for systems with absorbing states and therefore should also apply to surface effects in, for example, systems exhibiting self-organized criticality. It would also be possible to generalize our theory to allow for edges and corners, which would introduce new exponents and other hyperscaling relations.
K.B.L. acknowledges support from the Carlsberg Foundation and P.F. from the Swedish Natural Science Research Council.
TABLE I .
ICritical exponents obtained from our simulations. For comparison we also list the exponents for DP with an IBC wall in the first column[18,19,26].DP (IBC)
DP2
DP2 (IBC) DP2 (RBC)
δ dens
0.159 47(3) 0.287(5)
0.288(2)
0.291(4)
β dens
0.276 49(4) 0.922(5)
0.93(1)
0.94(2)
δ 1,seed
0.4235(3)
0.641(2)
0.426(3)
β 1,seed
0.7338(1)
2.06(2)
1.37(2)
δ 1,dens
0.4235(3)
0.415(3)
0.635(2)
β 1,dens
0.7338(1)
1.34(2)
2.04(2)
µ
1.108 25(2) 1.225(5)
µ1
1.2875(2)
1.500(3)
1.336(3)
See K For Reviews, Binder, Phase transitions and critical phenomena. C. Domb and J. LebowitzLondonAcademic Press8For reviews see K. Binder, in Phase transitions and crit- ical phenomena, Vol. 8, ed. by C. Domb and J. Lebowitz (Academic Press, London, 1983);
| []
|
[
"The Killing vector field of the metric II + III on Tangent Bundle",
"The Killing vector field of the metric II + III on Tangent Bundle"
]
| [
"Melek Aras \nDepartment of Mathematics\nFaculty of Arts and Sciences\nGiresun University\n28049Turkey\n"
]
| [
"Department of Mathematics\nFaculty of Arts and Sciences\nGiresun University\n28049Turkey"
]
| []
| The main purpose of the paper is to investigate Killing vector field on the tangent bundle T (Mn) of the Riemannian manifold with respect to the Levi-Civita connection of the metric II + III . | null | [
"https://arxiv.org/pdf/1303.0182v3.pdf"
]
| 117,105,663 | 1303.0182 | aca503118119876a4be06859ec16639c22e6aa41 |
The Killing vector field of the metric II + III on Tangent Bundle
12 Mar 2013 February 6, 2014
Melek Aras
Department of Mathematics
Faculty of Arts and Sciences
Giresun University
28049Turkey
The Killing vector field of the metric II + III on Tangent Bundle
12 Mar 2013 February 6, 2014. Tensor bundle, Riemannian metric, Complete and horizontal lift, Levi-Civita connections, Killing vector field
The main purpose of the paper is to investigate Killing vector field on the tangent bundle T (Mn) of the Riemannian manifold with respect to the Levi-Civita connection of the metric II + III .
1.Introduction
Let M n be a Riemannian manifold of class C ∞ with metric g. Then the set T (M n ) is the tangent bundle over the manifold M n . We denote by ℑ p q (M n ) the set of all tensor fields of type (p, q) in M n and by π : T (M n ) → M n the natural projection onto M n . For U ⊂ M n , x i , x i ′ , i = 1, ..., n and i ′ = n + 1, ..., 2n are local coordinates in a neighborhood π −1 (U ) ⊂ T (M n ).
Let M n be a Riemannian manifold with metric g whose components in a coordinate neighborhood U are g ji . In the neighborhood π −1 (U ) of T (M n ), U being a neighborhood of M n , we put
δy h = dy h + Γ h i dx i with respect to the induced coordinates x h , y h in π −1 (U ) ⊂ T (M n ), where Γ h i = y j Γ h ji
where Γ h ji denote the Christoffel symbols formed with g ji . Then we see that

I : g ji dx j dx i ,   II : 2g ji dx j δy i ,   III : g ji δy j δy i

are all quadratic differential forms defined globally on the tangent bundle T (M n ) over M n , and that

II + III : 2g ji dx j δy i + g ji δy j δy i

is non-singular and consequently can be regarded as a Riemannian or pseudo-Riemannian metric on the tangent bundle T (M n ) over M n .
The metric II + III has components
II + III : ( g CB ) = 0 g ji g ji g ji(1)
and consequently its contravariant components
g CB = −g ji g ji g ji 0 (2)
with respect to the adapted frame on T (M n ). The frame components of the Levi-Civita connection of the lift metric g are as follows [3]:
Γ h ji = Γ h ji − 1 2 y b R h bji + R h bij , Γ h ji = y b R h bji , Γ h ji = 0, Γ h ji = 0 Γ h ji = Γ h ji + 1 2 y b R h bij , Γ h ji = − 1 2 y b R h bij , Γ h ji = 1 2 y b R h bji , Γ h ji = − 1 2 y b R h bji(3)
where Γ h ji denote the Christoffel symbols constructed with g ji on M n .
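For reference (this expression is not stated explicitly in the text, but is the standard definition being used), the symbols Γ h ji are the usual Levi-Civita coefficients of the base metric:

\Gamma^{h}_{ji} = \tfrac{1}{2}\, g^{hk}\left(\partial_{j} g_{ki} + \partial_{i} g_{kj} - \partial_{k} g_{ji}\right).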
Let X be a vector field in T (M n ) and X = X h X h its components with respect to the adapted frame. Then covariant derivative ∇ X has components
∇ γ X α = D γ X α + Γ α γβ X β ,(4)
Γ α γβ being given by (3)
, where D γ = A B γ ∂ B , A A B = B h B , C h B , B h B and C h B being defined by B h B = δ h i , 0 , C h B = Γ h i , δ h i .
Consider a vector field X in M n . Then its vertical lift V X, complete lift C X and horizontal lift H X have respectively components
′ X A = 0 X h , X A = X h ∂X h , X A = X h −Γ h i X i(5)
with respect to the induced coordinates in T (M n ). Then
their components ′ X α = A α′ A X A , X α = A α A X A , X α = A α A X A
with respect to the adapted frame are given respectively by
′ X α = 0 X h , X α = X h ∇X h , X α = X h 0(6)
where ∇X h = y i ∇ i X h (see Sasaki [1]).
Killing vector fields
The covariant derivatives ∇ V X, ∇ C X and ∇ H X have respectively components
∇ V β X α = ( −(1/2) y s R h sij X j , 0 ; ∇ i X h + (1/2) y s R h sij X j , 0 )   (7)

∇ C β X α = ( ∇ C i X h , ∇ C i X h ; ∇ C i X h , ∇ C i X h ),   (8)

where
∇ C i X h = ∇ i X h − (1/2) y s (R h sji + R h sij ) X j − (1/2) y s R h sij ∇ l X j y l ,
∇ C i X h = −(1/2) y s R h sji X j ,
∇ C i X h = ∇ i ∇ l X h y l + y s R h sij X j + (1/2) y s R h sij ∇X j ,
∇ C i X h = ∇ i X h + (1/2) y s R h sji X j ,

∇ H β X α = ( ∇ i X h − (1/2) y s (R h sji + R h sij ) X j , −(1/2) y s R h sji X j ; y s R h sji X j , (1/2) y s R h sji X j )   (9)
with respect to the adapted frame, because of (3) and (6).
Thus we have
Theorem 1 The vertical, complete and horizontal lifts of a vector field in M n to T (M n ) with the metric II + III are parallel if and only if the given vector field in M n is parallel [5].
Given a vector field X in T (M n ), the 1-form ω defined by ω( Y ) = g( X, Y ), Y being an arbitrary element of T 1 0 (M n ), is called the covector field associated with X and denoted by X * . If X has local components X A , then the associated covector field X * of X has local components X C = g CA X A .
Let ω be a 1-form in M n with components ω i . Then the vertical, complete and horizontal lifts of ω to T (M n ) have respectively components [6]

V ω B = (ω i , 0) , C ω B = (∂ω i , ω i ) , H ω B = (−Γ k i ω k , ω i )

with respect to the induced coordinates in T (M n ).
The associated covector fields of the vertical, complete and horizontal lifts to T (M n ), with the metric II + III , of a vector field X with components X h in M n are respectively
V X β = (X i , X i ) , C X β = (∇X i , X i + ∇X i ) , H X β = (0, X i ) (10)
with respect to the adapted frame, where X i = g ih X h . Thus the rotations of V X, C X and H X have respectively components
∇ V β X α − ∇ V α X β = ( ∇ i X j − ∇ j X i + (R h sji − R h sij ) y s X h , ∇ i X j − ∇ j X i ; 0 , 0 )   (11)

∇ C β X α − ∇ C α X β = ( ∇ C i X j − ∇ C j X i , ∇ C i X j − ∇ C j X i ; ∇ C i X j − ∇ C j X i , ∇ C i X j − ∇ C j X i ),   (12)

where
∇ C i X j − ∇ C j X i = (∇ i ∇ l − ∇ l ∇ i ) y l X j + (R h sji − R h sij ) y s ∇X h + (R h sij − R h sji ) y s X h ,
∇ C i X j − ∇ C j X i = (∇ i X j − ∇ j X i ) + (∇ i ∇ l − ∇ l ∇ i ) y l X j ,
∇ C i X j − ∇ C j X i = −(1/2) y s (R h sij − R h sji ) X h ,
∇ C i X j − ∇ C j X i = 0
X α being components of X, where ∇ is the Riemannian connection of the metric g.
The Lie derivatives of the metric II + III with respect to V X, C X and H X have respectively components
∇ V β X α + ∇ V α X β = ( ∇ i X j + ∇ j X i , ∇ i X j + ∇ j X i ; 0 , 0 )   (15)

∇ C β X α + ∇ C α X β = ( ∇ C i X j + ∇ C j X i , ∇ C i X j + ∇ C j X i ; ∇ C i X j + ∇ C j X i , ∇ C i X j + ∇ C j X i ),   (16)

where
∇ C i X j + ∇ C j X i = ∇ i ∇ l X j y l + (R hsij + R hsji ) X h y s ,
∇ C i X j + ∇ C j X i = (∇ i X j + ∇ j X i ) + (∇ i ∇ l X j + ∇ j ∇ l X i ) y l ,
∇ C i X j + ∇ C j X i = −(1/2) y s (R h sij + R h sji ) X h ,
∇ C i X j + ∇ C j X i = 0,

with respect to the adapted frame, together with the corresponding expression for ∇ H β X α + ∇ H α X β .

A vector field X ∈ ℑ 1 0 (M n ) is said to be a Killing vector field of a Riemannian manifold with metric g if L X g = 0 [4]. In terms of the components g ji of g, X is a Killing vector field if and only if ∇ i X j + ∇ j X i = 0 with respect to the adapted frame in T (M n ). Since we have, as a consequence of ∇ i X j + ∇ j X i = 0 (see [4]), we conclude by means of (16) that the complete lift C X is a Killing vector field in T (M n ) if and only if X is a Killing vector field in M n . We next have (R hsij + R hsji ) X h = 0 and (R h sij + R h sji ) X h = 0 as a consequence of the vanishing of the second covariant derivative of X. Summing up these results, we have

Theorem 2 Necessary and sufficient conditions in order that the (a) complete, (b) horizontal lifts to T (M n ), with the metric II + III , of a vector field X in M n be a Killing vector field in T (M n ) are, respectively, that (a) X is a Killing vector field with vanishing covariant derivative in M n , and (b) X is a Killing vector field with vanishing second covariant derivative in M n .
[1] Sasaki, S., 1962. On the differential geometry of tangent bundles of Riemannian manifolds II. Tôhoku Math. J. 14, 146-155.
[2] Steenrod, N., 1951. The Topology of Fibre Bundles. Princeton Univ. Press, Princeton, NJ.
[3] Tarakci, O., Gezer, A., and Salimov, A. A., 2009. On solutions of IHPT equations on tangent bundle with the metric II+III. Math. Comput. Modelling 50, 953-958.
[4] Yano, K., 1957. The Theory of Lie Derivatives and Its Applications. Elsevier, Amsterdam.
[5] Yano, K., and Davies, E. T., 1963. On the tangent bundles of Finsler and Riemannian manifolds. Rend. Circ. Mat. Palermo 12, 211-228.
[6] Yano, K., and Ishihara, S., 1973. Tangent and Cotangent Bundles. Marcel Dekker Inc., New York.
| []
|
[
"How much did the Tourism Industry Lost? Estimating Earning Loss of Tourism in the Philippines",
"How much did the Tourism Industry Lost? Estimating Earning Loss of Tourism in the Philippines"
]
| [
"Raffy S Centeno [email protected] \nDivision Manager Planning and Monitoring Division Davao City Water District\nSenior Teacher Senior High School Department Malayan Colleges Mindanao\n\n",
"Judith P Marquez [email protected] \nDivision Manager Planning and Monitoring Division Davao City Water District\nSenior Teacher Senior High School Department Malayan Colleges Mindanao\n\n"
]
| [
"Division Manager Planning and Monitoring Division Davao City Water District\nSenior Teacher Senior High School Department Malayan Colleges Mindanao\n",
"Division Manager Planning and Monitoring Division Davao City Water District\nSenior Teacher Senior High School Department Malayan Colleges Mindanao\n"
]
| []
| The study aimed to forecast the total earnings lost of the tourism industry of the Philippines during the COVID-19 pandemic using seasonal autoregressive integrated moving average. Several models were considered based on the autocorrelation and partial autocorrelation graphs. Based on the Akaike's Information Criterion (AIC) and Root Mean Squared Error, ARIMA(1,1,1)×(1,0,1) 12 was identified to be the better model among the others with an AIC value of −414.51 and RMSE of 47884.85. Moreover, it is expected that the industry will have an estimated earning loss of around P 170.5 billion pesos if the COVID-19 crisis will continue up to July. Possible recommendations to mitigate the problem includes stopping foreign tourism but allowing regions for domestic travels if the regions are confirmed to have no cases of COVID-19, assuming that every regions will follow the stringent guidelines to eliminate or prevent transmissions; or extending this to countries with no COVID-19 cases. | null | [
"https://arxiv.org/pdf/2004.09952v1.pdf"
]
| 216,036,375 | 2004.09952 | 355689a179d9e692368157740a46599deacc7a6c |
How much did the Tourism Industry Lost? Estimating Earning Loss of Tourism in the Philippines
Raffy S Centeno [email protected]
Division Manager Planning and Monitoring Division Davao City Water District
Senior Teacher Senior High School Department Malayan Colleges Mindanao
Judith P Marquez [email protected]
Division Manager Planning and Monitoring Division Davao City Water District
Senior Teacher Senior High School Department Malayan Colleges Mindanao
How much did the Tourism Industry Lost? Estimating Earning Loss of Tourism in the Philippines
The study aimed to forecast the total earnings lost of the tourism industry of the Philippines during the COVID-19 pandemic using seasonal autoregressive integrated moving average. Several models were considered based on the autocorrelation and partial autocorrelation graphs. Based on the Akaike's Information Criterion (AIC) and Root Mean Squared Error, ARIMA(1,1,1)×(1,0,1) 12 was identified to be the better model among the others with an AIC value of −414.51 and RMSE of 47884.85. Moreover, it is expected that the industry will have an estimated earning loss of around P 170.5 billion pesos if the COVID-19 crisis will continue up to July. Possible recommendations to mitigate the problem includes stopping foreign tourism but allowing regions for domestic travels if the regions are confirmed to have no cases of COVID-19, assuming that every regions will follow the stringent guidelines to eliminate or prevent transmissions; or extending this to countries with no COVID-19 cases.
Introduction

1. Background of the Study
According to the Philippine Statistics Authority, tourism accounts for 12.7% of the country's Gross Domestic Product in the year 2018 [1]. Moreover, the National Economic and Development Authority reported that 1.5% of the country's GDP in 2018 is attributed to international tourism, with Korea, China and the USA having the largest numbers of tourists coming in []. In addition, the Department of Tourism recorded that 7.4% of the total domestic tourists, or an estimated figure of 3.97 million tourists, both foreign and domestic, were in Davao Region in 2018 [2]. Also, employment in the tourism industry was roughly estimated at 5.4 million in 2018, which constitutes 13% of the employment in the country according to the Philippine Statistics Authority [3].
Hence, estimating the total earnings of the tourism industry in the Philippines will be very helpful in formulating necessary interventions and strategies to mitigate the effects of the COVID-19 pandemic. This paper will serve as a baseline research to describe and estimate the earnings lost of the said industry.
Problem Statement
The objective of this research is to forecast the monthly earnings loss of the tourism industry during the COVID-19 pandemic by forecasting the monthly foreign visitor arrivals using Seasonal Autoregressive Integrated Moving Average. Specifically, it aims to answer the following questions:
1. What is the order of the seasonal autoregressive integrated moving average for the monthly foreign visitor arrivals in the Philippines? 2. How much earnings did the tourism industry lose during the COVID-19 pandemic?
Scope and Limitations
The study covers a period of approximately eight years from January 2012 to December 2019. Also, the modeling technique that was considered in this research is limited only to autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA). Other modeling techniques were not tested and considered. The research utilized longitudinal research design wherein the monthly foreign visitor arrivals in the Philippines is recorded and analyzed. A longitudinal research design is an observational research method in which data is gathered for the same subject repeatedly over a period of time [4]. Forecasting method, specifically the Seasonal Autoregressive Integrated Moving Average (SARIMA), was used to forecast the future monthly foreign visitor arrivals.
In selecting the appropriate model to forecast the monthly foreign visitor arrivals in the Philippines, the Box-Jenkins methodology was used. The data set was divided into two sets: the training set which is composed of 86 data points from January 2012 to December 2018; and testing set which is composed of 12 data points from January 2019 to December 2019. The training set was used to identify the appropriate SARIMA order whereas the testing set will measure the accuracy of the selected model using root mean squared error. The best model, in the context of this paper, was characterized to have a low Akaike's Information Criterion and low root mean squared error.
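As a rough illustration of this split (the file and column names below are hypothetical; the raw series is the Department of Tourism monthly arrival counts):

```python
import pandas as pd

# Hypothetical file/column names for the monthly foreign visitor arrival series.
arrivals = pd.read_csv("visitor_arrivals.csv", parse_dates=["month"],
                       index_col="month")["arrivals"]

# Training set: January 2012 to December 2018; testing set: the twelve months of 2019.
train = arrivals[:"2018-12"]
test = arrivals["2019-01":"2019-12"]
```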
Source of Data
The data were extracted from Department of Tourism website. The data were composed of monthly foreign visitor arrivals from January 2012 to December 2019 which is composed of 98 data points.
Procedure for Box-Jenkins Methodology
Box-Jenkins methodology refers to a systematic method of identifying, fitting, checking, and using SARIMA time series models. The method is appropriate for time series of medium to long length which is at least 50 observations. The Box-Jenkins approach is divided into three stages: Model Identification, Model Estimation, and Diagnostic Checking.
Model Identification
In this stage, the first step is to check whether the data is stationary or not. If it is not, then differencing was applied to the data until it becomes stationary. Stationary series means that the value of the series fluctuates around a constant mean and variance with no seasonality over time. Plotting the sample autocorrelation function (ACF) and sample partial autocorrelation function (PACF) can be used to assess if the series is stationary or not. Also, Augmented Dickey−Fuller (ADF) test can be applied to check if the series is stationary or not. Next step is to check if the variance of the series is constant or not. If it is not, data transformation such as differencing and/or Box-Cox transformation (eg. logarithm and square root) may be applied. Once done, the parameters p and q are identified using the ACF and PACF. If there are 2 or more candidate models, the Akaike's Information Criterion (AIC) can be used to select which among the models is better. The model with the lowest AIC was selected.
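A minimal sketch of these identification checks in Python, assuming the `train` series from the split above (the lag choice of 36 is an assumption, not taken from the paper):

```python
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Augmented Dickey-Fuller test; a large p-value suggests the series is non-stationary
# and needs differencing and/or a Box-Cox transformation.
adf_stat, p_value, *_ = adfuller(train)
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

# Inspect ACF/PACF of the first difference; significant spikes at lags 12, 24, ...
# point to seasonal AR/MA terms with period 12.
diff = train.diff().dropna()
plot_acf(diff, lags=36)
plot_pacf(diff, lags=36)
```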
Model Estimation
In this stage, parameters are estimated by finding the values of the model coefficients which provide the best fit to the data. In this research, the combination of Conditional Sum of Squares and Maximum Likelihood estimates was used by the researcher. Conditional sum of squares was utilized to find the starting values, then maximum likelihood was applied after.
Diagnostic Checking
Diagnostic checking performs residual analysis. This stage involves testing the assumptions of the model to identify any areas where the model is inadequate and if the corresponding residuals are uncorrelated. Box-Pierce and Ljung-Box tests may be used to test the assumptions. Once the model is a good fit, it can be used for forecasting.
Forecast Evaluation
Forecast evaluation involves generating forecasted values equal to the time frame of the model validation set then comparing these values to the latter. The root mean squared error was used to check the accuracy of the model. Moreover, the ACF and PACF plots were used to check if the residuals behave like white noise while the Shapiro-Wilk test was used to perform normality test.
Data Analysis
The following statistical tools were used in the data analysis of this study.
Sample Autocorrelation Function
Sample autocorrelation function measures how correlated past data points are to future values, based on how many time steps these points are separated by. Given a time series X t , we define the sample autocorrelation function, r k , at lag k as [5]
r_k = \frac{\sum_{t=1}^{N-k} (X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{N} (X_t - \bar{X})^2} \quad \text{for } k = 1, 2, \ldots \qquad (1)
where \bar{X} is the average of the N observations.
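A direct (unoptimised) implementation of Eqn. (1), given here only to make the definition concrete:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation r_k of Eqn. (1) for k = 1, ..., max_lag."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    denom = np.sum((x - xbar) ** 2)
    return np.array([np.sum((x[:-k] - xbar) * (x[k:] - xbar)) / denom
                     for k in range(1, max_lag + 1)])
```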
Sample Partial Autocorrelation Function
Sample partial autocorrelation function measures the correlation between two points that are separated by some number of periods but with the effect of the intervening correlations removed in the series. Given a time series X t , the partial autocorrelation of lag k is the autocorrelation between X t and X t+k with the linear dependence of X t on X t+1 through X t+k−1 removed. The sample partial autocorrelation function is defined as [5]
\phi_{kk} = \frac{r_k - \sum_{j=1}^{k-1} \phi_{k-1,j}\, r_{k-j}}{1 - \sum_{j=1}^{k-1} \phi_{k-1,j}\, r_j} \qquad (2)

where \phi_{k,j} = \phi_{k-1,j} - \phi_{k,k}\, \phi_{k-1,k-j}, for j = 1, 2, ..., k − 1,
and r k is the sample autocorrelation at lag k.
Root Mean Square Error (RMSE)
RMSE is a frequently used measure of the difference between values predicted by a model and the values actually observed from the environment that is being modelled. These individual differences are also called residuals, and the RMSE serves to aggregate them into a single measure of predictive power. The RMSE of a model prediction with respect to the estimated variable X model is defined as the square root of the mean squared error [6]
RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2}
whereŷ i is the predicted values, y i is the actual value, and n is the number of observations.
Akaike's Information Criterion (AIC)
The AIC is a measure of how well a model fits a dataset, penalizing models that are so flexible that they would also fit unrelated datasets just as well. The general form for calculating the AIC is [5]

AIC_{p,q} = -2 \ln(\text{maximized likelihood}) + 2r

where n is the sample size and r = p + q + 1 is the number of estimated parameters, including a constant term.
Ljung−Box Q* Test
The Ljung−Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelation, r j , up to any specified time lag m. This statistic is used to test whether the residuals of a series of observations over time are random and independent. The null hypothesis is that the model does not exhibit lack of fit and the alternative hypothesis is the model exhibits lack of fit. The test statistic is defined as [5]
Q^{*} = n(n+2) \sum_{k=1}^{m} \frac{\hat{r}_k^{2}}{n-k} \qquad (4)

where \hat{r}_k is the estimated autocorrelation of the series at lag k, m is the number of lags being tested, n is the sample size, and the statistic is approximately chi-square distributed with h degrees of freedom, where h = m − p − q.
Conditional Sum of Squares
Conditional sum of squares was utilized to find the starting values in estimating the parameters of the SARIMA process. The formula is given by [7]

\hat{\theta}_n = \arg\min_{\theta \in \Theta} s_n(\theta) \qquad (5)

where

s_n(\theta) = \frac{1}{n} \sum_{t=1}^{n} e_t^{2}(\theta), \qquad e_t(\theta) = \sum_{j=0}^{t-1} \alpha_j(\theta)\, x_{t-j},

and \Theta \subset \mathbb{R}^{p} is a compact set.

7. Maximum Likelihood
According to [7], once the model order has been identified, maximum likelihood was used to estimate the parameters c, φ 1 , ..., φ p , θ 1 , ..., θ q . This method finds the values of the parameters which maximize the probability of getting the data that has been observed . For SARIMA models, the process is very similar to the least squares estimates that would be obtained by minimizing
\sum_{t=1}^{T} \varepsilon_t^{2} \qquad (6)
where ε t is the error term.
Box−Cox Transformation
Box−Cox Transformation is applied to stabilize the variance of a time series. It is a family of transformations that includes logarithms and power transformation which depend on the parameter λ and are defined as follows [8]
y_i^{(\lambda)} = \begin{cases} \dfrac{y_i^{\lambda} - 1}{\lambda}, & \text{if } \lambda \neq 0 \\ \ln y_i, & \text{if } \lambda = 0 \end{cases} \qquad\qquad w_i = \begin{cases} y_i^{\lambda}, & \text{if } \lambda \neq 0 \\ \ln y_i, & \text{if } \lambda = 0 \end{cases}
where y i is the original time series values, w i is the transformed time series values using Box-Cox, and λ is the parameter for the transformation.
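In Python this transformation is available in scipy; a small sketch (the paper itself works in R, and the value of λ it selected is not reported):

```python
from scipy import stats

# Box-Cox transform of the training series; lambda is chosen by maximum likelihood.
# The data must be strictly positive, which monthly arrival counts satisfy.
transformed, lam = stats.boxcox(train.values)
print(f"estimated lambda = {lam:.3f}")
```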
Statistical Software
R is a programming language and free software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing [9]. R includes linear and nonlinear modeling, classical statistical tests, time-series analysis, classification modeling, clustering, etc. The 'forecast' package [7] was utilized to generate time series plots, autocorrelation function/partial autocorrelation function plots, and forecasts. Also, the 'tseries' package [10] was used to perform the Augmented Dickey-Fuller (ADF) test of stationarity. Moreover, the 'lmtest' package [11] was used to test the parameters of the SARIMA model. Finally, the 'ggplot2' [12], 'tidyr' [13], and 'dplyr' [14] packages were used to plot the time series data considered during the conduct of the research.

A line plot was used to describe the behavior of the monthly foreign visitor arrivals in the Philippines. Figure 1 shows that there is an increasing trend and a seasonality pattern in the time series. Specifically, there is a seasonal increase in monthly foreign visitor arrivals every December and a seasonal decrease every September. These patterns suggest a seasonal autoregressive integrated moving average (SARIMA) approach in modeling and forecasting the monthly foreign visitor arrivals in the Philippines.

The Akaike Information Criterion and Root Mean Squared Error were used to identify which model should be used to model and forecast the monthly foreign visitor arrivals in the Philippines. Table 1 shows the top two SARIMA models based on AIC generated using R. ARIMA (0,1,2)×(1,0,1) 12 has the lowest AIC with a value of −414.56, followed by ARIMA (1,1,1)×(1,0,1) 12 with an AIC value of −414.51. Model estimation was performed on both models and generated significant parameters for both models (refer to Appendix A.2). Moreover, diagnostic checking was performed to assess the models. Both models passed the checks using the residual versus time plot, residual versus fitted plot, normal Q-Q plot, ACF graph, PACF graph, Ljung-Box test, and Shapiro-Wilk test (refer to Appendix A.3). Finally, forecast evaluation was performed to measure the accuracy of the models using an out-of-sample data set (refer to Appendix A.4). ARIMA (1,1,1)×(1,0,1) 12 produced the lowest RMSE relative to ARIMA (0,1,2)×(1,0,1) 12 . Hence, the former was used to forecast the monthly foreign visitor arrivals in the Philippines.
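A hedged Python sketch of this comparison with statsmodels, reusing `train` and `test` from the earlier split. The negative AIC values quoted above were obtained in R on a transformed series, so absolute numbers from this sketch will differ; only the ranking of the candidate orders is expected to be comparable:

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

candidates = {"Model 1": ((0, 1, 2), (1, 0, 1, 12)),
              "Model 2": ((1, 1, 1), (1, 0, 1, 12))}

for name, (order, seasonal_order) in candidates.items():
    fit = SARIMAX(train, order=order, seasonal_order=seasonal_order).fit(disp=False)
    forecast = fit.forecast(steps=len(test))
    rmse = float(((forecast.values - test.values) ** 2).mean() ** 0.5)
    print(f"{name}: AIC = {fit.aic:.2f}, out-of-sample RMSE = {rmse:.2f}")
```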
How much Foreign Tourism Earnings was Lost during the COVID-19 Pandemic Crisis

Figure 2 shows the estimated earnings loss (in billion pesos) of the tourism industry of the Philippines every month from April 2020 to December 2020. According to the Department of Tourism, the Average Daily Expenditure (ADE) for the month in review is P 8,423.98 and the Average Length of Stay (ALoS) of tourists in the country is recorded at 7.11 nights. The figures were generated by multiplying the forecasted monthly foreign visitor arrivals, ADE, and ALoS (rounded to 7) [2]. Moreover, it is forecasted under community quarantine that the recovery time will take around four to five months (up to July) [15]. With this, the estimated earning loss of the country in terms of tourism will be around 170.5 billion pesos.
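The loss figures follow directly from the multiplication described above. A sketch, assuming `forecast_2020` is a date-indexed series of forecasted monthly arrivals for 2020 (for example from refitting the chosen model on the full series and calling `fit.forecast(steps=12)`):

```python
ADE = 8423.98   # average daily expenditure in pesos (Department of Tourism)
ALOS = 7        # average length of stay in nights, rounded from 7.11

monthly_loss = forecast_2020 * ADE * ALOS              # pesos of lost spending per month
loss_apr_jul = monthly_loss["2020-04":"2020-07"].sum() / 1e9
print(f"Estimated loss, April to July 2020: {loss_apr_jul:.1f} billion pesos")
```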
Conclusions and Recommendations

4.1 Conclusions
Based on the results presented on the study, the following findings were drawn:
1. The order of the SARIMA model used to forecast the monthly foreign visitor arrivals is ARIMA (1,1,1)×(1,0,1) 12 , since it produced a relatively low AIC of −414.51 and the lowest RMSE of 47884.85 using out-of-sample data. This means that the model is relatively better among the other SARIMA models considered in forecasting the monthly foreign visitor arrivals in the Philippines. 2. If the COVID-19 pandemic lasts up to five months, the tourism industry of the Philippines will have an estimated earnings loss of about P 170.5 billion. Assumptions about average daily expenditure and average length of stay of tourists were based on the Department of Tourism reports.
Recommendations
The projected P 170.5 billion loss on the Philippines' foreign tourism is a very large amount of money. Attempting to regain such a loss at the soonest time, however, would only jeopardize the lives of the Filipino people. On the other hand, the government can, perhaps, reopen the Philippines' domestic tourism. This would somehow help regain the country's loss of revenue from tourism, although not fully.
However, the following recommendations, shown in scenarios/options below, may be helpful in regaining it, both in foreign and domestic tourism, and ensuring safety among Filipinos, as well.
1. Option 1: Stop foreign tourism until the availability of the vaccine, but gradually open domestic tourism starting July of 2020. In this scenario/option, the following considerations may be adhered to, viz.
(a) not all domestic tourism shall be reopened in the entire country; only those areas with zero covid-19 cases; (b) for areas where domestic tourism is allowed/reopened, appropriate guidelines should be strictly implemented by concerned departments/agencies to eliminate/prevent covid-19 transmission; and (c) digital code that would help in tracing the contacts and whereabouts of domestic tourists, as being used in China and Singapore, should be installed before the reopening of the domestic tourism.
2. Option 2: Gradual opening of foreign tourism starting July 2020 and full reopening of domestic tourism on the first semester of 2021 or when the covid-19 cases in the Philippines is already zero. However, the following considerations should be satisfied, viz.
(a) only countries with covid-19 zero cases are allowed to enter the Philippines; (b) appropriate guidelines should be strictly implemented by concerned departments/ agencies both for foreign and domestic tourism to eliminate/ prevent the spread of the said virus; and (c) digital code that would help in tracing the contacts and whereabouts of foreign tourists, as being used in China and Singapore, should be installed before reopening the foreign tourism in the Philippines.
A Appendices
A.1 Model Identification

Fig. 3: Line, ACF, and PACF Plot of Monthly Visitor Arrivals

Line, ACF, and PACF graphs were used to identify the model to be used to forecast the monthly visitor arrivals in the Philippines. The line graph in Figure 3 shows an increasing trend which suggests non-stationary behavior. This is supported by the ACF and PACF plots, which show a slow decay in all the lags of the former, while the first lag is significant for the latter. Moreover, the line plot slightly displays an increasing variance across time. Therefore, data transformations such as differencing and the Box-Cox transform were applied to the time series data. Figure 4 shows the line, ACF, and PACF graphs of the transformed data. The line graph shows that the data is stationary, which is supported by both the ACF and PACF graphs. Moreover, the ACF graph suggests a seasonal pattern in the data since the 12th, 24th, 36th, 48th, and 60th lags are significant. This is also true in the case of the PACF since the 12th lag is significant. Akaike's Information Criterion (AIC) was used to identify the best SARIMA model from among the models considered. Table 2 shows the top 2 models with the least AIC, namely: ARIMA (0,1,2)×(1,0,1) 12 (Model 1) and ARIMA (1,1,1)×(1,0,1) 12 (Model 2), with an AIC of −414.56 and −414.51, respectively.
A.2 Model Estimation
The combination of conditional sum of squares and maximum likelihood estimates was used to estimate the parameters of the model and test their significance. Table 3 shows the estimated coefficients, standard errors, z-values, and p-values of each parameter of ARIMA (0,1,2)×(1,0,1) 12 . Since the p-value of each parameter is less than 0.05, there is sufficient evidence to say that the estimates of the moving average, seasonal autoregressive, and seasonal moving average terms are significantly different from zero.
The combination of conditional sum of squares and maximum likelihood estimates was used to estimate the parameters of the model and test their significance. Table 4 shows the estimated coefficients, standard errors, z-values, and p-values of each parameter of ARIMA (1,1,1)×(1,0,1) 12 . Since the p-value of each parameter is less than 0.05, there is sufficient evidence to say that the estimates of the autoregressive, moving average, seasonal autoregressive, and seasonal moving average terms are significantly different from zero.
A.3 Diagnostic Checking
Residual versus Time, Residual versus Fitted, and Normal Q-Q plots were used to perform diagnostic checking for the models, whereas ACF and PACF plots of the residuals were used to check if there are remaining patterns that should be accounted for by Models 1 and 2. Graphs for Model 1 are displayed on the left whereas graphs for Model 2 are displayed on the right. Figure 5 shows that the residuals and time do not display correlation between the two variables. Therefore, this scatter plot suggests that the residuals have no serial correlation, that is, there is no interdependence between time and residuals. This is supported by the Ljung-Box test, which suggests that the error terms behave randomly for both Model 1 (Q(20) = 20.109, df=20, Model df=4, p= 0.4511) and Model 2 (Q(20) = 20.941, df=20, Model df=4, p= 0.4006). In addition, the residuals versus fitted values scatter plot displays no visible funneling pattern, which indicates that the variances of the error term are relatively equal. Moreover, the normal Q-Q plot suggests that the residuals are normally distributed since most of the values lie along a line. This is supported by the Shapiro-Wilk test, which suggests that the error is normally distributed for both Model 1 (W= 0.98062, p= 0.2372) and Model 2 (W= 0.9852, p= 0.4513).

Fig. 5: Line, ACF, and PACF Plot of the Two Models

Finally, the ACF and PACF graphs display that all of the lags are within the acceptable limits. Therefore, all the lags are not significant, which means that the residuals of the models may be considered as white noise. Hence, the residuals are assumed to be Gaussian white noise. Figure 6 shows the ACF and PACF graphs of the forecast errors of both models. All of the autocorrelations and partial autocorrelations are within the limits, which means that these values are not significant. Therefore, the forecast errors are considered white noise.
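An equivalent set of residual checks in Python (a sketch; `fit` is a fitted SARIMAX results object as in the earlier sketch, and the lag of 20 with 4 model degrees of freedom follows the figures quoted above):

```python
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import acorr_ljungbox

resid = fit.resid.dropna()

# Ljung-Box test on the residuals.
print(acorr_ljungbox(resid, lags=[20], model_df=4, return_df=True))

# Shapiro-Wilk normality test on the residuals.
w_stat, p_val = shapiro(resid)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_val:.3f}")
```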
A.4 Forecast Evaluation
The Shapiro-Wilk test was used to test if the forecast errors of both models were normally distributed. The results show that there is no sufficient evidence to say that the forecast error terms are not normally distributed for both Model 1 (W= 0.878, p= 0.082) and Model 2 (W= 0.875, p= 0.075). This means that it can be assumed that the forecast errors are normally distributed. Moreover, root mean squared error was used to identify which model has better forecast accuracy. The results show that Model 2 has the lowest RMSE, which means that Model 2 is relatively accurate compared to Model 1.

Based on the diagnostics presented, the models satisfied all the assumptions of a seasonal autoregressive integrated moving average model. Furthermore, Model 2 is relatively accurate compared to Model 1 based on the RMSE of each model. Hence, the ARIMA (1,1,1)×(1,0,1) 12 model was used to forecast the monthly visitor arrivals in the Philippines.
Fig. 1: Monthly Foreign Visitor Arrivals
Fig. 2: Expected Monthly Earnings Loss
Fig. 4: Line, ACF, and PACF Plot of the Transformed Data
Fig. 6: Line, ACF, and PACF Plot of the Two Models
2 Methodology
2.1 Research Design
Table 1: AIC and RMSE of the Two Models Considered

Model                        AIC        RMSE
ARIMA (0,1,2)×(1,0,1) 12     −414.56    49517.48
ARIMA (1,1,1)×(1,0,1) 12     −414.51    47884.85
Table 2: Akaike's Information Criterion of each ARIMA Model

Model                        AIC
ARIMA (0,1,2)×(1,0,1) 12     −414.56
ARIMA (1,1,1)×(1,0,1) 12     −414.51
Table 3: Model Estimation of Model 1 (*** p < 0.001, * p < 0.05)

Variable         β         Std. Error   z-value
ma1              −0.502    0.108        −4.662***
ma2              −0.224    0.104        −2.154*
sar1             0.993     0.011        87.960***
sma1             −0.701    0.206        −3.408***
log-likelihood   212.28
σ²               0.0002
AIC              −414.56
Table 4: Model Estimation of Model 2 (*** p < 0.001, * p < 0.05)

Variable         β         Std. Error   z-value
ar1              0.328     0.153        2.141*
ma1              −0.830    0.086        −9.697***
sar1             0.992     0.011        86.543***
sma1             −0.701    0.207        −3.388***
log-likelihood   212.25
σ²               0.0002
AIC              −414.51
Table 5: Ljung-Box and Kolmogorov-Smirnov Test

Statistic        Model 1     Model 2
Shapiro-Wilk     0.878       0.874
RMSE             49517.48    47884.85
[1] Philippine Statistics Authority, 2019. Contribution of tourism to the Philippine economy is 12.7 percent in 2018. Accessed: 2020-04-16.
[2] Department of Tourism-Philippines. Tourism statistics. Accessed: 2020-04-16.
[3] Philippine Statistics Authority, 2019. Philippine Statistics Authority 2018 report. Accessed: 2020-04-16.
[4] Salkind, N. J., 2010. Encyclopedia of Research Design, 1st ed. SAGE Publications, Inc.
[5] Box, G., Jenkins, G., Reinsel, G., and Ljung, G., 2016. Time Series Analysis: Forecasting and Control, 5th ed. John Wiley & Sons, Inc., Hoboken, New Jersey.
[6] Brownlee, J., 2017. Long Short-Term Memory Networks with Python: Develop Sequence Prediction Models with Deep Learning. Jason Brownlee.
[7] Hyndman, R. J., and Khandakar, Y., 2008. "Automatic time series forecasting: the forecast package for R". Journal of Statistical Software, 26(3), pp. 1-22.
[8] Daimon, T., 2011. Box-Cox Transformation. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 176-178.
[9] R Core Team, 2017. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
[10] Trapletti, A., and Hornik, K., 2018. tseries: Time Series Analysis and Computational Finance. R package version 0.10-43.
[11] Zeileis, A., and Hothorn, T., 2002. "Diagnostic checking in regression relationships". R News, 2(3), pp. 7-10.
[12] Wickham, H., 2016. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
[13] Wickham, H., Henry, L., and RStudio, 2018. Package 'tidyr'.
[14] Wickham, H., Francois, R., Henry, L., Muller, K., and RStudio, 2018. dplyr: A grammar of data manipulation.
[15] Ceballos, R., 2020. What quarantine measures can do? Modelling the dynamics of COVID-19 transmission in Davao Region. Accessed: 2020-04-16.
| []
|
[
"Dynamics versus structure: breaking the density degeneracy in star formation",
"Dynamics versus structure: breaking the density degeneracy in star formation"
]
| [
"Richard J Parker \nAstrophysics Research Institute\nLiverpool John Moores University\n146 Brownlow HillL3 5RFLiverpoolUK\n"
]
| [
"Astrophysics Research Institute\nLiverpool John Moores University\n146 Brownlow HillL3 5RFLiverpoolUK"
]
| [
"Mon. Not. R. Astron. Soc"
]
| The initial density of individual star-forming regions (and by extension the birth environment of planetary systems) is difficult to constrain due to the "density degeneracy problem": an initially dense region expands faster than a more quiescent region due to two-body relaxation and so two regions with the same observed present-day density may have had very different initial densities. We constrain the initial densities of seven nearby star-forming regions by folding in information on their spatial structure from the Q-parameter and comparing the structure and present-day density to the results of N-body simulations. This in turn places strong constraints on the possible effects of dynamical interactions and radiation fields from massive stars on multiple systems and protoplanetary discs.We apply our method to constrain the initial binary population in each of these seven regions and show that the populations in only three -the Orion Nebula Cluster, ρ Oph and Corona Australis -are consistent with having evolved from the Kroupa universal initial period distribution and a binary fraction of unity. | 10.1093/mnras/stu2054 | [
"https://arxiv.org/pdf/1410.0004v1.pdf"
]
| 55,974,962 | 1410.0004 | 97bf85efd98dda503928a2b9bde404a3401b1473 |
Dynamics versus structure: breaking the density degeneracy in star formation
30 Sep 2014. 2014. 2 October 2014
Richard J Parker
Astrophysics Research Institute
Liverpool John Moores University
146 Brownlow HillL3 5RFLiverpoolUK
Dynamics versus structure: breaking the density degeneracy in star formation
Mon. Not. R. Astron. Soc
00030 Sep 2014. 2014. 2 October 2014(MN L A T E X style file v2.2)stars: formation -planetary systems -open clusters and associations -methods: numerical -binaries: general
The initial density of individual star-forming regions (and by extension the birth environment of planetary systems) is difficult to constrain due to the "density degeneracy problem": an initially dense region expands faster than a more quiescent region due to two-body relaxation and so two regions with the same observed present-day density may have had very different initial densities. We constrain the initial densities of seven nearby star-forming regions by folding in information on their spatial structure from the Q-parameter and comparing the structure and present-day density to the results of N-body simulations. This in turn places strong constraints on the possible effects of dynamical interactions and radiation fields from massive stars on multiple systems and protoplanetary discs.We apply our method to constrain the initial binary population in each of these seven regions and show that the populations in only three -the Orion Nebula Cluster, ρ Oph and Corona Australis -are consistent with having evolved from the Kroupa universal initial period distribution and a binary fraction of unity.
INTRODUCTION
Characterising the formation environment of stars is one of the outstanding challenges in astrophysics. If stars are predominately born in dense 'clustered' environments (e.g. Lada & Lada 2003;Lada 2010) then dynamical interactions and the radiation fields from massive stars may significantly affect the formation and evolution of planetary systems (e.g. Armitage 2000;Bonnell et al. 2001;Scally & Clarke 2001;Adams et al. 2006;Olczak et al. 2008;Parker & Quanz 2012;Rosotti et al. 2014) and the properties of binary and multiple systems (e.g. Kroupa 1995a; Kroupa et al. 1999;Marks & Kroupa 2012;Parker & Goodwin 2012).
On the other hand, if most stars are born in relative isolation (e.g. Shu, Adams & Lizano 1987), or rather in low-density environments where dynamical interactions are insignificant (e.g. Bressert et al. 2010), then planetary and binary systems may form with little or no external perturbation. Either scenario has important implications for understanding the origin of stars in the Galactic field, and for placing our Solar System in the context of exoplanetary systems (e.g. Adams 2011;Alexander et al. 2013;Davies et al. 2013, and references therein).
Ideally, we would like to compare the properties of observed star-forming regions and young clusters to simulations to gauge the effects of the star-forming environment on binary systems and fledgling planetary systems. Binary systems are particularly useful ⋆ E-mail: [email protected] because their properties in the Galactic field are well-constrained (Raghavan et al. 2010;Duchêne & Kraus 2013). In principle one can compare binary populations between star-forming regions and the Galactic field, in order to determine the type of star-forming region that produces the most 'field-like' binaries (and hence is the dominant star-forming event that produces the Galactic field; Goodwin 2010).
Unfortunately, this problem is severely complicated by uncertainty in determining the maximum density attained by starforming regions. Observations of the present-day density in starforming regions provide very few constraints on the initial density (e.g. King et al. 2012a;Moeckel et al. 2012;Gieles et al. 2012;Parker & Meyer 2012). The reason is that an initially dense region expands very quickly due to two-body relaxation, whereas a less dense region expands more slowly. Therefore, at a given age two regions with the same present-day density may have had very different densities in the past. This is the so-called "density degeneracy problem" -not enough information is available to rule out much more dense initial conditions (e.g. Marks & Kroupa 2012;Marks et al. 2014).
In this paper, we attempt to address this issue by folding in extra information on the structure of star forming regions (Cartwright & Whitworth 2004), and (where available) the relative density around massive stars with respect to the median stellar density in the region (Maschberger & Clarke 2011). We compare observational data for seven nearby star-forming regions to the results of N-body simulations where we vary the initial density to determine the most likely initial conditions of each region. As an example of the method, we then use these constraints to rule out the universal binary population hypothesis from Kroupa (1995a,b). We describe our simulation set-up in Section 2, we present our results in Section 3 and we conclude in Section 4.
Star-forming region set-up
Both observations (e.g. Cartwright & Whitworth 2004;Sánchez & Alfaro 2009;Gouliermis et al. 2014) and simulations (Schmeja & Klessen 2006;Girichidis et al. 2012;Dale et al. 2013) of star-forming regions indicate that stars form with a hierarchical, or self-similar spatial distribution (i.e. they are substructured). It is almost impossible to create substructure through dynamical interactions; rather it is usually completely erased over a few crossing times (Parker et al. 2014). Therefore, in order to reproduce the substructure observed in many of the regions of interest here, we must start the simulations with substructure.
We set up substructured star forming regions using fractal distributions, following the method of Goodwin & Whitworth (2004). This method is described in detail in that paper, and in Allison et al. (2010) and Parker et al. (2014). Briefly, the fractal is built by creating a cube containing 'parents', which spawn a number of 'children' depending on the desired fractal dimension. The amount of substructure is then set by the number of children that are allowed to mature. The lower the fractal dimension, the fewer children are allowed to mature and the cube has more substructure. Fractal dimensions in the range D = 1.6 (highly substructured) to D = 3.0 (uniform distribution) are allowed. Finally, outlying particles are removed so that the cube from which the fractal was created becomes a sphere; however, the distribution is only truly spherical if D = 3.0.
All of our simulated star-forming regions have a fractal dimension D = 1.6; the Taurus association (Cartwright & Whitworth 2004) and Corona Australis (CrA, Neuhäuser & Forbrich 2008) both have fractal dimensions consistent with this value and because dynamical interactions cannot make a region more substructured, we adopt this value. However, we note that hydrodynamical simulations of star formation can produce less substructured regions (higher D values) and as we shall see, some observed regions must also have higher primordial fractal dimensions.
The velocities of stars in the fractals are also correlated on local scales, in accordance with observations (Larson 1982;André et al. 2010). The children in our fractals inherit their parents' velocity, plus a small amount of noise which successively decreases further down the fractal tree. This means that two nearby stars have very similar velocities, whereas two stars which are distant can have very different velocities. Again, this is an effort to mimic the observations of star formation, which indicate that stars in filaments have very low velocity dispersions (André et al. 2010).
In order to erase primordial substructure and to process primordial binary systems as efficiently as possible, we scale the velocities of the whole fractal to be subvirial (α vir = 0.3, where the virial ratio α vir = T/|Ω|; T and Ω are the total kinetic energy and total potential energy of the stars, respectively).
We set up our star-forming regions with three different densities. In two sets of simulations, the regions have a radius of 1 pc and contain either 1500 stars (which we will refer to as "high density" -ρ ∼ 10 4 M ⊙ pc −3 ) or 150 stars ("medium density" -ρ ∼ 10 2 M ⊙ pc −3 ). In a third set of simulations, the regions contain 300 stars and have a radius of 5 pc ("low density" -ρ ∼ 10 M ⊙ pc −3 ).
Binary population
All of our regions have an initial binary fraction of unity, i.e. everything forms in a binary. When creating the binary populations we adopt the same initial conditions as in Marks et al. (2014). The primary masses are drawn from a Kroupa (2002) IMF of the form
\frac{dN}{dM} \propto \begin{cases} M^{-1.3} & m_0 < M/M_\odot \leqslant m_1, \\ M^{-2.3} & m_1 < M/M_\odot \leqslant m_2, \end{cases} \qquad (1)
where m 0 = 0.1 M ⊙ , m 1 = 0.5 M ⊙ , and m 2 = 50 M ⊙ . clusters. There are no brown dwarfs in the simulations. Secondary masses are also drawn at random from the IMF; note this is inconsistent with recent observations (Metchev & Hillenbrand 2009;Reggiani & Meyer 2013) which show a universal flat companion mass ration distribution. However, subsequent pre-main sequence eigenevolution (see below) alters the mass ratios of close binaries so that the CMRD approaches a flat distribution. Binary periods are drawn from the Kroupa (1995b) period distribution (see also Kroupa & Petr-Gotzens 2011;Marks et al. 2014) of the form f log 10 P = η log 10 P − log 10 P min δ + log 10 P − log 10 P min 2 ,
where log 10 P min is the logarithm of the minimum period in days and log 10 P min = 1. η = 2.5 and δ = 45 are the numerical constants adopted by Kroupa (1995b). This period distribution was derived from a process of "reverse engineering" N-body simulations (Kroupa 1995a,b,c); regions with low densities do not break up many binaries and hence would have an excess of wide systems (100 -10 4 au) compared to the Galactic field, as observed in Taurus (Leinert et al. 1993;Köhler & Leinert 1998), whereas more dense regions would destroy more wider binaries and the resultant separation distribution is more "field-like" (Duquennoy & Mayor 1991;Fischer & Marcy 1992;Raghavan et al. 2010). Eccentricities are drawn from a thermal distribution (Heggie 1975) of the form
f (e) = 2e.(3)
We note that the eccentricity distribution of binaries in the field is more consistent with a flat distribution (Raghavan et al. 2010;Duchêne & Kraus 2013); however, as with the mass ratios, eigenevolution alters the distribution for close systems. Finally, we apply the Kroupa (1995b) 'eigenevolution' algorithm, which accounts for tidal circularisation effects in close binaries (Mathieu 1994), and for early angular momentum transfer between the circumprimary disk and the secondary star.
We then place the binary systems at the centre of mass of each position in the fractal and we evolve the star-forming regions for 10 Myr using the kira integrator in the Starlab package (Portegies Zwart et al. 1999Zwart et al. , 2001. We do not include stellar evolution in the simulations.
RESULTS
We first demonstrate the density degeneracy and its effect on the binary properties in star-forming regions. In order to invoke a universal model of star formation and to reconcile differences between the Figure 1. Evolution of the separation distribution normalised to the binary fraction as a function of initial density. The primordial distribution (Kroupa 1995b) is shown by the dotted line and the distributions after 1 Myr are shown by the open (low density -ρ ∼ 10 M ⊙ pc −3 ), hashed (medium density -ρ ∼ 10 2 M ⊙ pc −3 ) and solid (high density -ρ ∼ 10 4 M ⊙ pc −3 ) histograms. The observed distributions for Taurus (the circles, Köhler & Leinert 1998) and the Orion Nebula Cluster (the squares, Reipurth et al. 2007) are also shown. binary populations of Taurus (Leinert et al. 1993) and the Galactic field (Duquennoy & Mayor 1991), Kroupa (1995a,b) postulated a universal initial binary population where all stars form in binaries, with an excess of systems with wide (10 2 − 10 4 au) semimajor axes with respect to the field population. In Fig. 1 we show the initial Kroupa (1995b) binary period distribution (Eqn. 2), converted to a separation distribution, by the dotted line. Depending on the maximum density attained by the region, the binaries can suffer none, little, or much dynamical destruction and the separation distribution is altered accordingly. We show the distribution (at 1 Myr) in the simulated low density regions by the open histogram, the distribution in the medium density regions by the hashed histogram and the distribution in the high density regions by the solid histogram. We also show the observational data points for Taurus (consistent with little dynamical evolution of the proposed initial period distribution; Köhler & Leinert 1998) by the circles and the Orion Nebula Cluster (ONC, consistent with significant dynamical evolution of the proposed initial period distribution; Reipurth et al. 2007), by the squares.
Evolution of density
If a star-forming region is older, it has had more time to process its primordial binary population (Marks & Kroupa 2012). Therefore, a 3 Myr old region can have a much lower density than a 1 Myr old region, even though they may have had the same initial density; the difference is that two-body relaxation has caused the older region to expand more over time. We show the evolution of density as a function of time in Fig. 2. In panel (a) we show the evolution of our high density (ρ ∼ 10 4 M ⊙ pc −3 ) regions, and in panel (b) we show the medium initial density (ρ ∼ 10 2 M ⊙ pc −3 ) regions. In panel (c) we show the evolution of the low-density regions (ρ ∼ 10 M ⊙ pc −3 )
In all panels, the median density in each of our 20 simulated regions is shown by the solid grey lines.
The regions evolve to form a bound stellar cluster, and stars which are in the very centre of the cluster have higher densities than the region median (the grey lines). We show the evolution of the averaged central density (the volume density within the half-mass radius) by the dot-dashed lines. Conversely, stars that are ejected from the regions and become unbound have significantly lower densities.
In Fig. 2 we also show the current density of several nearby regions of varying ages (see the final column of Table 1 for a key to the symbols). Marks & Kroupa (2012) and Marks et al. (2014) argue that given a universal primordial binary population, limits can be placed on the primordial density of a region by comparing the outcome of N-body simulations with the currently observed visual binary population. We indicate the best-fit initial density for each region studied in Marks & Kroupa (2012) and Marks et al. (2014) by the red symbols around t = 0 Myr (the same symbols are used as for the present-day densities -for example, Cham I is shown by the ⋆). Outside of the error bars, Marks & Kroupa (2012) reject the possibility of that density being consistent with the processing of a common binary population with 90 per cent confidence.
Taking the density in isolation, Fig. 2 shows that for ρ Oph and the ONC (the filled diamonds and squares, respectively) both a high-density region which evolves to far lower densities (panel a) and a medium-density region that remains static within the first Myr (panel b) are consistent with the observations. However, when their binary populations are considered, Marks & Kroupa (2012) show that under the assumption of a universal primordial binary population, the initial densities must be more than a factor of 10 different.
Evolution of structure
In order to break this density degeneracy, we compare the evolution of the spatial structure in our simulations, as measured by the Q-parameter (Cartwright & Whitworth 2004;Cartwright & Whitworth 2009;Cartwright 2009). The Qparameter compares the mean length of the minimum spanning tree (the shortest possible pathlength between all stars where there are no closed loops,m) to the mean separation between stars,s:
Q =m s .(4)
A region is substructured if Q < 0.8, and centrally concentrated if Q > 0.8. We show the evolution of Q in our simulations compared to the measured values in Fig. 3 at various ages (see Table 1 for a key to the symbols). The determination of the Qparameter requires only positional information; however, it can be affected by extinction and membership uncertainty (Bastian et al. 2009;Parker & Meyer 2012). Where there is an uncertainty associated with the determination of Q, we show the likely direction of the uncertainty. For example, Cartwright & Whitworth (2004) determined Q = 0.85 for ρ Oph; however, using an updated census discussed in Alves de Oliveira et al. (2012), find Q = 0.56. In our subsequent analysis, we consider any evolutionary scenario that is consistent with either value to be plausible initial conditions for that star-forming region. Similarly, depending on membership probabality, Upper Sco and CrA may have lower Q-parameters (once probable back-and foreground stars are removed), whereas the ONC likely has a higher Qparameter than that determined from the Hillenbrand & Hartmann (1998) data due to visual extinction and sample incompleteness. (2012) and King et al. (2012b), ρ obs. , the postulated initial density, ρ post. , from Marks & Kroupa (2012) and Marks et al. (2014) for the binary population of that region to be consistent with the universal primordial binary properties (Kroupa 1995a), and the symbol used in Figs. 2 . We show the median stellar volume density in each simulation by the individual grey (solid) lines, and the central density (within the half-mass radius) from twenty averaged simulations. The lefthand red symbols (at t = 0 Myr, slightly offset from one another for clarity) are the required initial densities for several nearby star-forming regions if star formation is consistent with a universal initial binary population (Marks & Kroupa 2012;Marks et al. 2014). The corresponding present-day stellar densities are shown by the black points at 1, 3 and 5 Myr, depending on the age of the region. A key to the symbols is provided in Table 1.
Finally, Cham I is slightly elongated, which means the true Q-parameter is slightly higher than measured (0.71 instead of 0.66; Cartwright & Whitworth 2009). These 'alternative' measurements are shown in column 4 of Table 1 (Q_alt.).
We exclude unbound stars from the determination of Q for two reasons. Firstly, Q can appear artificially high when distant stars are included in the analysis, and secondly, stars that are unbound in the simulations are likely to travel far from the regions very quickly, making the comparison with observations unfair.
As pointed out in Parker & Meyer (2012) and Parker et al. (2014), the more dense a region is initially, the more readily substructure is erased, and this is apparent in Fig. 3. The most dense regions lose substructure within 1 Myr (panel a), the medium density regions lose substructure within 5 Myr (panel b) and the low density regions retain substructure for the duration of the simulations (panel c). Given the high initial densities in Fig. 3(a), only the ONC is consistent with very dense initial conditions. When the initial conditions are a factor of ∼100 less dense, the measured Q-parameters for every region apart from the ONC are consistent with more quiescent, medium density initial conditions for star formation.
The Q − Σ LDR plot
Finally, we present the Q − Σ LDR plot (Parker et al. 2014) for our simulations in Fig. 4. This combines the Q-parameter with the ratio of the median surface density of the 10 most massive stars compared to the median surface density of the region as a whole (Maschberger & Clarke 2011);
Σ_LDR = Σ_10 / Σ_all .    (5)
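The following minimal Python sketch shows one way to estimate Σ_LDR from positions and masses. The local surface density estimator Σ = (N_nb − 1)/(π r_Nnb²), based on the 7th nearest neighbour, is an assumption chosen for illustration; the text above only fixes the ratio of the medians.

```python
# Minimal sketch: Sigma_LDR = median local surface density of the 10 most massive
# stars divided by the median over all stars. The 7-nearest-neighbour estimator
# is an assumption made for this illustration.
import numpy as np
from scipy.spatial import cKDTree

def sigma_ldr(xy, masses, n_neighbour=7, n_massive=10):
    tree = cKDTree(xy)
    # distances to the n_neighbour-th nearest neighbour (query includes the star itself)
    d, _ = tree.query(xy, k=n_neighbour + 1)
    sigma = (n_neighbour - 1) / (np.pi * d[:, -1] ** 2)   # local surface density per star
    massive = np.argsort(masses)[-n_massive:]             # indices of the most massive stars
    return np.median(sigma[massive]) / np.median(sigma)

xy = np.random.uniform(size=(300, 2))
masses = np.random.lognormal(mean=-1.0, sigma=0.7, size=300)
print(sigma_ldr(xy, masses))
```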
In Fig. 4 we show the datapoints for Taurus (filled circle), ρ Oph (filled diamond) and the ONC (filled square). These are the only regions in our sample for which we have a reliable census with mass estimates for each individual star in order to determine Σ_LDR. Under the reasonable assumption that the velocities of stars are correlated on local scales (Larson 1982), Parker et al. (2014) showed that massive stars attain higher surface densities than the median in the region, because they act as potential wells and acquire a retinue of low-mass stars. In the high density simulated regions (Fig. 4(a)), all of the simulations develop high Σ_LDR values in addition to erasing the primordial substructure. The only observed region which is consistent with these initial conditions is the ONC, and this appears to be marginal. The other observed regions (Taurus and ρ Oph) are more consistent with a much lower initial density, as they have Σ_LDR < 1 and Q < 0.8.
Discussion of individual regions
Recently, King et al. (2012b) claimed that differences between the binary separation distributions in nearby star-forming regions were likely to be primordial, as the main differences between binary populations in some regions and the corresponding separation range in the Galactic field were in the 'hard' binary regime (< 100 au), and thus unlikely to be the result of dynamical evolution. However, Marks et al. (2014) show that when the binary fraction is also considered, all of the regions discussed in King et al. (2012b) are in fact consistent with the dynamical evolution of a common binary population and the observed differences between regions are likely due to those regions having different initial densities. Here, we combine the results shown in Figs. 2, 3 and 4 to determine the likely initial (or maximum) density of each region in Table 1 (the star-forming regions presented in King et al. 2012b), and whether this density is consistent with dynamical processing of the universal initial binary population, as suggested by Marks & Kroupa (2012) and Marks et al. (2014). We summarise the results in Table 2.
ONC: The ONC has both a high Q-parameter and Σ_LDR ratio, which suggests that its initial density was likely ρ ∼ 10^4 M⊙ pc^−3. However, if its density were higher, the Q-parameter would also be higher, so 10^4 M⊙ pc^−3 is very much an upper limit on the initial density. Marks & Kroupa (2012) suggest an initial density of 68 000 M⊙ pc^−3, with values lower than 46 000 M⊙ pc^−3 or higher than 90 000 M⊙ pc^−3 excluded with 90 per cent confidence. However, we note that the evolution of the universal binary population in our regions does appear to be consistent with the data from Reipurth et al. (2007) (compare the solid histogram with the squares in Fig. 1).

ρ Oph: If we take Q = 0.56 for ρ Oph, with Σ_LDR = 0.58, then this is consistent with a moderate initial density (10^2 − 10^3 M⊙ pc^−3), rather than high initial (or maximum) densities. Marks & Kroupa (2012) suggest an initial density of 2300 M⊙ pc^−3, with values lower than 1100 M⊙ pc^−3 or higher than 7400 M⊙ pc^−3 excluded with 90 per cent confidence. One of our medium density simulations briefly reaches a density of 2000 M⊙ pc^−3, suggesting that the evolution of this region (and processing of binaries) could be consistent with the universal initial binary population.
Taurus:
The low Q-parameter (0.48; Cartwright & Whitworth 2004) and low Σ_LDR (0.28 using the dataset from Parker et al. 2011) suggest that dynamical evolution has not been significant in this region. Taurus is consistent with very quiescent initial conditions (ρ ∼ 10 M⊙ pc^−3; Figs. 2(c) and 3(c)). Marks & Kroupa (2012) suggest an initial density of 350 M⊙ pc^−3, with values lower than 140 M⊙ pc^−3 or higher than 850 M⊙ pc^−3 excluded with 90 per cent confidence. Given its current low density, low Q-parameter and low Σ_LDR, Taurus is not consistent with the universal initial binary population.
IC 348: Because of its age (3 Myr), if IC 348 had an initially high density, two-body relaxation would have reduced the density to values much lower than observed for this region (see Fig. 2). This, combined with the Q-parameter of 0.92 (Cartwright & Whitworth 2004), suggests a moderate initial density (10^2 − 10^3 M⊙ pc^−3; see panel (b) of Figs. 2 and 3). Marks & Kroupa (2012) suggest an initial density of 9400 M⊙ pc^−3, with values lower than 2700 M⊙ pc^−3 or higher than 53 000 M⊙ pc^−3 excluded with 90 per cent confidence. Such a high initial density is inconsistent with the observed structure and current density, and its binary population is probably not evolved from the Kroupa (1995b) universal binary population.
Cham I: Chamaeleon I has an age of 3 Myr, and a low density, but a relatively high Q-parameter of 0.71 (Cartwright & Whitworth 2009). Figs. 2 and 3 show that none of our dynamical scenarios fit the observed values, and it is therefore likely that Cham I formed with its current density and structure, and that dynamical evolution has not altered its binary population. Marks & Kroupa (2012) suggest an initial density of 1600 M⊙ pc^−3, with values lower than 230 M⊙ pc^−3 or higher than 13 000 M⊙ pc^−3 excluded with 90 per cent confidence. Given its current low density and lack of dynamical evolution, Cham I is not consistent with the universal binary population.
CrA: CrA has Q = 0.38 but a moderate density of 30 M⊙ pc^−3 (Neuhäuser & Forbrich 2008; King et al. 2012b). According to our evolutionary models, CrA could have evolved slightly from a highly substructured region with an initial density of ∼ 10^2 M⊙ pc^−3. In order to be consistent with the initial universal binary population, Marks et al. (2014) suggest an initial density of 190 M⊙ pc^−3 (without providing limits). Our analysis suggests that this region is consistent with the universal initial binary population, assuming a similar magnitude in the confidence limit range as for the other regions.
Upper Sco: The Q-parameter is 0.88 (Kraus & Hillenbrand 2007) and the current density is 16 M⊙ pc^−3 (King et al. 2012b); both of which imply that Upper Sco is likely to have had moderately dense initial conditions (ρ ∼ 10^2 − 10^3 M⊙ pc^−3). Marks et al. (2014) suggest an initial density of 4200 M⊙ pc^−3 (again without confidence limits). Our evolutionary models suggest that Upper Sco is also inconsistent with the initial densities required to process the universal initial binary population.
In summary, only ρ Oph, CrA, and possibly the ONC, are consistent with the dynamical processing of the universal initial binary population from Kroupa (1995a,b), based on the consideration of the regions' structure and current density. We suggest that the observed differences between these regions are likely to be a relic of the star formation process, although we caution that as the binaries observed in these regions are 'intermediate' they may still have undergone some degree of dynamical evolution (Parker & Goodwin 2012).
CONCLUSIONS
We have presented N-body simulations of the dynamical evolution of star-forming regions in which we follow the stellar density and spatial structure and compare the results to seven observed regions. For each individual region, we determine the likely initial density (which is usually, but not always, the maximum) based on its observed current density and spatial structure, as determined by the Q-parameter.
The spatial structure of a region is a strong constraint on the amount of dynamical evolution that has taken place, as dense regions (ρ > 10^3 M⊙ pc^−3) erase structure almost immediately, intermediate density regions (ρ ∼ 10^2 − 10^3 M⊙ pc^−3) remove structure within 5 Myr, but low-density regions (ρ < 10 M⊙ pc^−3) retain structure beyond the age of all of the regions considered here. Folding in the measurement of structure largely removes the density degeneracy problem in star formation, where the initial density is very difficult to constrain due to the rapid expansion of initially dense regions, and the slower expansion of more quiescent regions, both of which can result in the same present-day density from very different initial conditions.
Our results can be used to infer the likely maximum density of observed star-forming regions, which for example enables the importance of the effects of dynamical interactions and radiation from massive stars on protoplanetary discs to be ascertained (e.g. Scally & Clarke 2001;Adams et al. 2006;Rosotti et al. 2014). Recently, de Juan Ovelar et al. (2012) showed an apparent dependence of the size of protoplanetary discs on the density of the starforming environment, although their observations were limited to nearby star-forming regions. Future ALMA observations may be able to probe discs in more distant regions (e.g. Mann et al. 2014), and using the Q-parameter in tandem with the present-day density will be useful in determining whether any observed trends in disc size are due to the star-formation environment.
We also apply our method to determine which of seven nearby star-forming regions are consistent with the 'universal initial binary population' model for star formation (Kroupa 1995a,b), based on recent numerical simulations presented in the literature (Marks & Kroupa 2012; Marks et al. 2014). We compare the density of our simulations which fit the observed regions' structure and determine whether the initial density of those simulations is high enough to process the initial binary population to resemble the binary properties observed in each region today, using the values quoted in Marks & Kroupa (2012) and Marks et al. (2014). We find that of the seven regions observed, only three (ρ Oph, CrA and possibly the ONC) are consistent with the universal initial binary population model for star formation. Unfortunately, aside from discarding the universal initial binary population hypothesis in Kroupa (1995a,b), our results do not help much in assessing the type of star-forming region that contributes binaries to the Galactic field. We are still limited by the observed separation range in regions (10s to 1000s au), which is small compared to the field (10^−2 to 10^5 au), and by the fact that these visual binaries are dynamically 'intermediate' systems (Heggie 1975; Hills 1975a,b) that could have evolved stochastically (especially in dense regions like the ONC, Parker & Goodwin 2012). We also have very little information on whether the regions considered are representative of those that do populate the field. Further observations of e.g. spectroscopic binaries which are not affected by dynamical evolution are desperately required in order to look for stark differences between the binary populations of the regions in question and the Galactic field.

Table 2. Comparison of the structure and density of seven star-forming regions with N-body simulations to determine which are compatible with the universal initial binary population from Kroupa (1995a,b). From left to right, the columns show the region name, age, Q-parameter (where two values are given due to observational uncertainty, the arrow indicates the more likely value), the observed present-day density of each region as noted by Marks & Kroupa (2012) and King et al. (2012b), ρ_obs., the postulated initial density with upper and lower limits from Marks & Kroupa (2012) and Marks et al. (2014) for the binary population of that region to be consistent with the universal primordial binary properties (Kroupa 1995a,b), ρ_post., the maximum possible initial density when the Q-parameter is also considered, ρ_max., and whether or not this region is consistent with the Kroupa (1995a,b) universal initial binary population. (a) Note that an initial density of 10^4 M⊙ pc^−3 for the ONC appears to be consistent with the universal binary population in Fig. 1. However, this (relatively low) density was ruled out at 90 per cent confidence by Marks & Kroupa (2012).
Figure 2. Evolution of the density in our simulated star-forming regions. In panel (a) the star-forming regions have high initial densities (ρ ∼ 10^4 M⊙ pc^−3), in panel (b) the regions have medium initial densities (ρ ∼ 10^2 M⊙ pc^−3) and in panel (c) the regions have much lower initial densities (ρ ∼ 10 M⊙ pc^−3). We show the median stellar volume density in each simulation by the individual grey (solid) lines, and the central density (within the half-mass radius) from twenty averaged simulations. The lefthand red symbols (at t = 0 Myr, slightly offset from one another for clarity) are the required initial densities for several nearby star-forming regions if star formation is consistent with a universal initial binary population (Marks & Kroupa 2012; Marks et al. 2014). The corresponding present-day stellar densities are shown by the black points at 1, 3 and 5 Myr, depending on the age of the region. A key to the symbols is provided in Table 1.
Figure 3. Evolution of structure as measured by the Q-parameter in our simulated star-forming regions. In panel (a) the star-forming regions have high initial densities (ρ ∼ 10^4 M⊙ pc^−3), in panel (b) the regions have medium initial densities (ρ ∼ 10^2 M⊙ pc^−3) and in panel (c) the regions have much lower initial densities (ρ ∼ 10 M⊙ pc^−3). We show the evolution of the Q-parameter in each simulation by the individual grey (solid) lines. The boundary between substructured regions and centrally concentrated regions at Q = 0.8 is shown by the horizontal dashed line. The Q-parameters measured in the star-forming regions of interest are shown by the points at 1, 3 and 5 Myr, depending on the age of the region. Where there is an uncertainty associated with the measurement of Q, we draw an arrow in the direction to indicate the possible deviation from the measured value. A key to the symbols is provided in Table 1.
Figure 4. Evolution of structure as measured by the Q-parameter in our simulated star-forming regions versus the relative local density around massive stars compared to the region's median (Σ_LDR). We show values at 0 Myr (plus signs), 1 Myr (open circles) and 3 Myr (crosses). We show the observed values for Taurus (the filled circle), ρ Oph (the filled diamond) and the ONC (the filled square). In panel (a) the star-forming regions have high initial densities (ρ ∼ 10^4 M⊙ pc^−3), in panel (b) the regions have medium initial densities (ρ ∼ 10^2 M⊙ pc^−3) and in panel (c) the regions have much lower initial densities (ρ ∼ 10 M⊙ pc^−3). The boundary between substructured regions and centrally concentrated regions at Q = 0.8 is shown by the horizontal dashed line, and Σ_LDR = 1 (where the median local density around massive stars is equal to the region median) is shown by the vertical dashed line.
Table 1. A summary of the regions with which we compare our N-body simulations. From left to right, the columns show the region name, age, Q-parameter, an alternative determination where applicable, Q_alt (see text for details), the Σ_LDR ratio (if available), the references for Q, Q_alt and Σ_LDR, the observed present-day density of each region as noted by Marks & Kroupa (2012) and King et al. (2012b), ρ_obs., the postulated initial density, ρ_post., from Marks & Kroupa (2012) and Marks et al. (2014) for the binary population of that region to be consistent with the universal primordial binary properties (Kroupa 1995a), and the symbol used in Figs. 2 and 3.
ACKNOWLEDGEMENTS

I am grateful to the referee, Cathie Clarke, for her comments and suggestions which have led to a more interesting paper. I acknowledge support from the Royal Astronomical Society in the form of a research fellowship.
REFERENCES

Adams F. C., 2011, ARA&A, 48, 47
Adams F. C., Proszkow E. M., Fatuzzo M., Myers P. C., 2006, ApJ, 641, 504
Alexander R., Pascucci I., Andrews S., Armitage P., Cieza L., 2013, arXiv:1311.1819
Allison R. J., Goodwin S. P., Parker R. J., Portegies Zwart S. F., de Grijs R., 2010, MNRAS, 407, 1098
Alves de Oliveira C., Moraux E., Bouvier J., Bouy H., 2012, A&A, 539, A151
André P., Men'shchikov A., Bontemps S., Könyves V., Motte F., Schneider N., Didelon P., Minier V., Saraceno P., Ward-Thompson D., et al., 2010, A&A, 518, L102
Armitage P. J., 2000, A&A, 362, 968
Bastian N., Gieles M., Ercolano B., Gutermuth R., 2009, MNRAS, 392, 868
Bonnell I. A., Bate M. R., Clarke C. J., Pringle J. E., 2001, MNRAS, 323, 785
Bressert E., Bastian N., Gutermuth R., Megeath S. T., Allen L., Evans II N. J., Rebull L. M., Hatchell J., Johnstone D., Bourke T. L., Cieza L. A., Harvey P. M., Merin B., Ray T. P., Tothill N. F. H., 2010, MNRAS, 409, L54
Cartwright A., 2009, MNRAS, 400, 1427
Cartwright A., Whitworth A. P., 2004, MNRAS, 348, 589
Cartwright A., Whitworth A. P., 2009, MNRAS, 392, 341
Dale J. E., Ercolano B., Bonnell I. A., 2013, MNRAS, 430, 234
Davies M. B., Adams F. C., Armitage P., Chambers J., Ford E., Morbidelli A., Raymond S. N., Veras D., 2013, arXiv:1311.6816
de Juan Ovelar M., Kruijssen J., Bressert E., Testi L., Bastian N., Cánovas Cabrera H., 2012, A&A, 546, L1
Duchêne G., Kraus A., 2013, ARA&A, 51, 269
Duquennoy A., Mayor M., 1991, A&A, 248, 485
Fischer D. A., Marcy G. W., 1992, ApJ, 396, 178
Gieles M., Moeckel N., Clarke C. J., 2012, MNRAS, 426, L11
Girichidis P., Federrath C., Allison R., Banerjee R., Klessen R. S., 2012, MNRAS, 420, 3264
Goodwin S. P., 2010, Royal Society of London Philosophical Transactions Series A, 368, 851
Goodwin S. P., Whitworth A. P., 2004, A&A, 413, 929
Gouliermis D. A., Hony S., Klessen R. S., 2014, MNRAS, 439, 3775
Heggie D. C., 1975, MNRAS, 173, 729
Hillenbrand L. A., Hartmann L. W., 1998, ApJ, 492, 540
Hills J. G., 1975a, AJ, 80, 809
Hills J. G., 1975b, AJ, 80, 1075
King R. R., Parker R. J., Patience J., Goodwin S. P., 2012a, MNRAS, 421, 2025
King R. R., Goodwin S. P., Parker R. J., Patience J., 2012b, MNRAS, 427, 2636
Köhler R., Leinert C., 1998, A&A, 331, 977
Kraus A. L., Hillenbrand L. A., 2007, ApJ, 662, 413
Kroupa P., 1995a, MNRAS, 277, 1491
Kroupa P., 1995b, MNRAS, 277, 1507
Kroupa P., 1995c, MNRAS, 277, 1522
Kroupa P., 2002, Science, 295, 82
Kroupa P., Petr M. G., McCaughrean M. J., 1999, New Astronomy, 4, 495
Kroupa P., Petr-Gotzens M. G., 2011, A&A, 529, A92
Lada C. J., 2010, Royal Society of London Philosophical Transactions Series A, 368, 713
Lada C. J., Lada E. A., 2003, ARA&A, 41, 57
Larson R. B., 1982, MNRAS, 200, 159
Leinert C., Zinnecker H., Weitzel N., Christou J., Ridgway S. T., Jameson R., Haas M., Lenzen R., 1993, A&A, 278, 129
Mann R. K., Di Francesco J., Johnstone D., Andrews S. M., Williams J. P., Bally J., Ricci L., Hughes A. M., Matthews B. C., 2014, ApJ, 784, 82
Marks M., Kroupa P., 2012, A&A, 543, A8
Marks M., Leigh N., Giersz M., Pfalzner S., Pflamm-Altenburg J., Oh S., 2014, MNRAS, 441, 3503
Maschberger T., Clarke C. J., 2011, MNRAS, 416, 541
Mathieu R. D., 1994, ARA&A, 32, 465
Metchev S. A., Hillenbrand L. A., 2009, ApJS, 181, 62
Moeckel N., Holland C., Clarke C. J., Bonnell I. A., 2012, MNRAS, 425, 450
Neuhäuser R., Forbrich J., 2008, The Corona Australis Star Forming Region, p. 735
Olczak C., Pfalzner S., Eckart A., 2008, A&A, 488, 191
Parker R. J., Bouvier J., Goodwin S. P., Moraux E., Allison R. J., Guieu S., Güdel M., 2011, MNRAS, 412, 2489
Parker R. J., Goodwin S. P., 2012, MNRAS, 424, 272
Parker R. J., Maschberger T., Alves de Oliveira C., 2012, MNRAS, 426, 3079
Parker R. J., Meyer M. R., 2012, MNRAS, 427, 637
Parker R. J., Quanz S. P., 2012, MNRAS, 419, 2448
Parker R. J., Wright N. J., Goodwin S. P., Meyer M. R., 2014, MNRAS, 438, 620
Pecaut M. J., Mamajek E. E., Bubar E. J., 2012, ApJ, 746, 154
Portegies Zwart S. F., McMillan S. L. W., Hut P., Makino J., 2001, MNRAS, 321, 199
Portegies Zwart S. F., Makino J., McMillan S. L. W., Hut P., 1999, A&A, 348, 117
Raghavan D., McMaster H. A., Henry T. J., Latham D. W., Marcy G. W., Mason B. D., Gies D. R., White R. J., ten Brummelaar T. A., 2010, ApJSS, 190, 1
Reggiani M. M., Meyer M. R., 2013, A&A, 553, A124
Reipurth B., Guimarães M. M., Connelley M. S., Bally J., 2007, AJ, 134, 2272
Rosotti G. P., Dale J. E., de Juan Ovelar M., Hubber D. A., Kruijssen J. M. D., Ercolano B., Walch S., 2014, MNRAS, 441, 2094
Sánchez N., Alfaro E. J., 2009, ApJ, 696, 2086
Scally A., Clarke C., 2001, MNRAS, 325, 449
Schmeja S., Klessen R. S., 2006, A&A, 449, 151
Shu F. H., Adams F. C., Lizano S., 1987, ARA&A, 25, 23
Teaching a Machine to Read Maps with Deep Reinforcement Learning

Gino Brunner, Oliver Richter, Yuyi Wang, Roger Wattenhofer
ETH Zurich
arXiv:1711.07479

Abstract
The ability to use a 2D map to navigate a complex 3D environment is quite remarkable, and even difficult for many humans. Localization and navigation is also an important problem in domains such as robotics, and has recently become a focus of the deep reinforcement learning community. In this paper we teach a reinforcement learning agent to read a map in order to find the shortest way out of a random maze it has never seen before. Our system combines several state-of-the-art methods such as A3C and incorporates novel elements such as a recurrent localization cell. Our agent learns to localize itself based on 3D first person images and an approximate orientation angle. The agent generalizes well to bigger mazes, showing that it learned useful localization and navigation capabilities.
Introduction
One of the main success factors of human evolution is our ability to craft and use complex tools. Not only did this ability give us a motivation for social interaction by teaching others how to use different tools, it also enhanced our thinking capabilities, since we had to understand ever more complex tools. Take a map as an example; a map helps us navigate places we have never seen before. However, we first need to learn how to read it, i.e., we need to associate the content of a two-dimensional map with our threedimensional surroundings. With algorithms becoming increasingly capable of learning complex relations, a way to make machines intelligent is to teach them how to use already existing tools. In this paper, we teach a machine how to read a map with deep reinforcement learning.
The agent wakes up in a maze. The agent's view is an image: the maze rendered from the agent's perspective, like a dungeon in a first person video game. This rendered image is provided by the DeepMind Lab environment . The agent can be controlled by a human, or as in our case, by a complex deep reinforcement learning architecture. 1 The agent can move (forward, backward, left, right) and rotate (left, right), and its view image will change accordingly. In addition, the agent gets to see a map of the maze, also an image, as can be seen in Figure 1. One location on the map is marked with an "X" -the agent's target. The crux is that the agent does not know where on the map it currently is. Several locations on the map might correspond well with the current view. Thus the agent needs to move around to learn its position and then move to the target, as illustrated in Figures 6 and 8. We do equip the agent with an approximate orientation angle, i.e., the agent roughly knows the direction it is moving or looking. In the map, up is always north. During training the agent learns which approximate orientation corresponds to north.
A complex multi-stage task, such as navigating a maze with the help of a map, can be naturally decomposed into several subtasks: (i) The agent needs to observe its 3D environment and compare it to the map to determine its most likely position. (ii) The agent needs to understand the map, or in our case associate symbols on the map with rewards and thereby gain an understanding of what a wall is, what navigable space is, and what the target is. (iii) Finally the agents needs to learn how to follow a plan in order to reach the target.
Our contribution is as follows: We present a novel modular reinforcement learning architecture that consists of a reactive agent and several intermediate subtask modules. Each of these modules is designed to solve a specific subtask. The modules themselves can contain neural networks or alternatively implement exact algorithms or heuristics. Our presented agent is capable of finding the target in random mazes roughly three times the size of the largest mazes it has seen during training.
Further contributions include:
• The Recurrent Localization Cell that outputs a location probability distribution based on an estimated stream of visible local maps.
• A simple mapping module that creates a visible local 2D map from 3D RGB input. The mapping module is robust, even if the agent's compass is inaccurate.
Related Work
Reinforcement learning in relation to AI has been studied since the 1950s (Minsky 1954). Important early work on reinforcement learning includes the temporal difference learning method by Sutton (1984; 1988), which is the basis for actor-critic algorithms (Barto, Sutton, and Anderson 1983) and Q-learning techniques (Watkins 1989; Watkins and Dayan 1992). First works using artificial neural networks for reinforcement learning include (Williams 1992) and (Gullapalli 1990). For an in-depth overview of reinforcement learning we refer the interested readers to (Kaelbling, Littman, and Moore 1996), (Sutton and Barto 1998) and (Szepesvári 2010). The current deep learning boom was started by, among other contributions, the backpropagation algorithm (Rumelhart et al. 1988) and advances in computing power and GPU frameworks. However, deep learning could not be applied effectively to reinforcement learning until recently. Mnih et al. (2015) introduced the Deep-Q-Network (DQN) that uses experience replay and target networks to stabilize the learning process. Since then, several extensions to the DQN architecture have been proposed, such as the Double Deep-Q-Network (DDQN) (van Hasselt, Guez, and Silver 2016) and the dueling network architecture (Wang et al. 2016). These networks are based on using replay buffers to stabilize learning, such as prioritized experience replay (Schaul et al. 2015). The state-of-the-art A3C algorithm (Mnih et al. 2016) relies on asynchronous actor-learners to stabilize learning. In our system, we use A3C learning on a modified network architecture to train our reactive agent and the localization module in an on-policy manner. We also make use of (prioritized) replay buffers to train our agent off policy.
A major challenge in reinforcement learning are environments with delayed or sparse rewards. An agent that never gets a reward can never learn good behavior. Thus Jaderberg et al. (2016) and Mirowski et al. (2016) introduced auxiliary tasks that let the agent learn based on intermediate intrinsic pseudo-rewards, such as predicting the depth from a 3D RGB image, while simultaneously trying to solve the main task, e.g., finding the exit in a 3D maze. The policies learned by the auxiliary tasks are not directly used by the agent, but solely serve the purpose of helping the agent learn better representations which improves its performance on the main task. The idea of auxiliary tasks is inspired by prior work on temporal abstractions, such as options (Sutton, Precup, and Singh 1999), whose focus was on learning temporal abstractions to improve high-level learning and planning. In our work we introduce a modularized architecture that incorporates intermediate subtasks, such as localization, local map estimation and global map interpretation. In contrast to (Jaderberg et al. 2016), our reactive agent directly uses the outputs of these modules to solve the main task. Note that we use an auxiliary task inside our localization module to improve the local map estimation. Kulkarni et al. (2016) introduced a hierarchical version of the DQN to tackle the challenge of delayed and sparse rewards. Their system operates at different temporal scales and allows the definition of goals using entity relations. The policy is learned in such a way to reach these goals. We use a similar approach to make our agent follow a plan, such as, "go north".
Mapping and localization has been extensively studied in the domain of robotics (Thrun, Burgard, and Fox 2005). A robot creates a map of the environment from sensory input (e.g., sonar or LIDAR) and then uses this map to plan a path through the environment. Subsequent works have combined these approaches with computer vision techniques (Fuentes-Pacheco, Ascencio, and Rendón-Mancha 2015) that use RGB(-D) images as input. Machine learning techniques have been used to solve mapping and planning separately, and later also tackled the joint mapping and planning problem (Elfes 1989). Instead of separating mapping and planning phases, reinforcement learning methods aimed at directly learning good policies for robotic tasks, e.g., for learning human-like motor skills (Peters and Schaal 2008).
Recent advances in deep reinforcement learning have spawned impressive work in the area of mapping and localization. The UNREAL agent (Jaderberg et al. 2016) uses auxiliary tasks and a replay buffer to learn how to navigate a 3D maze. Mirowski et al. (2016) came up with an agent that uses different auxiliary tasks in an online manner to understand if navigation capabilities manifest as a byproduct of solving a reinforcement learning problem. Zhu et al. (2017) tackled the problems of generalization across tasks and data inefficiency. They use a realistic 3D environment with physics engine to gather training data efficiently. Their model is capable of navigating to a visually specified target. In contrast to other approaches, they use a memoryless feed-forward model instead of recurrent models. Gupta et al. (2017) simulated a robot that navigates through a real 3D environment. They focus on the architectural problem of learning mapping and planning in a joint manner, such that the two phases can profit from knowing each other's needs. Their agent is capable of creating an internal 2D representation of the local 3D environment, similar to our local visible map. In our work a global map is given, and the agent learns to interpret and read that map to reach a certain target location. Thus, our agent is capable of following complicated long range trajectories in an approximately shortest path manner. Furthermore, their system is trained in a fully supervised manner, whereas our agent is trained with reinforcement learning. Bhatti et al. (2016) augment the standard DQN with semantic maps in the VizDoom (Kempka et al. 2016) environment. These semantic maps are constructed from 3D RGB-D input, and they employ techniques such as standard computer vision based object recognition and SLAM. They showed that this results in better learned policies. The task of their agent is to eliminate as many opponents as possible before dying. In contrast, our agent needs to escape from a complex maze. Furthermore, our environments are designed to provide as little semantic information as possible to make the task more difficult for the agent; our agent needs to construct its local visible map based purely on the shape of its surroundings.
Architecture
Many complex tasks can be divided into easier intermediate tasks which when all solved individually solve the complex task. We use this principle and apply it to neural network architecture design. In this section we first introduce our concept of modular intermediate tasks, and then discuss how we implement the modular tasks in our map reading architecture.
Figure 1: Architecture overview and interplay between the four modules. α̂ is the discretized angle, a_{t−1} is the last action taken, r_{t−1} is the last reward received, {p^{loc}_i}_{i=1}^N is the estimated location probability distribution over the N possible discrete locations, H^{loc} is the entropy of the estimated location probability distribution, STTD is the short term target direction suggested by the map interpretation network, V is the estimated state value and π is the policy output from which the next action a_t is sampled.
Modular Intermediate Tasks
An intermediate task module can be any information processing unit that takes as input either sensory input and/or the output of other modules. A module is defined and designed after the intermediate task it solves and can consist of trainable and hard coded parts. Since we are dealing with neural networks, the output and therefore the input of a module can be erroneous. Each module adjusts its trainable parameters to reduce its error independent of other modules. We achieve this by stopping error back-propagation on module boundaries. Note that this separation has some advantages and drawbacks:
• Each module performance can be evaluated and debugged individually.
• Small intermediate subtask modules have short credit assignment paths, which reduces the problem of exploding and vanishing gradients during back-propagation.
• Modules cannot adjust their output to fit the input needs of the next module. This has to be achieved through interface design, i.e., intermediate task specification.
Our neural network architecture consists of four modules, each dedicated to a specific subtask. We first give an overview of the interplay between the modules before describing them in detail in the following sections. The architecture overview is sketched in Figure 1.
The first module is the visible local map network; it takes the raw visual input from the 3D environment and creates for each frame a two dimensional map excerpt of the currently visible surroundings. The second module, the recurrent localization cell, takes the stream of visible local map excerpts and integrates it into a local map estimation. This local map estimation is compared to the global map to get a probability distribution over the discretized possible locations. The third module is called the map interpretation network; it learns to interpret the global map and outputs a short term target direction for the estimated position. The last module is a reactive agent that learns to follow the estimated short term target direction to ultimately find the exit of the maze. We allow our agent to have access to a discretized angle α̂ describing the direction it is facing, comparable to a robot having access to a compass. Furthermore, we do not limit ourselves to completely unsupervised learning and allow the agent to use a discretized version of its actual position during training. This could be implemented as a robot training on the network with the help of a GPS signal. The robot could train as long as the accuracy of the GPS signal is below a certain threshold and act on the trained network as soon as the GPS signal gets inaccurate or totally lost. We leave such a practical implementation of our algorithm to future work and focus here on the algorithmic structure itself.

Figure 2: The visible local map network: The RGB pixel input is passed through two convolutional neural network (CNN) layers and a fully connected (FC) layer before being concatenated to the discretized angle α̂ and further processed by fully connected layers and a gating operation.
We now describe each module architecture individually before we discuss their joint training in Section 3.6. If not specified otherwise, we use rectified linear unit activations after each layer.
Visible Local Map Network
The visible local map network preprocesses the raw visual RGB input from the environment through two convolutional neural network layers followed by a fully connected layer. We adapted this preprocessing architecture from (Jaderberg et al. 2016). The thereby generated features are concatenated to a 3-hot discretized encoding α̂ of the orientation angle α, i.e., we input the angle as an n-dimensional vector where each dimension represents a discrete state of the angle, with n = 30. We set the three vector components that represent the discrete angle values closest to the actual angle to one while the remaining components are set to zero, e.g., α̂ = [0 . . . 0 1 1 1 0 . . . 0]. We used a 3-hot instead of a 1-hot encoding to smooth the input. Note that this encoding has an average quantization error of 6 degrees.
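A minimal sketch of this 3-hot encoding is given below; the 30-bin discretisation follows the text, while the exact bin boundaries and the wrap-around handling at 360 degrees are assumptions.

```python
import numpy as np

def three_hot_angle(alpha_deg, n_bins=30):
    """Encode an orientation angle as a 3-hot vector of length n_bins.

    The closest bin and its two neighbours (with wrap-around) are set to one;
    this is a sketch of the encoding described in the text.
    """
    bin_width = 360.0 / n_bins
    centre = int(round(alpha_deg / bin_width)) % n_bins
    vec = np.zeros(n_bins)
    for offset in (-1, 0, 1):
        vec[(centre + offset) % n_bins] = 1.0
    return vec

print(three_hot_angle(47.0))   # three consecutive ones around bin 4
```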
Figure 3: Sketch of the information flow in the recurrent localization cell. The last egomotion estimation s_{t−1}, the discretized angle α̂, the last action a_{t−1} and reward r_{t−1} are passed through two fully connected (FC) layers and combined with a two dimensional convolution between the former local map estimation LM^{est}_{t−1} and the current visible local map input to get the new egomotion estimation s_t. This egomotion estimation is used to shift the previously estimated local map LM^{est}_{t−1} and the previous map feedback local map LM^{mfb}_{t−1}. A weighted and clipped combination of these local map estimations, LM^{est+mfb}_{t−1}, is convolved with the full map to get the estimated location probability distribution {p^{loc}_i}_{i=1}^N. Recurrent connections are marked by empty arrows.

The discretized angle and preprocessed visual features are passed through a fully connected layer to get an intermediate representation from which two things are estimated:
1. A reconstruction of the map excerpt that corresponds to the current visual input.
2. The current field of view, which is used to gate the estimated map excerpt such that only estimates which lie in the line of sight make it into the visible local map. This gating is crucial to reduce noise in the visible local map output.
See Figure 2 for a sketch of the visible local map network architecture.
Recurrent Localization Cell
Moving around in the environment, the agent generates a stream of visible local map excerpts like the output in Figure 2 or the visible local map input Ṽ in Figure 3. The recurrent localization cell then builds an egocentric local map out of this stream and compares it to the actual map to estimate the current position. The agent has to predict its egomotion to shift the egocentric estimated local map accordingly. We refer to Figure 3 for a sketch of the architecture described hereafter.
Let M be the current map, Ṽ the output of the visible local map network, α̂ the discretized 3-hot encoded orientation angle, a_{t−1} the 1-hot encoded last action taken, r_{t−1} the extrinsic reward received by taking action a_{t−1}, LM^{est}_t the estimated local map at time step t, LM^{mfb}_t the map feedback local map at time step t, LM^{est+mfb}_t the estimated local map with map feedback at time step t, s_t the estimated necessary shifting (or estimated egomotion) at time step t and {p^{loc}_i}_{i=1}^N the discrete estimated location probability distribution. Then we can describe the functionality of the recurrent localization cell by the following equations:
s_t = softmax( f(s_{t−1}, α̂, a_{t−1}, r_{t−1}) + LM^{est}_{t−1} * Ṽ )
LM^{est}_t = [ LM^{est}_{t−1} * s_t + Ṽ ]^{+0.5}_{−0.5}
LM^{est+mfb}_t = [ LM^{est}_t + λ · LM^{mfb}_{t−1} * s_t ]^{+0.5}_{−0.5}
{p^{loc}_i}_{i=1}^N = softmax( m * LM^{est+mfb}_t )
LM^{mfb}_t = Σ_{i=1}^N p^{loc}_i · g(m, i)
Here, f(·) is a two layer feed forward neural network, * denotes a two dimensional discrete convolution with stride one in both dimensions, [·]^{+0.5}_{−0.5} denotes a clipping to [−0.5, +0.5], λ is a trainable map feedback parameter and g(m, i) extracts from the map m the local map around location i.
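For illustration, one step of the recurrent localization cell can be prototyped with numpy and scipy as below. The learned network f(·) is stubbed out (it would add a learned bias to the egomotion scores), scipy's 2D cross-correlation stands in for the convolution, and the padding mode, map resolution and map feedback weight λ are assumptions; rebuilding the map feedback local map LM^{mfb} from {p^{loc}_i} and g(m, i) is omitted here.

```python
# Minimal numpy/scipy sketch of one recurrent localization cell step.
# f(.) is the learned two-layer network from the text; it is stubbed out here.
import numpy as np
from scipy.signal import correlate2d

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def localization_step(lm_est, lm_mfb, v_tilde, full_map, lam=0.1):
    # Egomotion estimate: correlate previous local map with the new visible map
    shift_scores = correlate2d(lm_est, v_tilde, mode='same')   # + f(...) in the full model
    s_t = softmax(shift_scores.ravel()).reshape(shift_scores.shape)

    # Shift the previous estimates by the egomotion distribution and fold in the new input
    lm_est_new = np.clip(correlate2d(lm_est, s_t, mode='same') + v_tilde, -0.5, 0.5)
    lm_combined = np.clip(lm_est_new + lam * correlate2d(lm_mfb, s_t, mode='same'),
                          -0.5, 0.5)

    # Compare against the global map to get a location probability distribution
    loc_scores = correlate2d(full_map, lm_combined, mode='same')
    p_loc = softmax(loc_scores.ravel())
    return s_t, lm_est_new, p_loc

# Toy usage with assumed sizes: 15x15 local maps, 63x63 global map
lm = np.zeros((15, 15)); mfb = np.zeros((15, 15))
v = np.zeros((15, 15)); v[7, 7:12] = 0.5                 # a wall segment seen ahead
world = np.random.uniform(-0.5, 0.5, size=(63, 63))
s, lm_new, p = localization_step(lm, mfb, v, world)
```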
Map Interpretation Network
The goal of the map interpretation network is to find rewarding locations on the map and construct a plan to get to these locations. We achieve this in three stages: First, the network passes the map through two convolutional layers followed by a rectified linear unit activation to create a 3-channel reward map. The channels are trained (as discussed in Section 3.6) to represent wall locations, navigable locations and target locations respectively. This reward map is then area averaged, rectified and passed to a parameter free 2D shortest path planning module which outputs for each of the discrete locations on the map a distribution over {North, East, South, West}, i.e., a short term target direction (STTD), as well as a measure of distance to the nearest target location. This plan is then multiplied with the estimated location probability distribution to get the smooth STTD and target distance of the currently estimated location. Note that planning for each possible location and querying the plan with the full location probability distribution helps to resolve the exploitation-exploration dilemma of the reactive agent:
• An uncertain location probability distribution close to the uniform distribution will result in an uncertain STTD distribution over {North, East, South, West}, thereby encouraging exploration.
• A location probability distribution over locations with similar STTD will accumulate these similarities and result in a clear STTD for the agent, even though the location might still be unclear (exploitation).
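A parameter-free planner of the kind described above can be realised as a breadth-first search from the target cells over navigable cells; the minimal sketch below returns, for every cell, the distance to the nearest target and a short term target direction. The grid encoding and tie-breaking are assumptions.

```python
# Minimal sketch of the parameter-free 2D shortest path planning module.
# Inputs: boolean grids marking navigable cells and target cells (assumed encoding).
from collections import deque

DIRS = {'North': (-1, 0), 'East': (0, 1), 'South': (1, 0), 'West': (0, -1)}

def plan(navigable, targets):
    h, w = len(navigable), len(navigable[0])
    dist = [[float('inf')] * w for _ in range(h)]
    sttd = [[None] * w for _ in range(h)]
    queue = deque((r, c) for r in range(h) for c in range(w) if targets[r][c])
    for r, c in queue:
        dist[r][c] = 0
    while queue:
        r, c = queue.popleft()
        for name, (dr, dc) in DIRS.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and navigable[nr][nc] \
                    and dist[nr][nc] > dist[r][c] + 1:
                dist[nr][nc] = dist[r][c] + 1
                # from (nr, nc) the short term target direction points back towards (r, c)
                sttd[nr][nc] = {'North': 'South', 'South': 'North',
                                'East': 'West', 'West': 'East'}[name]
                queue.append((nr, nc))
    return dist, sttd
```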
Reactive Agent and Intrinsic Reward
As mentioned, the reactive agent faces two partially contradicting goals: following the STTD (exploitation) and improving the localization by generating information rich visual input (exploration), e.g., no excessive staring at walls. The agent learns this trade off through reinforcement learning, i.e., by maximizing the expected sum of rewards. The rewards we provide here are extrinsic rewards from the environment (negative reward for running into walls, positive reward for finding the target) as well as intrinsic rewards linked to the short term goal inputs of the reactive agent. These short term goal inputs are the STTD distribution over {North, East, South, West} and the measure of distance to the nearest target location from the map interpretation network as well as the normalized entropy H loc of the discrete location probability distribution {p loc i } N i=1 . H loc represents a measure of location uncertainty which is linked to the need for exploration.
The intrinsic reward consists of two parts to encourage both exploration and exploitation. The exploration intrinsic reward I explor t in each timestep t is the difference in location probability distribution entropy to the previous timestep:
I^{explor}_t = H^{loc}_{t−1} − H^{loc}_t
Note that this reward is positive if and only if the location probability distribution entropy decreases, i.e., when the agent gets more certain about its position.
The exploitation intrinsic reward should be a measure of how well the egomotion of the agent aligns with the STTD. For this we calculate an approximate two dimensional egomotion vector e t from the egomotion probability distribution estimation s t . Similarly we calculate a STTD vector d t−1 from the STTD distribution over {N orth, East, South, W est} of the previous timestep. We calculate the exploitation intrinsic reward I exploit t as dot product between the two vectors:
I^{exploit}_t = e_t^T · d_{t−1}
Note that this reward is positive if and only if the angle difference between the two vectors is no bigger than 90 degrees, i.e., if the estimated egomotion was in the same direction as suggested by the STTD in the timestep before.
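Both intrinsic reward terms are cheap to compute; the sketch below assumes the four short term target directions map to fixed unit vectors and that the egomotion vector is the probability-weighted mean of the discrete shift offsets.

```python
import numpy as np

# Unit vectors for the short term target directions (x = east, y = north); assumed layout.
DIR_VECS = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]], dtype=float)  # N, E, S, W

def exploration_reward(h_loc_prev, h_loc_now):
    # Positive iff the agent became more certain about its position.
    return h_loc_prev - h_loc_now

def exploitation_reward(shift_probs, shift_offsets, sttd_probs_prev):
    # Expected egomotion vector from the shift (egomotion) distribution ...
    e_t = (shift_probs.reshape(-1, 1) * shift_offsets).sum(axis=0)   # offsets: (K, 2)
    # ... dotted with the previous step's short term target direction vector.
    d_prev = sttd_probs_prev @ DIR_VECS
    return float(e_t @ d_prev)
```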
As input to the reactive agent we concatenate the discretized 3-hot angle α̂, the last extrinsic reward and the location probability distribution entropy H^{loc} to the STTD distribution and the estimated target distance. The agent itself is a simple feed-forward network consisting of two fully connected layers with rectified linear unit activation followed by a fully connected layer for the policy and a fully connected layer for the estimated state value respectively. The agent's next action is sampled from the softmax-distribution over the policy outputs.
Training Losses
To train our agent, we use a combination of on-policy losses, where the data is generated from rollouts in the environment, and off-policy losses, where we sample the data from a replay memory. More specifically, the total loss is the sum of the four module specific losses:

1. L_vlm, the off-policy visible local map loss
2. L_loc, the on-policy localization loss
3. L_rm, the off-policy reward map loss and
4. L_a, the on-policy reactive agent's acting loss

We train our agent as asynchronous advantage actor critic, or A3C, with additional losses, similar to DeepMind's UNREAL agent (Jaderberg et al. 2016):
In each training iteration, every thread rolls out up to 20 steps in the environment and accumulates the localization loss L loc and acting loss L a . For each step, an experience frame is pushed to an experience history buffer of fixed length. Each experience frame contains all inputs the network requires as well as the current discretized true position. From this experience history, frames are sampled and inputs replayed through the network to calculate the visible local map loss L vlm and the reward map loss L rm . We now describe each loss in more detail.
The output Ṽ of the visible local map network is trained to match the visible excerpt of the map V, constructed from the discretized location and angle. In each training iteration 20 experience frames are uniformly sampled from the experience history and the visible local map loss is calculated as the sum of L2 distances between visible local map outputs Ṽ_k and targets V_k:
L_{vlm} = Σ_{k∈S} ||Ṽ_k − V_k||_2

where S denotes the set of sampled frame indices.
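A minimal sketch of this sampled loss, assuming a list-based experience history and a callable that reproduces the visible local map output for a stored frame, could look as follows.

```python
import numpy as np

def visible_local_map_loss(replay_buffer, predict_fn, n_samples=20, rng=np.random):
    # replay_buffer: list of frames with fields 'inputs' and 'target_map' (assumed layout)
    # predict_fn: maps a frame's inputs to the estimated visible local map
    idx = rng.choice(len(replay_buffer), size=n_samples, replace=False)
    loss = 0.0
    for k in idx:
        frame = replay_buffer[k]
        v_hat = predict_fn(frame['inputs'])
        loss += np.linalg.norm(v_hat - frame['target_map'])   # L2 distance per sampled frame
    return loss
```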
Our localization loss L_loc is trained on the policy rollouts in the environment. For each step, we compare the estimated position to the actual position in two ways, which results in a cross entropy location loss L_{loc,xent} and a distance location loss L_{loc,d}. The cross entropy location loss is the cross entropy between the location probability distribution {p^{loc}_i}_{i=1}^N and a 1-hot encoding of the actual position. The distance loss L_{loc,d} is calculated at each step as the L2 distance between the actual two dimensional cell position coordinates c_{pos} and the estimated centroid of all possible cells i weighted by their corresponding probability p^{loc}_i:
L_{loc,d} = || c_{pos} − Σ_{i=1}^N p^{loc}_i · c_i ||_2
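In code, this distance loss is simply the error of the probability-weighted expected position; the sketch below assumes cell positions are given as 2D centre coordinates.

```python
import numpy as np

def location_distance_loss(p_loc, cell_centres, true_pos):
    # p_loc: (N,) location probability distribution; cell_centres: (N, 2) coordinates.
    expected_pos = p_loc @ cell_centres                       # probability-weighted centroid
    return float(np.linalg.norm(true_pos - expected_pos))     # L2 distance to the true position
```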
In addition to training the location estimation directly we also assign an auxiliary local map loss L_{loc,lm} to help with the local map construction. We calculate the local map loss only once per training iteration as the L2 distance between the last estimated local map LM^{est} and the actual local map at that point in time.
The goal of the reward map loss L rm is to have the three channels of the reward map represent wall locations, free space locations and target locations respectively. To do this, we leverage the setting that running into a wall gives a negative extrinsic reward, moving in open space gives no extrinsic reward and finding the target gives a positive extrinsic reward. Therefore the problem can be transformed into estimating an extrinsic reward. Each training iteration we sample 20 frames from the experience history. This sampling is independent from the visible local map loss sampling and skewed to have in expectation equally many frames with positive, negative and zero extrinsic reward. For each frame, the frames map is passed through the convolution layers of the map interpretation network to create the corresponding reward map while the visual input and localization state saved in the frame are fed through the network to get the estimated location probability distribution. The reward map loss is the cross entropy prediction error of the reward at the estimated position.
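The skewed sampling can be sketched as below; bucketing frames by the sign of the extrinsic reward is an assumed implementation detail.

```python
import numpy as np

def sample_balanced_frames(replay_buffer, n_samples=20, rng=np.random):
    # Split the history into frames with negative, zero and positive extrinsic reward
    # and draw (in expectation) equally many frames from each non-empty bucket.
    buckets = {'neg': [], 'zero': [], 'pos': []}
    for frame in replay_buffer:
        r = frame['extrinsic_reward']
        buckets['neg' if r < 0 else 'pos' if r > 0 else 'zero'].append(frame)
    non_empty = [b for b in buckets.values() if b]
    frames = []
    for _ in range(n_samples):
        bucket = non_empty[rng.randint(len(non_empty))]
        frames.append(bucket[rng.randint(len(bucket))])
    return frames
```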
Our reactive agent's acting loss is equivalent to the A3C learning described by Mnih et al. (2016). We also adopted an action repeat of 4 and a frame rate of 15 fps. The whole network is trained by RMSprop gradient descent with gradient back propagation stopped at module boundaries, i.e., each module is only trained on its module specific loss.
Environment and Results
To evaluate our architecture we created a training and test set of mazes with the corresponding black and white maps in the DeepMind Lab environment. The mazes are quadratic grid mazes with each maze cell being either a wall, an open space, the target or the spawn position. The training set consists of 100 mazes of different sizes; 20 mazes each in the sizes 5x5, 7x7, 9x9, 11x11 and 13x13 maze cells. The test set consists of 900 mazes; 100 in each of the sizes 5x5, 7x7, 9x9, 11x11, 13x13, 15x15, 17x17, 19x19 and 21x21. Note that the outermost cells in the mazes are always walls, therefore the maximal navigable space of a 5x5 maze is 3x3 maze cells. Thus the navigable space for the biggest test mazes is roughly 3 times larger than for the biggest training mazes.

Figure 5: All the results of the (at most 100) successful tests for each maze size. Every single test is represented by an "x". The line connects the arithmetic averages of each maze size. The distance between origin and target grows linearly with maze size, as does the number of steps.

For the localization, we used a location cell granularity 3 times finer than the maze cells, which results in a total of N = 63x63 = 3969 discrete location states on the biggest 21x21 mazes. We train our agent starting on small mazes and increase the maze sizes as the agent gets better. More specifically we use 16 asynchronous agent training threads from which we start 8 on the smallest (5x5) training mazes while the other training threads are started 2 each on the other sizes (7x7, 9x9, 11x11 and 13x13). This prevents the visible local map network from overfitting on the small 5x5 mazes. The thread agents are placed into a randomly sampled maze of their currently associated maze size and try to find the exit, while counting their steps. A step is one interaction with the environment, i.e., sampling an action from the agent's policy π and receiving the corresponding next visual input, discretized angle and extrinsic reward from the environment. A step is not the same as a location or maze grid cell; as agents accelerate, there is no direct correlation between steps and actual walked distance. We consider each sampled maze an episode start. The episode ends successfully if the agent manages to find the target and the steps needed are stored. If the agent does not find the exit in 4500 steps, the episode ends as not successful. After an episode ends, a new episode is started, i.e., a new maze is sampled. Note that in this setting the agent is always placed in a newly sampled maze and not in the same maze as in (Jaderberg et al. 2016) and (Mirowski et al. 2016).
For each thread we calculate a moving average of steps needed to end the episodes. Once this moving average falls below a maze size specific threshold, the thread is transferred to train on mazes of the next bigger size. Once a thread's moving average of steps needed in the biggest training mazes (13x13) falls below the threshold, the thread is stopped and its training is considered successful. Once all threads reach this stage, the overall training is considered successful and the agent is fully trained. We calculate the moving average over the last 50 episodes and use 60, 100, 140, 180 and 220 steps as threshold for the maze sizes 5x5, 7x7, 9x9, 11x11 and 13x13, respectively. Figure 4 shows the training performance of 8 actor threads. One can see that the agents sometimes overfit their policies which results in temporarily decreased performance even though the maze size did not increase. In the end however, all threads reach good performance.
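The per-thread curriculum logic just described can be sketched as follows (class and method names are ours; the window size and thresholds are the values quoted above).

```python
from collections import deque

SIZES      = [5, 7, 9, 11, 13]                       # training maze sizes
THRESHOLDS = {5: 60, 7: 100, 9: 140, 11: 180, 13: 220}  # step thresholds from the text
WINDOW     = 50                                      # moving-average window (episodes)

class ThreadCurriculum:
    def __init__(self):
        self.size_idx = 0
        self.history = deque(maxlen=WINDOW)
        self.done = False

    def current_size(self):
        return SIZES[self.size_idx]

    def report_episode(self, steps_needed):
        # Called once per finished episode; promotes the thread to bigger mazes
        # once its moving average of steps falls below the size-specific threshold.
        self.history.append(steps_needed)
        if len(self.history) < WINDOW:
            return
        avg = sum(self.history) / len(self.history)
        if avg < THRESHOLDS[self.current_size()]:
            if self.current_size() == SIZES[-1]:
                self.done = True              # thread finished training successfully
            else:
                self.size_idx += 1            # move to the next bigger maze size
                self.history.clear()
```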
The trained agent is tested on the 900 test set mazes; the number of required steps per maze size is plotted in Figure 5. We stop a test after 4,500 steps, but even for the biggest test mazes (21x21) the agent found more than 90% of the targets within these 4,500 steps. See Table 1 for the percentage of exits found in all maze sizes.

Table 1: Percentage of targets found in the test mazes. Up to size 9x9 the agent always finds the target. More interestingly, the agent is able to find more than 90% of the targets in mazes that are bigger than any maze it has seen during training.

Maze size:     5x5   7x7   9x9   11x11  13x13  15x15  17x17  19x19  21x21
Targets found: 100%  100%  100%  99%    99%    98%    93%    93%    91%

Figure 6: Example trajectories walked by the agent. Note that the agent walks close to the shortest path, and its continuous localization and planning lets the agent find the path to the target even after it took a wrong turn.

Figure 7: Comparison of our agent (blue lines) to an agent that has perfect position information and an optimal short term target direction input (red lines), plotted as maze width vs. steps needed. The solid lines count all steps (turns and moves). The solid blue line is the same as the average line of Figure 5. The dashed lines do not count the steps in which the agent turns. The figure shows that the overhead is mostly because of turning, as our agent needs to "look around" to localize itself.
If the agent finds the exit, it does so in an almost shortest-path manner, as can be seen in Figure 6. However, the agent needs a considerable number of steps to localize itself. To evaluate this localization overhead, we trained an agent consisting solely of the reactive agent module, with access to the perfect location and the optimal short term target direction, and plotted its average performance on the test set in Figure 7. The figure shows a large gap between the full agent and the agent with access to the perfect position. This is due to turning actions, which the full agent performs to localize itself, i.e., it continuously needs to look around to know where it is. For the localization at the beginning of an episode, the agent also mainly relies on turning, as can be seen in the four example frames in Figure 8.
Conclusion
We have presented a deep reinforcement learning agent that can localize itself on a 2D map based on observations of its 3D surroundings. The agent manages to find the exit in mazes with high success rate, even in mazes substantially larger than it has ever seen during training. The agent often finds the shortest path, showing that the agent can continuously retain a good localization. The architecture of our system is built in a modular fashion. Each module deals with a subtask of the maze problem and is trained in isolation. This modularity allows for a structured architecture design, where a complex task is broken down into subtasks, and each subtask is then solved by a module. Modules consist of general architectures, e.g., MLPs, or more task-specific networks such as our recurrent localization cell. It is also possible to use deterministic algorithm modules, such as in our shortest path planning module. Architecture design is aided by the possibility to easily replace each module by ground truth values, if available, to find sources of bad performance.
Our agent is designed for a specific task. We plan to make our modular architecture more general and apply it to other tasks, such as playing 3D games. Since modules can be swapped out and arranged differently, it would be interesting to equip an agent with many modules and let it learn which module to use in which situation.
A Visible Local Map Network Implementation Details
The visible local map network passes the RGB visual input through two convolutional layers and a fully connected layer to extract visual features. These visual features are concatenated to the discretized angle input α and passed through another fully connected layer to get an intermediate representation from which a local map excerpt and the currently visible field are estimated. The estimated map excerpt is constructed by passing the intermediate representation through a fully connected layer with a clipping nonlinearity that clips the output to [−0.5, 0.5]. The visible field estimation is achieved as well by passing the intermediate representation through a fully connected layer, but here we use a rectified pseudo-sigmoidal nonlinearity such that each output component lies within [0, 1]. We achieve this rectified pseudo-sigmoidal activation by clipping each output to [−0.5, 0.5] and then adding 0.5. We use this rectified pseudo-sigmoidal activation because it is able to fully close (0) and open (1) the estimated gate. To get the visible local map estimation we multiply the estimated map excerpt component-wise with the visible field estimation, i.e., we gate the estimated map excerpt with the visible field estimation. Note that we do not directly train the network to estimate the visible field, but rather use this term to give a logical intuition why the gating operation is effective.
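To make the gating concrete, here is a minimal NumPy sketch of the clipping and rectified pseudo-sigmoidal activations and of the component-wise gating described above; the function names are ours and the fully connected layers producing the two logit maps are omitted.

```python
import numpy as np

def clip_half(x):
    # Clipping nonlinearity used for the map excerpt: output in [-0.5, 0.5].
    return np.clip(x, -0.5, 0.5)

def rectified_pseudo_sigmoid(x):
    # Clip to [-0.5, 0.5] and shift by 0.5, so the gate can fully close (0) or open (1).
    return np.clip(x, -0.5, 0.5) + 0.5

def visible_local_map(map_excerpt_logits, visible_field_logits):
    # Gate the estimated map excerpt with the estimated visible field (component-wise).
    return clip_half(map_excerpt_logits) * rectified_pseudo_sigmoid(visible_field_logits)
```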
B Recurrent Localization Cell Implementation Details
We estimate the egomotion s_t as a discrete probability distribution over a 3x3 grid. The grid cells represent the estimated probability that the agent moved North-West, North, North-East and so on. The cell in the middle of the 3x3 grid represents the estimated probability that the agent stayed in place. We get a rough estimate of the egomotion by using the visible local map input as a two-dimensional convolution filter and shifting it over the previously estimated local map. This rough estimate is fine-tuned by the outputs of a small feed forward neural network which takes as inputs the previously estimated egomotion probability distribution (s_{t−1}), the current discretized angle (α), the last action taken (a_{t−1}) and the last extrinsic reward received (r_{t−1}). We use a softmax over the estimated egomotion logits to get the estimated egomotion probability distribution s_t. We then use the estimated egomotion probability distribution as a two-dimensional convolution filter to shift the previously estimated local map (LM^est_{t−1}) and a map feedback (LM^mfb_{t−1}) from the previously estimated position. We add the new visible local map to the shifted previously estimated map and clip it to the range [−0.5, +0.5] to get the new local map estimation LM^est_t. The shifted map feedback is weighted by a trainable parameter λ and added to the new local map estimation to get the local map estimation with map feedback LM^{est+mfb}_t, which is again clipped to [−0.5, +0.5]. Note that we differentiate between the estimated local map LM^est_t, which is recurrently passed to the next step, and the estimated local map with map feedback LM^{est+mfb}_t, which is only used for the immediate localization. We do not include the map feedback directly into the estimated local map since otherwise a map feedback from an incorrect position could alter the construction of the estimated local map. The map feedback is merely used to complete the estimated local map.
We rasterize and scale down the full map to fit the desired location granularity and range, i.e., we scale it to have N discrete cells with values in the range [−0.5, +0.5]; −0.5 representing black, +0.5 representing white. To localize the agent, we then simply use the estimated local map with map feedback as a two-dimensional convolution filter and slide it over the zero-padded rasterized map to get, for each possible location cell, the correlation of the surrounding local map with the local map estimation. We use a softmax over the N correlation outputs to get the location probability distribution {p^loc_i}_{i=1}^N. Finally, the feedback from the map is extracted for the next localization step: we get for each location cell in the rasterized map the corresponding local map surrounding it and sum up these N local maps, each weighted by the probability of the agent being in the corresponding cell.
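As a rough sketch of this correlation-based localization (our own names; the shapes, padding mode and softmax temperature are assumptions):

```python
import numpy as np
from scipy.signal import correlate2d

def localize(local_map_est, full_map, temp=1.0):
    # Slide the estimated local map (with map feedback) over the zero-padded
    # rasterized map and turn the correlation scores into a location distribution.
    scores = correlate2d(full_map, local_map_est, mode="same",
                         boundary="fill", fillvalue=0.0)
    logits = scores.ravel() / temp
    logits -= logits.max()                        # numerical stability
    p = np.exp(logits)
    return (p / p.sum()).reshape(full_map.shape)  # {p_i^loc} over the N location cells
```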
C Shortest Path Planning Algorithm
We here present our implementation of the shortest path planning algorithm. Note that the deterministic shortest path planner was not the main focus of our work. Any algorithm that takes a grid maze as input and outputs for each location in the maze the shortest path direction for the next step to the nearest exit would work.
We first replaced the reward estimation in each location of the reward map with the average over the corresponding maze cell and used a sharp softmax to classify each location cell into either target cell, navigable cell or wall cell. We assigned the values 1.0, 0.99 and 0 to cells that are target cells, navigable cells and wall cells respectively. We used this module in a recursive multiplicative way to assign each location cell a value v corresponding to the distance to the nearest target cell. More precisely we did 200 iterations where in each iteration we calculated for each cell i:
v^i_k = v^i_{k−1} · max_{j ∈ N_i} v^j_{k−1}
Here, k ∈ {1, . . . , 200} denotes the current iteration and N_i denotes the set of neighboring cells of i to the North, East, South and West. In the final iteration, we return for each cell i a sharp softmax over the four values in N_i, which is the desired short term target direction, and 1.0 − v^i_k, which is the desired measure of distance to the nearest target.
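The recursion above can be transcribed directly; the following NumPy sketch (our own names, and a literal transcription of the update rule, so the absolute values decay while their ordering encodes distance to the target) propagates the values for a fixed number of iterations and reads off the short term target direction from the last iteration.

```python
import numpy as np

def plan_shortest_paths(cell_class, iters=200):
    # cell_class: 2D array with 1.0 at target cells, 0.99 at navigable cells, 0.0 at walls.
    v = cell_class.astype(float).copy()
    for _ in range(iters):
        padded = np.pad(v, 1, constant_values=0.0)
        neigh = np.stack([padded[:-2, 1:-1],    # North neighbour values
                          padded[1:-1, 2:],     # East
                          padded[2:, 1:-1],     # South
                          padded[1:-1, :-2]])   # West
        v = v * neigh.max(axis=0)               # v_k^i = v_{k-1}^i * max_{j in N_i} v_{k-1}^j
    direction = neigh.argmax(axis=0)            # 0=N, 1=E, 2=S, 3=W: short term target direction
    distance = 1.0 - v                          # measure of distance to the nearest target
    return direction, distance
```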
D Screenshot of Full Agent Navigating a Maze
Figure 9 shows a screenshot of our agent in a maze from the test set.
Figure 4: Training performance of 8 actor threads that start training on 5x5 mazes. The vertical black lines mark jumps to larger mazes of the thread in blue.
Figure 8: Four example frames to illustrate the typical behavior of the agent: The red line is the trace of its actual position, while the shades of blue represent its position estimate. The darker the blue, the more confident the agent is to be in this location. Frame 1 shows the agent's true starting position as a red dot, frame 2 shows several similar locations identified after a bit of turning, in frame 3 the agent starts to understand the true location, and in frame 4 it has moved.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments.
Figure 9: Example of our agent moving through a maze of size 21x21 during testing. The Visual Input is the 3D pixel input the agent receives. The Map is the map of the maze showing the agent's position estimation (blue dot) and the walked trajectory (red trace). The agent only sees the "X". The Reward Map shows the agent's estimation of where to get reward. The displayed map shows positive reward estimations in green, 0 reward estimations in red and negative reward estimations in blue. The agent has successfully learned that reaching the "X" gives positive reward, and walking into walls yields negative reward. On the top right we can see how the agent estimates its (visible) local map. It then uses the local map estimate to localize itself within the maze map. One can see that the agent is currently confident of its position (only a single blue dot in the map), and that its localization is indeed very accurate, as can be seen by comparing its estimated position ([23 17]) and its actual position ([22 17]). The bottom right shows the probability distribution over the possible actions for the current state (policy).
References
[Barto, Sutton, and Anderson 1983] Barto, A. G.; Sutton, R. S.; and Anderson, C. W. 1983. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Trans. Systems, Man, and Cybernetics 13(5):834-846.
[Beattie et al. 2016] Beattie, C.; Leibo, J. Z.; Teplyashin, D.; Ward, T.; Wainwright, M.; Küttler, H.; Lefrancq, A.; Green, S.; Valdés, V.; Sadik, A.; Schrittwieser, J.; Anderson, K.; York, S.; Cant, M.; Cain, A.; Bolton, A.; Gaffney, S.; King, H.; Hassabis, D.; Legg, S.; and Petersen, S. 2016. DeepMind Lab. CoRR abs/1612.03801.
[Bhatti et al. 2016] Bhatti, S.; Desmaison, A.; Miksik, O.; Nardelli, N.; Siddharth, N.; and Torr, P. H. S. 2016. Playing Doom with SLAM-augmented deep reinforcement learning. CoRR abs/1612.00380.
[Elfes 1989] Elfes, A. 1989. Using occupancy grids for mobile robot perception and navigation. IEEE Computer 22(6):46-57.
[Fuentes-Pacheco, Ascencio, and Rendón-Mancha 2015] Fuentes-Pacheco, J.; Ascencio, J. R.; and Rendón-Mancha, J. M. 2015. Visual simultaneous localization and mapping: a survey. Artif. Intell. Rev. 43(1):55-81.
[Gullapalli 1990] Gullapalli, V. 1990. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks 3(6):671-692.
[Gupta et al. 2017] Gupta, S.; Davidson, J.; Levine, S.; Sukthankar, R.; and Malik, J. 2017. Cognitive mapping and planning for visual navigation. CoRR abs/1702.03920.
[Jaderberg et al. 2016] Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2016. Reinforcement learning with unsupervised auxiliary tasks. CoRR abs/1611.05397.
[Kaelbling, Littman, and Moore 1996] Kaelbling, L. P.; Littman, M. L.; and Moore, A. W. 1996. Reinforcement learning: A survey. J. Artif. Intell. Res. 4:237-285.
[Kempka et al. 2016] Kempka, M.; Wydmuch, M.; Runc, G.; Toczek, J.; and Jaskowski, W. 2016. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, CIG 2016, Santorini, Greece, September 20-23, 2016, 1-8.
[Kulkarni et al. 2016] Kulkarni, T. D.; Narasimhan, K.; Saeedi, A.; and Tenenbaum, J. 2016. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 3675-3683.
[Minsky 1954] Minsky, M. L. 1954. Theory of neural-analog reinforcement systems and its application to the brain model problem. Princeton University.
[Mirowski et al. 2016] Mirowski, P.; Pascanu, R.; Viola, F.; Soyer, H.; Ballard, A. J.; Banino, A.; Denil, M.; Goroshin, R.; Sifre, L.; Kavukcuoglu, K.; Kumaran, D.; and Hadsell, R. 2016. Learning to navigate in complex environments. CoRR abs/1611.03673.
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M. A.; Fidjeland, A.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529-533.
[Mnih et al. 2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T. P.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. CoRR abs/1602.01783.
[Peters and Schaal 2008] Peters, J., and Schaal, S. 2008. Reinforcement learning of motor skills with policy gradients. Neural Networks 21(4):682-697.
[Rumelhart et al. 1988] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J.; et al. 1988. Learning representations by back-propagating errors. Cognitive modeling 5(3):1.
[Schaul et al. 2015] Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2015. Prioritized experience replay. CoRR abs/1511.05952.
[Sutton and Barto 1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning - An Introduction. Adaptive computation and machine learning. MIT Press.
[Sutton, Precup, and Singh 1999] Sutton, R. S.; Precup, D.; and Singh, S. P. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artif. Intell. 112(1-2):181-211.
[Sutton 1984] Sutton, R. S. 1984. Temporal credit assignment in reinforcement learning.
[Sutton 1988] Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3:9-44.
[Szepesvári 2010] Szepesvári, C. 2010. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
[Thrun, Burgard, and Fox 2005] Thrun, S.; Burgard, W.; and Fox, D. 2005. Probabilistic Robotics. MIT Press.
[van Hasselt, Guez, and Silver 2016] van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, 2094-2100.
[Wang et al. 2016] Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2016. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, 1995-2003.
[Watkins and Dayan 1992] Watkins, C. J., and Dayan, P. 1992. Q-learning. Machine Learning 8(3-4):279-292.
[Watkins 1989] Watkins, C. J. C. H. 1989. Learning from delayed rewards. Ph.D. Dissertation, King's College, Cambridge.
[Williams 1992] Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8:229-256.
[Zhu et al. 2017] Zhu, Y.; Mottaghi, R.; Kolve, E.; Lim, J. J.; Gupta, A.; Fei-Fei, L.; and Farhadi, A. 2017. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, 2017, 3357-3364.
| []
|
[
"NILPOTENCY AND STRONG NILPOTENCY FOR FINITE SEMIGROUPS",
"NILPOTENCY AND STRONG NILPOTENCY FOR FINITE SEMIGROUPS"
]
| [
"J Almeida ",
"M Kufleitner ",
"M H Shahzamanian "
]
| []
| []
| Nilpotent semigroups in the sense of Mal'cev are defined by semigroup identities. Finite nilpotent semigroups constitute a pseudovariety, MN, which has finite rank. The semigroup identities that define nilpotent semigroups, lead us to define strongly Mal'cev nilpotent semigroups. Finite strongly Mal'cev nilpotent semigroups constitute a non-finite rank pseudovariety, SMN. The pseudovariety SMN is strictly contained in the pseudovariety MN but all finite nilpotent groups are in SMN. We show that the pseudovariety MN is the intersection of the pseudovariety BG nil with a pseudovariety defined by a κ-identity. We further compare the pseudovarieties MN and SMN with the Mal'cev product J m G nil . | 10.1093/qmath/hay059 | [
"https://arxiv.org/pdf/1707.06868v1.pdf"
]
| 119,639,595 | 1707.06868 | e7374f8535cab99c9bb9d3ff45d062ff90cc65de |
NILPOTENCY AND STRONG NILPOTENCY FOR FINITE SEMIGROUPS
21 Jul 2017
J Almeida
M Kufleitner
M H Shahzamanian
NILPOTENCY AND STRONG NILPOTENCY FOR FINITE SEMIGROUPS
21 Jul 2017arXiv:1707.06868v1 [math.GR]
Nilpotent semigroups in the sense of Mal'cev are defined by semigroup identities. Finite nilpotent semigroups constitute a pseudovariety, MN, which has finite rank. The semigroup identities that define nilpotent semigroups, lead us to define strongly Mal'cev nilpotent semigroups. Finite strongly Mal'cev nilpotent semigroups constitute a non-finite rank pseudovariety, SMN. The pseudovariety SMN is strictly contained in the pseudovariety MN but all finite nilpotent groups are in SMN. We show that the pseudovariety MN is the intersection of the pseudovariety BG nil with a pseudovariety defined by a κ-identity. We further compare the pseudovarieties MN and SMN with the Mal'cev product J m G nil .
Introduction
Mal'cev [13] and independently Neumann and Taylor [15] have shown that nilpotent groups can be defined by semigroup identities (that is, without using inverses). This leads to the notion of a nilpotent semigroup (in the sense of Mal 'cev).
For a semigroup S with elements x, y, z 1 , z 2 , . . . one recursively defines two sequences by λ 0 = x, ρ 0 = y and λ n+1 = λ n z n+1 ρ n , ρ n+1 = ρ n z n+1 λ n . A semigroup S is said to be nilpotent if there exists a positive integer n such that λ n (x, y, z 1 , . . . , z n ) = ρ n (x, y, z 1 , . . . , z n ) for all x, y in S and z 1 , . . . , z n in S 1 . The smallest such n is called the nilpotency class of S. Clearly, null semigroups are nilpotent in the sense of Mal'cev. A pseudovariety of semigroups is a class of finite semigroups closed under taking subsemigroups, homomorphic images and finite direct products. The finite nilpotent semigroups constitute a pseudovariety which is denoted by MN [18]. In [5], the rank of the pseudovariety MN and some classes defined by several of the variants of Mal'cev nilpotency are investigated and compared.
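For a small concrete semigroup given by a multiplication table, this definition can be checked directly by brute force. The following sketch (our own function names; exponential in n, so only for tiny examples) computes the sequences λ n and ρ n and searches for the nilpotency class, with the z i ranging over S 1 modelled by an adjoined identity.

```python
from itertools import product

def is_malcev_nilpotent(elems, mul, max_n=4):
    """Brute-force search for the Mal'cev nilpotency class of a small finite
    semigroup given by a multiplication table mul[(a, b)] = a*b.
    z = None stands for the adjoined identity of S^1."""
    def times(a, z):
        return a if z is None else mul[(a, z)]
    def lam_rho(x, y, zs):
        lam, rho = x, y
        for z in zs:
            # lambda_{n+1} = lambda_n z_{n+1} rho_n,  rho_{n+1} = rho_n z_{n+1} lambda_n
            lam, rho = mul[(times(lam, z), rho)], mul[(times(rho, z), lam)]
        return lam, rho
    S1 = list(elems) + [None]
    for n in range(1, max_n + 1):
        if all(lam_rho(x, y, zs)[0] == lam_rho(x, y, zs)[1]
               for x, y in product(elems, repeat=2)
               for zs in product(S1, repeat=n)):
            return n            # the nilpotency class
    return None                 # not nilpotent of class <= max_n
```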
Let S be a semigroup. In this paper, we introduce a further variant of Mal'cev nilpotency, that we call strong Mal'cev nilpotency. For semigroups, the new notion is strictly stronger than Mal'cev nilpotency, but it coincides with nilpotency for groups. Strongly Mal'cev nilpotent semigroups constitute a pseudovariety which we denote by SMN. We show that G nil ⫋ SMN ⫋ MN where G nil is the pseudovariety of all finite nilpotent groups. Higgins and Margolis showed that ⟨A ∩ Inv⟩ ⫋ A ∩ ⟨Inv⟩ [9]. In [5], it is proved that ⟨A ∩ Inv⟩ ⫋ A ∩ MN. We show that, in fact, ⟨A ∩ Inv⟩ ⫋ A ∩ SMN.
The paper [5] also shows that MN is defined by the pseudoidentity φ ω (x) = φ ω (y), where φ is the continuous endomorphism of the free profinite semigroup on {x, y, z, t} such that φ(x) = xzytyzx, φ(y) = yzxtxzy, φ(z) = z, and φ(t) = t. In particular, the pseudovariety MN has finite rank. We prove that the pseudovariety SMN has infinite rank and, therefore, it is nonfinitely based. In this paper, we also show that the pseudovariety MN is the intersection of BG nil with a pseudovariety defined by a κ-identity.
Note that the following chain of proper inclusions holds:
G nil ⫋ SMN ⫋ MN ⫋ BG nil .
On the other hand, it is part of a celebrated result that BG = J m G where m stands for Mal'cev product [16]. In contrast, the inclusion J m H ⫋ BH is strict for every proper subpseudovariety H of G [9].
Preliminaries
For standard notation and terminology relating to finite semigroups, we refer the reader to [7]. A completely 0-simple finite semigroup S is isomorphic with a regular Rees matrix semigroup M 0 (G, n, m; P ), where G is a maximal subgroup of S, P is the m × n sandwich matrix with entries in G θ = G ∪ {θ}, and n and m are positive integers. The nonzero elements of S are denoted (g; i, j), where g ∈ G, 1 ≤ i ≤ n and 1 ≤ j ≤ m; the zero element is denoted θ.
The (j, i)-entry of P is denoted p ji . The set of nonzero elements is denoted M(G, n, m; P ). If all elements of P are nonzero then M(G, n, m; P ) is a subsemigroup and every completely simple finite semigroup is of this form. If P = I n , the n × n identity matrix, then S is an inverse semigroup. Jespers and Okniński proved that a completely 0-simple semigroup M 0 (G, n, m; P ) is Mal'cev nilpotent if and only if n = m, P = I n and G is a nilpotent group [10,Lemma 2.1].
The next lemma is a necessary and sufficient condition for a finite semigroup not to be nilpotent [11,Lemma 2.2].
Lemma 2.1. A finite semigroup S is not Mal'cev nilpotent if and only if
there exist a positive integer m, distinct elements x, y ∈ S, and elements w 1 , w 2 , . . . , w m ∈ S 1 such that x = λ m (x, y, w 1 , w 2 , . . . , w m ) and y = ρ m (x, y, w 1 , w 2 , . . . , w m ).
Assume that a finite semigroup S has a proper ideal M = M 0 (G, n, n; I n ) and n > 1. The action Γ on the R-classes of M in [12] is used. In this paper, we consider the dual definition of the action Γ as in [5]. The action Γ is defined to be the action of S on the L-classes of M, that is, a representation (a semigroup homomorphism) Γ ∶ S → T, where T denotes the full transformation semigroup on the set {1, . . . , n} ∪ {θ}. The definition is as follows: for 1 ≤ j ≤ n and s ∈ S, Γ(s)(j) = j ′ if (g; i, j)s = (g ′ ; i, j ′ ) for some g, g ′ ∈ G and 1 ≤ i ≤ n, and Γ(s)(j) = θ otherwise; moreover Γ(s)(θ) = θ. We call the representation Γ the L M -representation of S. For every s ∈ S, Γ(s) can be written as a product of orbits which are cycles of the form (j 1 , j 2 , . . . , j k ) or sequences of the form (j 1 , j 2 , . . . , j k , θ), where 1 ≤ j 1 , . . . , j k ≤ n. The latter orbit means that Γ(s)(j i ) = j i+1 for 1 ≤ i ≤ k − 1, Γ(s)(j k ) = θ, Γ(s)(θ) = θ and there does not exist 1 ≤ r ≤ n such that Γ(s)(r) = j 1 . Orbits of the form (j) with j ∈ {1, . . . , n} are written explicitly in the decomposition of Γ(s). By convention, we omit orbits of the form (j, θ) in the decomposition of Γ(s) (this is the reason for writing orbits of length one). If Γ(s)(j) = θ for every 1 ≤ j ≤ n, then we simply denote Γ(s) by θ.
If the orbit ε appears in the expression of Γ(s) as a product of disjoint orbits, then we denote this by ε ⊆ Γ(s). If Γ(s)(j 1,1 ) = j 1,2 , Γ(s)(j 1,2 ) = j 1,3 , . . . , Γ(s)(j 1,p 1 −1 ) = j 1,p 1 , . . . , Γ(s)(j q,1 ) = j q,2 , . . . , Γ(s)(j q,pq−1 ) = j q,pq , then we write [j 1,1 , j 1,2 , . . . , j 1,p 1 ; . . . ; j q,1 , j q,2 , . . . , j q,pq ] ⊑ Γ(s).
Note that, if g ∈ G and 1 ≤ n 1 , n 2 ≤ n with n 1 ≠ n 2 then Γ((g; n 1 , n 2 )) = (n 1 , n 2 , θ) and Γ((g; n 1 , n 1 )) = (n 1 ).
Therefore, if the group G is trivial, then the elements of M may be viewed as transformations. Also, for every s ∈ S, we recall a map
Ψ(s) ∶ {1, . . . , n} ∪ {θ} → G ∪ {θ} as follows Ψ(s)(j) = g if Γ(s)(j) ≠ θ and (1 G ; i, j)s = (g; i, Γ(s)(j))
for some 1 ≤ i ≤ n, otherwise Ψ(s)(j) = θ. It is straightforward to verify that Ψ is well-defined. Let T be a semigroup with a zero θ T and let M be a regular Rees matrix semigroup M 0 ({1}, n, n; I n ). Let ∆ be a representation of T in the full transformation semigroup on the set {1, . . . , n}∪{θ} such that for every t ∈ T ,
∆(t)(θ) = θ, ∆ −1 (θ) = {θ T }, and ∆(t) restricted to {1, . . . , n} ∖ ∆(t) −1 (θ) is injective. The semigroup S = M ∪ ∆ T is the θ-disjoint union of M and T
(that is the disjoint union with the zeros identified). The multiplication is such that T and M are subsemigroups,
(1; i, j)t = (1; i, ∆(t)(j)) if ∆(t)(j) ≠ θ, and (1; i, j)t = θ otherwise; similarly, t(1; i, j) = (1; i ′ , j) if ∆(t)(i ′ ) = i for some i ′ , and t(1; i, j) = θ otherwise.
For more details see [12]. Let V be a pseudovariety of finite semigroups. A pro-V semigroup is a compact semigroup that is residually in V. In case V consists of all finite semigroups, we call pro-V semigroups profinite semigroups. We denote by Ω A V the free pro-V semigroup on the set A and by Ω A V the free semigroup in the (Birkhoff) variety generated by V. Such free objects are characterized by appropriate universal properties. For instance, Ω A V comes endowed with a mapping ι∶ A → Ω A V such that, for every mapping φ∶ A → S into a pro-V semigroup S, there exists a unique continuous homomorphismφ∶ Ω A V → S such thatφ ○ ι = φ. For more details on this topic we refer the reader to [1].
Let S be a finite semigroup. Let π 1 , . . . , π r ∈ Ω r V. Define recursively a sequence (u 1,i , . . . , u r,i ) by (u 1,0 , . . . , u r,0 ) ∈ S r and u i,n+1 = π i (u 1,n , . . . , u r,n ). Denote lim n→∞ u i,n! by ○ ω i (π 1 , . . . , π r ). The component ○ ω i (π 1 , . . . , π r ) for 1 ≤ i ≤ r is also a member of Ω r V. Moreover, if each π i is a computable operation, then so is each ○ ω i (π 1 , . . . , π r ) [3, Corollary 2.5]. Recall that a pseudoidentity (over V) is a formal equality π = ρ between π, ρ ∈ Ω r V for some integer r. For a set Σ of V-pseudoidentities, we denote by Σ V (or simply Σ if V is understood from the context) the class of all S ∈ V that satisfy all pseudoidentities from Σ. Reiterman [17] proved that a subclass V of a pseudovariety W is a pseudovariety if and only if V is of the form Σ W for some set Σ of W-pseudoidentities. For the pseudovarieties G nil and BG, of all finite block groups, that is, finite semigroups in which each element has at most one inverse, we have G nil = φ ω (x) = x ω , x ω y = yx ω = y where φ is the continuous endomorphism of the free profinite semigroup on {x, y} such that φ(x) = x ω−1 y ω−1 xy, φ(y) = y [4, Example 4.15 (2)] and
BG = (ef ) ω = (f e) ω where e = x ω , f = y ω (see for example [1, Exercise 5.2.7]).
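In a concrete finite semigroup, an implicit operation of the form ○ω can be evaluated by following the orbit of a tuple until it becomes periodic. The sketch below (our own names and encoding, with the semigroup given by a multiplication table) does this and uses it to test the pseudoidentity φ ω (x) = φ ω (y) from [5] that defines MN; it is a brute-force illustration only.

```python
from functools import reduce
from itertools import product

def omega_iterate(F, u0):
    # Follow u0, F(u0), F^2(u0), ... on a finite set until the orbit cycles, then
    # return F^m(u0) for the smallest m >= start of the cycle with period | m;
    # this equals lim_n F^{n!}(u0).
    orbit, seen, u = [], {}, u0
    while u not in seen:
        seen[u] = len(orbit)
        orbit.append(u)
        u = F(u)
    start = seen[u]
    period = len(orbit) - start
    return orbit[start + ((-start) % period)]

def satisfies_MN_pseudoidentity(elems, mul):
    # phi(x) = xzytyzx, phi(y) = yzxtxzy, phi fixes z and t; mul[(a, b)] = a*b.
    word = lambda *xs: reduce(lambda a, b: mul[(a, b)], xs)
    phi = lambda u: (word(u[0], u[2], u[1], u[3], u[1], u[2], u[0]),
                     word(u[1], u[2], u[0], u[3], u[0], u[2], u[1]),
                     u[2], u[3])
    return all(omega_iterate(phi, u)[0] == omega_iterate(phi, u)[1]
               for u in product(elems, repeat=4))
```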
Strongly nilpotent semigroups
For a semigroup S with elements x 1 , . . . , x t , z 1 , z 2 , . . . one recursively defines sequences λ n,i = λ n,i (x 1 , . . . , x t ; z 1 , . . . , z n ) by λ 0,i = x i and λ n+1,i = λ n,i z n+1 λ n,i+1 z n+1 ⋯ λ n,t z n+1 λ n,1 z n+1 ⋯ λ n,i−1 for every 1 ≤ i ≤ t. A semigroup is said to be strongly Mal'cev nilpotent, if there exists a positive integer n such that
λ n,1 (x 1 , . . . , x t ; z 1 , . . . , z n ) = ⋯ = λ n,t (x 1 , . . . , x t ; z 1 , . . . , z n )
for all x 1 , . . . , x t in S and z 1 , . . . , z n in S 1 . The smallest such n is called the strong Mal'cev nilpotency class of S. We denote the class all finite strongly Mal'cev nilpotent semigroups by SMN. Note that if we choose t = 2 then the sequences λ n,1 and λ n,2 are equal to the sequences λ n and ρ n , respectively. Hence, if a semigroup S is strongly Mal'cev nilpotent then it is Mal'cev nilpotent too and, thus, we have SMN ⊆ MN. The set SMN is a pseudovariety. It is an example of ultimate equational definition of pseudovariety in the sense of Eilenberg and Schützenberger [8]. Since SMN ⊆ MN and MN ⫋ BG nil , we have the following theorem.
Lemma 3.2. A finite semigroup S is Mal'cev nilpotent if and only if there exists a positive integer n such that
λ n (x, y, z 1 , . . . , z n ) = ρ n (x, y, z 1 , . . . , z n )
for all x, y, z 1 , . . . , z n in S.
Proof. Suppose that there exists a finite semigroup S such that S satisfies the condition of the lemma and S is not Mal'cev nilpotent.
If S ∉ BG nil , then there exists a regular J -class M 0 (G, n, m; P ) ∖ {θ} of S such that one of the following conditions holds:
(1) G is not a nilpotent group;
(2) there exist integers 1 ≤ i 1 , i 2 ≤ n and 1 ≤ j ≤ m such that p ji 1 , p ji 2 ≠ θ;
(3) there exist integers 1 ≤ i ≤ n and 1 ≤ j 1 , j 2 ≤ m such that p j 1 i , p j 2 i ≠ θ. If G is not a nilpotent group, then, by [15,Corollary 1], G is not Mal'cev nilpotent. Since G has an identity, G does not satisfy the condition of the lemma, a contradiction. If (2) holds, then
λ n ((1 G ; i 1 , j), (1 G ; i 2 , j), (1 G ; i 1 , j), (1 G ; i 1 , j), . . . , (1 G ; i 1 , j)) ≠ ρ n ((1 G ; i 1 , j), (1 G ; i 2 , j), (1 G ; i 1 , j), (1 G ; i 1 , j), . . . , (1 G ; i 1 , j)),
for every integer 0 ≤ n. A contradiction with the assumption. Similarly, we have a contradiction for Condition (3). Now, suppose that S ∈ BG nil . Since S satisfies the condition of the lemma and S ∈ MN, by Lemma 2.1 there exist a positive integer m, distinct elements x, y ∈ S and elements w 1 , w 2 , . . . , w m−1 ∈ S 1 such that x = λ m (x, y, 1, w 1 , . . . , w m−1 ) and y = ρ m (x, y, 1, w 1 , . . . , w m−1 ).
Since S ∈ BG nil , the elements x and y lie in a common regular J -class whose principal factor is an inverse completely 0-simple semigroup M = M 0 (G, n, n; I n ); hence there exist elements (g; i, j), (g ′ ; i ′ , j ′ ) ∈ M such that x = (g; i, j) and y = (g ′ ; i ′ , j ′ ). As λ 1 , ρ 1 ∈ M , we have j = i ′ and j ′ = i. Hence, we have (i, j) ⊆ Γ(w 1 ) when Γ is an L M -representation of M .
If i ≠ j then w 1 ≠ 1 and, thus,
λ n ((1 G ; i, i), (1 G ; j, j), w 1 , w 2 1 , w 1 , w 2 1 , . . .) ≠ ρ n ((1 G ; i, i), (1 G ; j, j), w 1 , w 2 1 , w 1 , w 2 1 , . . .)
, for every integer 0 ≤ n. A contradiction with the assumption. If i = j, then we have g = λ n (g, g ′ , 1, ψ(w 1 )(i), ψ(w 2 )(i), . . . , ψ(w m−1 )(i)), g ′ = ρ n (g, g ′ , 1, ψ(w 1 )(i), ψ(w 2 )(i), . . . , ψ(w m−1 )(i)).
Since x ≠ y, we have g ≠ g ′ . It follows that G is not nilpotent.
The result follows. Now, by Lemma 3.2 and using the same method as in the proof of [11, Lemma 2.2], we can improve Lemma 2.1.
Lemma 3.3. A finite semigroup S is not Mal'cev nilpotent if and only if
there exist a positive integer m, distinct elements x, y ∈ S and elements w 1 , w 2 , . . . , w m ∈ S such that x = λ m (x, y, w 1 , w 2 , . . . , w m ) and y = ρ m (x, y, w 1 , w 2 , . . . , w m ).
Neumann and Taylor proved that a group G is nilpotent of class n if and only if it is Mal'cev nilpotent of class n [15, Corollary 1]. The following lemma presents a similar result for strong Mal'cev nilpotency.
Lemma 3.4. Let G be a group and let n be a positive integer. The following conditions are equivalent:
(1) G is a nilpotent group of class n.
(2) G is strongly Mal'cev nilpotent with the strong Mal'cev nilpotency class n.
Proof. If G is strongly Mal'cev nilpotent with the strong Mal'cev nilpotency class 1, then λ 1,1 (x 1 , x 2 ; 1) = λ 1,2 (x 1 , x 2 ; 1), for all x 1 , x 2 in G. It follows that x 1 x 2 = x 2 x 1 . Thus, for n = 1, both (1) and (2) are equivalent to the commutativity of G. Assume that the assertion holds for some n ≥ 1. Let x 1 , . . . , x t , z 1 , . . . , z n be in G and a i = λ n,i (x 1 , . . . , x t ; z 1 , . . . , z n ) for every 1 ≤ i ≤ t. For any x ∈ G, denote by x̄ the image of x in G/Z(G). If G is nilpotent of class n + 1, then G/Z(G) is nilpotent of class n and, by the induction hypothesis, we have λ n,1 (x̄ 1 , . . . , x̄ t ; z̄ 1 , . . . , z̄ n ) = ⋯ = λ n,t (x̄ 1 , . . . , x̄ t ; z̄ 1 , . . . , z̄ n ).
Thus, there exist elements v i,j ∈ Z(G) such that
a i = λ n,i (x 1 , . . . , x t ; z 1 , . . . , z n ) = λ n,j (x 1 , . . . , x t ; z 1 , . . . , z n ) v i,j = a j v i,j for all 1 ≤ i, j ≤ t with i ≠ j. Let 1 ≤ k, i, j ≤ t with i ≠ j and let (b 1 , . . . , b t ) = (a k , . . . , a t , a 1 , . . . , a k−1 ).
There exist integers g and h such that b g = a i and b h = a j . Since a i = a j v i,j and v i,j ∈ Z(G), we have
λ n+1,k (x 1 , . . . , x t ; z 1 , . . . , z n+1 ) = b 1 z n+1 b 2 z n+1 . . . b g . . . b h . . . z n+1 b t = b 1 z n+1 b 2 z n+1 . . . a i . . . a j . . . z n+1 b t = b 1 z n+1 b 2 z n+1 . . . a j v i,j . . . a j . . . z n+1 b t = b 1 z n+1 b 2 z n+1 . . . a j . . . a j v i,j . . . z n+1 b t = b 1 z n+1 b 2 z n+1 . . . a j . . . a i . . . z n+1 b t .
Since we take the elements a i and a j arbitrarily, we have λ n+1,k (x 1 , . . . , x t ; z 1 , . . . , z n+1 ) = a 1 z n+1 a 2 z n+1 . . . z n+1 a t .
Therefore G is strongly Mal'cev nilpotent with the strong Mal'cev nilpotency class n + 1. Now, assume that G is strongly Mal'cev nilpotent with the strong Mal'cev nilpotency class n+1. Hence G is Mal'cev nilpotent with the nilpotency class n ′ with n ′ ≤ n + 1. Then, by [15,Corollary 1], G is a nilpotent group with the nilpotency class n ′ . If n ′ < n + 1, then, by assertion, G is strongly Mal'cev nilpotent with the strong Mal'cev nilpotency class n ′ , a contradiction. Hence, G is a nilpotent group with the nilpotency n + 1.
As was mentioned about Lemma 2.1, it is proved in [11] that a finite semigroup S is not Mal'cev nilpotent if and only if there exist a positive integer m, distinct elements x, y ∈ S and elements w 1 , w 2 , . . . , w m ∈ S 1 such that x = λ m (x, y, w 1 , w 2 , . . . , w m ) and y = ρ m (x, y, w 1 , w 2 , . . . , w m ). We proceed with some lemmas that serve to give a criterion for finite semigroups not to be strongly Mal'cev nilpotent (Lemma 3.9).
Lemma 3.5. Let S be a finite semigroup. Suppose that
S = S 1 ⊃ S 2 ⊃ . . . ⊃ S s ⊃ S s+1 = ∅
is a principal series of S and there is an integer 1 ≤ p ≤ s such that the following conditions are satisfied:
(1) S p S p+1 is an inverse completely 0-simple semigroup, say
M = M 0 (G, q, q; I q ); (2) there exist an integer 1 < t, integers α i , β i (1 ≤ i ≤ t), and elements v 1 , . . . , v t ∈ S ∖ S p+1 with [β 1 , α 1+i (mod t) ; . . . ; β t , α t+i (mod t) ] ⊑ Γ(v i ) (1 ≤ i ≤ t), where Γ is an L M -representation of S S p+1 ; (3) 1 < {α 1 , . . . , α t } < t or 1 < {β 1 , . . . , β t } < t.
Then, there exists an integer t ′ such that the following conditions are satisfied:
(1) t ′ ≠ 1 and t ′ t;
(2) {α 1 , . . . , α t ′ } = t ′ ; (3) [β 1 , α 1+i (mod t ′ ) ; . . . ; β t ′ , α t ′ +i (mod t ′ ) ] ⊑ Γ(v i ) (1 ≤ i ≤ t ′ ). Proof. We have 1 < {α 1 , . . . , α t } < t or 1 < {β 1 , . . . , β t } < t. First, we assume that 1 < {α 1 , . . . , α t } < t. Since {α 1 , . . . , α t } < t, there exist integers 1 ≤ h 1 < h 2 ≤ t such that α h 1 = α h 2 and if h 4 − h 3 < h 2 − h 1 , for some distinct integers h 3 and h 4 , then α h 3 ≠ α h 4 . First, suppose that h 2 − h 1 = 1. Since [β 1 , α 1 ; . . . ; β t , α t ] ⊑ Γ(v t ) and [β 1 , α 2 ; β 2 , α 3 ; . . . ; β t , α 1 ] ⊑ Γ(v 1 ),
we have α 1 = ⋯ = α t , which contradicts the initial assumption. Now, suppose that 1 < h 2 −h 1 . By our assumption, the integers α 1 , . . . ,
α (h 2 −h 1 ) are pairwise distinct. Again, as [β 1 , α 1 ; . . . ; β t , α t ] ⊑ Γ(v t ) and [β 1 , α 2 ; β 2 , α 3 ; . . . ; β t , α 1 ] ⊑ Γ(v 1 ), we have α j = α j+γ(h 2 −h 1 ) (mod t) , for every 0 ≤ γ and 1 ≤ j ≤ (h 2 − h 1 ). Also, since the integers α 1 , . . . , α (h 2 −h 1 ) are pairwise distinct, we have (h 2 − h 1 ) t. Therefore, we have [β 1 , α 1+i (mod (h 2 −h 1 )) ; . . . ; β (h 2 −h 1 ) , α (h 2 −h 1 )+i (mod (h 2 −h 1 )) ] ⊑ Γ(v i ), for every 1 ≤ i ≤ h 2 − h 1 . The proof in case 1 < {β 1 , . . . , β t } < t is similar.
We can get the following lemma from the results of the paper [12]. We present a similar lemma as well as the analogous result for strong Mal'cev nilpotency (Lemma 3.7).
Lemma 3.6. Let S ∈ BG nil . The semigroup S is not Mal'cev nilpotent if and only if there exist ideals A, B of S, an inverse Rees matrix semigroup M = M 0 (G, q, q; I q ) and elements x, y, w, v such that the following conditions are satisfied:
(1) B ⫋ A and A/B ≅ M ; (2) x = (g; α, β), y = (g ′ ; α ′ , β ′ ) ∈ M and α ≠ α ′ ; (3) w, v ∈ S ∖ B, [β, α ′ ; β ′ , α] ⊑ Γ(w) and [β ′ , α ′ ; β, α] ⊑ Γ(v), where Γ is an L M -representation of S/B.
Lemma 3.7. Let S ∈ BG nil . The semigroup S is not strongly Mal'cev nilpotent if and only if there exist ideals A, B of S, an inverse Rees matrix semigroup M = M 0 (G, q, q; I q )
, an integer t and elements y 1 , . . . , y t , v 1 , . . . , v t such that the following conditions are satisfied:
(1) B ⫋ A and A B ≅ M ; (2) 1 < t; (3) y i = (g i ; α i , β i ) ∈ M (1 ≤ i ≤ t) and {α 1 , . . . , α t } = t; (4) v 1 , . . . , v t ∈ S∖B and [β 1 , α 1+i (mod t) ; . . . ; β t , α t+i (mod t) ] ⊑ Γ(v i ) (1 ≤ i ≤ t), where Γ is an L M -representation of S B.
Proof. First, suppose that S is not strongly Mal'cev nilpotent. Let k = |S|. Since S is not strongly Mal'cev nilpotent, there exist elements a 1 , . . . , a t ∈ S with t > 1, and w 1 , . . . , w k t +1 ∈ S 1 such that |{λ k t +1,1 (a 1 , . . . , a t ; w 1 , . . . , w k t +1 ), . . . , λ k t +1,t (a 1 , . . . , a t ; w 1 , . . . , w k t +1 )}| ≠ 1.
Since S t = k t , there exist positive integers r 1 and r 2 ≤ k t + 1 with r 1 < r 2 such that (λ r 1 ,1 (a 1 , . . . , a t ; w 1 , . . . , w r 1 ), . . . , λ r 1 ,t (a 1 , . . . , a t ; w 1 , . . . , w r 1 )) = (λ r 2 ,1 (a 1 , . . . , a t ; w 1 , . . . , w r 2 ), . . . , λ r 2 ,t (a 1 , . . . , a t ; w 1 , . . . , w r 2 )).
Put
y i = λ r 1 ,i (a 1 , . . . , a t ; w 1 , . . . , w r 1 ) (1 ≤ i ≤ t), m = r 2 − r 1 , and v j = w r 1 +j (1 ≤ j ≤ m)
. This gives the equalities
y i = λ m,i (y 1 , . . . , y t ; v 1 , . . . , v m ) (1 ≤ i ≤ t). (3.2) Since {λ k t +1,1 (a 1 , . . . , a t ; w 1 , . . . , w k t +1 ), . . . , λ k t +1,t (a 1 , . . . , a t ; w 1 , . . . , w k t +1 )} ≠ 1, we have 1 < {y 1 , . . . , y t } . Let S = S 1 ⊃ S 2 ⊃ . . . ⊃ S s ⊃ S s+1 = ∅
be a principal series of S. Suppose that y 1 ∈ S p ∖ S p+1 for some 1 ≤ p ≤ s. Because S p and S p+1 are ideals of S, the equalities (3.2) yield y 1 , . . . , y t ∈ S p ∖ S p+1 and v 1 , . . . , v m ∈ S ∖ S p+1 . Since S ∈ BG nil , S p S p+1 is an inverse completely 0-simple semigroup, say M = M 0 (G, q, q; I q ). Then there exist integers 1 ≤ α i , β i ≤ q and elements g i ∈ G such that y
i = (g i ; α i , β i ) (1 ≤ i ≤ t). The equalities (3.2), imply that [β 1 , α 1+i (mod t) ; . . . ; β t , α t+i (mod t) ] ⊑ Γ(v i ) (1 ≤ i ≤ t). If α 1 = ⋯ = α t = α and β 1 = ⋯ = β t = β, then [β, α] ⊑ Γ(v i ) (1 ≤ i ≤ m)
. Therefore, we have
g i = λ m,i (g 1 , . . . , g t ; Ψ(v 1 )(β), . . . , Ψ(v m )(β)) (1 ≤ i ≤ t)
.
Since 1 < {y 1 , . . . , y t } , we have 1 < {g 1 , . . . , g t } .
Then, by Lemma 3.4, G is not a nilpotent group. This contradicts the assumption that S ∈ BG nil .
Then, there exist distinct integers 1 ≤ h, h ′ ≤ t such that α h ≠ α h ′ or β h ≠ β h ′ . Now, by Lemma 3.5,
there exists an integer t ′ such that the following conditions are satisfied:
(1) t ′ ≠ 1 and t ′ t; (2) {α 1 , . . . , α t ′ } = t ′ ; (3) [β 1 , α 1+i (mod t ′ ) ; . . . ; β t ′ , α t ′ +i (mod t ′ ) ] ⊑ Γ(v i ) (1 ≤ i ≤ t ′ ).
The converse, follows at once from the definition of strong Mal'cev nilpotency. Now, we can improve the definition of strong Mal'cev nilpotency for finite semigroups as well as Lemma 3.2.
Lemma 3.8. A finite semigroup S is strongly Mal'cev nilpotent if and only if there exists a positive integer n such that
λ n,1 (x 1 , . . . , x t ; z 1 , . . . , z n ) = ⋯ = λ n,t (x 1 , . . . , x t ; z 1 , . . . , z n )
for all x 1 , . . . , x t , z 1 , . . . , z n in S.
Proof. Suppose that there exists a finite semigroup S such that S satisfies the condition of the lemma and S is not strongly Mal'cev nilpotent.
If S ∉ BG nil , then S ∉ MN and, thus, by Lemma 3.2, S does not satisfy the condition of the lemma, a contradiction. Now, suppose that S ∈ BG nil . Since S ∉ SMN and S satisfies the condition of the lemma, by Lemma 3.7, there exist a regular J -class M = M 0 (G, n, n; I n ) ∖ {θ} of S, a positive integer t > 1, elements y i = (g i ; α i , β i ) ∈ M , for every 1 ≤ i ≤ t, and an element w ∈ S 1 such that |{α 1 , . . . , α t }| = t and
λ 2,i (y 1 , . . . , y t ; 1, w) ∈ M (1 ≤ i ≤ t). As λ 1,1 , . . . , λ 1,t ∈ M , we have β i = α i+1 (1 ≤ i ≤ t − 1) and β t = α 1 . Hence, we have (α 1 , . . . , α t ) ⊆ Γ(w) when Γ is an L M -representation of M . Since {α 1 , . . . , α t } = t and t > 1, we have w ≠ 1. Now, as (α 1 , . . . , α t ) ⊆ Γ(w), we have λ l,i ((1 G ; α 1 , α 1 ), . . . , (1 G ; α t , α t ); w, w 2 , . . . , w t , w, . . . , w t , . . .) = (k l ; α i , α i−l mod t ) (1 ≤ i ≤ t, 0 ≤ l),
for some element k l ∈ G and, thus,
λ l,i ((1 G ; α 1 , α 1 ), . . . , (1 G ; α t , α t ); w, w 2 , . . . , w t , w, . . . , w t , . . .) ≠ λ l,i ′ ((1 G ; α 1 , α 1 ), . . . , (1 G ; α t , α t ); w, w 2 , . . . , w t , w, . . . , w t , . . .),
for all integers 1 ≤ i < i ′ ≤ t and 0 ≤ l. Since w ≠ 1, there is a contradiction with the assumption.
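For small examples given by a multiplication table, the condition of Lemma 3.8 can again be tested by brute force (restricting the z i to S); the following sketch, with our own function names and an exponential search, computes the sequences λ n,i for a fixed t.

```python
from functools import reduce
from itertools import product

def lam_step(lam, z, mul):
    # lambda_{n+1,i} = lambda_{n,i} z lambda_{n,i+1} z ... z lambda_{n,i-1} (indices mod t).
    t = len(lam)
    def row(i):
        factors = []
        for k in range(t):
            factors.append(lam[(i + k) % t])
            if k < t - 1:
                factors.append(z)
        return reduce(lambda a, b: mul[(a, b)], factors)
    return tuple(row(i) for i in range(t))

def strong_nilpotency_class_for_t(elems, mul, t, max_n=3):
    # Smallest n <= max_n with lambda_{n,1} = ... = lambda_{n,t} for all choices
    # of x's and z's in S, or None if no such n is found.
    for n in range(1, max_n + 1):
        if all(len(set(reduce(lambda lam, z: lam_step(lam, z, mul), zs, xs))) == 1
               for xs in product(elems, repeat=t)
               for zs in product(elems, repeat=n)):
            return n
    return None
```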
Lemma 3.9. A finite semigroup S is not strongly Mal'cev nilpotent if and only if there exist positive integers m and t with t > 1, pairwise distinct elements a 1 , . . . , a t ∈ S and elements w 1 , . . . , w m ∈ S such that
a i = λ m,i (a 1 , . . . , a t ; w 1 , . . . , w m ), for all 1 ≤ i ≤ t.
Proof. If S ∈ BG nil and S ∉ SMN, then, as we argue in the proof of Lemma 3.8, by Lemma 3.7, the result follows. Now, suppose that S ∉ BG nil . Hence, we have S ∉ MN and the result follows from Lemma 3.3.
Lemma 3.10. A completely 0-simple finite semigroup M = M 0 (G, n, m; P ) is strongly Mal'cev nilpotent if and only if n = m, P = I n and G is a nilpotent group.
a i = (g i ; α i , β i ) ∈ M (1 ≤ i ≤ t) and elements w 1 , . . . , w m ∈ M such that a i = λ m,i (a 1 , . . . , a t ; w 1 , . . . , w m ) (1 ≤ i ≤ t).
Since M is inverse and a i = λ m,i (a 1 , . . . , a t ;
w 1 , . . . , w m ) (1 ≤ i ≤ t), we have [β 1 , α 1+j (mod t) ; . . . ; β t , α t+j (mod t) ] ⊑ Γ(w j ) (1 ≤ j ≤ m)
where Γ is an L M -representation of M . Now, as w 1 , . . . , w m ∈ M , we have
β 1 = ⋯ = β t and α 1 = ⋯ = α t .
Therefore, we have
g i = λ m,i (g 1 , . . . , g t ; Ψ(w 1 )(β 1 ), . . . , Ψ(w m )(β 1 )) (1 ≤ i ≤ t).
Then G is not nilpotent by Lemma 3.4 in contradiction with the initial assumption.
Schützenberger graphs
Let M be a finite A-generated semigroup. If X is an R-class of M , then the Schützenberger graph (with respect to A) of X, denoted Sch A (X), is the full subgraph of the right Cayley graph of M with set of vertices X. Dually, for an L-class Y , the left Schützenberger graph of Y , denoted Sch ρ A (Y ), is the full subgraph of the left Cayley graph of M with vertices Y . An A-graph Γ is inverse if and only if for every w ∈ (A ∪ A −1 ) ⋆ there is at most one run labeled w from any vertex q in the graph Γ (for more detail see [20]). It is clear that if M is a finite Mal'cev nilpotent semigroup, then Sch A (X) is inverse, for every regular R-class X, and Sch ρ A (Y ) is inverse, for every regular L-class Y .
Let X be an R-class of M and let L β,α,X = {w ∈ A + ∶ w labels a run from β to α}, for every α, β ∈ V (Sch A (X)). We define the following notions for the R-class X:
(1) X is (H-nilpotent) nilpotent in M , if there exist vertices α, α ′ , β, β ′ in V (Sch A (X)) such that α ≠ α ′ (α, α ′ are not in the same H-class) and L β,α,X ∩ L β ′ ,α ′ ,X ≠ ∅, then L β ′ ,α,X ∩ L β,α ′ ,X = ∅. (2) X is (H-strongly nilpotent) strongly nilpotent in M , if there exist vertices α 1 , . . . , α n , β 1 , . . . , β n in V (Sch A (X)) such that there exist integers 1 ≤ i, j ≤ n with α i ≠ α j (α i , α j are in distinct H-classes) and L β 1 ,α 1 ,X ∩ ⋯ ∩ L βn,αn,X ≠ ∅, then there exists an integer k such that L β 1 ,α 1+k (mod n) ,X ∩ ⋯ ∩ L βn,α n+k (mod n) ,X = ∅. The following proposition can be seen as a criterion to detect non Mal'cev nilpotent semigroups by the Schützenberger graphs of its regular R-classes. (1) the semigroup S is in the pseudovariety BG nil ;
(2) if G is a subgroup of S and g ∈ G with g ≠ 1 G , then g 2 ≠ 1 G . If there exists a regular R-class X of S such that the subset X is not nilpotent in S, then the semigroup S is not Mal'cev nilpotent.
Proof. Suppose that there exist a regular R-class X of S and vertices α, α ′ , β, and β ′ in V (Sch A (X)) such that α ≠ α ′ and L β,α,X ∩ L β ′ ,α ′ ,X , L β ′ ,α,X ∩ L β,α ′ ,X ≠ ∅. Let J be a J -class of X. There exist an integer n and a finite nilpotent group G such that J ∪ {θ} ≅ (M =)M 0 (G, n, n; I n ). Then, we have α = (g 1 ; a, b 1 ), α ′ = (g 2 ; a, b 2 ), β = (g 3 ; a, b 3 ), and β ′ = (g 4 ; a, b 4 ) for some integers 1 ≤ a, b 1 , b 2 , b 3 , b 4 ≤ n and elements g 1 , g 2 , g 3 , g 4 ∈ G. Therefore, there exist elements m 1 ,
m 2 ∈ S such that [b 3 , b 1 ; b 4 , b 2 ] ⊑ Γ(m 1 ) and [b 4 , b 1 ; b 3 , b 2 ] ⊑ Γ(m 2 ) when Γ is an L M -representation of J. If b 1 ≠ b 2 then,
by Lemma 3.6, the semigroup S is not Mal'cev nilpotent. Suppose that b 1 = b 2 . Hence, g 1 ≠ g 2 . There exist elements g m 1 , g m 2 such that g 3 g m 1 = g 1 , g 4 g m 1 = g 2 , g 4 g m 2 = g 1 and g 3 g m 2 = g 2 . It follows that g 3 g −1 4 = g 1 g −1 2 and g 3 g −1 4 = g 2 g −1 1 and, thus, (g 2 g −1 1 ) 2 = 1 G . A contradiction with the assumption.
The following propositions can be seen as criteria to detect non Mal'cev nilpotent semigroups and non strongly Mal'cev nilpotent semigroups that are obtained at once from Lemmas 3.6, 3.5 and 3.7. (1) if there exists a regular R-class X of S such that the subset X is not H-nilpotent in S, then the semigroup S is not Mal'cev nilpotent.
(2) if there exists a regular R-class X of S such that the subset X is not H-strongly nilpotent in S, then the semigroup S is not strongly Mal'cev nilpotent.
We recall the pseudovariety BI = {S ∈ S S is block group and all subgroups of S are trivial} where S is all finite semigroups. (2) the semigroup S is strongly Mal'cev nilpotent if and only if for every regular R-class X of S the subset X is strongly nilpotent in S.
An iterative description of SMN
Let
SMN ○ t = ⟦φ ω t (y 1 ) = ⋯ = φ ω t (y t )⟧
where φ t is the continuous endomorphism of the free profinite semigroup on {y 1 , . . . , y t , z 1 , . . . , z t } such that φ t (y i ) = λ t,i (y 1 , . . . , y t ; z 1 , . . . , z t ) and φ t (z i ) = z i , for all 1 ≤ i ≤ t.
Theorem 5.1. We have SMN = (⋂ 2≤t SMN ○ t ).
Proof. First, we prove that SMN ⊆ SMN ○ t , for every t ≥ 2. Suppose the contrary. Hence, there exist S ∈ SMN, elements y 1 , . . . , y t , z 1 , . . . , z t ∈ S and distinct integers i and j such that φ ω t (y i ) ≠ φ ω t (y j ). Therefore, we have 2 ≤ |{λ n,i (y 1 , . . . , y t ; z 1 , . . . , z t , z 1 , . . .) ∶ 1 ≤ i ≤ t}|, for every positive integer n, which is a contradiction with S ∈ SMN. Now, suppose that there exists a finite semigroup S such that S ∉ SMN and S ∈ ⋂ 2≤t SMN ○ t . If S ∉ MN then, by [5, Theorem 3.1], we have S ∉ SMN ○ 2 , a contradiction. Hence, S ∈ BG nil and S ∉ SMN. By Lemma 3.7, there exists an integer t such that S ∉ SMN ○ t , a contradiction. The result follows.
The following theorem shows that the pseudovariety SMN has infinite rank and, therefore, it is non-finitely based.
Theorem 5.2. The pseudovariety SMN has infinite rank.
Proof. We prove that for every prime number p, there exists a finite semigroup S such that S is generated by 2p elements, S ∉ SMN and ⟨x 1 , . . . , x 2p−1 ⟩ ∈ SMN, for all x 1 , . . . , x 2p−1 ∈ S.
Let the sets A p = {α 1 , . . . , α p } and B p = {β 1 , . . . , β p } with A p ∩ B p = ∅ and the partial bijections X p,i = (α i , β i , θ) and
W p,i = (β 1 , α 1+i (mod p) , θ) ⋯ (β p , α p+i (mod p) , θ) (1 ≤ i ≤ p)
on the set A p ∪B p ∪{θ}. Let S p be a subsemigroup of the full transformation semigroup on the set A p ∪ B p ∪ {θ} given by S p = ⟨X p,1 , . . . , X p,p , W p,1 , . . . , W p,p ⟩.
By Lemma 3.7, the semigroup S p is not in SMN.
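For small p, the generators X p,i , W p,i and the transformation semigroup S p they generate can be constructed explicitly; the sketch below uses our own encoding (points ('a', i) and ('b', i) for α i and β i , with an absorbing point 'theta') and closes the generating set under composition.

```python
def build_S_p(p):
    """Construct the partial bijections X_{p,i}, W_{p,i} and the semigroup they
    generate, as in the proof of Theorem 5.2.  Encodings and names are ours."""
    pts = [('a', i) for i in range(1, p + 1)] + [('b', i) for i in range(1, p + 1)]
    full = lambda f: {x: f.get(x, 'theta') for x in pts}            # total map, rest -> theta
    compose = lambda f, g: {x: g.get(f[x], 'theta') for x in pts}    # apply f first, then g

    # X_{p,i} sends alpha_i -> beta_i; W_{p,i} sends beta_j -> alpha_{j+i (mod p)}.
    X = [full({('a', i): ('b', i)}) for i in range(1, p + 1)]
    W = [full({('b', j): ('a', (j + i - 1) % p + 1) for j in range(1, p + 1)})
         for i in range(1, p + 1)]

    key = lambda f: tuple(sorted(f.items()))
    elems = {key(g): g for g in X + W}
    frontier = list(elems.values())
    while frontier:                                                  # closure under composition
        new = []
        for f in frontier:
            for g in list(elems.values()):
                for h in (compose(f, g), compose(g, f)):
                    if key(h) not in elems:
                        elems[key(h)] = h
                        new.append(h)
        frontier = new
    return X, W, list(elems.values())
```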
Suppose that a subsemigroup T = ⟨y 1 , . . . , y 2p−1 ⟩ of S p is not in SMN. Since S p = M 0 ({1}, 2p, 2p; I 2p ) ∪ {W p,1 , . . . , W p,p } and T ∉ SMN, by Lemma 3.7, there exist an integer p ′ ≤ p and elements
a i = (1; α j i , β j i ) ∈ M 0 ({1}, 2p, 2p; I 2p ), b i ∈ {W p,1 , . . . , W p,p } such that {α j 1 , . . . , α j p ′ } = {β j 1 , . . . , β j p ′ } = p ′ and a i = λ p ′ ,i (a 1 , . . . , a p ′ ; b 1 , . . . , b p ′ ),
for every 1 ≤ i ≤ p ′ . There exists an integer 1 ≤ k ≤ p such that b 1 = W p,k and, thus, we have j i + k = j i+1 (mod p), for every 1 ≤ i < p ′ and j p ′ + k = j 1 (mod p). First, suppose that p ′ is even. Then, we have j 1 + (p ′ 2)k = j 1+p ′ 2 , j 1 − (p ′ 2)k = j 1+p ′ 2 (mod p) and, thus 2j 1 = 2j 1+p ′ 2 (mod p). Since the integers j 1 and j 1+p ′ 2 are distinct, we have 2 p. As p is prime, it follows that p = p ′ = 2. Now, suppose that p ′ is odd. Hence, we have j i + (l)k = j i+l mod p ′ , j i − (l)k = j i−l mod p ′ (mod p), for every 1 ≤ i ≤ p ′ and 1 ≤ l ≤ (p ′ − 1) 2. It follows that j 1 + j 2 + ⋯ + j p ′ = p ′ j 1 = ⋯ = p ′ j p ′ . Hence, it follows that p ′ p and, thus, p = p ′ . Therefore, we have {W p,1 , . . . , W p,p } ⫋ {y 1 , . . . , y 2p−1 }. Hence, there exists an integer 1 ≤ i ≤ p such that (α i , z, θ) ∈ {y 1 , . . . , y 2p−1 }, for every z ∈ A p ∪ B p and, thus, (α i , β i , θ) ∈ M . Since {b 1 , . . . , b p } = {W p,1 , . . . , W p,p }, we have {a 1 , . . . , a p } = {(1; α i , β i ), . . . , (1; α p , β p )}, a contradiction.
The result follows.
The following proposition can be seen as criteria to determine when a semigroup S ∈ BI is not Mal'cev nilpotent or is not strongly Mal'cev nilpotent. Then, the elements v 1 and v 2 are not regular.
Proof. We prove this by contradiction. Suppose that v 2 is regular. Let k = t gcd(t, i). Since v 2 is regular and S ∈ BI, v 2 has an inverse element. Hence, we have
(α 1 , α 1+i (mod t) , α 1+2i (mod t) , . . . , α 1−i (mod t) ) ⊆ Γ((v −1 2 v 1 ) k−1 )
. Since i < t, we have 1 < k and, thus, S is not aperiodic. This contradicts the assumption that S ∈ BI.
Similarly, we have a contradiction when v 1 is regular.
The following example is presented to illustrate the determination of some strongly Mal'cev nilpotent semigroups by Proposition 5.3.
The semigroup S is strongly Mal'cev nilpotent.
Proof. The semigroup S is aperiodic and S has the principal series (1) there exist distinct elements a 1 , a 2 ∈ S and distinct elements w 1 , w 2 ∈ {z 1 , z 2 , z 3 } such that a i = λ 2,i (a 1 , a 2 ; w 1 , w 2 ), for all 1 ≤ i ≤ 2;
S = S 1 ⊃ S 2 ⊃ S 3 ⊃ S 4 ⊃ S 5 ⊃ S 6 = {θ}
(2) there exist pairwise distinct elements a 1 , a 2 , a 3 ∈ S and pairwise distinct elements w 1 , w 2 , w 3 ∈ {z 1 , z 2 , z 3 } such that a i = λ 3,i (a 1 , a 2 , a 3 ; w 1 , w 2 , w 3 ),
for all 1 ≤ i ≤ 3.
By using a Mathematica package developed by the first author, based on Proposition 4.3, one can check that S ∈ MN, and it follows that the part (1) does not imply. Since {i 1 ≤ i ≤ 18 and Γ 1 (z 2 )(i) ≠ θ} = 3, Γ 1 (z 2 )(2) ≠ 0, (2, 7, 0) ⊆ Γ(z 1 ) and there does not exist any integer i such that Γ 1 (z 2 )(i) = 7, we have a 1 , a 2 , a 3 ∈ M 1 ∖ {θ} and, thus a 1 , a 2 , a 3 ∈ M 2 ∖ {θ}. Now, as Γ 2 (z 2 ) = θ, the part (2) does not imply. A contradiction and, thus, S ∈ SMN.
We have ⟨A ∩ Inv⟩ ⫋ A ∩ MN ([5, Theorem 8.1]). The following theorem presents the similar result for strong Mal'cev nilpotency.
Theorem 5.5. We have ⟨A ∩ Inv⟩ ⫋ A ∩ SMN.
Proof. Suppose that S ∈ (A ∩ Inv) ∖ (A ∩ SMN). Since S ∈ BI and S ∉ SMN, by Lemma 3.7 and Proposition 5.3, S is not inverse, a contradiction. Hence, A ∩ Inv is contained in SMN and, thus, ⟨A ∩ Inv⟩ ⊆ A ∩ SMN.
By Lemma 3.7, the semigroup N 4 in [5] is in the subset A∩SMN∖⟨A∩Inv⟩. Therefore, ⟨A ∩ Inv⟩ is strictly contained in A ∩ SMN.
Note that, we can improve the result of Theorem 5.5 and claim that
⟨A ∩ Inv⟩ ⫋ ⟨Inv⟩ ∩ A ∩ SMN.
Before we present an example to show that ⟨A ∩ Inv⟩ is strictly contained in ⟨Inv⟩ ∩ A ∩ SMN, we recall some definitions from [9]. Consider the sets X n = where w = (1, 5, θ)(2, 6, θ) (3, 4, θ) and v = (1, 4, θ)(2, 5, θ)(3, 6, θ). Thanks to Lemma 3.7, S is strongly Mal'cev nilpotent. Also, since the idempotents of S commute, we have S ∈ ⟨Inv⟩ [6]. We have S = S(U ) when U is the semigroup generated by the elements w ′ = (1, 2, 3) and v ′ = (1)(2)(3). Now, if S ≺ I, for some finite inverse semigroup I, then U ≺ I. Since U is not aperiodic, I is not aperiodic and, thus, S ∉ ⟨A ∩ Inv⟩.
The following propositions can be seen as criteria to determine when S(U ) is not Mal'cev nilpotent or is not strongly Mal'cev nilpotent.
Proposition 5.6. If there exist integers i_1, i_2 and a bijection g ∈ {b_1, b_2, . . . , b_k} such that (i_1, i_2) ⊆ g, then the semigroup S(U) is not Mal'cev nilpotent.

Proof. We have (i_1, i′_2, θ)(i_2, i′_1, θ) ⊆ g′. Let x_1 = (i′_1, i_2, θ) and x_2 = (i′_2, i_1, θ). Now, we have x_i = λ_{2,i}(x_1, x_1; a′, g′) for every 1 ≤ i ≤ 2.

Proof (of Proposition 5.7). We have (i_1, i′_{r+1 (mod m)}, θ) ⋯ (i_m, i′_{r+m (mod m)}, θ) ⊆ (g^r)′ for every integer 1 ≤ r ≤ m − 1. Let x_j = (i′_j, i_{j+1}, θ) for every 1 ≤ j ≤ m − 1 and x_m = (i′_m, i_1, θ). Now, we have x_j = λ_{m,j}(x_1, . . . , x_t; a′, g′, . . . , (g^{m−1})′) for every 1 ≤ j ≤ m. Then, by Lemma 3.9, the semigroup S(U) is not strongly Mal'cev nilpotent.
Bases of κ-identities within BG nil
Let S be a semigroup. We define Property P 2 for S as follows:
if y_1 and y_2 are in a J-class of S and there exist elements z_1, z_2 ∈ S such that y_i z_j y_{i+j (mod 2)} is in the J-class of y_1 and y_2, for all 1 ≤ i, j ≤ 2, then y_1 H y_2.

Proof. Suppose that S does not satisfy Property P_2. Then, there exist elements y_1, y_2, z_1, z_2 and a J-class J of S such that y_1, y_2, y_i z_j y_{i+j (mod 2)} ∈ J, for all 1 ≤ i, j ≤ 2, and y_1 and y_2 are not in the same H-class. Since y_1 z_1 y_2 ∈ J, we have y_1 z_1 ∈ J and, thus, J is a regular J-class. As S ∈ BG_nil, there exist ideals A, B of S and an inverse Rees matrix semigroup M = M^0(G, n, n; I_n) such that B ⫋ A, A/B ≅ M and J = M ∖ {θ}. Therefore, there exist elements (g; α, β), (g′; α′, β′) ∈ M such that y_1 = (g; α, β) and y_2 = (g′; α′, β′). Since y_1 and y_2 are in different H-classes, we have α ≠ α′ or β ≠ β′. As y_i z_j y_{i+j (mod 2)} ∈ J, for all 1 ≤ i, j ≤ 2, we have z_1, z_2 ∈ S ∖ B, [β, α′; β′, α] ⊑ Γ(z_1) and [β′, α′; β, α] ⊑ Γ(z_2), where Γ is an L_M-representation of S/B. Lemma 3.6 entails that S ∉ MN.

Similarly, if S ∉ MN, then, by Lemma 3.6, S does not satisfy Property P_2.
We recall the canonical signature κ, which consists of the basic multiplication operation and the unary operation x^{ω−1} (for more details see [2]). Let S be a semigroup. We define the κ-term ∆(y_1, y_2; z_1, z_2) = ((y_1 z_2)^{ω−1} y_1 z_1 (y_2 z_2)^{ω−1} y_2 z_1)^ω (y_1 z_2)^ω, for y_1, y_2, z_1, z_2 ∈ S.

Theorem 6.2. Let MN⋆ = ⟦∆(y_1, y_2; z_1, z_2) = ∆(y_2, y_1; z_1, z_2)⟧. We have
$$\mathrm{MN} = \mathrm{MN}^{\star} \cap \mathrm{BG}_{\mathrm{nil}}.$$
Proof. First, we prove that MN ⊆ MN ⋆ ∩ BG nil . Suppose the contrary. Since MN ⫋ BG nil , there exist S ∈ MN and elements y 1 , y 2 , z 1 , z 2 ∈ S such that (∆ 1 =)∆(y 1 , y 2 ; z 1 , z 2 ) ≠ ∆(y 2 , y 1 ; z 1 , z 2 )(= ∆ 2 ).
Let y ′ 1 = (r 1 z 1 r 2 z 1 ) ω r 1 and y ′ 2 = (r 2 z 1 r 1 z 1 ) ω r 2 where r 1 = (y 1 z 2 ) ω−1 y 1 and r 2 = (y 2 z 2 ) ω−1 y 2 . Since,
y ′ 1 = (r 1 z 1 r 2 z 1 ) ω (r 1 z 1 r 2 z 1 ) ω r 1 , there exist elements a and b in S 1 such that y ′ 1 = ay ′ 2 b. Similarly, there exist elements a ′ , b ′ ∈ S 1 such that y ′ 2 = a ′ y ′ 1 b ′ . It follows that y ′ 1 J y ′ 2 .
Note that we have
r 1 = (y 1 z 2 ) ω−1 y 1 = (y 1 z 2 ) ω (y 1 z 2 ) ω−1 y 1 = (y 1 z 2 ) ω−1 y 1 z 2 (y 1 z 2 ) ω−1 y 1 = r 1 z 2 r 1 .
Similarly, we have r_2 = r_2 z_2 r_2. Since
$$y'_1 = (r_1 z_1 r_2 z_1)^ω (r_1 z_1 r_2 z_1)^ω r_1 = (r_1 z_1 r_2 z_1)^ω r_1 z_1 (r_2 z_1 r_1 z_1)^ω r_2 z_1 (r_1 z_1 r_2 z_1)^{ω−1} r_1 = y'_1 z_1 y'_2 z_1 (r_1 z_1 r_2 z_1)^{ω−1} r_1$$
and
$$y'_1 = (r_1 z_1 r_2 z_1)^ω (r_1 z_1 r_2 z_1)^ω r_1 = (r_1 z_1 r_2 z_1)^ω r_1 z_1 r_2 z_1 (r_1 z_1 r_2 z_1)^{ω−1} r_1 = (r_1 z_1 r_2 z_1)^ω r_1 z_2 r_1 z_1 r_2 z_1 (r_1 z_1 r_2 z_1)^{ω−1} r_1 = (r_1 z_1 r_2 z_1)^ω r_1 z_2 (r_1 z_1 r_2 z_1)^ω r_1 = y'_1 z_2 y'_1,$$
the elements y'_1 z_1 y'_2 and y'_1 z_2 y'_1 are in the J-class of y'_1 and y'_2. Similarly, the elements y'_2 z_1 y'_1 and y'_2 z_2 y'_2 are in the J-class of y'_1 and y'_2. We supposed that S is Mal'cev nilpotent. Hence, by Lemma 6.1, S satisfies Property P_2. It follows that y'_1 and y'_2 are in the same H-class. Since r_1 = r_1 z_2 r_1 and r_2 = r_2 z_2 r_2, ∆_1 and ∆_2 are in the J-class of y'_1 and y'_2. As S ∈ BG, y'_1, y'_2 are in the same H-class, ∆_1 = y'_1 z_2 and ∆_2 = y'_2 z_2, so ∆_1 and ∆_2 are in the same H-class too. The elements ∆_1 and ∆_2 are idempotents. It follows that ∆_1 = ∆_2, a contradiction.

Now, suppose there exists a finite semigroup S such that S ∈ MN⋆ ∩ BG_nil and S ∉ MN. Lemma 6.1 yields that S does not satisfy Property P_2 and, thus, S ∉ MN⋆. A contradiction.
The result follows.
Comparison with J m G nil
In this section, we compare the pseudovarieties MN, SMN and J m G nil where J is the pseudovariety of all finite J-trivial monoids.
Let A be a finite set, F (A) be the free group on A and H be a finitely generated subgroup of F (A). In the seminal paper [19], Stallings associated to H an inverse automaton A(H) which can be used to solve a number of algorithmic problems concerning H including the membership problem. Stallings, in fact, used a different language than that of inverse automata; the automata theoretic formulation is from [14]. Letà = A ∪ A −1 where A −1 is a set of formal inverses of the elements of A. An inverse automaton A over A is anÃ-automaton with the property that there is at most one edge labeled by each letter leaving each vertex and if there is an edge p → q labeled by a, then there is an edge q → p labeled by a −1 . Moreover, we require that there is a unique initial vertex, which is also the unique terminal vertex. The set of all reduced words accepted by a finite inverse automaton is a finitely generated subgroup of F (A) called the fundamental group of the automaton.
TheÃ-automaton A(H) (the Stallings automaton associated with H) is the unique finite connected inverse automaton whose fundamental group is H with the property that all vertices have out-degree at least 2 except possibly the initial vertex (where we recall that there are both A and A −1edges). One description of A(H) is as follows. Take the inverse automaton A ′ (H) with vertex set the coset space F (A) H and with edges of the form Hg a → Hga for a ∈Ã; the initial and terminal vertices are both H. Then A(H) is the subautomaton whose vertices are cosets Hu with u a reduced word that is a prefix of the reduced form of some element w of H and with all edges between such vertices; the coset H is still both initial and final. Stallings presented an efficient algorithm to compute A(H) from any finite generating set of H via a procedure known as folding. From the construction, it is apparent that there is an automaton morphism A(H 1 ) → A(H 2 ) if and only if H 1 ⊆ H 2 for finitely generated subgroups H 1 and H 2 . Also, it is known that H has finite index if and only if A(H) = A ′ (H). Stallings also provided an algorithm to compute A(H 1 ∩H 2 ) from A(H 1 ) and A(H 2 ) (note that intersections of finitely generated subgroups of free groups are finitely generated by Howson's theorem).
Conversely, if A = (Q, A, δ, i, i) is a reduced inverse automaton, one can effectively construct a basis of a finitely generated subgroup H of F (A) such that A = A(H). First we compute a spanning tree T of the graph A. For each state q of A, there is a unique shortest path from i to q within T : we let u q be the label (inà ⋆ ) of this path. Let p j a j → q j (1 ≤ j ≤ k) be the A-labeled edges of A which are not in T . For each j, let y j = u p j a j u −1 q j ∈à ⋆ , and let H = ⟨y 1 , . . . , y k ⟩. Then {y 1 , . . . , y k } is a basis for H and A = A(H). See [19,14,20] for details.
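The spanning-tree procedure just described is entirely constructive. The sketch below is our own illustrative Python (data layout and helper names are ours, not code from the paper); it assumes the automaton is connected and represents a word as a list of (letter, exponent) pairs.

```python
from collections import deque

def stallings_basis(edges, start):
    """Basis of the fundamental group of a reduced inverse automaton.
    `edges` lists the A-labelled edges (p, a, q); inverse edges q --a^{-1}--> p are implicit."""
    adj = {}
    for p, a, q in edges:
        adj.setdefault(p, []).append((a, +1, q))
        adj.setdefault(q, []).append((a, -1, p))

    # Breadth-first spanning tree rooted at the initial state.
    parent = {start: None}
    order = deque([start])
    tree = set()                      # A-labelled edges used by the tree
    while order:
        p = order.popleft()
        for a, e, q in adj.get(p, []):
            if q not in parent:
                parent[q] = (p, a, e)
                tree.add((p, a, q) if e == +1 else (q, a, p))
                order.append(q)

    def u(q):                         # label of the tree path start -> q
        word = []
        while parent[q] is not None:
            p, a, e = parent[q]
            word.append((a, e))
            q = p
        return word[::-1]

    inv = lambda w: [(a, -e) for a, e in reversed(w)]

    # One free generator y_j = u_p a u_q^{-1} per A-labelled edge outside the tree.
    return [u(p) + [(a, +1)] + inv(u(q)) for p, a, q in edges if (p, a, q) not in tree]
```

For instance, `stallings_basis([(1, "a", 2), (2, "b", 1)], start=1)` returns the single generator corresponding to the loop ab read from the initial state.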
Margolis, Sapir and Weil presented a procedure to compute the Stallings automaton of the p-closure of a finitely generated subgroup of a free group (which is again finitely generated), for every prime integer p. To compute the p-closure of H, we compute a finite sequence of quotients of A(H),
A(H 0,p ) = A(H) ∼ 0 , . . . , A(H n,p ) = A(H) ∼ n , such that each H i,p is p-closed, the automaton congruence ∼ i+1 is contained in ∼ i (that is H i+1,p ⊆ H i,p )
, and H n,p is the p-closure of H. They let ∼ 0 be the universal, one-class congruence, so that H 0,p is a free factor of F (A). Let 0 ≤ i. After i iterations of the algorithm, we have computed the quotient A(H i,p ) = A(H) ∼ i . Roughly speaking, for the (i + 1)st iteration of the algorithm, they translate H into a basis of H i,p and they ask whether H is p-dense in H i,p . If it is, H i,p is the closure of H; if not, we compute the (Z pZ)-closure of H in H i,p , or rather a free factor H i+1,p of that closure which contains H. Formally, they present the following process:
(1) Computing a basis of H i,p . First we compute a basis for H i,p . Let A i be a set in bijection with that basis. We let
κ i ∶ F (A i ) → H i,p ⊆ F (A)
be the natural one-to-one morphism onto H i,p . We denote by σ i the natural morphism
$$\sigma_i \colon F(A_i) \to (\mathbb{Z}/p\mathbb{Z})^{A_i}. \qquad (2)$$
σ_i κ_i^{-1}(h_1), . . . , σ_i κ_i^{-1}(h_r). The subgroup κ_i^{-1}(H) is p-dense in F(A_i) if and only if M_p(κ_i^{-1}(H)) has rank |A_i|. Then we calculate the rank of the matrix to decide whether κ_i^{-1}(H) is p-dense in F(A_i), and to compute a basis of σ_i κ_i^{-1}(H) if it is not p-dense.
(4) Stop if H is p-dense in H_{i,p}. If H is p-dense in H_{i,p}, then H_{i,p} is the p-closure of H and the algorithm stops. Otherwise, suppose that κ_i^{-1}(H) is not p-dense in F(A_i). The subset σ_i^{-1} σ_i κ_i^{-1}(H) is the (Z/pZ)-closure of κ_i^{-1}(H) in F(A_i) and it is properly contained in F(A_i). Since κ_i is a homomorphism from F(A_i) onto H_{i,p}, the subgroup K = κ_i σ_i^{-1} σ_i κ_i^{-1}(H) is the (Z/pZ)-closure of H in H_{i,p} and K ≠ H_{i,p}.
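The p-denseness test in step (3) reduces to linear algebra over Z/pZ: each generator of κ_i^{-1}(H) contributes the vector of its exponent sums modulo p, and one checks whether these vectors span (Z/pZ)^{A_i}. A minimal sketch (our own illustration, not code accompanying the paper):

```python
import numpy as np

def rank_mod_p(rows, p):
    """Rank over Z/pZ by Gaussian elimination; `rows` are integer exponent-sum vectors."""
    m = np.array(rows, dtype=np.int64) % p
    rank, col = 0, 0
    n_rows, n_cols = m.shape
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if m[r, col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[[rank, pivot]] = m[[pivot, rank]]                 # swap pivot row into place
        m[rank] = (m[rank] * pow(int(m[rank, col]), -1, p)) % p
        for r in range(n_rows):
            if r != rank:
                m[r] = (m[r] - m[r, col] * m[rank]) % p
        rank += 1
        col += 1
    return rank

def is_p_dense(exponent_vectors, p, n_generators):
    """Step (3): the pulled-back subgroup is p-dense in F(A_i) iff M_p has full rank |A_i|."""
    return rank_mod_p(exponent_vectors, p) == n_generators
```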
We define the automaton congruence ∼ i+1 on A(H) to be ∼ H,K , the congruence induced by the containment of H into K. In particular, the subgroup H i+1,p such that A(H i+1,p ) = A(H) ∼ i+1 is a free factor of K, and hence H i+1,p is p-closed. Moreover, we have H ⊆ H i+1,p ⊆ K ⫋ H i,p , and hence H i+1,p is properly contained in H i,p and ∼ i+1 is properly contained in ∼ i . The automaton congruence ∼ i+1 is computed as follows. If r and s are states of A(H), we have r ∼ i+1 s if and only if u r u −1 s ∈ K, that is, if and only if u r u −1
s ∈ H i,p and σ i κ −1 i (u r u −1 s ) ∈ σ i κ −1 i (H).
To verify whether u r u −1 s ∈ H i,p , and to compute in that case κ −1 i (u r u −1 s ), we run the reduced word obtained from u r u −1 s in the automaton A(H i,p ) starting at 1, we note down the edges traversed that are not in the chosen spanning tree of that automaton (as in Step 2), and we require that this path ends in 1. Then (1) Sch A (X) is an H-extendible inverse A-graph, for each regular Rclass X;
σ i κ −1 i (u r u −1 s ) is(2) Sch ρ A (Y )
is an H-extendible inverse A-graph, for each regular L-class Y. Let A = {a, b} and l be a positive integer. We define Ã-automata A_l, with l + 1 states, and B_l and C_l, each with l states, by the diagrams in Figure 1. Margolis, Sapir and Weil proved that A_6 is not G_nil-extendible. We extend their result using a similar technique through the following lemma and theorem.

Proof. We take as a spanning tree of A(H) the path from vertex β_1 to β_n, labeled a for every edge. Then, we have H = ⟨ab^{-1}, a^2 b^{-1} a^{-1}, . . . , a^{n-1} b^{-1} a^{-(n-2)}, a^n⟩, and we need to compute the rank of the matrix $\begin{pmatrix} 1 & -1 \\ n & 0 \end{pmatrix}$. Since n = p_1^{n_1} ⋯ p_m^{n_m}, for every prime p with p ∈ {p_1, . . . , p_m}, this matrix has rank 2 and, thus, H is p-dense in F({a, b}).
[Figure 1. Diagrams of the automata A_l (with l + 1 states), B_l, and C_l (each with l states); consecutive states are joined by a- and b-labeled edges.]
Suppose that p ∈ {p 1 , . . . , p m }. Since σ 0 (H) is generated by a − b, we have β i ∼ 1 β i+p ∼ 1 β i+2p ∼ 1 . . ., for every integer 1 ≤ i ≤ p and there is no relation ∼ 1 between any vertices β i and β i+k 1 p+k 2 , for every integer 1 ≤ i ≤ p, 0 ≤ k 1 and 1 ≤ k 2 ≤ p − 1 with i + k 1 p + k 2 ≤ n.
If n is a prime number, then A(H 1,p ) = B p and, thus H is p-closed. Hence, we have Cl nil (H) = H. Now, suppose that n is not prime. First, we assume that 1 < m. There is no relation ∼ 1 between any vertices β i and β j with 1 ≤ i, j ≤ p and i ≠ j. Since 1 < m, there are edges between β i and β i+1 for 1 ≤ i ≤ p − 1 labeled a and b. Now, as β i ∼ 1 β i+k 1 p , for every integer 1 ≤ i ≤ p and 0 ≤ k 1 , we have which r np = n p np . Then, we need to compute the rank of the matrix Proof. First, we prove that the automaton A n is not G nil -extendible.
$$\begin{pmatrix}
1 & 0 & \ldots & 0 & 0 & 0 \\
0 & 1 & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \ldots & 1 & 0 & 0 \\
0 & 0 & \ldots & 0 & -1 & 1 \\
1 & 0 & \ldots & 0 & 0 & 0 \\
\vdots & & \ldots & \vdots & \vdots & \vdots \\
0 & 0 & \ldots & 0 & -1 & 1 \\
0 & 0 & \ldots & 0 & 0 & r_{n_p}
\end{pmatrix}.$$
There exist finitely generated subgroups H, H ′ and H ′′ of F ({a, b}) such that A n = A(H), B n = A(H ′ ), and C n = A(H ′′ ). Since M (C n ) is a cyclic group, H ′′ is a normal subgroup of F ({a, b}) and, thus, bH ′′ b −1 = H ′′ . As conjugation by b is a homomorphism, we have Cl nil (H) = Cl nil (bH ′ b −1 ) = bCl nil (H ′ )b −1 . By Lemma 7.1, it follows that Cl nil (H ′ ) = H ′′ . Now, as bH ′′ b −1 = H ′′ , we have Cl nil (H) = H ′′ . If A n is G nil -extendible, then A n embeds in C n . This yields a contradiction.
If the automaton A is G nil -extendible, then there is a complete automaton D such that A ⊆ D and M (D) ∈ G nil . Hence, there is a complete automaton D ′ such that A n ⊆ D ′ and M (D ′ ) ∈ G nil which is a contradiction.
The result follows.

Proof. If n is an even integer, then we have n = 2n′ for some positive integer n′. It follows that (1; 1, n′ + 1) = λ_2((1; 1, n′ + 1), (1; n′ + 1, 1), a^n, a^{n′}) and (1; n′ + 1, 1) = ρ_2((1; 1, n′ + 1), (1; n′ + 1, 1), a^n, a^{n′}). Therefore, we have N ∉ MN. Now, we suppose that n is odd. We prove N ∈ MN by contradiction. If N ∉ MN, then, by Lemma 2.1, there exist a positive integer m, distinct elements x, y ∈ N and elements w_1, w_2, . . . , w_m ∈ N such that x = λ_m(x, y, w_1, w_2, . . . , w_m) and y = ρ_m(x, y, w_1, w_2, . . . , w_m). Since N is the subsemigroup of the full transformation semigroup on the set {1, . . . , n + 1} ∪ {θ}, there exist integers 1 ≤ e_1, . . . , e_{n_1} ≤ n + 1 and 1 ≤ f_1, . . . , f_{n_2} ≤ n + 1 such that |{e_1, . . . , e_{n_1}}| = n_1, |{f_1, . . . , f_{n_2}}| = n_2, Γ(x)(e_i), Γ(y)(f_j) ≠ θ, for every 1 ≤ i ≤ n_1 and 1 ≤ j ≤ n_2, and Γ(x)(e) = Γ(y)(f) = θ, for every e ∉ {e_1, . . . , e_{n_1}} and f ∉ {f_1, . . . , f_{n_2}}, where Γ is an L_{M^0({1}, n+1, n+1; I_{n+1})}-representation of N. It is easy to verify that x and y are in a regular J-class. Hence, we have n_1 = n_2.
Since 1 = (1)(2) ⋯ (n + 1), a = (1, 2, . . . , n) and b = (n + 1, 1, 2, . . . , n, θ), the elements 1 and w are not in the same J-class for every w ∈ ⟨a, b⟩. Hence, the J-class of 1 has only one element and, thus x, y ≠ 1.
First, suppose that x, y ∈ ⟨a, b⟩. Thus, we have w 1 , . . . , w m ∈ ⟨a, b⟩ 1 and there exist letters c 1 , . . . , c m 1 , d 1 , . . . , d m 2 , w 1,1 , . . . , w 1,l 1 , . . . , w m,1 , . . . , w m,lm ∈ {a, b} such that x = c 1 ⋯ c m 1 , y = d 1 ⋯ d m 2 and w i = w i,1 ⋯ w i,l i (if w i = 1, we put l i = 0), for all 1 ≤ i ≤ m. Since x, y ≠ 0, there exist integers 1 ≤ i, j ≤ n + 1 and 1 ≤ i ′ , j ′ ≤ n such that Γ(x)(i) = i ′ and Γ(y)(j) = j ′ . As x = λ m (x, y, w 1 , w 2 , . . . , w m ), if i ≠ n + 1, then we have m 1 = 2 m−1 (m 1 + m 2 + l 1 ) + 2 m−2 l 2 + . . . + 2 0 l m = i ′ − i (mod n); otherwise, we have m 1 = 2 m−1 (m 1 + m 2 + l 1 ) + 2 m−2 l 2 + . . . + 2 0 l m = i ′ (mod n).
Similarly, as y = ρ m (x, y, w 1 , w 2 , . . . , w m ), if j ≠ n + 1, then we have m 2 = 2 m−1 (m 2 + m 1 + l 1 ) + 2 m−2 l 2 + . . . + 2 0 l m = j ′ − j (mod n); otherwise, we have m 2 = 2 m−1 (m 2 + m 1 + l 1 ) + 2 m−2 l 2 + . . . + 2 0 l m = j ′ (mod n).
Therefore we have m 1 = m 2 (mod n).
Again, as x = λ m (x, y, w 1 , w 2 , . . . , w m ), if 1 ≤ i ≤ n 1 , then Γ(y)(Γ(xw 1 )(e i )) ≠ 0 and, thus, Γ(xw 1 )(e i ) ∈ {f 1 , . . . , f n 1 }. Similarly, as y = ρ m (x, y, w 1 , . . . , w m ), if 1 ≤ j ≤ n 1 , then Γ(x)(Γ(yw 1 )(f j )) ≠ 0 and, thus, Γ(yw 1 )(f j ) ∈ {e 1 , . . . , e n 1 }. Now, since x ≠ y and m 1 = m 2 (mod n), there exist subsets {i 1 , . . . , i n ′ }, {j 1 , . . . , j n ′ } ⊆ {1, . . . , n 1 } such that e i t+1 − e it = 2(m 1 + l 1 ) (mod n), f j t+1 − f jt = 2(m 1 + l 1 ) (mod n), f jt − e it = (m 1 + l 1 ) (mod n), for every 1 ≤ t < n ′ , e i 1 − e i n ′ = 2(m 1 + l 1 ) (mod n), f j 1 − f j n ′ = 2(m 1 + l 1 ) (mod n), f j n ′ − e i n ′ = (m 1 + l 1 ) (mod n) and {i 1 , . . . , i n ′ } ≠ {j 1 , . . . , j n ′ }.
Since n is odd, there exist integers r and s such that 2r+ns = 1. Hence, we have 2r(m 1 +l 1 ) = (m 1 +l 1 ) (mod n). Therefore, {i 1 , . . . , i n ′ }∩{j 1 , . . . , j n ′ } ≠ ∅ and thus {i 1 , . . . , i n ′ } = {j 1 , . . . , j n ′ }, a contradiction. Now, suppose that x, y ∈ M 0 ({1}, n + 1, n + 1; I n+1 ) ∖ ⟨a, b⟩. It follows that x = (1; α 1 , β 1 ) and y = (1; α 2 , β 2 ), for some integers 1 ≤ α, β ≤ n + 1. Thus, we have [β 1 , α 2 ; β 2 , α 1 ] ⊑ Γ(w 1 ) and [β 1 , α 1 ; β 2 , α 2 ] ⊑ Γ(w 2 ). It is easy to verify that w 1 , w 2 ∈ ⟨a, b⟩ 1 . Hence, α 2 − β 1 = α 1 − β 2 (mod n) and α 1 − β 1 = α 2 − β 2 (mod n). It follows that 2(α 2 − α 1 ) = 0 (mod n). Since [β 1 , α 2 ; β 2 , α 1 ] ⊑ Γ(w 1 ), we have α 1 , α 2 ≠ n + 1. As n is odd, it follows that α 1 = α 2 . Now again, as [β 1 , α 2 ; β 2 , α 1 ] ⊑ Γ(w 1 ), we have β 1 = β 2 and, thus, x = y. A contradiction.
The result follows.
Now, we present semigroups M_i such that M_i ∈ BG_nil, for all 1 ≤ i ≤ 3, and the following conditions are satisfied:
(1) M 1 ∈ J m G nil and M 1 ∈ SMN;
(2) M 2 ∈ J m G nil and M 2 ∈ (MN ∖ SMN);
(3) M_3 ∈ J_m G_nil and M_3 ∉ MN.
≠ y and S ∈ BG nil , by (3.1), there exists a regular J -class M = M 0 (G, n, n; I n ) ∖ {θ} of S such that x, y ∈ M . Then, there exist elements (g
Lemma 3.4. Let n ≥ 1. Then the following conditions are equivalent for a group G:
Lemma 3.6. Let S ∈ BG_nil. The semigroup S is not Mal'cev nilpotent if and only if there exist ideals A, B of S, an inverse Rees matrix semigroup M = M^0(G, q, q; I_q), and elements x, y, w, v such that the following conditions are satisfied:
Lemma 3.9. A finite semigroup S is not strongly Mal'cev nilpotent if and only if there exist positive integers t > 1, m, pairwise distinct elements a 1 , . . . , a t in S and elements w 1 , w 2 , . . . , w m in S such that
The finite Rees matrix semigroup M = M^0(G, n, m; P) is Mal'cev nilpotent if and only if G is nilpotent and M is inverse [10, Lemma 2.1]. Now, by Lemma 3.9, we can present a similar result for the strong Mal'cev nilpotency.
Lemma 3.10. The finite Rees matrix semigroup M = M^0(G, n, m; P) is strongly Mal'cev nilpotent if and only if G is nilpotent and M is inverse.
Proposition 4.1. Let S be an A-generated finite semigroup with the following conditions:
Proposition 4.2. Let S be an A-generated semigroup in the pseudovariety BG_nil. The following conditions hold:
Proposition 4.3. Let S be an A-generated semigroup in the pseudovariety BI. The following conditions hold:
(1) the semigroup S is Mal'cev nilpotent if and only if for every regular R-class X of S the subset X is nilpotent in S.
Proposition 5.3. Let S be a semigroup in the pseudovariety BI. Suppose that there exist ideals A, B of S, an inverse Rees matrix semigroup M = M^0({1}, q, q; I_q), an integer t and elements y_1, . . . , y_t, v_1, v_2 such that the following conditions are satisfied: (1) B ⫋ A and A/B ≅ M; (2) 1 < t; (3) y_i = (1; α_i, β_i) ∈ M, for every 1 ≤ i ≤ t, and |{α_1, . . . , α_t}| = t; (4) [β_1, α_1; . . . ; β_t, α_t] ⊑ Γ(v_1) and [β_1, α_{1+i (mod t)}; . . . ; β_t, α_{t+i (mod t)}] ⊑ Γ(v_2), for some integer 1 ≤ i < t, where Γ is an L_M-representation of S/B.
Example 5.4. Let S be the subsemigroup of the full transformation semigroup on the set {1, . . . , 18} ∪ {θ} such that S = ⟨y_1, y_2, y_3, z_1, z_2, z_3⟩,
where S_5/S_6 = (M_1 =) M^0({1}, 18, 18; I_{18}), S_4/S_5 = (M_2 =) M^0({1}, 6, 6; I_6), S_3 ∖ S_4 = {z_1}, S_2 ∖ S_3 = {z_2} and S_1 ∖ S_2 = {z_3}. Hence, only the elements z_1, z_2 and z_3 are non-regular elements of S. Let Γ_i be an M_i-representation of M_i, for 1 ≤ i ≤ 2. We also have Γ_2(z_2) = θ. Since S ∈ BI, if S ∉ SMN, by Lemma 3.7 and Proposition 5.3, one of the following conditions holds:
X_n = {1, . . . , n}, X′_n = {1′, . . . , n′}, X_{2n} = {1, . . . , n, 1′, . . . , n′} and the Rees matrix semigroup M = M^0({1}, 2n, 2n; I_{2n}). Take the action Γ of the symmetric inverse semigroup I_{2n} on the L-classes of M. Let b_1, b_2, . . . , b_k be bijections on the set X_n, and let U be the semigroup generated by the bijections b_i (1 ≤ i ≤ k). Higgins and Margolis introduced the subsemigroup S(U) of I_{2n} as follows. For each 1 ≤ i ≤ k, let b′_i be the map with dom b′_i = dom b_i and ran b′_i ⊆ (ran b_i)′ which acts as follows: Γ(b′_i)(α) = (Γ(b_i)(α))′ for every α ∈ dom b_i. Similarly, let a′ = (1, 1′, θ)(2, 2′, θ) . . . (n, n′, θ). Finally, let S(U) be the semigroup generated by the mappings b′_i (1 ≤ i ≤ k), together with a′ and M. They prove that if S(U) is a divisor of some finite inverse semigroup I, then U divides I also [9, Theorem 3.2]. Now, we present our candidate to show that ⟨A ∩ Inv⟩ ≠ ⟨Inv⟩ ∩ A ∩ SMN. Let S be the subsemigroup of the full transformation semigroup on the set {1, . . . , 6} ∪ {θ} given by the union of the completely 0-simple semigroup M^0({1}, 6, 6; I_6) and the set {w, v},
S = M^0({1}, 6, 6; I_6) ∪ {w, v},
Then by Lemma 2.1, the semigroup S(U) is not Mal'cev nilpotent.

Proposition 5.7. If there exist integers i_1, . . . , i_m with m > 1 and a bijection g ∈ {b_1, b_2, . . . , b_k} such that (i_1, . . . , i_m) ⊆ g and g^2, . . . , g^{m−1} ∈ {b_1, b_2, . . . , b_k}, then the semigroup S(U) is not strongly Mal'cev nilpotent.
Lemma 6.1. Let S ∈ BG_nil. The semigroup S is MN if and only if S satisfies Property P_2.
For another subgroup K of F (A), if H ⊆ K, the automaton congruence ∼ H,K on A(H) is defined by the morphism from A(H) into A(K). Suppose that, for each state p of A(H), u p is a reduced word such that 1.u p = p, in A(H). Then two states p and q of A(H) are ∼ H,K -equivalent if and only if u p u −1 q ∈ K. Let V be a pseudovariety of groups. The subgroup H is V-extendible if its automaton can be embedded into a complete automaton with transition group in V. Let ∼ be the intersection of the ∼ H,K , where the intersection runs over all clopen subgroups K in the pro-V topology containing H. The automaton congruence ∼ coincides with ∼ H,Cl V (H) on A(H). LetH be the subgroup of F (A) such that A(H) = A(H) ∼. The subgroupH is the least V-extendible subgroup containing H, and H is V-extendible if and only if H = H. Also, in general H ⊆H ⊆ Cl V (H) and since the congruences ∼ and ∼ H,Cl V (H) on A(H) coincide, A(H) = A(H) ∼ embeds in A(Cl V (H)).
Translating H into the basis of H i,p . Now we compute a basis of the subgroup κ −1 i (H) of F (A i ). This is done by running the elements of the basis of H in A(H i ) and noting down the edges traversed that are not in the chosen spanning tree. (3) Deciding the p-denseness of H in H i,p . Let M p (κ −1 i (H)) be the r × A i matrix consisting of the row vectors
the image of that word in (Z pZ) A . Now it suffices to verify whether the vector σ i κ −1 i (u r u −1 s ) lies in the vector subspace σ i κ −1 i (H). This can be done effectively, using the basis of σ i κ −1 i (H) computed in Step 3. They also proved that the nil-closure of H is the intersection over all primes p of the p-closures of H [14, Corollary 4.1]. Let M be a finite A-generated monoid and H a pseudovariety of groups. Steinberg in [20, Theorem 7.4] proved that M ∈ J m H if and only if the following conditions are satisfied:
Lemma 7.1. Let n be a positive integer. Suppose that n = p_1^{n_1} ⋯ p_m^{n_m} where p_1, . . . , p_m are pairwise distinct prime numbers and 1 ≤ n_1, . . . , n_m. Let B_n = A(H) and C_n = A(H′), for some finitely generated subgroups H and H′ of F({a, b}). If m = 1, then Cl_nil(H) = H; otherwise, we have Cl_nil(H) = H′.
Theorem 7.2. Let A be an inverse automaton. If there exists an integer n such that n = p_1^{n_1} ⋯ p_m^{n_m} where p_1, . . . , p_m are pairwise distinct prime numbers, 1 ≤ n_1, . . . , n_m, m > 1 and A_n is a subgraph of A, then A is not G_nil-extendible.
Theorem 7.3. Let n be a positive integer and N be the subsemigroup of the full transformation semigroup on the set {1, . . . , n + 1} ∪ {θ} such that N = M^0({1}, n + 1, n + 1; I_{n+1}) ∪ ⟨a, b⟩ ∪ {1}, where a = (1, 2, . . . , n) and b = (n + 1, 1, 2, . . . , n, θ). Then, N ∈ MN if and only if the integer n is odd.
N 1
1be the subsemigroup of the full transformation semigroup on the set {1, . . . , 7} ∪ {θ} such thatN 1 = M 0 ({1}, 7, 7; I 7 ) ∪ ⟨a 1 , b 1 ⟩ ∪ {1},where a 1 = (1, 2, . . . , 6) and b 1 = (7, 1, 2, . . . , 6, θ) and N 2 be the subsemigroup of the full transformation semigroup on the set {1, . . . , 16} ∪ {θ} such thatN 2 = M 0 ({1}, 16, 16; I 16 ) ∪ ⟨a 2 , b 2 ⟩ ∪ {1},where a 2 = (1, 2, . . . , 15) and b 2 = (16, 1, 2, . . . , 15, θ). By Theorem 7.2 and [20, Theorem 7.4], we have N 1 , N 2 ∈ J m G nil . Using, for instance a Mathematica package developed by the first author, one can check that N 1 ∈ BG nil . By Theorem 7.3, it follows that N 1 ∈ MN, N 2 ∈ MN. Since M 0 ({1}, 16, 16; I 16 ) is an ideal of N 2 and (1, 2, . . . , 15) ∈ N 2 , we have N 2 ∈ SMN. Open Problem 7.4. Does there exist a finite semigroup S such that S ∈ SMN ∖ J m G nil ?
Theorem 3.1. We have SMN ⫋ BG_nil.

Note that we can improve the definition of Mal'cev nilpotency for finite semigroups.

Lemma 3.2. A finite semigroup S is Mal'cev nilpotent if and only if there exists a positive integer n such that
The rank of this matrix is p + 1 and, thus, H is p-dense in H_{n_p,p}. Hence, we have Cl_p(H) = H_{n_p,p}. Therefore, we have Cl_nil(H) = H_{n_{p_1},p_1} ∩ . . . ∩ H_{n_{p_m},p_m}. Let λ = ω_1^{κ_1} ⋯ ω_{m′}^{κ_{m′}} with ω_1, . . . , ω_{m′} ∈ {a, b} and C_l = A(H_{C_l}) for some integer l and finitely generated subgroup H_{C_l} of F({a, b}). We have λ ∈ H_{C_l} if and only if κ_1 + ⋯ + κ_{m′} ≡ 0 (mod l). Hence λ ∈ H_{n_{p_1},p_1} ∩ . . . ∩ H_{n_{p_m},p_m} if and only if κ_1 + ⋯ + κ_{m′} ≡ 0 (mod p_i^{n_i}), for all 1 ≤ i ≤ m. Therefore, we have H_{n_{p_1},p_1} ∩ . . . ∩ H_{n_{p_m},p_m} = H′. Now, we assume that m = 1. Similarly, we have A(H_{i,p}) = C_{p^i}, for all 1 ≤ i ≤ n_p − 1, and A(H_{n_p,p}) = B_{p^{n_p}}. Thus H is p-closed and we have Cl_nil(H) = H.
Acknowledgments. The work of the first and third authors was supported, in part, by CMUP (UID/MAT/00144/2013), which is funded by FCT (Portugal) with national (MCTES) and European structural funds through the programs FEDER, under the partnership agreement PT2020. The second author was supported by the DFG projects DI 435/5-2 and KU 2716/1-1. The work of the third author was also partly supported by the FCT post-doctoral scholarship SFRH/BPD/89812/2012. The work was also carried out within an FCT/DAAD bilateral collaboration project involving the Universities of Porto and Stuttgart.

A(H_{1,p}) = C_p. Hence, we have: Let 1 ≤ n′ ≤ n − 1. There exist integers 0 ≤ k_1 and 1 ≤ k_2 ≤ p such that, a^n⟩, we have: Then, we need to compute the rank of the matrix. If n_p = 1, then the rank of this matrix is p + 1 and, thus, H is p-dense in H_{1,p}. Hence, we have Cl_p(H) = H_{1,p}. Otherwise, the rank of this matrix is p and, thus, we must calculate H_{2,p}. If γ_i ∼_2 γ_j, for some integers 1 ≤ i, j ≤ p, then a^{i−j} ∈ H_{1,p} and, thus, i − j = i′p, for some integer i′. Hence, we have, for some integer i′′ and, thus, u_i u_j^{-1} = a^{i′′ p^2}. It follows that A(H_{2,p}) = C_{p^2}. By induction, it is easy to verify that A(H_{i,p}) = C_{p^i} for all 1 ≤ i ≤ n_p. Then, we have

In terms of pseudovarieties, the results of this section may be summarized as follows:

Theorem 7.5. The pseudovarieties MN and J_m G_nil are incomparable.
References

[1] J. Almeida. Finite semigroups and universal algebra, volume 3 of Series in Algebra. World Scientific Publishing Co., Inc., River Edge, NJ, 1994. Translated from the 1992 Portuguese original and revised by the author.
[2] J. Almeida. Dynamics of finite semigroups. In Semigroups, algorithms, automata and languages (Coimbra, 2001), pages 269-292. World Sci. Publ., River Edge, NJ, 2002.
[3] J. Almeida. Dynamics of implicit operations and tameness of pseudovarieties of groups. Trans. Amer. Math. Soc., 354(1):387-411, 2002.
[4] J. Almeida. Profinite semigroups and applications. In Structural theory of automata, semigroups, and universal algebra, volume 207 of NATO Sci. Ser. II Math. Phys. Chem., pages 1-45. Springer, Dordrecht, 2005. Notes taken by Alfredo Costa.
[5] J. Almeida and M. H. Shahzamanian. The rank of variants of nilpotent pseudovarieties. Submitted.
[6] C. J. Ash. Finite semigroups with commuting idempotents. J. Austral. Math. Soc. Ser. A, 43(1):81-90, 1987.
[7] A. H. Clifford and G. B. Preston. The algebraic theory of semigroups. Vol. I. Mathematical Surveys, No. 7. American Mathematical Society, Providence, R.I., 1961.
[8] S. Eilenberg and M. P. Schützenberger. On pseudovarieties. Advances in Math., 19(3):413-418, 1976.
[9] P. M. Higgins and S. Margolis. Finite aperiodic semigroups with commuting idempotents and generalizations. Israel J. Math., 116:367-380, 2000.
[10] E. Jespers and J. Okniński. Nilpotent semigroups and semigroup algebras. J. Algebra, 169(3):984-1011, 1994.
[11] E. Jespers and M. H. Shahzamanian. The non-nilpotent graph of a semigroup. Semigroup Forum, 85(1):37-57, 2012.
[12] E. Jespers and M. H. Shahzamanian. Finite semigroups that are minimal for not being Malcev nilpotent. J. Algebra Appl., 13(8):1450063, 2014.
[13] A. I. Mal'cev. Nilpotent semigroups. Ivanov. Gos. Ped. Inst. Uč. Zap. Fiz.-Mat. Nauki, 4:107-111, 1953.
[14] S. Margolis, M. Sapir, and P. Weil. Closed subgroups in pro-V topologies and the extension problem for inverse automata. Internat. J. Algebra Comput., 11(4):405-445, 2001.
[15] B. H. Neumann and T. Taylor. Subsemigroups of nilpotent groups. Proc. Roy. Soc. Ser. A, 274:1-4, 1963.
[16] J. E. Pin. BG = PG: a success story. In Semigroups, formal languages and groups (York, 1993), volume 466 of NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., pages 33-47. Kluwer Acad. Publ., Dordrecht, 1995.
[17] J. Reiterman. The Birkhoff theorem for finite algebras. Algebra Universalis, 14(1):1-10, 1982.
[18] M. H. Shahzamanian. The congruence η* on semigroups. Q. J. Math., 67(3):405-423, 2016.
[19] J. R. Stallings. Topology of finite graphs. Invent. Math., 71(3):551-565, 1983.
[20] B. Steinberg. Finite state automata: a geometric approach. Trans. Amer. Math. Soc., 353(9):3409-3464, 2001.
J. Almeida and M. H. Shahzamanian, Centro de Matemática e Departamento de Matemática, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal. E-mail addresses: [email protected]; [email protected]
M. Kufleitner, Formal Methods in Computer Science (FMI), University of Stuttgart, Universitätsstr. 38, D-70569 Stuttgart, Germany. E-mail address: [email protected]
A Theory of the Inductive Bias and Generalization of Kernel Regression and Wide Neural Networks
James B Simon
Madeline Dickens
Michael R Deweese
A Theory of the Inductive Bias and Generalization of Kernel Regression and Wide Neural Networks
Kernel regression is an important nonparametric learning algorithm with an equivalence to neural networks in the infinite-width limit. Understanding its generalization behavior is thus an important task for machine learning theory. In this work, we provide a theory of the inductive bias and generalization of kernel regression using a new measure characterizing the "learnability" of a given target function. We prove that a kernel's inductive bias can be characterized as a fixed budget of learnability, allocated to its eigenmodes, that can only be increased with the addition of more training data. We then use this rule to derive expressions for the mean and covariance of the predicted function and gain insight into the overfitting and adversarial robustness of kernel regression and the hardness of the classic parity problem. We show agreement between our theoretical results and both kernel regression and wide finite networks on real and synthetic learning tasks.
Introduction
Kernel (ridge) regression -simply linear (ridge) regression with a kernel function replacing the standard dot product between data vectors -is an influential nonparameteric learning algorithm with broad use across domains (de Vlaming & Groenen, 2015;Exterkate et al., 2016;Schulz et al., 2018). Theoretical interest in this algorithm has increased significantly in recent years due to the discovery that both Bayesian and trained neural networks converge to kernel regression in the infinite-width and infinite-time limit (Jacot et al., 2018;Lee et al., 2018), with one well-known paper declaring that "to understand deep learning we need to un-derstand kernel learning" (Belkin et al., 2018). As relevant insights will elucidate both kernel regression and deep learning, understanding kernel regression is thus an important endeavor for the field of machine learning theory.
The most important desideratum of a learning algorithm is good generalization to unseen data. The generalization of deep neural networks remains mysterious, with a full quantitative theory still out of reach. Fortunately, Bordelon et al. (2020) made important progress towards understanding the generalization of kernel regression by deriving approximate analytical expressions for test mean-squared error (MSE) depending on the target function, kernel eigensystem, and training set size. Their final expressions quantify a spectral bias seen in many other studies: as points are added to the training set, higher eigenmodes are learned first.
However, that important study does not close the case on the theory of kernel regression for several reasons: (a) the authors provide no exact results, only approximations, and (b) there are many functionals of interest besides MSE. For example, the ability to also quantitatively predict the mean squared gradient E x |∇ xf (x)| 2 for various domains and hyperparameters would be of clear use for the study of adversarial examples, which are essentially a phenomenon of surprisingly large gradient. Furthermore, (c) their derivations rely on heavy mathematical machinery, including a matrix PDE and replica calculations, that somewhat obscure the path to their main result, and (d) though they reveal that higher eigenmodes are better-learned, we find that there is a precise sense in which eigenmodes compete to be learned, which, to our knowledge, has not been described in any previous work.
Learnability is the normalized inner product between the target and predicted functions. Learnability is intimately related to the bias term of the bias-variance decomposition of MSE and to the signal capture threshold of Jacot et al. (2020). We prove that, for kernel eigenmodes, it has several natural properties that MSE does not, including boundedness in [0, 1] and monotonic improvement with training set size.
We begin by deriving several exact results describing these quantities and their dependence upon kernel eigenvalues. Our first main result (Theorem 3.2) is a conservation law describing the inductive bias of kernel regression: the sum of the learnabilities of any complete basis of target functions is at most the training set size. In the ridgeless case, this sum is exactly the training set size, independent of the kernel. This provides a concrete, intuitive picture of the design tradeoffs inherent in choosing a kernel: a kernel has a fixed budget of learnability it must divide among its eigenmodes.
We then use our framework to derive approximate expressions for the statistics of the predicted function. We make a set of approximations similar to those of Bordelon et al. (2020), but our derivation is quite different and uses only basic linear algebra. We derive expressions for both the mean and covariance of the target function (Theorem 4.1): in addition to recovering the expression for MSE of Bordelon et al. (2020), our results are sufficient to calculate other functionals of interest, such as the mean squared gradient. In concordance with the common notion of spectral bias, we find that a function is more learnable the more weight it places in higher-eigenvalue modes. We illustrate the power of our framework with a new result regarding the hardness of the parity problem for rotation-invariant kernels.
Along the way, we derive two results that shed light on the nature of overfitting in kernel regression. First, we prove that kernel regression necessarily generalizes with worse-thanchance MSE when attempting to learn a sufficiently lowlearnability function, and that such a function always exists. Second, we show that, for modes below a certain eigenvalue threshold, MSE increases with additional samples in the small dataset regime because the model mistakenly explains them using more learnable modes. This phenomenon has appeared in the empirical results of other recent studies, but has never been explained.
We then run many experiments on real and synthetic datasets that corroborate our theoretical results. Using both kernel regression and wide, deep neural networks, we find excellent agreement with our theoretical predictions of the kernel conservation law, the first-and second-order statistics of the predicted function including the mean squared gradient, and overfitting at low dataset size. Finally, repeating one experiment with varying network width, we find good agreement even down to width 20, suggesting that kernel eigenanalysis may be a fruitful approach even for the study of finite networks outside the kernel regime.
We conclude by discussing promising extensions and implications for the study of deep learning. The generalization of deep learning systems has long defied theoretical understanding, and even now there are few cases in which one can predict an interesting quantity or effect from first principles and then observe it in a deep learning experiment with excellent numerical agreement. These few well-understood cases are rare firm platforms on which future understanding can build. Our results provide several new well-understood quantities and phenomena regarding wide networks' inductive bias, generalization, overfitting, and robustness, yielding new tools and insights for future studies.
Related Work
The generalization of kernel regression was first studied in the Gaussian process literature (Sollich, 1999;Vivarelli & Opper, 1999;Sollich, 2001). These works typically assumed a restricted teacher-student framework, and none considered the problem in full generality. Jacot et al. (2020) study the generalization of kernel ridge regression for positive ridge parameter using random matrix theory techniques. Their "reconstruction operator" is closely related to our learning transfer matrix, and their "signal capture threshold" ϑ is proportional to the constant t from Bordelon et al. (2020) and is the same as our constant C. They derive the mean predicted function and some, but not all, of its second-order statistics, a picture which our results complete. Canatar et al. (2021) applied the work of Bordelon et al. (2020) to understanding certain generalization phenomena. One of their main insights is that learning curves can be nonmonotonic in the presence of zero eigenvalues and noise. Our overfitting results, though of a different nature, can be seen as generalizing this observation: we show that nonmonotonic learning curves in fact require neither zero eigenvalues nor noise, only a sufficiently low eigenvalue.
The general observation that higher eigenmodes are easier learned appears in many recent works and is the essence of neural networks' "spectral bias" towards simple functions (Valle-Perez et al., 2018;Yang & Salman, 2019), which is apparent in both training speed (Rahaman et al., 2019;Xu et al., 2019b;a;Xu, 2018;Cao et al., 2019;Su & Yang, 2019) and generalization (Arora et al., 2019).
Theoretical Setup
Setting and Review of Kernel Regression
We consider the task of inferring an m-element function f : X → R m given a set of n unique training points D = {x i } n i=1 ⊆ X and their corresponding function values f (D) ∈ R n×m . In the typical setting, data are drawn i.i.d. from a nonuniform distribution over a continuous X . However, we find a much simpler analysis is possible if we instead consider a discrete problem that includes this setting as a limit: we let X be discrete with cardinality |X | = M and assume the n samples are chosen randomly from X without replacement. By taking M → ∞ and allowing X to fill a region of R d with nonuniform density, we can later recover the continuous limit.
Kernel regression is defined by the inference equation
$$\hat{f}(x) = K(x, D)\,\big(K(D, D) + \delta I_n\big)^{-1} f(D), \qquad (1)$$
where $\hat{f}$ is the predicted function, δ ≥ 0 is the ridge parameter, K : X × X → R is the kernel function, K(D, D) is the "kernel matrix" with components K(D, D)_{ij} = K(x_i, x_j), and K(x, D) is a row vector with components K(x, D)_i = K(x, x_i).
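As a concrete illustration, Equation 1 is a few lines of linear algebra. The sketch below is our own minimal implementation (function and variable names are ours, and the kernel is supplied by the caller); it is not code from the paper.

```python
import numpy as np

def krr_predict(kernel, X_train, y_train, X_test, ridge=0.0):
    """Kernel (ridge) regression, Eq. (1): f_hat(x) = K(x, D) (K(D, D) + delta I_n)^{-1} f(D)."""
    n = len(X_train)
    K_DD = kernel(X_train, X_train)                              # n x n kernel matrix
    alpha = np.linalg.solve(K_DD + ridge * np.eye(n), y_train)   # (K + delta I)^{-1} f(D)
    return kernel(X_test, X_train) @ alpha                       # K(x, D) applied to alpha

# Example kernel: a Gaussian (RBF) kernel on rows of a data matrix.
rbf = lambda A, B: np.exp(-np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) ** 2)
```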
Remarkably, for an infinite-width neural network trained to convergence on MSE loss, the predicted function is given by ridgeless kernel regression with the network's "neural tangent kernel" (NTK) (Jacot et al., 2018;Lee et al., 2019). For unfamiliar readers, we define and motivate the NTK in Appendix B.
It follows from the form of Equation 1 that the m indices of f can each be treated separately: one can equivalently perform kernel regression on each index separately and simply vectorize the results. We thus hereafter assume m = 1 as the extension to m > 1 is trivial.
Given two functions g, h, we define their inner product to be ⟨g, h⟩ ≡ E_{x∈X}[g(x)h(x)], and we define the norm of a function g to be ‖g‖_2 ≡ ⟨g, g⟩^{1/2}.
The Learning Transfer Matrix
We now translate Equation 1 into the eigenbasis of the kernel. Any kernel function must be symmetric and positive semidefinite (Shawe-Taylor et al., 2004), which implies that we can find a set of orthonormal eigenfunctions {φ_i}_{i=1}^M and nonnegative eigenvalues {λ_i}_{i=1}^M that satisfy
$$\langle K(x, \cdot), \phi_i \rangle = \lambda_i \phi_i(x), \qquad \langle \phi_i, \phi_j \rangle = \delta_{ij}. \qquad (2)$$
We will assume for simplicity that K is positive definite and λ_i > 0, which will be true in most cases of interest.
We first decompose f and $\hat{f}$ into weighted sums of the eigenfunctions as
$$f(x) = \sum_{i=1}^{M} v_i \phi_i(x), \qquad \hat{f}(x) = \sum_{i=1}^{M} \hat{v}_i \phi_i(x), \qquad (3)$$
where v and $\hat{v}$ are vectors of coefficients. We note that $\langle f, \hat{f} \rangle = v^\top \hat{v}$. As $K(x_1, x_2) = \sum_{i=1}^{M} \lambda_i \phi_i(x_1) \phi_i(x_2)$, we next decompose the kernel matrix as $K(D, D) = \Phi^\top \Lambda \Phi$, where $\Phi_{ij} \equiv \phi_i(x_j)$ is the M × n "design matrix" and $\Lambda \equiv \mathrm{diag}(\lambda_1, ..., \lambda_M)$ is a diagonal matrix of eigenvalues.
The predicted coefficients $\hat{v}$ are given by
$$\hat{v}_i = \langle \phi_i, \hat{f} \rangle = \lambda_i \phi_i(D) \big( K(D, D) + \delta I_n \big)^{-1} \Phi^\top v. \qquad (4)$$
Stacking these coefficients into a matrix equation, we find
$$\hat{v} = \Lambda \Phi \big( \Phi^\top \Lambda \Phi + \delta I_n \big)^{-1} \Phi^\top v = T^{(D)} v, \qquad (5)$$
where the learning transfer matrix $T^{(D)} \equiv \Lambda \Phi \big( \Phi^\top \Lambda \Phi + \delta I_n \big)^{-1} \Phi^\top$ is an M × M matrix, independent of f, that fully describes the model's learning behavior on a training set D.
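The learning transfer matrix is straightforward to form numerically once the eigensystem is known. Below is a small sketch in our own notation (an illustration assuming the eigenvalues and the design matrix are available, not the authors' code):

```python
import numpy as np

def learning_transfer_matrix(lambdas, Phi, ridge=0.0):
    """T^(D) = Lambda Phi (Phi^T Lambda Phi + delta I_n)^{-1} Phi^T, Eq. (5).
    `lambdas` holds the M kernel eigenvalues; Phi[i, j] = phi_i(x_j) is the M x n design matrix."""
    Lam = np.diag(np.asarray(lambdas, dtype=float))
    n = Phi.shape[1]
    inner = Phi.T @ Lam @ Phi + ridge * np.eye(n)
    return Lam @ Phi @ np.linalg.solve(inner, Phi.T)

# Predicted eigencoefficients for any target with coefficients v:  v_hat = T @ v.
```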
Learnability
We define the learnability of a target function f as
$$L^{(D)}(f) \equiv \frac{\langle f, \hat{f} \rangle}{\|f\|_2^2}, \qquad L(f) \equiv \mathbb{E}_D\big[ L^{(D)}(f) \big], \qquad (6)$$
where $L^{(D)}(f)$ (the "D-learnability") is defined for a particular dataset and $L(f)$ (the "learnability") is its expectation over datasets. We analogously define D-MSE and MSE as
$$E^{(D)}(f) \equiv \big\| f - \hat{f} \big\|_2^2, \qquad E(f) \equiv \mathbb{E}_D\big[ E^{(D)}(f) \big]. \qquad (7)$$
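In the eigenbasis these quantities reduce to dot products of coefficient vectors, since by orthonormality ⟨f, f̂⟩ = v·v̂ and ‖f‖₂² = v·v. A tiny sketch of the per-dataset quantities, in our own notation (averaging either one over datasets gives L(f) and E(f)):

```python
import numpy as np

def d_learnability(v, v_hat):
    """D-learnability, Eq. (6), in eigencoefficients: <f, f_hat> / ||f||^2."""
    v, v_hat = np.asarray(v, float), np.asarray(v_hat, float)
    return float(v @ v_hat) / float(v @ v)

def d_mse(v, v_hat):
    """D-MSE, Eq. (7): ||f - f_hat||^2 in eigencoefficients."""
    d = np.asarray(v, float) - np.asarray(v_hat, float)
    return float(d @ d)
```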
Learnability is a peculiar quantity at first glance and deserves some motivation. It is of interest for several reasons:
• $\|f\|_2^2 \,(1 - L(f))^2$ is a lower bound for the bias term in the standard bias-variance decomposition of MSE (see Appendix D.4). Constraining learnability thus yields a bound on MSE.
• For kernel regression, learnability has several desirable properties that MSE does not. In particular, when f is an eigenfunction, it is bounded in [0, 1] and only improves with the addition of more data (Lemma 3.1).
• Learnability obeys a conservation law and provides a useful way to view a kernel's budget of inductive bias.
• Our approximate results can be expressed simply in terms of modewise learnabilities.
Exact Theoretical Results
We now present several exact results regarding learnability and the learning transfer matrix. All theoretical results henceforth are specialized to kernel regression. We relegate all proofs to Appendix D.
Lemma 3.1. The following properties of $T^{(D)}$, $L^{(D)}$, $L$, and $\{\phi_i\}_{i=1}^M$ hold:
(a) $L^{(D)}(\phi_i) = T^{(D)}_{ii}$, and $L(\phi_i) = \mathbb{E}\big[T^{(D)}_{ii}\big]$.
(b) $L(\phi_i), L^{(D)}(\phi_i) \in [0, 1]$.
(c) When $D = \varnothing$ (i.e., $n = 0$), $L^{(D)}(\phi_i) = 0$.
(d) When $D = X$ and $\delta = 0$, $L^{(D)}(\phi_i) = 1$.
(e) Let $D_+$ be $D \cup \{x\}$, where $x \in X$, $x \notin D$ is a new data point. Then $L^{(D_+)}(\phi_i) \ge L^{(D)}(\phi_i)$.
(f) $\frac{\partial}{\partial \lambda_i} L^{(D)}(\phi_i) \ge 0$.
(g) $\frac{\partial}{\partial \lambda_i} L^{(D)}(\phi_j) \le 0$ for $j \ne i$.
(h) $\frac{\partial}{\partial \delta} L^{(D)}(\phi_i) \le 0$.
Property (a) in Lemma 3.1 formalizes the relationship between the transfer matrix and learnability. Properties (b-e) together give an intuitive picture of the learning process: the learnability of each eigenfunction monotonically increases from zero as the training set grows, attaining its maximum of one in the ridgeless, maximal-data limit. Properties (f-g) show that the kernel eigenmodes are in competition: increasing one eigenvalue while fixing all others can only improve the learnability of the corresponding eigenfunction, but can only harm the learnabilities of all others. Property (h) shows that a ridge parameter only harms eigenfunction learnability.
We now present the conservation law obeyed by learnability.
Theorem 3.2 (Conservation of learnability). For any complete basis of orthogonal functions F, with zero ridge parameter,
$$\sum_{f \in F} L^{(D)}(f) = \sum_{f \in F} L(f) = n, \qquad (8)$$
and with positive ridge parameter,
$$\sum_{f \in F} L^{(D)}(f) < n \quad \text{and} \quad \sum_{f \in F} L(f) < n. \qquad (9)$$
To understand the significance of this result, consider that one might naively hope to design a (neural tangent) kernel that achieves generally high performance for all target functions f . Theorem 3.2 states that this is impossible because, averaged over a complete basis of functions, all kernels achieve the same learnability. Because there exist no universally high-performing kernels, we must instead aim to choose a kernel that assigns high learnability to task-relevant functions.
This result concretizes the notion of inductive bias for kernel regression. A learning algorithm's inductive bias is the set of (implicit or explicit) assumptions about the nature of the target function that allow it to generalize to new data. As no algorithm can generalize well on all functions, an algorithm's inductive bias is a design choice with inherent tradeoffs: enabling generalization on one subset of target functions typically means harming generalization on another. 2 For complex models used in practice, the inductive assumptions are highly implicit and the nature of these tradeoffs has long been unclear. By contrast, Theorem 3.2 makes these tradeoffs explicit, precisely characterizing a model's inductive bias as a fixed budget of learnability it must divide among each set of orthogonal functions. The fact that this budget is exactly known (as opposed to being a bounded but unknown constant) will simplify our later derivations and allow us insight into the parity problem.
This theorem is similar in spirit to the classic "no-free-lunch" theorem for learning algorithms, which states that, averaged over all classification target functions, all models perform at chance level (Wolpert, 1996). However, Theorem 3.2 holds for any orthogonal basis of functions and thus gives a much stronger condition than the classic result. In subsequent derivations, we will choose this basis to be the eigenbasis.
A natural consequence of the bounded total learnability is that, if some functions are more learnable than average, others must be less. The following corollary states that, for these "losing" functions, the model overfits, and MSE evaluated off the training set is worse than that obtained by simply predicting zero.
Corollary 3.3 (Low-L functions are overfit). There is always a function for which $L(f) \le \frac{n}{M}$. If δ = 0, the mean off-training-set MSE for any such function is at least as high as would be obtained by always predicting zero.
It is, in some cases, intuitive that such a threshold exists: for example, a sufficiently-high-frequency function on a continuous input space will appear to be noise and will be overfit. However, it is perhaps surprising that even when learning a noiseless function on a finite input space, there must always be functions that are overfit.
Approximate Theoretical Results
We now obtain expressions for the mean and covariance of the learning transfer matrix and thus of the predicted function. Our derivation involves only basic linear algebra. We give the full derivation in Appendix E and sketch our method here. We first set the ridge parameter δ → 0, then take the following steps:
1. We observe that the only random variable is a design matrix Φ. The expectation is taken over many (i.e., $\frac{M!}{(M-n)!}$) such matrices, each obeying $\Phi^\top \Phi = M I_n$. We approximate this expectation with a continuous average over all Φ obeying this condition.

2. We show via symmetry that the off-diagonal elements of $\mathbb{E}\big[T^{(D)}\big]$ are zero.

3. We show that the diagonal elements are given by
$$\mathbb{E}\big[T^{(D)}_{ii}\big] = \mathbb{E}\!\left[ \frac{\lambda_i}{\lambda_i + C^{(D)}_i} \right], \qquad (10)$$
where $C^{(D)}_i$ is independent of $\lambda_i$. We argue that, in cases of interest, $C^{(D)}_i$ will concentrate around a mode-independent constant, so we replace it by its deterministic mean C.

4. Having fixed the form of $\mathbb{E}\big[T^{(D)}\big]$, we fix the constant C using Theorem 3.2.
5. By taking a derivative of T (D) w.r.t. Λ, we find the covariance of T (D) with no additional approximations.
6. Noting that the ridge parameter can be seen essentially as a small increase to all eigenvalues, we immediately extend our results to nonzero ridge.
Our final expressions simplify if either M → ∞ or δ = 0 (to the same results in either limit), so we report these limits here and provide the general expressions in our derivations.
Theorem 4.1 (Statistics of the learning transfer matrix).
Under the above approximations, if either M → ∞ or δ = 0, the mean and covariance of the learning transfer matrix are given by
$$\mathbb{E}\big[T^{(D)}_{ij}\big] = \delta_{ij} L_i, \qquad (11)$$
$$\mathrm{Cov}\big[T^{(D)}_{ij}, T^{(D)}_{k\ell}\big] = \frac{L_i (1 - L_j)\, L_k (1 - L_\ell)}{n - \sum_{m=1}^{M} L_m^2} \,\big(\delta_{ik}\delta_{j\ell} + \delta_{i\ell}\delta_{jk} - \delta_{ij}\delta_{k\ell}\big), \qquad (12)$$
where
$$L_i \equiv L(\phi_i) = \frac{\lambda_i}{\lambda_i + C} \qquad (13)$$
and $C \ge 0$ satisfies
$$\sum_{i=1}^{M} \frac{\lambda_i}{\lambda_i + C} + \frac{\delta}{C} = n. \qquad (14)$$
The learnability of an eigenfunction φ_i thus depends solely on the comparison of its eigenvalue to the constant C: if λ_i ≫ C, the mode is learned, and if λ_i ≪ C, it is not learned. As n grows, C monotonically decreases and makes successively lower eigenmodes learnable. This observation was also made by Jacot et al. (2020). In practice, Equation 14 can be numerically solved for C, but we also provide bounds on C in terms of {λ_i}_i in Appendix E.8 as a tool for analytical study.
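For example, since the left-hand side of Equation 14 is strictly decreasing in C, simple bisection suffices to solve it numerically. The following is our own illustrative sketch (assuming n is smaller than the number of modes when δ = 0, so a positive root exists):

```python
import numpy as np

def solve_C(lambdas, n, ridge=0.0, iters=200):
    """Solve Eq. (14): sum_i lambda_i / (lambda_i + C) + ridge / C = n, by bisection."""
    lam = np.asarray(lambdas, dtype=float)
    lhs = lambda C: np.sum(lam / (lam + C)) + ridge / C
    lo, hi = 1e-12, max(lam.sum() + ridge, 1.0)
    while lhs(hi) > n:                     # grow the bracket until lhs(hi) <= n
        hi *= 2.0
    for _ in range(iters):                 # lhs is decreasing in C, so bisection converges
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > n else (lo, mid)
    return 0.5 * (lo + hi)

def mode_learnabilities(lambdas, n, ridge=0.0):
    """Eq. (13): L_i = lambda_i / (lambda_i + C)."""
    lam = np.asarray(lambdas, dtype=float)
    C = solve_C(lam, n, ridge)
    return lam / (lam + C)
```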
Under Theorem 4.1, the learnability of an arbitrary function is
$$L(f) = \frac{1}{v^\top v} \sum_i v_i^2 L_i.$$
A function is thus more learnable the more weight it places in high-eigenvalue modes. As noted by Canatar et al. (2021), higher eigenmodes are typically simpler functions, and this bias towards high eigenmodes is the underlying explanation for (wide) neural networks' "spectral bias" towards low-frequency functions. However, we note that one can design a kernel with complex high eigenmodes, and thus while spectral bias is merely a consequence of the kernels used in practice, the bias towards high eigenmodes is universal.
We extend these results to settings with target noise in Appendix E.7.
MSE and the Covariance of $\hat{f}$
Noting that $E(f) = \mathbb{E}\big[|v - \hat{v}|^2\big]$, recalling that $\hat{v} = T^{(D)} v$, and using Theorem 4.1 to evaluate a sum over eigenmodes, we recover the result of Bordelon et al. (2020) for expected MSE:
$$E(f) = \frac{n}{n - \sum_m L_m^2} \sum_i (1 - L_i)^2 v_i^2. \qquad (15)$$
Taking a sum over indices of v, we find that the covariance of the predicted function can be written simply in terms of MSE as
$$\mathrm{Cov}[\hat{v}_i, \hat{v}_j] = \frac{L_i^2 \, E(f)}{n} \, \delta_{ij}. \qquad (16)$$
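Equations 15 and 16 are easy to evaluate once the modewise learnabilities are known. A short self-contained sketch (our own, using the δ = 0 / M → ∞ form of the expressions):

```python
import numpy as np

def predicted_mse_and_cov(L, v, n):
    """Expected test MSE, Eq. (15), and the diagonal of Cov[v_hat], Eq. (16).
    `L` holds modewise learnabilities L_i = lambda_i/(lambda_i + C); `v` the target's
    eigencoefficients.  Note the denominator vanishes at the double-descent peak n = sum L_m^2."""
    L, v = np.asarray(L, float), np.asarray(v, float)
    mse = n / (n - np.sum(L ** 2)) * np.sum((1 - L) ** 2 * v ** 2)
    cov_diag = L ** 2 * mse / n
    return mse, cov_diag
```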
Learning Speeds and Double Descent
The denominator in Equation 12 and Equation 15 can also be written as $\sum_m L_m(1 - L_m) + \frac{\delta}{C}$. The quantity $L_m(1 - L_m)$ intuitively represents the rate at which mode m is being learned, as can be seen from the observation that
$$\frac{dL_i}{dn} = \frac{L_i (1 - L_i)}{\sum_{m=1}^{M} L_m (1 - L_m) + \frac{\delta}{C}}. \qquad (17)$$
This equation states that a new unit of learnability (i.e., a new data point) is split between all eigenmodes in proportion to $L_i(1 - L_i)$ (with a portion proportional to $\frac{\delta}{C}$ sacrificed to the ridge parameter).
$L_m(1 - L_m)$ is small for well- or poorly-learned modes and maximal when $L_m = 0.5$. MSE is thus very high when all modes are either fully learned or not learned at all, which happens when the tail eigenvalues $\{\lambda_i\}_{i>n}$ are either zero or very small relative to $\{\lambda_i\}_{i \le n}$. This is intimately related to the well-known double-descent phenomenon in which MSE peaks when the rank of the kernel equals n (Belkin et al., 2019; Mei & Montanari, 2019).
Mean Squared Gradient
Equations 11 and 12 for the statistics of the predicted function are important because they allow the prediction, and thus the theoretical study, of a wide array of functionals of f for kernel regression and wide neural networks. There are many such functionals, and we admittedly cannot foresee which will prove most important; instead of having a narrow use case as motivation, we rather believe that the ability to predict generic 2nd-order statistics is a versatile new tool which is likely to have many applications.
That said, for concreteness, we highlight one such use case here. Adversarial examples are essentially a phenomenon of surprisingly large gradient, so the study of adversarial robustness would greatly benefit from the ability to study from first principles how problem parameters (e.g., data dimension, target function, and dataset size) affect the typical function gradient. One quantification of this, the mean squared gradient, is given by
$$\mathbb{E}_x\,\big\|\nabla_x \hat f(x)\big\|^2 = \sum_{ij} \mathbb{E}[\hat v_i \hat v_j]\, G_{ij}, \qquad (18)$$
where $G_{ij} \equiv \mathbb{E}_x\big[\nabla_x \phi_i(x) \cdot \nabla_x \phi_j(x)\big]$, and can be predicted using Theorem 4.1.
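As a concreteness check, the following sketch assembles the MSG prediction of Equation 18 from Theorem 4.1, using $\mathbb{E}[\hat v_i \hat v_j] = \mathbb{E}[\hat v_i]\mathbb{E}[\hat v_j] + \mathrm{Cov}[\hat v_i, \hat v_j]$. The diagonal $G$ with $G_{kk} = k^2$ corresponds to unit-circle Fourier modes (collapsing the cos/sin degeneracy for simplicity); the spectrum, the dataset size, and the function name are toy assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def predicted_msg(eigvals, v, n, G, delta=0.0):
    """Predicted mean squared gradient (Eq. 18) using the statistics of Theorem 4.1.
    G is the matrix of gradient inner products G_ij."""
    eigvals, v = np.asarray(eigvals, float), np.asarray(v, float)
    lhs = lambda C: np.sum(eigvals / (eigvals + C)) + (delta / C if delta > 0 else 0.0) - n
    C = brentq(lhs, 1e-18, 1e18)
    L = eigvals / (eigvals + C)
    mse = n / (n - np.sum(L ** 2)) * np.sum((1 - L) ** 2 * v ** 2)
    mean_vhat = L * v                                            # E[v_hat_i] = L_i v_i
    second_moment = np.outer(mean_vhat, mean_vhat) + np.diag(L ** 2 * mse / n)
    return np.sum(second_moment * G)

# toy unit-circle-like setup: mode k has gradient norm k^2, target is the k = 2 mode
ks = np.arange(0, 64)
lams = 1.0 / (1.0 + ks) ** 3        # assumed toy spectrum
G = np.diag(ks.astype(float) ** 2)
v = np.zeros(64); v[2] = 1.0
print(predicted_msg(lams, v, n=32, G=G))   # ground-truth MSG of the target is 4
```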
Case Study: the Parity Problem
Setting aside the fact that Theorem 3.2 enabled a transparent new derivation of the statistics off , one might fairly question the value of our learnability framework as opposed to, e.g., the eigenvalue-centric approach of Bordelon et al.
(2020). We demonstrate the power of our framework by using it to easily derive a new result regarding the hardness of the classic parity problem. The domain of this problem is the hypercube X = {−1, +1} d , over which we define the subset-parity functions
$$\phi_S(x) = (-1)^{\sum_{i \in S} \mathbb{1}[x_i = 1]}, \qquad (19)$$
where $S \subseteq \{1, ..., d\} \equiv [d]$. The objective is to learn $\phi_{[d]}$.
This was shown to be exponentially hard for Gaussian kernel methods by Bengio et al. (2006); here we extend this result to arbitrary rotation-invariant kernels.
For any rotation-invariant kernel, such as the NTK of a fully-connected neural network, $\{\phi_S\}_S$ are the eigenfunctions, with degenerate eigenvalues $\{\lambda_k\}_{k=0}^{d}$ depending only on $k = |S|$. Yang & Salman (2019) proved that, for any fully-connected kernel, the even and odd eigenvalues each obey an ordering in $k$. Letting $d$ be odd for simplicity, this result and Equation 13 imply that $L_1 \ge L_3 \ge ... \ge L_d$. Counting level degeneracies, this is a hierarchy of $2^{d-1}$ learnabilities of which $L_d$ is the smallest. The conservation law of Theorem 3.2 then implies that
$$L_d \le \frac{n}{2^{d-1}}, \qquad (20)$$
which, using the fact that $\mathcal{E} \ge (1 - L)^2$, implies that
$$\mathcal{E}\big(\phi_{[d]}\big) \ge \left(1 - \frac{n}{2^{d-1}}\right)^2. \qquad (21)$$
Obtaining an MSE below a desired threshold $\epsilon$ thus requires at least $n_{\min} = 2^{d-1}(1 - \epsilon^{1/2})$ samples, which is exponential in $d$, our desired result. This analysis was made simple with our conservation law formulation, but is not at all obvious when MSE is written in terms of eigenvalues as in prior work.
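The resulting sample-complexity bound is simple enough to tabulate directly. The snippet below is a sketch of this arithmetic only; the threshold value `eps = 0.01` and the function name are arbitrary illustrative choices.

```python
import numpy as np

def parity_sample_complexity(d, eps):
    """Lower bound on the samples needed to reach MSE <= eps on the full parity
    function over {-1, +1}^d with any rotation-invariant kernel (Eq. 21):
    (1 - n / 2**(d-1))**2 <= eps  implies  n >= 2**(d-1) * (1 - sqrt(eps))."""
    return 2 ** (d - 1) * (1 - np.sqrt(eps))

for d in [10, 20, 30, 50]:
    print(d, f"{parity_sample_complexity(d, eps=0.01):.3e}")
```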
Overfitting at Low n
Expanding an eigenfunction's MSE at low $n$, we find that $\mathcal{E}(\phi_i)|_{n=0} = 1$ and
$$\frac{d\mathcal{E}(\phi_i)}{dn}\bigg|_{n=0} = \frac{1}{\sum_j \lambda_j + \delta}\left(\frac{\sum_j \lambda_j^2}{\sum_j \lambda_j + \delta} - 2\lambda_i\right). \qquad (22)$$
This implies that, at small $n$, MSE increases as samples are added for all modes $i$ such that
$$\lambda_i < \frac{\sum_j \lambda_j^2}{2\left(\sum_j \lambda_j + \delta\right)}. \qquad (23)$$
Because learnability (the projection of $\hat f$ onto $\phi_i$) is nonnegative, this worsening MSE is due to overfitting: confidently mistaking $\phi_i$ for more learnable modes. This expression confirms the intuitive expectation that increasing the ridge parameter tightens the eigenvalue threshold of Equation 23 and combats overfitting. This explains why MSE increases with $n$ when learning difficult target functions in many of the experiments of Bordelon et al. (2020) and Misiakiewicz & Mei (2021), a fact which has not previously been explained. In studying MSE near $n = M$, Canatar et al. (2021) found a different class of increasing MSE curve, but they required both noise and a zero eigenvalue; by contrast, our observation is significantly more general, requiring only a sufficiently small eigenvalue.
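Equation 23 gives an explicit eigenvalue threshold, so identifying which modes are initially overfit requires only the spectrum. Below is a minimal sketch of that check; the power-law spectrum is an assumption used purely for illustration.

```python
import numpy as np

def modes_with_initially_increasing_mse(eigvals, delta=0.0):
    """Boolean mask of eigenmodes whose MSE initially rises as samples are added,
    i.e. modes satisfying Eq. 23: lam_i < sum_j lam_j^2 / (2 (sum_j lam_j + delta))."""
    eigvals = np.asarray(eigvals, float)
    threshold = np.sum(eigvals ** 2) / (2 * (np.sum(eigvals) + delta))
    return eigvals < threshold

lams = 1.0 / np.arange(1, 257) ** 2                    # toy power-law spectrum
mask = modes_with_initially_increasing_mse(lams)
print("first overfit mode index:", np.argmax(mask))    # all lower modes are overfit at small n
```

Raising `delta` lowers the threshold, so fewer modes satisfy the condition, consistent with the ridge parameter combating overfitting.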
Experimental Verification of Theoretical Results
We now perform experiments to confirm our various theoretical results for both ridgeless NTK regression and wide neural networks. Unless otherwise stated, all neural network experiments used a fully-connected (FC) four-hidden-layer (4HL) ReLU architecture with width 500 trained to convergence. We use the neural tangents library (Novak et al., 2019) to perform NTK regression. Our experiments use both real image datasets and synthetic target functions on the following three domains.
1. Discretized Unit Circle. The simplest input space we consider is the discretization of the unit circle into $M$ points, $\mathcal{X} = \{(\cos(2\pi j/M), \sin(2\pi j/M))\}_{j=1}^{M}$. Unless otherwise stated, we use $M = 256$. The eigenfunctions on this domain are $\phi_0(\theta) = 1$ and, for $k \ge 1$, $\sqrt{2}\cos(k\theta)$ and $\sqrt{2}\sin(k\theta)$.
2. Hypercube. We also perform experiments on the $d$-dimensional hypercube, $\mathcal{X} = \{-1, 1\}^d$, giving $M = 2^d$. As stated in Section 4.4, the eigenfunctions on this domain are the subset-parity functions, with eigenvalues determined by the number of sensitive bits $k$.

3. Hypersphere. To demonstrate that our results extend to continuous domains, we perform experiments on the $d$-sphere $S^d \equiv \{x \in \mathbb{R}^{d+1} : \|x\|^2 = 1\}$. The eigenfunctions on this domain are the hyperspherical harmonics (see, e.g., Frye & Efthimiou (2012); Bordelon et al. (2020)), which group into degenerate sets indexed by $k \in \mathbb{N}$. The corresponding eigenvalues decrease exponentially with $k$, and so when summing over all eigenmodes when computing predictions, we simply truncate the sum at $k_{\max} = 70$.

Kernel conservation law. We first verify that total learnability equals the training set size. On the unit circle with $M = 10$, we train models on all 10 eigenfunctions with the same dataset $\mathcal{D}$ and compute their resulting $\mathcal{D}$-learnabilities.
Results are shown in Figure 2. We find that, for both shallow and deep models with both ReLU and tanh nonlinearities, the sum of modewise learnabilities equals n exactly for NTK regression and very nearly for trained networks.
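The conservation law can also be checked numerically without any neural-network machinery, since in the ridgeless case it holds for kernel regression with any positive-definite kernel and any orthogonal basis of functions on the domain. The sketch below uses an assumed stationary (circulant) kernel on the $M = 10$ circle rather than an actual NTK; the specific kernel, seed, and training-set size are illustrative assumptions.

```python
import numpy as np

# Minimal numerical check of Theorem 3.2 on the discretized unit circle (M = 10).
M, n = 10, 4
theta = 2 * np.pi * np.arange(M) / M
K = np.exp(np.cos(theta[:, None] - theta[None, :]))   # an assumed PSD stationary kernel

# The Fourier modes form an orthogonal basis on this domain.
phis = [np.ones(M)]
for k in range(1, M // 2 + 1):
    phis.append(np.sqrt(2) * np.cos(k * theta))
    if k < M // 2:
        phis.append(np.sqrt(2) * np.sin(k * theta))

rng = np.random.default_rng(0)
train = rng.choice(M, size=n, replace=False)

total = 0.0
for phi in phis:
    # ridgeless kernel regression trained on phi restricted to the training points
    alpha = np.linalg.solve(K[np.ix_(train, train)], phi[train])
    f_hat = K[:, train] @ alpha
    total += (phi @ f_hat) / (phi @ phi)               # D-learnability of this mode
print(total, "should equal n =", n)
```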
Learnability. We next verify the predictions of Equation 13 for modewise learnability. We train 4HL models on several eigenmodes on each synthetic domain at varying n and compare predicted and empirical learnabilities in Figure 1(A-C). In all cases, we find excellent agreement between theory and experiment.
We also train 4HL models on four binary image classification tasks, generating theoretical predictions using 10 4 training samples as described in Appendix C. As shown in Figure 1D, not only do theoretical learnabilities match experimental learnabilities, but both match one's intuitive expectation of the relative difficulty of these tasks, with MNIST 0/1 the most learnable pair and CIFAR10 deer/horse the least. In these cases, learnability is a useful metric for quantifying the difficulty of a task.
Furthermore, to evaluate the utility of (1 − L) 2 as a lower bound for MSE, we evaluate both quantities on these four image classification tasks. As shown in Figure A.8, we find that (1 − L) 2 is a remarkably close lower bound for MSE. This is important because it suggests that learnability, which we have shown is much simpler to study than MSE, can nonetheless furnish a faithful approximation for MSE in realistic settings.
Mean squared gradient. We next confirm that our results allow prediction of mean squared gradient (MSG) $\mathbb{E}_x\,|\nabla_x \hat f(x)|^2 = \|\nabla \hat f\|_2^2$ as described in Section 4.3. On the hypersphere, the modewise gradient interaction constants are $G_{(k\ell),(k'\ell')} = k(k + d - 2)\,\delta_{kk'}\delta_{\ell\ell'}$. We train networks on modes with $k = 2$ with increasing dimension $d$ and compute MSG on test data. Results are plotted in Figure 4.
Vulnerability to gradient-based adversarial attacks can be viewed essentially as a phenomenon of surprisingly large gradients. If such vulnerability is an inevitable consequence of high dimension, a common heuristic belief (e.g. Gilmer et al. (2018)), one should expect that MSG becomes much larger than the ground-truth value at high dimension. Surprisingly, we find no such effect. Our theory enables future first-principles study of this discrepancy.
Results for a comparable experiment on the unit circle, varying k instead of dimension, are shown in Figure A.9.
Increasing MSE curves. We next verify our prediction of increasing MSE curves for low eigenmodes as a result of overfitting. With small n, we fit four eigenmodes on each domain, three of which are predicted to have increasing MSE according to Equation 23. Our theoretical predictions match experiment excellently and match the true sign of dE/dn in every case.
Agreement with narrow networks. Our neural network experiments thus far have used moderately large widths that place them in the NTK regime. However, realistic, finite-width networks are often not in the NTK regime and undergo significant kernel evolution and feature learning (Olah et al., 2017;Mei et al., 2018;Dyer & Gur-Ari, 2019;Yang & Hu, 2021). With typical network parameterization, the scale of these effects is controlled by network width, with more pronounced kernel evolution at narrower widths. It is thus important to study the finite-width deviations from our theory in order to gauge its scope of applicability.
To this end, we compute MSE and learnability for four hypercube eigenmodes using networks of widths from ∞ to 20. The results, plotted in Figure A.5, show that our predictions remain a good fit even down to width 20 for learnability and 50 for MSE. Furthermore, despite kernel evolution, the difficulty ordering of the target functions remains perfectly predicted by their eigenvalues with respect to the infinite-width kernel. These results strongly suggest that kernel eigenanalysis may prove a useful tool even for the study of the generalization of networks used in practice.
Conclusions
We have presented a new account of the generalization of kernel ridge regression using a new measure of target function learnability. This allowed us to describe the inductive bias of a kernel as a fixed budget of learnability, approximate the mean and covariance of the predicted function, obtain a new result regarding the hardness of the parity problem, and study network overfitting and robustness.
Our main results suggest many promising directions for future study, such as the comparison of convolutional and fully-connected architectures via eigenanalysis of their NTKs, the study of adversarial robustness using mean squared gradient, and the development of techniques to better apply our theory to real datasets. Our theory's agreement with even fairly narrow networks is a surprise that may help guide theoretical endeavors to extend NTK analysis to finite width.
B. Review of the NTK

In the main text, we assume prior familiarity with the NTK, using Equation 1 as the starting point of our derivations. Here we provide a definition and very brief introduction to the NTK for unfamiliar readers. For derivations and full discussions, see Jacot et al. (2018) and Lee et al. (2019).

Consider a feedforward neural network representing a function $\hat f_\theta : \mathcal{X} \to \mathbb{R}$, where $\theta$ is a parameter vector. Further consider one training example $x$ with target value $y$ and one test point $x'$, and suppose we perform one step of gradient descent with a small learning rate $\eta$ with respect to the MSE loss $\ell_\theta \equiv (\hat f_\theta(x) - y)^2$. This gives the parameter update
$$\theta \to \theta + \delta\theta, \quad \text{with} \quad \delta\theta = -\eta\,\nabla_\theta \ell_\theta = -2\eta\,(\hat f_\theta(x) - y)\,\nabla_\theta \hat f_\theta(x). \qquad (24)$$
We now wish to know how this parameter update changes $\hat f_\theta(x')$. To do so, we linearize about $\theta$, finding that
$$\hat f_{\theta+\delta\theta}(x') = \hat f_\theta(x') + \nabla_\theta \hat f_\theta(x') \cdot \delta\theta + O(\delta\theta^2) = \hat f_\theta(x') - 2\eta\,(\hat f_\theta(x) - y)\,\nabla_\theta \hat f_\theta(x) \cdot \nabla_\theta \hat f_\theta(x') + O(\delta\theta^2) = \hat f_\theta(x') - 2\eta\,(\hat f_\theta(x) - y)\,K(x, x') + O(\delta\theta^2), \qquad (25)$$
where we have defined $K(x, x') \equiv \nabla_\theta \hat f_\theta(x) \cdot \nabla_\theta \hat f_\theta(x')$. This quantity is the NTK. Remarkably, as network width⁴ goes to infinity, the $O(\delta\theta^2)$ corrections become negligible, and $K(x, x')$ is the same after any random initialization⁵ and at any time during training. This dramatically simplifies the analysis of network training, allowing one to prove that after infinite time training on MSE loss for an arbitrary dataset, the network's learned function is given by ridgeless kernel regression. See, for example, Equations 14-16 of Lee et al. (2019)⁶.
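For readers who want to see the linearization of Equation 25 concretely, the following JAX sketch computes the empirical NTK of a small, finite-width network by hand and compares the actual one-step change at a test point with the NTK prediction. The architecture, widths, learning rate, and input values are arbitrary assumptions; at finite width the two numbers agree only approximately, since the $O(\delta\theta^2)$ term is small but nonzero.

```python
import jax
import jax.numpy as jnp

# A tiny two-layer network, used only to sanity-check the one-step linearization.
def f(params, x):
    W1, b1, w2 = params
    return jnp.dot(w2, jnp.tanh(W1 @ x + b1))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.5 * jax.random.normal(k1, (64, 3)),
          0.1 * jax.random.normal(k2, (64,)),
          jax.random.normal(k3, (64,)) / jnp.sqrt(64.0))

x, y = jnp.array([1.0, -0.5, 0.3]), 1.0            # one training point
x_test = jnp.array([0.2, 0.7, -1.0])               # one test point
eta = 1e-3

grads_x = jax.grad(f)(params, x)                   # d f(x)  / d theta
grads_xt = jax.grad(f)(params, x_test)             # d f(x') / d theta
ntk = sum(jnp.vdot(a, b) for a, b in zip(jax.tree_util.tree_leaves(grads_x),
                                         jax.tree_util.tree_leaves(grads_xt)))

residual = f(params, x) - y
new_params = jax.tree_util.tree_map(lambda p, g: p - eta * 2 * residual * g,
                                    params, grads_x)

actual = f(new_params, x_test) - f(params, x_test)
predicted = -2 * eta * residual * ntk              # Eq. 25 without the O(dtheta^2) term
print(actual, predicted)
```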
C. Experimental details
We conduct all our experiments using JAX (Bradbury et al., 2018), performing exact NTK regression with the neural tangents library (Novak et al., 2019) built atop it. For the dataset sizes we consider in this paper, exact NTK regression is typically quite fast, running in seconds, while the training time of finite networks varies from seconds to minutes and depends on width, depth, training set size, and eigenmode. In particular, as described by Rahaman et al. (2019), lower eigenmodes take longer to train (especially when aiming for near-zero training MSE as we do here).
Naively, when training an infinitely-wide network, the NTK only describes the mean learned function, and the true learned function will include an NNGP-kernel-dependent fluctuation term reflecting the random initialization (Lee et al., 2019). However, by storing a copy of the parameters at $t = 0$ and redefining $\hat f_t(x) := \hat f_t(x) - \hat f_0(x)$ throughout optimization and at test time, this term becomes zero. We use this trick in our experiments with finite networks.⁷
Unless otherwise stated, all experiments used four-hidden-layer ReLU networks initialized with NTK parameterization (Sohl-Dickstein et al., 2020) with σ w = 1.4, σ b = .1. The tanh networks used in generating Figure 2 instead used σ w = 1.5. Experiments on the unit circle always used a learning rate of .5, while experiments on the hypercube, hypersphere, and image datasets used a learning rate of .5 or .1 depending on the experiment. While higher learning rates led to faster convergence, they often also gave different generalization behavior, in line with the large learning rate regimes described by Lewkowycz et al. (2020). Means and 1σ error bars always reflect statistics from 30 random dataset draws and initializations (for finite nets), except for
• the learnability sum experiment of Figure 2, which used only a single trial,
• experiments on image datasets, which used 15 trials, and
• the nonmonotonic MSE curves of Figure 3, which used 100 trials.
Experiments with binary image classification tasks used scalar targets of ±1. To obtain the eigeninformation necessary for theoretical predictions, we examine a training set with 10 4 samples, computing the eigensystem of the data-data kernel matrix to obtain 10 4 eigenvalues {λ i } i and projecting the target vector onto each eigenvector to compute a 10 4 -vector of target coefficients v. We then compute theoretical learnability and MSE as normal.
These estimates turn out to be overly optimistic because our framework assumes that test data is drawn i.i.d. from the universe of $M$ samples, and so if $M$ is finite, then a particular test point has an $n/M$ chance of belonging to the training set and being predicted perfectly. In our experiments (as in practice), however, this is not the case.

Footnotes 4-7: 4. The "width" parameter varies by architecture; for example, it is the minimal hidden layer width for fully connected networks and the minimal number of channels per hidden layer for a convolutional network. Our theory holds for any architecture in the proper limit. 5. Assuming the parameters are drawn from the same distribution. 6. We note that there exists a different infinite-width kernel, called the "NNGP kernel," describing a network's random initialization, and this reference reserves $K$ for the NNGP kernel and uses $\Theta$ for the NTK. 7. We credit this trick to a talk by Jascha Sohl-Dickstein.

We correct for this by computing the off-training-set learnability and MSE from the naive versions: subtracting off the free training-set learnability and normalizing both quantities gives
$$L_{\mathrm{OTS}} = \frac{L_{\mathrm{naive}} - \frac{n}{M}}{1 - \frac{n}{M}}, \qquad \mathcal{E}_{\mathrm{OTS}} = \frac{\mathcal{E}_{\mathrm{naive}}}{1 - \frac{n}{M}}. \qquad (26)$$
We find the off-training-set quantities agree much better with experiment. It is these quantities that constitute the theoretical predictions in Figures 1 and A.8. Experimental quantities are computed with a random subset of 1500 images from the test set.
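The correction of Equation 26 is a two-line transformation; the sketch below applies it to assumed naive values (the specific numbers are illustrative only, not results from the paper).

```python
import numpy as np

def off_training_set(naive_learnability, naive_mse, n, M):
    """Convert naive theory values (test points drawn from the full universe of M
    samples, hence in the training set with probability n/M) into off-training-set
    quantities via Eq. 26."""
    L_ots = (naive_learnability - n / M) / (1 - n / M)
    E_ots = naive_mse / (1 - n / M)
    return L_ots, E_ots

print(off_training_set(naive_learnability=0.62, naive_mse=0.35, n=2000, M=10**4))
```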
Bordelon et al. (2020) report similar experiments confirming their theoretical MSE on MNIST, but they allow duplicates and use knowledge of the test data in computing the eigensystem. By contrast, our predictions are generated strictly from training data (albeit from more than n samples). Applying kernel analysis to efficiently and accurately predict generalization performance on realistic data is an important problem, and as this comparison illustrates, there are many possible approaches.
D. Proofs: Exact Results
In this section, we provide proofs of the formal claims of Section 3. As a reminder, we note that
$$T^{(D)} \equiv \Lambda\Phi\left(\Phi^T\Lambda\Phi + \delta I_n\right)^{-1}\Phi^T. \qquad (27)$$
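Equation 27 can be evaluated directly for small synthetic problems, which is convenient for checking the properties proved below. The following NumPy sketch (an illustration under assumptions: a toy spectrum and a random design matrix with $\Phi^T\Phi = M I_n$) builds $T^{(D)}$ and numerically verifies Theorem 3.2 and property (b).

```python
import numpy as np

def learning_transfer_matrix(Phi, eigvals, delta=0.0):
    """Build T^(D) = Lambda Phi (Phi^T Lambda Phi + delta I_n)^(-1) Phi^T (Eq. 27)
    for a design matrix Phi of shape (M, n) and kernel eigenvalues eigvals (length M)."""
    M, n = Phi.shape
    Lam = np.diag(np.asarray(eigvals, float))
    inner = Phi.T @ Lam @ Phi + delta * np.eye(n)
    return Lam @ Phi @ np.linalg.solve(inner, Phi.T)

M, n = 12, 5
rng = np.random.default_rng(1)
Phi = np.sqrt(M) * np.linalg.qr(rng.normal(size=(M, n)))[0]    # Phi^T Phi = M I_n
T = learning_transfer_matrix(Phi, eigvals=1.0 / np.arange(1, M + 1) ** 2)
print(np.trace(T))                                             # = n in the ridgeless case
print(np.all(np.diag(T) >= -1e-12), np.all(np.diag(T) <= 1 + 1e-12))  # property (b)
```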
We will also make use of the observation that the ridge parameter can be viewed essentially as a uniform increase in all eigenvalues. Letting $T^{(D)}(\Lambda; \delta)$ denote the learning transfer matrix with eigenvalue matrix $\Lambda$ and ridge parameter $\delta$, it follows from Equation 27 and the fact that $\Phi^T\Phi = M I_n$ that
$$T^{(D)}(\Lambda; \delta) = \Lambda\left(\Lambda + \frac{\delta}{M} I_M\right)^{-1} T^{(D)}\!\left(\Lambda + \frac{\delta}{M} I_M;\, 0\right). \qquad (28)$$

D.1. Proof of Lemma 3.1 (Properties of $T^{(D)}$, $L^{(D)}$, and $L$)

Property (a): $L^{(D)}(\phi_i) = T^{(D)}_{ii}$, and $L(\phi_i) = \mathbb{E}\big[T^{(D)}_{ii}\big]$.

Proof. Using the fact that $\langle\phi_i, \phi_i\rangle = 1$, we see that $L^{(D)}(\phi_i) = \langle\phi_i, \hat\phi_i\rangle = e_i^T T^{(D)} e_i = T^{(D)}_{ii}$, where $e_i$ is a one-hot $M$-vector with the one at index $i$. The second clause of the property follows by averaging.
Property (b): L(φ i ), L (D) (φ i ) ∈ [0, 1].
We observe that
$$L^{(D)}(\phi_i) = e_i^T T^{(D)} e_i \qquad (29)$$
$$= e_i^T \Lambda\Phi\left(\Phi^T\Lambda\Phi + \delta I_n\right)^{-1}\Phi^T e_i \qquad (30)$$
$$= \mathrm{Tr}\!\left[\Phi^T e_i e_i^T\Lambda\Phi\left(\Phi^T\Lambda\Phi + \delta I_n\right)^{-1}\right] \le 1, \qquad (31)$$
where in the last line we have used the fact that
$$\Phi^T\Lambda\Phi + \delta I_n = \Phi^T e_i e_i^T\Lambda\Phi + [\text{PSD matrix}], \qquad (33)$$
which implies that the trace is less than or equal to one.
Property (c): When $n = 0$, $T^{(D)} = 0_M$ and $L^{(D)}(f) = L(f) = 0$.

Proof. When $n = 0$, $\Phi$ has no columns, and thus $T^{(D)} = 0$. The other clauses follow from Property (a) and averaging.

Property (d): When $n = M$ and $\delta = 0$, $T^{(D)} = I_M$ and $L^{(D)}(f) = L(f) = 1$.

Proof. When $n = M$ and $\delta = 0$, $\Phi$ is a full-rank $M \times M$ matrix. Inspection of Equation 27 then shows that $T^{(D)} = I_M$. The other clauses follow from Property (a) and averaging.

Property (e): Let $D^+$ be $D \cup x$, where $x \in \mathcal{X}$, $x \notin D$ is a new data point. Then $L^{(D^+)}(\phi_i) \ge L^{(D)}(\phi_i)$.
To begin, we set δ = 0. We then use the Moore-Penrose pseudoinverse, which we denote by (·) + , to cast T (D) into a more transparent form:
$$T^{(D)} \equiv \Lambda\Phi\left(\Phi^T\Lambda\Phi\right)^{-1}\Phi^T = \Lambda^{1/2}\left(\Lambda^{1/2}\Phi\Phi^T\Lambda^{1/2}\right)\left(\Lambda^{1/2}\Phi\Phi^T\Lambda^{1/2}\right)^{+}\Lambda^{-1/2}, \qquad (34)$$
where we have suppressed the D in Φ(D). This follows from the property of pseudoinverses that A(A T A) + A T = (AA T )(AA T ) + for any matrix A. We now augment our system with one extra data point, getting
$$T^{(D^+)} = \Lambda^{1/2}\left(\Lambda^{1/2}(\Phi\Phi^T + \xi\xi^T)\Lambda^{1/2}\right)\left(\Lambda^{1/2}(\Phi\Phi^T + \xi\xi^T)\Lambda^{1/2}\right)^{+}\Lambda^{-1/2}, \qquad (35)$$
where ξ is an M -element column vector orthogonal to the others of Φ and satisfying ξ T ξ = M . Equations 34 and 35 yield that
$$L^{(D)}(\phi_i) = e_i^T T^{(D)} e_i = e_i^T\left(\Lambda^{1/2}\Phi\Phi^T\Lambda^{1/2}\right)\left(\Lambda^{1/2}\Phi\Phi^T\Lambda^{1/2}\right)^{+} e_i, \qquad (36)$$
$$L^{(D^+)}(\phi_i) = e_i^T T^{(D^+)} e_i = e_i^T\left(\Lambda^{1/2}(\Phi\Phi^T + \xi\xi^T)\Lambda^{1/2}\right)\left(\Lambda^{1/2}(\Phi\Phi^T + \xi\xi^T)\Lambda^{1/2}\right)^{+} e_i. \qquad (37)$$
The rightmost expressions of Equations 36 and 37 both contain a factor of the form $AA^+$, where $A$ is a positive semidefinite matrix. An operator of this form is a projector onto the row-space of $A$. Comparing these equations, we find that the projectors are the same except that, in Equation 37, there is one additional dimension in the row-space and thus one new basis vector in the projector. This new basis vector can only increase $e_i^T T^{(D^+)} e_i$, and thus $L^{(D^+)}(\phi_i) \ge L^{(D)}(\phi_i)$ in the ridgeless case. This proof can easily be extended to nonzero ridge parameter using Equation 28, in which case Equations 36 and 37 both gain an overall factor of $\lambda_i/(\lambda_i + \frac{\delta}{M})$ and the same projector argument applies.
Property (f): $\frac{\partial}{\partial\lambda_i} L^{(D)}(\phi_i) \ge 0$ and $\frac{\partial}{\partial\lambda_i} L^{(D)}(\phi_j) \le 0$ for $j \ne i$.

Proof. Differentiating $T^{(D)}_{jj}$ with respect to a particular $\lambda_i$, we find that
$$\frac{\partial}{\partial\lambda_i} T^{(D)}_{jj} = \left(\delta_{ij} - \lambda_j\phi_j^T K^{-1}\phi_i\right)\phi_i^T K^{-1}\phi_j, \qquad (38)$$
where $\phi_i^T$ is the $i$th row of $\Phi$ and $K = \Phi^T\Lambda\Phi + \delta I_n$. Specializing to the case $i = j$, we note that $\phi_i^T K^{-1}\phi_i \ge 0$ because $K$ is positive definite, and $\lambda_i\phi_i^T K^{-1}\phi_i \le 1$ because $\lambda_i\phi_i\phi_i^T$ is one of the positive semidefinite summands in $K = \sum_k \lambda_k\phi_k\phi_k^T + \delta I_n$.
The first clause of the property follows. To prove the second clause, we instead specialize to the case $i \ne j$, which yields that
$$\frac{\partial}{\partial\lambda_i} T^{(D)}_{jj} = -\lambda_j\left(\phi_j^T K^{-1}\phi_i\right)^2, \qquad (39)$$
which is manifestly nonpositive because $\lambda_j > 0$. The desired property follows.
Property (g): $\frac{\partial}{\partial\delta} L^{(D)}(\phi_i) \le 0$.

Proof. Differentiating Equation 27 w.r.t. $\delta$ yields that $\frac{\partial}{\partial\delta} T^{(D)} = -\Lambda\Phi K^{-2}\Phi^T$. We then observe that
$$\frac{\partial}{\partial\delta} L^{(D)}(\phi_i) = e_i^T\left(\frac{\partial}{\partial\delta} T^{(D)}\right) e_i = -\lambda_i\, e_i^T\Phi K^{-2}\Phi^T e_i, \qquad (40)$$
which must be nonpositive because $\lambda_i > 0$ and $\Phi K^{-2}\Phi^T$ is manifestly positive semidefinite.
We note that, in a prior version of this paper, properties (b), (e), and (g) were mistakenly written as applying to all target functions when they in fact apply only to eigenfunction.
D.2. Proof of Theorem 3.2 (Conservation of learnability)
First, we note that, for any orthogonal basis F on X ,
$$\sum_{f \in \mathcal{F}} L^{(D)}(f) = \sum_{v \in \mathcal{V}} \frac{v^T T^{(D)} v}{v^T v}, \qquad (41)$$
where $\mathcal{V}$ is an orthogonal set of vectors spanning $\mathbb{R}^M$. This is equivalent to $\mathrm{Tr}\,T^{(D)}$. This trace is given by
$$\mathrm{Tr}\,T^{(D)} = \mathrm{Tr}\!\left[\Phi^T\Lambda\Phi\left(\Phi^T\Lambda\Phi + \delta I_n\right)^{-1}\right] = \mathrm{Tr}\!\left[K\left(K + \delta I_n\right)^{-1}\right]. \qquad (42)$$
When δ = 0, this trace simplifies to Tr[I n ] = n. When δ > 0, it is strictly less than n. This proves the theorem.
D.3. Proof of Corollary 3.3 (Low-L Functions are Overfit)
It is an immediate consequence of the conservation law of Theorem 3.2 that any orthogonal basis of functions $\mathcal{F}$ must contain an $f$ such that $L(f) \le \frac{n}{M}$. We now specialize to the case $\delta = 0$. In the ridgeless case, kernel regression is an interpolating method, predicting exactly the training targets on any inputs that appear in the training set. Decomposing learnability as
$$L(f) = \frac{1}{M\|f\|_2^2}\,\mathbb{E}_{x \in \mathcal{X}}\big[f(x)\hat f(x)\big] = \frac{n}{M} + \frac{1}{M\|f\|_2^2}\,\mathbb{E}_{x \in \mathcal{X}\setminus\mathcal{D}}\big[f(x)\hat f(x)\big], \qquad (43)$$
we find that, for a function such that $L(f) \le \frac{n}{M}$, the second term is nonpositive. Off-training-set (OTS) MSE is given by
$$\mathcal{E}_{\mathrm{OTS}}(f) = \frac{1}{M-n}\,\mathbb{E}_{x \in \mathcal{X}\setminus\mathcal{D}}\big[(f(x) - \hat f(x))^2\big] = \frac{1}{M-n}\,\mathbb{E}_{x \in \mathcal{X}\setminus\mathcal{D}}\big[f^2(x) - 2f(x)\hat f(x) + \hat f^2(x)\big]. \qquad (44)$$
If the model punts and predictsf (x) = 0, OTS MSE is thus ||f || 2 2 . For a low-learnability function, the second and third terms are both nonnegative, yielding OTS MSE greater than or equal to this naive value. Examining the third term, unless the model does in fact always predict zero, its OTS MSE will be strictly greater.
D.4. Proof that $\mathcal{E}(f) \ge \|f\|_2^2\,(1 - L(f))^2$

Here we show that knowing a particular function's learnability is sufficient to lower-bound the bias term of MSE (and thus MSE itself). We will work with $v$ and $\hat v$, the eigencoefficient vectors of $f$ and $\hat f$. Expected MSE is given by
$$\mathcal{E}(f) = \mathbb{E}\big[|v - \hat v|^2\big] = |v|^2 - 2v^T\mathbb{E}[\hat v] + \mathbb{E}\big[|\hat v|^2\big] = \underbrace{|v|^2 - 2v^T\mathbb{E}[\hat v] + |\mathbb{E}[\hat v]|^2}_{\text{bias}} + \underbrace{\mathrm{Var}[\hat v]}_{\text{variance}}. \qquad (45)$$
Projecting any vector onto an arbitrary unit vector can only decrease its magnitude, and so
$$\text{bias} \ge |v|^2 - 2v^T\mathbb{E}[\hat v] + \mathbb{E}[\hat v]^T\frac{vv^T}{|v|^2}\mathbb{E}[\hat v] = |v|^2\left(1 - \frac{v^T\mathbb{E}[\hat v]}{|v|^2}\right)^2 = \|f\|_2^2\,(1 - L(f))^2. \qquad (46)$$
This inequality is useful because it provides a bound on MSE in terms of learnability, which we show is simpler to study and obeys a conservation law. We use it in Section 4.4 to quickly show the difficulty of the parity problem.
E. Derivations: Approximate Results
In this section, we flesh out the derivation of the mean and covariance of T (D) sketched in Section 4. We organize our derivation according to the steps outlined in the main text. We first derive our results with δ = 0 and reintroduce a ridge parameter at the end.
E.1. Approximating Φ as a Random Matrix
Consider the $M \times M$ matrix $\bar\Phi$ where $\bar\Phi_{ij} = \phi_i(x_j)$ and the $j$ index runs over the full input space $\mathcal{X}$. This matrix obeys $\bar\Phi^T\bar\Phi = \bar\Phi\bar\Phi^T = M I_M$. The choice of a particular dataset amounts to the choice of $n$ of these $M$ columns to construct the design matrix $\Phi$. Any such choice yields a $\Phi$ satisfying the orthonormalization condition $\Phi^T\Phi = M I_n$.
Due to the large number of possible Φ, assuming no special structure in the eigenfunctions, it is reasonable to approximate it as a continuous average over all M × n matrices Φ such that Φ T Φ = M I n with the isotropic (Haar) measure. This approximation, which amounts to assuming no special structure in the eigenfunctions, is also made implicitly by Bordelon et al. (2020) and explicitly by Jacot et al. (2020).
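Sampling from this isotropic ensemble is straightforward, which makes the approximation easy to test by Monte Carlo. Below is a minimal NumPy sketch under that assumption (the sign correction on the QR factor makes the orthonormal frame exactly Haar-distributed); the sizes and seed are illustrative.

```python
import numpy as np

def sample_isotropic_design(M, n, rng):
    """Draw Phi from the Haar (isotropic) measure over M x n matrices with
    Phi^T Phi = M I_n, via the Q factor of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(M, n)))
    Q = Q * np.sign(np.diag(R))        # fix the sign ambiguity of the QR factorization
    return np.sqrt(M) * Q

rng = np.random.default_rng(0)
Phi = sample_isotropic_design(M=32, n=8, rng=rng)
print(np.allclose(Phi.T @ Phi, 32 * np.eye(8)))   # True
```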
E.2. Vanishing Off-Diagonals of E T (D)
We next observe that
$$\mathbb{E}_\Phi\!\left[\Lambda\Phi\left(\Phi^T U^T\Lambda U\Phi\right)^{-1}\Phi^T\right] = \mathbb{E}_\Phi\!\left[\Lambda U^T\Phi\left(\Phi^T\Lambda\Phi\right)^{-1}\Phi^T U\right], \qquad (47)$$
where $U$ is any orthogonal $M \times M$ matrix. Defining $U^{(m)}$ as the matrix such that $U^{(m)}_{ab} \equiv \delta_{ab}(1 - 2\delta_{am})$, noting that $U^{(m)}\Lambda U^{(m)} = \Lambda$, and plugging $U^{(m)}$ in as $U$ in Equation 47, we find that $\mathbb{E}\big[T^{(D)}\big]_{ab} = -\mathbb{E}\big[T^{(D)}\big]_{ab}$ whenever exactly one of $a, b$ equals $m$; taking $m = a \ne b$, we conclude that $\mathbb{E}\big[T^{(D)}\big]_{ab} = 0$ if $a \ne b$.

E.3. Fixing the Form of $\mathbb{E}\big[T^{(D)}\big]_{ii}$

E.3.1. Isolating the Desired Element

We now isolate a particular diagonal element of the mean learning transfer matrix. To do so, we write $\mathbb{E}\big[T^{(D)}\big]_{ii}$ in terms of $\lambda_i$ (the $i$th eigenvalue), $\Lambda_{(i)}$ ($\Lambda$ with its $i$th row and column removed), $\phi_i^T$ (the $i$th row of $\Phi$), and $\Phi_{(i)}$ ($\Phi$ with its $i$th row removed). Using the Sherman-Morrison matrix inversion formula, we find that
$$\left(\Phi^T\Lambda\Phi\right)^{-1} = \left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)} + \lambda_i\phi_i\phi_i^T\right)^{-1} = \left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1} - \frac{\lambda_i\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\phi_i\phi_i^T\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}}{1 + \lambda_i\phi_i^T\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\phi_i}. \qquad (49)$$
Inserting this into the expectation of T (D) , we find that
$$\mathbb{E}\big[T^{(D)}\big]_{ii} = \mathbb{E}_{\Phi_{(i)},\phi_i}\!\left[\lambda_i\phi_i^T\!\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\!\phi_i - \frac{\lambda_i^2\left(\phi_i^T\!\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\!\phi_i\right)^2}{1 + \lambda_i\phi_i^T\!\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\!\phi_i}\right] = \mathbb{E}_{\Phi_{(i)},\phi_i}\!\left[\frac{\lambda_i}{\lambda_i + \left(\phi_i^T\!\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\!\phi_i\right)^{-1}}\right] = \mathbb{E}_{\Phi_{(i)},\phi_i}\!\left[\frac{\lambda_i}{\lambda_i + C^{(\Phi_{(i)},\phi_i)}}\right], \qquad (50)$$
where $C^{(\Phi_{(i)},\phi_i)} \equiv C_i^{(\Phi)} \equiv \left(\phi_i^T\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\phi_i\right)^{-1}$ is a nonnegative scalar.
E.3.2. CONCENTRATION OF C (Φ) i
We now argue that, in realistic settings, $C_i^{(\Phi)}$ concentrates about its deterministic mean $C_i \equiv \mathbb{E}\big[C_i^{(\Phi)}\big]$. This simply requires observing that, if $C_i^{(\Phi)}$ were to have significant variance relative to its mean, then for modes $i$ such that $\lambda_i \sim C_i^{(\Phi)}$, $T^{(D)}_{ii}$ and thus $L^{(D)}(\phi_i)$ would also vary significantly with random dataset selection. However, as with most generalization metrics, we should in general expect that in realistic settings, simply resampling the dataset will not drastically change the resulting $\mathcal{D}$-learnability. In order for $\mathcal{D}$-learnability to have the expected small fluctuations, $C_i^{(\Phi)}$ must concentrate. Our experimental results in Figure 1 confirm that these fluctuations are indeed small in practice, especially at large $n$.
E.3.3. MODE-INDEPENDENCE OF C i
We next argue that C i is approximately independent of i, so we can replace it with a constant C. This requires another natural assumption: adding one additional eigenmode does not significantly change the (mean) learnability of a particular eigenmode. We expect this to hold true for realistic eigenspectra at modestly large n, at which adding one additional sample (i.e. one additional unit of learnability) also does not significantly change L(φ i ). 8 In particular, we assume that
$$\mathbb{E}\big[T^{(D)}\big]_{ii} \approx \frac{\lambda_i}{\lambda_i + C_i} \approx \frac{\lambda_i}{\lambda_i + C_i^+} \implies C_i \approx C_i^+, \qquad (51)$$
where $C_i^+$ is $C_i$ computed with the addition of another eigenmode. We choose the additional eigenmode to have eigenvalue $\lambda_i$, and we insert it at index $i$, effectively reinserting the missing mode $i$ into $\Phi_{(i)}$ and $\Lambda_{(i)}$.

To clarify the random variables in play, we shall adopt a more explicit notation, writing out $\Phi_{(i)}$ in terms of its row vectors as $\Phi_{(i)} = [\phi_1, ..., \phi_{i-1}, \phi_{i+1}, ..., \phi_M]^T$. Using this notation, we find upon adding the new eigenmode that
$$C_i \equiv \mathbb{E}_{\Phi_{(i)},\phi_i}\!\left[\left(\phi_i^T\left(\Phi_{(i)}^T\Lambda_{(i)}\Phi_{(i)}\right)^{-1}\phi_i\right)^{-1}\right] \qquad (52)$$
$$\equiv \mathbb{E}_{\{\phi_k\}_{k=1}^M}\!\left[\left(\phi_i^T\left([\phi_1, ..., \phi_{i-1}, \phi_{i+1}, ..., \phi_M]\,\Lambda_{(i)}\,[\phi_1, ..., \phi_{i-1}, \phi_{i+1}, ..., \phi_M]^T\right)^{-1}\phi_i\right)^{-1}\right] \qquad (53)$$
$$\approx C_i^+ \equiv \mathbb{E}_{\{\phi_k\}_{k=1}^M,\,\bar\phi_i}\!\left[\left(\phi_i^T\left([\phi_1, ..., \phi_{i-1}, \bar\phi_i, \phi_{i+1}, ..., \phi_M]\,\Lambda\,[\phi_1, ..., \phi_{i-1}, \bar\phi_i, \phi_{i+1}, ..., \phi_M]^T\right)^{-1}\phi_i\right)^{-1}\right], \qquad (54)$$
⁸ There are certainly pathological eigenspectra that will violate this assumption for a particular mode. For example, given an eigenspectrum with a large gap between $\lambda_{\ell-1}$ and $\lambda_\ell$, the learnability of mode $\ell$ when $n = \ell$ will be greatly affected by the insertion of an additional high-eigenvalue mode. However, as kernel eigenspectra are typically well-behaved in practice (and even in the worst case most eigenmodes will not be thus susceptible), this assumption is reasonable.

where $\Lambda$ is the original eigenvalue matrix and $\bar\phi_i^T$ is the design matrix row corresponding to the new mode. We can also perform the same manipulation with $C_j$, this time adding an additional eigenvalue $\lambda_j$ at index $j$, yielding that
$$C_j \equiv \mathbb{E}_{\Phi_{(j)},\phi_j}\!\left[\left(\phi_j^T\left(\Phi_{(j)}^T\Lambda_{(j)}\Phi_{(j)}\right)^{-1}\phi_j\right)^{-1}\right] \qquad (55)$$
$$\equiv \mathbb{E}_{\{\phi_k\}_{k=1}^M}\!\left[\left(\phi_j^T\left([\phi_1, ..., \phi_{j-1}, \phi_{j+1}, ..., \phi_M]\,\Lambda_{(j)}\,[\phi_1, ..., \phi_{j-1}, \phi_{j+1}, ..., \phi_M]^T\right)^{-1}\phi_j\right)^{-1}\right] \qquad (56)$$
$$\approx C_j^+ \equiv \mathbb{E}_{\{\phi_k\}_{k=1}^M,\,\bar\phi_j}\!\left[\left(\phi_j^T\left([\phi_1, ..., \phi_{j-1}, \bar\phi_j, \phi_{j+1}, ..., \phi_M]\,\Lambda\,[\phi_1, ..., \phi_{j-1}, \bar\phi_j, \phi_{j+1}, ..., \phi_M]^T\right)^{-1}\phi_j\right)^{-1}\right]. \qquad (57)$$
We now compare Equations 54 and 57. Each is an expectation over $M + 1$ vectors from the isotropic measure with the constraint that, when stacked, they form a design matrix $\Phi$ such that $\Phi^T\Phi = (M + 1)I_n$. Though they are not independent, the statistics of these $M + 1$ vectors are symmetric under exchange, so we are free to relabel them. Equation 54 is identical to Equation 57 upon relabeling $\phi_i \to \phi_j$, $\bar\phi_i \to \phi_i$, and $\phi_j \to \bar\phi_j$, so they are equivalent, and $C_i^+ = C_j^+$. This in turn implies that $C_i \approx C_j$.
In light of this, we now replace all C i with a mode-independent (but as-of-yet-unknown) constant C. This argument is closely related to the cavity method of statistical physics (Del Ferraro et al., 2014), which we expect could be applied to further develop the approximate theory we present in this work.
E.4. Fixing the Constant C
We can determine the value of C by observing that, using the ridgeless case of Theorem 3.2,
$$\sum_i \mathbb{E}\big[T^{(D)}\big]_{ii} = \sum_i \frac{\lambda_i}{\lambda_i + C} = n. \qquad (58)$$
E.5. Differentiating w.r.t. Λ to Obtain the Covariance
Here we derive expressions for the second-order statistics of T (D) . These derivations make no further approximations beyond those already made in approximating E T (D) . We begin with a calculation that will later be of use: differentiating both sides of the constraint on C with respect to a particular eigenvalue, we find that
$$\frac{d}{d\lambda_i}\sum_{j=1}^{M}\frac{\lambda_j}{\lambda_j + C} = \sum_{j=1}^{M}\frac{-\lambda_j}{(\lambda_j + C)^2}\,\frac{dC}{d\lambda_i} + \frac{C}{(\lambda_i + C)^2} = 0, \qquad (59)$$
yielding that
$$\frac{dC}{d\lambda_i} = \frac{C}{q\,(\lambda_i + C)^2}, \qquad \text{where} \quad q \equiv \sum_{j=1}^{M}\frac{\lambda_j}{(\lambda_j + C)^2}. \qquad (60)$$
We now factor T (D) into two matrices as
$$T^{(D)} = \Lambda Z, \qquad \text{where} \quad Z \equiv \Phi\left(\Phi^T\Lambda\Phi\right)^{-1}\Phi^T. \qquad (61)$$
Unlike T (D) , the matrix Z has the advantage of being symmetric and containing only one factor of Λ. Our approach will be to study the second-order statistics of Z, which will trivially give these statistics for T (D) . Examining our expression for E T (D) , we find that the expectation of Z is
$$\mathbb{E}[Z] = (\Lambda + C I_M)^{-1}. \qquad (62)$$
We are not using the Einstein convention of summation over repeated indices.
Cases 1 and 2. We now consider differentiating Z with respect to a particular element of the matrix Λ. This yields
$$\frac{dZ_{i\ell}}{d\Lambda_{jk}} = -\phi_i^T\left(\Phi^T\Lambda\Phi\right)^{-1}\phi_j\,\phi_k^T\left(\Phi^T\Lambda\Phi\right)^{-1}\phi_\ell = -Z_{ij}Z_{k\ell}, \qquad (66)$$
where $\phi_i$ is the $i$th row of $\Phi$. This gives us the useful expression that
$$\mathbb{E}[Z_{ij}Z_{k\ell}] = -\frac{d}{d\Lambda_{jk}}\,\mathbb{E}[Z_{i\ell}]. \qquad (67)$$
We now set $\ell = i$ and evaluate this expression using Equation 62, concluding that
$$\mathbb{E}[Z_{ij}Z_{ij}] = \mathbb{E}[Z_{ij}Z_{ji}] = -\frac{d}{d\lambda_j}\,\frac{1}{\lambda_i + C} = \frac{1}{(\lambda_i + C)^2}\left(\delta_{ij} + \frac{C}{q\,(\lambda_j + C)^2}\right), \qquad (68)$$
$$\mathrm{Cov}[Z_{ij}, Z_{ij}] = \mathrm{Cov}[Z_{ij}, Z_{ji}] = \frac{C}{q\,(\lambda_i + C)^2(\lambda_j + C)^2}. \qquad (69)$$
Case 3. We now aim to calculate $\mathbb{E}[Z_{ii}Z_{jj}]$ with $i \ne j$. We might hope to use Equation 67 in calculating $\mathbb{E}[Z_{ii}Z_{jj}]$, but this approach is stymied by the fact that we would need to take a derivative with respect to $\Lambda_{ij}$, but we only have an approximation for $Z$ for diagonal $\Lambda$. We can circumvent this by means of $Z^{(U)}$. From the definition of $Z^{(U)}$, we find that
$$\left(\frac{d}{dU_{ij}} - \frac{d}{dU_{ji}}\right) Z^{(U)}_{ij}\bigg|_{U=I_M} = -\phi_i^T\left(\Phi^T\Lambda\Phi\right)^{-1}\left(\lambda_i\phi_j\phi_i^T - \lambda_j\phi_i\phi_j^T + \lambda_i\phi_i\phi_j^T - \lambda_j\phi_j\phi_i^T\right)\left(\Phi^T\Lambda\Phi\right)^{-1}\phi_j = (\lambda_j - \lambda_i)\left(Z_{ij}^2 + Z_{ii}Z_{jj}\right). \qquad (70)$$
Differentiating with respect to both U ij and U ji with opposite signs ensures that the derivative is taken within the manifold of orthogonal matrices. Now, using Equation 63, we find that
$$\left(\frac{d}{dU_{ij}} - \frac{d}{dU_{ji}}\right)\mathbb{E}\Big[Z^{(U)}_{ij}\Big]\bigg|_{U=I_M} = \left(\frac{d}{dU_{ij}} - \frac{d}{dU_{ji}}\right)\Big[U^T(\Lambda + CI_M)^{-1}U\Big]_{ij}\bigg|_{U=I_M} = \frac{1}{\lambda_i + C} - \frac{1}{\lambda_j + C}. \qquad (71)$$
Taking the expectation of Equations 70, plugging in Equation 68 for the squared off-diagonal element, comparing to 71, and performing some algebra, we conclude that
$$\mathbb{E}[Z_{ii}Z_{jj}] = \frac{1}{(\lambda_i + C)(\lambda_j + C)} - \frac{C}{q\,(\lambda_i + C)^2(\lambda_j + C)^2} \qquad (72)$$
and that Z ii , Z jj are anticorrelated with covariance
$$\mathrm{Cov}_\Phi[Z_{ii}, Z_{jj}] = -\frac{C}{q\,(\lambda_i + C)^2(\lambda_j + C)^2}. \qquad (73)$$
With the use of Kronecker deltas, we can combine Equations 69 and 73 into one expression covering all cases. As can be verified by case-by-case evaluation, one such expression is
$$\mathrm{Cov}_\Phi[Z_{ij}, Z_{k\ell}] = \frac{C\,\big(\delta_{ik}\delta_{j\ell} + \delta_{i\ell}\delta_{jk} - \delta_{ij}\delta_{k\ell}\big)}{q\,(\lambda_i + C)(\lambda_j + C)(\lambda_k + C)(\lambda_\ell + C)}.$$
Using the fact that $T^{(D)}_{ij} = \lambda_i Z_{ij}$, defining $L_i \equiv \lambda_i/(\lambda_i + C)$, and noting that $qC = \sum_i L_i(1 - L_i)$, we obtain the elementwise covariances of $T^{(D)}$ reported in Theorem 4.1.
E.6. Adding Back the Ridge Parameter
Our approximate results have thus far all assumed $\delta = 0$, which has simplified our derivations. We can now add the ridge parameter back with the observation (Equation 28) that the ridge parameter can be viewed essentially as a uniform increase in all eigenvalues. To reiterate, letting $T^{(D)}(\Lambda; \delta)$ denote the learning transfer matrix with eigenvalue matrix $\Lambda$ and ridge parameter $\delta$, it holds (using $\Phi^T\Phi = M I_n$) that
$$T^{(D)}(\Lambda; \delta) = \Lambda\left(\Lambda + \frac{\delta}{M} I_M\right)^{-1} T^{(D)}\!\left(\Lambda + \frac{\delta}{M} I_M;\, 0\right).$$
To add a ridge parameter, then, we need merely replace $\lambda_i \to \lambda_i + \frac{\delta}{M}$ and then multiply $T^{(D)}_{ij}$ by $\lambda_i\left(\lambda_i + \frac{\delta}{M}\right)^{-1}$. This yields the mean and covariance of $T^{(D)}$ at nonzero ridge, where $C \ge 0$ now satisfies Equation 14. Taking $M \to \infty$, we find that these expressions, which reduce to those for finite $M$ when $\delta = 0$, are equivalent to those given in Theorem 4.1.
Figure 1: Predicted learnabilities closely match experimental values. (A-C) Each plot shows the learnability of several eigenfunctions on a synthetic domain. Theoretical predictions (curves) are plotted against experimental values from trained finite networks (circles) and NTK regression (triangles) with varying dataset size n. Error bars reflect 1σ variation due to random choice of dataset and, for finite nets, random initialization. (D) Predicted and experimental learnability for four binary image classification tasks.

Figure 2: Eigenfunction learnabilities always sum to the size of the training set. Stacked bar charts with 10 components show D-learnability for each of the 10 eigenfunctions on the unit circle discretized with M = 10. The left bar in each pair contains results from NTK regression, while the right bar contains results from trained finite networks. Dashed lines indicate n.
Figure A.6 shows the eigenvalues for these domains as a function of k. On each domain, eigenvalues decrease essentially monotonically as k increases, in concordance with the common intuition that neural nets are biased towards simple functions. Figure A.7 shows the eigenvalues and eigencoefficients for the binary image classification tasks used in our experiments.

Figure 3: For difficult eigenmodes, MSE increases with n due to overfitting. Predicted MSE (curves) and empirical MSE for trained networks (circles) and NTK regression (triangles) for four eigenmodes on three domains at small n. Dotted lines indicate dE/dn at n = 0 as predicted by Equation 22.

Figure 4: Predicted mean squared gradient matches experiment. Predicted mean squared gradient (curves) and empirical values for trained networks (circles) and kernel regression (triangles) for k = 2 modes on hyperspheres with d in {3, 5, 8}, normalized by the ground-truth values.
References

Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322-332. PMLR, 2019.

Belkin, M., Ma, S., and Mandal, S. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pp. 541-549. PMLR, 2018.

Belkin, M., Hsu, D., Ma, S., and Mandal, S. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849-15854, 2019.

Bengio, Y., Delalleau, O., and Le Roux, N. The curse of highly variable functions for local kernel machines. Advances in Neural Information Processing Systems, 18:107, 2006.

Bordelon, B., Canatar, A., and Pehlevan, C. Spectrum dependent learning curves in kernel regression and wide neural networks. In International Conference on Machine Learning, pp. 1024-1034. PMLR, 2020.

Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Canatar, A., Bordelon, B., and Pehlevan, C. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications, 12(1):1-12, 2021.

Cao, Y., Fang, Z., Wu, Y., Zhou, D.-X., and Gu, Q. Towards understanding the spectral bias of deep learning. arXiv preprint arXiv:1912.01198, 2019.

de Vlaming, R. and Groenen, P. J. The current and future use of ridge regression for prediction in quantitative genetics. BioMed Research International, 2015, 2015.

Del Ferraro, G., Wang, C., Martí, D., and Mézard, M. Cavity method: Message passing from a physics perspective. arXiv preprint arXiv:1409.3048, 2014.

Dyer, E. and Gur-Ari, G. Asymptotics of wide networks from feynman diagrams. arXiv preprint arXiv:1909.11304, 2019.

Exterkate, P., Groenen, P. J., Heij, C., and van Dijk, D. Nonlinear forecasting with many predictors using kernel ridge regression. International Journal of Forecasting, 32(3):736-753, 2016.

Frye, C. and Efthimiou, C. J. Spherical harmonics in p dimensions. arXiv preprint arXiv:1205.3548, 2012.

Gilmer, J., Metz, L., Faghri, F., Schoenholz, S. S., Raghu, M., Wattenberg, M., and Goodfellow, I. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018.

Jacot, A., Hongler, C., and Gabriel, F. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8580-8589, 2018.

Jacot, A., Şimşek, B., Spadaro, F., Hongler, C., and Gabriel, F. Kernel alignment risk estimator: risk prediction from training data. arXiv preprint arXiv:2006.09796, 2020.

Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. Deep neural networks as gaussian processes. In International Conference on Learning Representations (ICLR). OpenReview.net, 2018.

Lee, J., Xiao, L., Schoenholz, S. S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8570-8581, 2019.

Lewkowycz, A., Bahri, Y., Dyer, E., Sohl-Dickstein, J., and Gur-Ari, G. The large learning rate phase of deep learning: the catapult mechanism. CoRR, abs/2003.02218, 2020.

Mei, S. and Montanari, A. The generalization error of random features regression: Precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics, 2019.

Mei, S., Montanari, A., and Nguyen, P.-M. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665-E7671, 2018.

Misiakiewicz, T. and Mei, S. Learning with convolution and pooling operations in kernel methods. arXiv preprint arXiv:2111.08308, 2021.

Novak, R., Xiao, L., Hron, J., Lee, J., Alemi, A. A., Sohl-Dickstein, J., and Schoenholz, S. S. Neural tangents: Fast and easy infinite neural networks in python. CoRR, abs/1912.02803, 2019.

Olah, C., Mordvintsev, A., and Schubert, L. Feature visualization. Distill, 2(11):e7, 2017.

Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville, A. On the spectral bias of neural networks. In International Conference on Machine Learning, pp. 5301-5310. PMLR, 2019.
Figure A.5: Comparison between predicted learnability and MSE for networks of various widths. (A-F) Predicted (curves) and true (circles) learnability for four eigenmodes on the 8d hypercube. Dataset size n varies within each subplot, and the width of the 4HL ReLU network varies between subplots. (G-L) Same as (A-F) but with MSE instead of learnability.
Figure A.6: 4HL ReLU NTK eigenvalues and multiplicities on three synthetic domains. (A) Eigenvalues for k for the discretized unit circle (M = 256). Eigenvalues decrease as k increases except for a few near exceptions at high k. (B) Eigenvalues for the 8d hypercube. Eigenvalues decrease monotonically with k. (C) Eigenvalues for the 7-sphere up to k = 70. Eigenvalues decrease monotonically with k. (D) Eigenvalue multiplicity for the discretized unit circle. All eigenvalues are doubly degenerate (due to cos and sin modes) except for k = 0 and k = 128. (E) Eigenvalue multiplicity for the 8d hypercube. (F) Eigenvalue multiplicity for the 7-sphere.
Figure A.7: Eigenvalues and eigencoefficients for four binary image classification tasks. (A) Kernel eigenvalues as computed from 10^4 training points as described in Appendix C. Spectra for CIFAR10 tasks roughly follow power laws with exponent −1, while spectra for MNIST tasks follow power laws with slightly steeper descent. (B) Eigencoefficients as computed from 10^4 training points. Tasks with higher observed learnability (Figure 1) place more weight in higher (i.e., lower-index) eigenmodes and less in lower ones.
Figure A.8: Learnability provides a close lower bound for MSE on image classification tasks. Experimental MSE for finite networks (circles) and kernel regression (triangles), theoretical MSE (solid curves), and theoretical (1 − L)^2 (dashed curves) for four binary image classification tasks.

Figure A.9: Predicted mean squared gradient matches experiment on the unit circle. Mean squared gradient theoretical predictions (curves) and empirical values for finite networks (circles) and kernel regression (triangles) for various eigenmodes on the discretized unit circle with M = 256, normalized by the ground-truth mean squared gradient of E|f'(x)|^2 = k^2. Empirical values are computed discretely as E_j |f̂(x_j) − f̂(x_{j+1})|^2, where x_j and x_{j+1} are neighboring points on the unit circle.
References (continued)

Schulz, E., Speekenbrink, M., and Krause, A. A tutorial on gaussian process regression: Modelling, exploring, and exploiting functions. Journal of Mathematical Psychology, 85:1-16, 2018.

Shawe-Taylor, J., Cristianini, N., et al. Kernel methods for pattern analysis. Cambridge University Press, 2004.

Sohl-Dickstein, J., Novak, R., Schoenholz, S. S., and Lee, J. On the infinite width limit of neural networks with a standard parameterization. arXiv preprint arXiv:2001.07301, 2020.

Sollich, P. Learning curves for gaussian processes. Advances in Neural Information Processing Systems, pp. 344-350, 1999.

Sollich, P. Gaussian process regression with mismatched models. arXiv preprint cond-mat/0106475, 2001.

Spigler, S., Geiger, M., and Wyart, M. Asymptotic learning curves of kernel methods: empirical data versus teacher-student paradigm. Journal of Statistical Mechanics: Theory and Experiment, 2020(12):124001, 2020.

Su, L. and Yang, P. On learning over-parameterized neural networks: A functional approximation perspective. arXiv preprint arXiv:1905.10826, 2019.

Valle-Perez, G., Camargo, C. Q., and Louis, A. A. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.

Vivarelli, F. and Opper, M. General bounds on bayes errors for regression with gaussian processes. Advances in Neural Information Processing Systems, 11:302-308, 1999.

Wolpert, D. H. The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341-1390, 1996.

Xu, Z. J. Understanding training and generalization in deep learning by fourier analysis. arXiv preprint arXiv:1808.04295, 2018.

Xu, Z.-Q. J., Zhang, Y., Luo, T., Xiao, Y., and Ma, Z. Frequency principle: Fourier analysis sheds light on deep neural networks. arXiv preprint arXiv:1901.06523, 2019a.

Xu, Z.-Q. J., Zhang, Y., and Xiao, Y. Training behavior of deep neural network in frequency domain. In International Conference on Neural Information Processing, pp. 264-274. Springer, 2019b.

Yang, G. and Hu, E. J. Tensor programs iv: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, pp. 11727-11737. PMLR, 2021.

Yang, G. and Salman, H. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019.
We do not use the reproducing kernel Hilbert space (RKHS) formalism in this work, but we note that, by the Moore-Aronszajn theorem, the kernel K defines a unique RKHS.
For example, choosing a linear model often leads to good performance on linear targets but poor performance on nonlinear targets.
Equation 71 in the supplement of Canatar et al. (2021) provides an expression for this covariance that, upon performing some algebra, agrees with ours except for a second, negative term proportional to $(1 - \delta_{ij})$. As $L_i, L_j \to 0$, the variance of $\hat v_i, \hat v_j$ approaches 0, but due to this additional term, the covariance approaches a finite negative value instead of 0, which is impossible. We thus believe that Equation 16 corrects this minor error.
Jacot et al. (2020) scale the ridge parameter proportional to n in their definition of kernel ridge regression; our reproduction of their result accounts for this and applies to our convention.
Acknowledgments

The authors thank Zack Weinstein for useful discussions and Chandan Singh, Sajant Anand, Jesse Livezey, Roy Rinberg, Jascha Sohl-Dickstein, and several reviewers for helpful comments on the manuscript. This research was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract W911NF-20-1-0151. JS gratefully acknowledges support from the National Science Foundation Graduate Fellow Research Program (NSF-GRFP) under grant DGE 1752814.

E.7. Target Noise

Suppose that, instead of given clean targets, we are given noisy labels $f^*(x_i) + \eta_i$, where $f^*$ is the noiseless underlying function and $\eta_i \sim \mathcal{N}(0, \epsilon^2)$ is the noise when the function is evaluated on $x_i$. As long as we are assured that no input will either appear twice in the training set or appear in both the training and test sets, instead of viewing the noise as a random variable sampled with each function evaluation, we can instead view it as a random but fixed noisy perturbation to $f$ which alters its eigencoefficients to be $v \to v^* + \tilde v$, where $v^*$ are the original coefficients and the elements of $\tilde v$ are sampled i.i.d. from $\mathcal{N}(0, \frac{\epsilon^2}{M})$. We can then examine the statistics of $\hat f$ when trained on this noisy function. In the $M \to \infty$ limit, all the noise is in arbitrarily low eigenmodes with $L_i = 0$. In this limit, observed learnability is lowered with the addition of noise, though the expected unnormalized inner product of $\hat f$ with the true function (i.e., $\mathbb{E}[\hat v^T v^*]$) remains unchanged. The covariance of the predicted coefficients in the case of noise is still given by Equation 16 using the noisy MSE.

E.8. Properties of C

In experimental settings, $C$ is in general easy to find numerically, but for theoretical study, we anticipate it being useful to have some analytical bounds on $C$ in order to, for example, prove that certain eigenmodes are or are not asymptotically learned for particular spectra. To that end, the following lemma gives some properties of $C$.

Lemma. For $C \ge 0$ satisfying $\sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i + C} + \frac{\delta}{C} = n$, with positive eigenvalues $\{\lambda_i\}_{i=1}^M$ ordered from greatest to least, the following properties hold:
(a) $C = \infty$ when $n = 0$, and $C = 0$ when $n \to M$ and $\delta = 0$.
(b) $C$ is strictly decreasing with $n$.
(c) $C \le \frac{1}{n}\left(\sum_{i=1}^{M}\lambda_i + \delta\right)$.
(d) $C \ge \lambda_\ell\left(\frac{\ell}{n} - 1\right)$ for all $\ell \in \{n, ..., M\}$.

Proof of property (a): Because $\sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i + C} + \frac{\delta}{C}$ is strictly decreasing with $C$ for $C \ge 0$, there can only be one solution for a given $n$. The first statement follows by inspection, and the second follows by inspection and our assumption that all eigenvalues are strictly positive.

Proof of property (b): Differentiating the constraint on $C$ with respect to $n$ yields $\frac{dC}{dn} = -\left(\sum_{i=1}^{M}\frac{\lambda_i}{(\lambda_i + C)^2} + \frac{\delta}{C^2}\right)^{-1} < 0$.

Proof of property (c): We observe that $n = \sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i + C} + \frac{\delta}{C} \le \frac{\sum_i\lambda_i + \delta}{C}$. The desired property follows.

Proof of property (d): We set $\delta = 0$ and consider replacing $\lambda_i$ with $\lambda_\ell$ if $i \le \ell$ and $0$ if $i > \ell$. Noting that this does not increase any term in the sum, we find that $n = \sum_{i=1}^{M}\frac{\lambda_i}{\lambda_i + C} \ge \frac{\ell\,\lambda_\ell}{\lambda_\ell + C}$. The desired property in the ridgeless case follows. A positive ridge parameter only increases $C$, so the property holds in general. We note that a positive ridge parameter can be incorporated into the bound.

We also note that, as observed by Jacot et al. (2020) and Spigler et al. (2020), the asymptotic scaling of $C$ can be fixed if the kernel eigenvalues follow a power law spectrum. Specifically, if $\lambda_i \sim i^{-\alpha}$ for some $\alpha > 1$, then Jacot et al. (2020)⁹ show that $C = \Theta(\delta n^{-1} + n^{-\alpha})$.
SPIN: A Fast and Scalable Matrix Inversion Method in Apache Spark

Chandan Misra, Swastik Haldar, Sourangshu Bhattacharya, Soumya K. Ghosh
Indian Institute of Technology Kharagpur, West Bengal, India

15 Jan 2018. arXiv:1801.04723. DOI: 10.1145/3154273.3154300

CCS CONCEPTS: • Computing methodologies → MapReduce algorithms
KEYWORDS: Linear Algebra, Matrix Inversion, Strassen's Algorithm, Apache Spark
ABSTRACT

The growth of big data in domains such as Earth Sciences, Social Networks, Physical Sciences, etc. has led to an immense need for efficient and scalable linear algebra operations, e.g. matrix inversion. Existing methods for efficient and distributed matrix inversion using big data platforms rely on LU decomposition based block-recursive algorithms. However, these algorithms are complex and require a lot of side calculations, e.g. matrix multiplication, at various levels of recursion. In this paper, we propose a different scheme based on Strassen's matrix inversion algorithm (mentioned in Strassen's original paper in 1969), which uses far fewer operations at each level of recursion. We implement the proposed algorithm, and through extensive experimentation, show that it is more efficient than the state of the art methods. Furthermore, we provide a detailed theoretical analysis of the proposed algorithm, and derive theoretical running times which match closely with the empirically observed wall clock running times, thus explaining the U-shaped behaviour w.r.t. block-sizes.
INTRODUCTION
Dense matrix inversion is a basic procedure used by many applications in Data Science, Earth Science, Scientific Computing, etc., and has become an essential component of many such systems. It is an expensive operation, both in terms of computational and space complexity, and hence consumes a large fraction of resources in many of the workloads. In the big data era, many of these applications have to work on huge matrices, possibly stored over multiple servers, and thus consuming huge amounts of computational resources for matrix inversion. Hence, designing efficient large scale distributed matrix inversion algorithms is an important challenge.
Since its release in 2012, Spark [19] has been adopted as a dominant solution for scalable and fault-tolerant processing of huge datasets in many applications, e.g., machine learning [11], graph processing [7], climate science [12], social media analytics [2], etc. Spark has gained its popularity for its in-memory distributed data processing ability, which runs interactive and iterative applications faster than Hadoop MapReduce. Its close integration with Scala/Java and the flexible structure of RDDs allow distributed recursive algorithms to be implemented efficiently, without compromising on scalability and fault-tolerance. Hence, in this paper we focus on Spark for the implementation of large scale distributed matrix inversion.
There are a variety of existing inversion algorithms, e.g. methods based on QR decomposition [13], LU decomposition [13], Cholesky decomposition [5], Gaussian elimination [3], etc. Most of them require O(n^3) time (where n denotes the order of the matrix), and the main speed-ups in shared memory settings come from architecture specific optimizations (reviewed in Section 2). Surprisingly, there are not many studies on distributed matrix inversion using big data frameworks, where jobs could be distributed over machines with a diverse set of architectures. LU decomposition is the most widely used technique for distributed matrix inversion, possibly due to its efficient block-recursive structure. Xiang et al. [17] proposed a Hadoop based implementation of matrix inversion relying on computing the LU decomposition and discussed many Hadoop specific optimizations. Recently, Liu et al. [10] proposed several optimized block-recursive inversion algorithms on Spark based on LU decomposition. In the block recursive approach [10], the computation is broken down into subtasks that are computed as a pipeline of Spark tasks on a cluster. The costliest part of the computation is matrix multiplication, and the authors have given a couple of optimized algorithms to reduce the number of multiplications. However, in spite of being optimized, the implementation requires 9 O(n^3) operations at the leaf node of the recursion tree, 12 multiplications at each recursion level of the LU decomposition, and an additional 7 multiplications after the LU decomposition to invert the matrix, which makes the implementation slower.
In this paper, we use a much simpler and less exploited algorithm, proposed by Strassen in his 1969 multiplication paper [16]. The algorithm follows a similar block-recursion structure as LU decomposition, yet provides a simpler approach to matrix inversion.
This approach involves no additional matrix multiplication at the leaf level of the recursion, and requires only 6 multiplications at intermediate levels. We propose and implement a distributed matrix inversion algorithm based on Strassen's original serial inversion scheme. We also provide a detailed analysis of the wall clock time of the proposed algorithm, revealing its 'U'-shaped behaviour with respect to block size. Experimentally, we show comprehensively that the proposed approach is superior to the LU-decomposition-based approaches for all corresponding block sizes, and hence overall. We also demonstrate that our analysis of the proposed approach matches the empirically observed wall clock time, and that the algorithm exhibits near-ideal scaling behaviour. In summary:
(1) We propose and implement a novel approach (SPIN) to distributed matrix inversion, based on an algorithm proposed by Strassen [16].
(2) We provide a theoretical analysis of our proposed algorithm which matches closely with the empirically observed wall clock time.
(3) Through extensive experimentation, we show that the proposed algorithm is superior to the LU-decomposition-based approach.
RELATED WORK
The literature on parallel and distributed matrix inversion can be divided broadly into three categories: 1) HPC-based approaches, 2) GPU-based approaches, and 3) Hadoop- and Spark-based approaches. Here, we briefly review them.
HPC based approach
LINPACK, LAPACK and ScaLAPACK are some of the most robust linear algebra software packages that support matrix inversion. LINPACK was written in Fortran and used on shared-memory vector computers. It has been superseded by LAPACK, which runs more efficiently on modern cache-based architectures. LAPACK has also been extended to run on distributed-memory MIMD parallel computers in the ScaLAPACK package. However, these packages are based on architectures and frameworks which are not fault tolerant, and MapReduce-based matrix inversion is more scalable than ScaLAPACK, as shown in [17]. Lau et al. [8] presented two algorithms for inverting sparse, symmetric and positive definite matrices on SIMD and MIMD machines, respectively. The algorithm uses the Gaussian elimination technique and the sparseness of the matrix to achieve higher performance. Bientinesi et al. [4] presented a parallel implementation of symmetric positive definite matrix inversion on three architectures, namely sequential processors, symmetric multiprocessors and distributed-memory parallel computers, using the Cholesky factorization technique. Yang et al. [18] presented a parallel algorithm for matrix inversion based on Gauss-Jordan elimination with partial pivoting. It uses an efficient mechanism to reduce the communication overhead and also provides good scalability. Bailey et al. presented techniques to compute the inverse of a matrix using an algorithm suggested by Strassen in [16]. It uses the Newton iteration method to increase its stability while preserving parallelism. Most of the above works are based on specialized matrices and are not meant for general matrices. In this paper, we concentrate on any kind of square, positive definite and invertible matrices which are distributed on large clusters, for which the above algorithms are not suitable.
Multicore and GPU based approach
In order to fully exploit multicore architectures, tile algorithms have been developed. Agullo et al. [1] developed such a tile algorithm to invert a symmetric positive definite matrix using Cholesky decomposition. Sharma et al. [15] presented a modified Gauss-Jordan algorithm for matrix inversion on a CUDA-based GPU platform and studied the performance metrics of the algorithm. Ezzatti et al. [6] presented several algorithms for computing the matrix inverse based on the Gauss-Jordan algorithm on a hybrid platform consisting of multicore processors connected to several GPUs. Although the above works have demonstrated that GPUs can considerably reduce the computational time of matrix inversion, they are non-scalable centralized methods and need special hardware.
MapReduce based approach
MadLINQ [14] offered a highly scalable, efficient and fault-tolerant matrix computation system with a unified programming model which integrates with DryadLINQ, a data-parallel computing system. However, it does not mention any inversion algorithm explicitly. Xiang et al. [17] implemented the first LU-decomposition-based matrix inversion in the Hadoop MapReduce framework. However, it suffers from typical Hadoop shortcomings like redundant data communication between the map and reduce phases and the inability to preserve the distributed recursion structure. Liu et al. [10] provide the same LU-based distributed inversion on the Spark platform. It optimizes the algorithm by eliminating redundant matrix multiplications to achieve faster execution. Almost all the MapReduce-based approaches rely on LU decomposition to invert a matrix. The reason is that it partitions the computation in a way suitable for MapReduce-based systems. In this paper, we show that matrix inversion can be performed efficiently in a distributed environment like Spark by implementing Strassen's scheme, which requires fewer multiplications than the earlier approaches, providing faster execution.
ALGORITHM DESIGN
In this section, we discuss the implementation of SPIN on the Spark framework. First, we describe the original Strassen's inversion algorithm [16] for serial matrix inversion in Section 3.1. Next, in Section 3.2, we describe the BlockMatrix data structure from MLlib, which is used in our algorithm to distribute the large input matrix over the distributed file system. Finally, Section 3.3 describes the distributed inversion algorithm and its implementation strategy using BlockMatrix.
Strassen's Algorithm for Matrix Inversion
Strassen's matrix inversion algorithm appeared in the same paper in which the well-known Strassen's matrix multiplication was published. The algorithm can be described as follows. Let a matrix A and its inverse C = A^{-1} be split into half-sized sub-matrices:
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}$$
Then the result C can be calculated as shown in Algorithm 1. Intuitively, the steps involved in the algorithm are difficult to perform in parallel. However, for input matrices which are too large to fit into the memory of a single server, each such step is required to be processed distributively. These steps include breaking a matrix into four equal-size sub-matrices, multiplication and subtraction of two matrices, multiplying a matrix by a scalar, and arranging four half-sized sub-matrices into a full matrix. All these steps are done by splitting the matrix into blocks, which act as the execution units of the Spark job. A brief description of the block data structure is given below.
Algorithm 1: Strassen's Serial Inversion Algorithm
function Inverse();
Input: Matrix A (input matrix of size n × n), int threshold
Output: Matrix C (inverse of matrix A)
begin
  if n = threshold then
    invert A in any approach (e.g., LU, QR, SVD decomposition);
  else
    compute the sub-matrices A_11, A_12, A_21, A_22 by setting n = n/2;
    I   ← A_11^{-1}
    II  ← A_21 · I
    III ← I · A_12
    IV  ← A_21 · III
    V   ← IV − A_22
    VI  ← V^{-1}
    C_12 ← III · VI
    C_21 ← VI · II
    VII ← III · C_21
    C_11 ← I − VII
    C_22 ← −VI
  end
  return C
end
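To make the recursion concrete, the following is a minimal serial sketch of Algorithm 1 in Scala on plain 2-D arrays. It is not SPIN's implementation: it assumes n is a power of two, recurses all the way down to 1×1 blocks instead of a tunable threshold, and the helper names (mul, sub, neg, blk) are ours.

```scala
object StrassenInverseSketch {
  type Mat = Array[Array[Double]]

  def mul(a: Mat, b: Mat): Mat = {
    val n = a.length
    Array.tabulate(n, n)((i, j) => (0 until n).map(k => a(i)(k) * b(k)(j)).sum)
  }
  def sub(a: Mat, b: Mat): Mat =
    Array.tabulate(a.length, a.length)((i, j) => a(i)(j) - b(i)(j))
  def neg(a: Mat): Mat = a.map(row => row.map(x => -x))

  def inverse(a: Mat): Mat = {
    val n = a.length
    if (n == 1) return Array(Array(1.0 / a(0)(0)))   // leaf: scalar inverse
    val h = n / 2
    def blk(r: Int, c: Int): Mat = Array.tabulate(h, h)((i, j) => a(r * h + i)(c * h + j))
    val (a11, a12, a21, a22) = (blk(0, 0), blk(0, 1), blk(1, 0), blk(1, 1))
    val i1  = inverse(a11)           // I   = A11^-1
    val i2  = mul(a21, i1)           // II  = A21 * I
    val i3  = mul(i1, a12)           // III = I * A12
    val i4  = mul(a21, i3)           // IV  = A21 * III
    val v   = sub(i4, a22)           // V   = IV - A22
    val i6  = inverse(v)             // VI  = V^-1
    val c12 = mul(i3, i6)            // C12 = III * VI
    val c21 = mul(i6, i2)            // C21 = VI * II
    val i7  = mul(i3, c21)           // VII = III * C21
    val c11 = sub(i1, i7)            // C11 = I - VII
    val c22 = neg(i6)                // C22 = -VI
    Array.tabulate(n, n) { (i, j) => // stitch the four quadrants back together
      if (i < h && j < h) c11(i)(j)
      else if (i < h) c12(i)(j - h)
      else if (j < h) c21(i - h)(j)
      else c22(i - h)(j - h)
    }
  }

  def main(args: Array[String]): Unit = {
    val a: Mat = Array(Array(4.0, 1.0), Array(2.0, 3.0))
    println(mul(a, inverse(a)).map(_.mkString(" ")).mkString("\n"))  // ~ identity matrix
  }
}
```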
Block Matrix Data Structure
In order to distribute the matrix in HDFS (Hadoop Distributed File System), we create a distributed matrix called BlockMatrix, which is basically an RDD of MatrixBlocks spread over the cluster. Distributing the matrix as a collection of blocks makes them easy to process in parallel and follows the divide-and-conquer approach. A MatrixBlock is a block of the matrix represented as a tuple ((rowIndex, columnIndex), Matrix). Here, rowIndex and columnIndex are the row and column index of a block of the matrix, and Matrix refers to a one-dimensional array representing the elements of the matrix arranged in a column-major fashion.
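As an illustration, a BlockMatrix can be assembled from an RDD of such blocks roughly as in the Spark/Scala sketch below. The 4×4 matrix, its 2×2 block size, and the application name are made-up example values; Matrices.dense expects the block values in column-major order.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.mllib.linalg.Matrices
import org.apache.spark.mllib.linalg.distributed.BlockMatrix

object BlockMatrixSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("block-matrix-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val blockSize = 2
    // ((rowIndex, colIndex), Matrix) tuples for a 4x4 matrix split into four 2x2 blocks
    val blocks = sc.parallelize(Seq(
      ((0, 0), Matrices.dense(2, 2, Array(4.0, 2.0, 1.0, 3.0))),
      ((0, 1), Matrices.dense(2, 2, Array(0.0, 0.0, 0.0, 0.0))),
      ((1, 0), Matrices.dense(2, 2, Array(0.0, 0.0, 0.0, 0.0))),
      ((1, 1), Matrices.dense(2, 2, Array(5.0, 1.0, 2.0, 6.0)))
    ))
    val A = new BlockMatrix(blocks, blockSize, blockSize)
    println(s"${A.numRows()} x ${A.numCols()} matrix stored in ${A.blocks.count()} blocks")
    spark.stop()
  }
}
```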
Distributed Block-recursive Matrix Inversion Algorithm
The distributed block-recursive algorithm can be visualized as in Figure 1, where the upper-left sub-matrix is divided recursively until it can be inverted serially on a single machine. After the leaf-node inversion, the inverted matrix is used to compute intermediate matrices, where each step is done distributively. Another recursive call is performed for matrix VI until a leaf node is reached. Like A_11, it is also inverted on a single node when the leaf node is reached. The core inversion algorithm (described in Algorithm 2) takes a matrix (say A) represented as a BlockMatrix as input, as shown in Figure 1. The core computation performed by the algorithm is based on six distributed methods, which are as follows:
• breakMat: Breaks a matrix into four equal-sized sub-matrices.
• xy: Returns one of the four sub-matrices after the breaking, according to the index specified by x and y.
• multiply: Multiplies two BlockMatrix.
• subtract: Subtracts two BlockMatrix.
• scalarMul: Multiplies a scalar with a BlockMatrix.
• arrange: Arranges four equal quarter BlockMatrices into a single full BlockMatrix.
Below we describe the methods in a little more detail and provide the algorithm for each.
The breakMat method breaks a matrix into four sub-matrices, but does not return the four sub-matrices to the caller. It just prepares the input matrix in a form that helps filtering each part easily. As described in Algorithm 3, it takes a BlockMatrix and returns a PairRDD of (tag, Block) using a mapToPair transformation. First, the BlockMatrix is converted into an RDD of MatrixBlocks. Then, each MatrixBlock of the RDD is mapped to a tuple (tag, MatrixBlock), resulting in a pairRDD of such tuples. Inside the mapToPair transformation, we carefully tag each MatrixBlock according to which quadrant it belongs to.
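The tagging decision at the heart of breakMat can be sketched as a plain Scala function (the name quadrantTag and the returned tuple shape are ours, not SPIN's code); inside breakMat the same logic would run within the mapToPair transformation, with `half` being the halved grid size.

```scala
// Tag a block with grid indices (rowIndex, colIndex) by the quadrant it falls into,
// and re-index it relative to that quadrant.
def quadrantTag(rowIndex: Int, colIndex: Int, half: Int): (String, (Int, Int)) = {
  val tag = (rowIndex / half, colIndex / half) match {
    case (0, 0) => "A11"
    case (0, 1) => "A12"
    case (1, 0) => "A21"
    case _      => "A22"
  }
  (tag, (rowIndex % half, colIndex % half))
}

// e.g. in a 4x4 grid of blocks split into 2x2 quadrants:
// quadrantTag(3, 1, 2) == ("A21", (1, 1))
```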
The xy method is a generic method signature for four methods used for accessing one of the four sub-matrices of size 2^{n-1} from a matrix of size 2^n. Each method consists of two transformations: filter and map. The filter takes the matrix as a pairRDD of (tag, MatrixBlock) tuples, which was the output of the breakMat method, and filters the appropriate portion against the tag associated with the MatrixBlock. Then it converts the pairRDD into an RDD using the map transformation.
Algorithm 2: Spark Algorithm for Strassen's Inversion Scheme
function Inverse();
begin
  Input: BlockMatrix A, int size, int blockSize
  Output: BlockMatrix AInv
  (size = size of matrix A; blockSize = size of a single matrix block; n = size / blockSize)
  if n = 1 then
    RDD<Block> invA ← A.toRDD()
    Map(); begin
      Input: Block block
      Output: Block block
      block.matrix ← locInverse(block.matrix)
      return block
    end
    blockAInv ← invA.toBlockMatrix()
    return blockAInv
  else
    size ← size/2
    pairRDD ← breakMat(A, size)
    A11 ← xy11(pairRDD, blockSize);  A12 ← xy12(pairRDD, blockSize)
    A21 ← xy21(pairRDD, blockSize);  A22 ← xy22(pairRDD, blockSize)
    I   ← Inverse(A11, size, blockSize)
    II  ← multiply(A21, I)
    III ← multiply(I, A12)
    IV  ← multiply(A21, III)
    V   ← subtract(IV, A22)
    VI  ← Inverse(V, size, blockSize)
    C12 ← multiply(III, VI)
    C21 ← multiply(VI, II)
    VII ← multiply(III, C21)
    C11 ← subtract(I, VII)
    C22 ← scalarMul(VI, −1, blockSize)
    C ← arrange(C11, C12, C21, C22, size, blockSize)
    return C
  end
end
The multiply method multiplies two input sub-matrices and returns another sub-matrix of BlockMatrix type. The multiply method in our algorithm uses the naive block matrix multiplication approach, which replicates the blocks of the matrices and groups the blocks to be multiplied together on the same node. It uses co-group to reduce the communication cost.
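The grouping pattern behind this naive block multiplication can be illustrated framework-free as follows (plain Scala over in-memory maps; in the actual implementation, the same per-output-key grouping and summation is what co-group and the subsequent reduction perform across the cluster). The names mulBlk, addBlk and blockMultiply are ours.

```scala
object BlockMultiplySketch {
  type Blk = Array[Array[Double]]

  def mulBlk(a: Blk, b: Blk): Blk = {
    val (n, m, p) = (a.length, b.length, b(0).length)
    Array.tabulate(n, p)((i, j) => (0 until m).map(k => a(i)(k) * b(k)(j)).sum)
  }
  def addBlk(a: Blk, b: Blk): Blk =
    Array.tabulate(a.length, a(0).length)((i, j) => a(i)(j) + b(i)(j))

  // A and B are given as maps from block index (i, j) to the block itself; `splits` is b.
  def blockMultiply(aBlocks: Map[(Int, Int), Blk],
                    bBlocks: Map[(Int, Int), Blk],
                    splits: Int): Map[(Int, Int), Blk] =
    (for {
      ((i, k), aBlk) <- aBlocks.toSeq
      j <- 0 until splits
      bBlk <- bBlocks.get((k, j))
    } yield ((i, j), mulBlk(aBlk, bBlk)))     // partial products keyed by output block index
      .groupBy(_._1)                           // gather the partial products per output block
      .map { case (key, parts) => key -> parts.map(_._2).reduce(addBlk) }
}
```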
The subtract method subtracts two BlockMatrix and returns the result as a BlockMatrix. The scalarMul method (as described in Algorithm 5) takes a BlockMatrix and returns another BlockMatrix using a map transformation. The map takes blocks one by one and multiplies each element of the block by the scalar. The arrange method (as described in Algorithm 6) takes four sub-matrices of size 2^{n-1}, which represent the four quadrants of a full matrix of size 2^n, arranges them accordingly, and returns the result as a BlockMatrix. It consists of four maps, one for each separate BlockMatrix. Each map maps the block index to a different block index that provides the final position of the block in the result matrix.
PERFORMANCE ANALYSIS
In this section, we attempt to estimate the performance of the proposed approach and of the state-of-the-art approach using LU decomposition for distributed matrix inversion. In this work, we are interested in the wall clock running time of the algorithms for a varying number of nodes, matrix sizes and other algorithmic parameters, e.g., partition/block sizes. This is because we are interested in the practical efficiency of our algorithm, which includes not only the time spent by the processes in the CPU, but also the time taken while waiting for the CPU as well as data communication during shuffle. The wall clock time depends on three independently analyzed quantities: the total computational complexity of the sub-tasks to be executed, the total communication complexity between executors of different sub-tasks on each of the nodes, and the parallelization factor of each of the sub-tasks, i.e., the total number of processor cores available.
Later, in Section 5, we compare the theoretically derived estimates of wall clock time with the empirically observed ones, for validation. We consider only square matrices of dimension 2^p for all the derivations. The key input and tunable parameters for the algorithms are:
• n = 2^p: number of rows or columns in matrix A
• b: number of splits for the square matrix
• 2^q = n/b: block size in matrix A
• cores: total number of physical cores in the cluster
• i: current processing level of the algorithm in the recursion tree
• m: total number of levels of the recursion tree
Therefore, the total number of blocks in matrix A (or B) is b^2, and b = 2^{p-q}.
4.1. The proposed distributed block-recursive Strassen's matrix inversion algorithm, SPIN (presented in Algorithm 2), has a complexity in terms of wall clock execution time, where n is the matrix dimension, b is the number of splits, and cores is the actual number of physical cores available in the cluster, of
$$\mathrm{Cost}_{SPIN} = \frac{n^3}{b^2} + \frac{10b^2 - 6b}{\min(b^2/4^i, \mathrm{cores})} + \frac{(b-1)\left[9b^2 + n^2(b+1)\right]}{b \cdot \min(b^2/4^{i+1}, \mathrm{cores})} + \frac{n^2(b^2 n + b^2 - 2n)}{b^2 \cdot \min(n^2/4^{i+1}, \mathrm{cores})} \qquad (1)$$
Proof. Before going into the details of the analysis, we give the performance analysis of the methods described in Section 3.3. A summary of the independently analyzed quantities is given in Table 1.
There are two primary parts of the algorithm: the if part and the else part. The if part does the calculation for the leaf nodes of the recursion tree as shown in Figure 1, while the else part does the computation for the internal nodes. It is clearly seen from the figure that, at level i, there are 2^i nodes and the leaf level contains 2^{p-q} nodes.
There is only one transformation in the if part, which is a map. It calculates the inverse of a matrix block on a single node using a serial matrix inversion method. The size of each block is n/b, and we need approximately (n/b)^3 time to perform each such inversion. Therefore, the computation cost to process all the leaf nodes is
$$\mathrm{Comp}_{leafNode} = 2^{p-q} \times \left(\frac{n}{b}\right)^3 = \frac{n^3}{b^2} \qquad (2)$$
In SPIN, a leaf node processes one block on a single machine of the cluster. In spite of the block being small enough to be accommodated on a single node, we do not collect it on the master node, to avoid the communication cost. Instead, we perform a map which takes the only block of the RDD, does the calculation, and returns the RDD again.
The breakMat method takes a BlockMatrix and returns a PairRDD of (tag, Block) using a mapToPair transformation. If the method is executed for m levels, the computation cost of breakMat is
$$\mathrm{Comp}_{breakMat} = \sum_{i=0}^{m-1} 2^i \times \frac{b^2}{4^i} = 2b(b-1) \qquad (3)$$
Note that the i-th level contains 2^i nodes. Here each block is consumed in parallel, giving a parallelization factor of
$$\mathrm{PF}_{breakMat} = \min\left(\frac{b^2}{4^i}, \mathrm{cores}\right) \qquad (4)$$
The total number of blocks processed in filter and map are b^2/4^i and b^2/4^{i+1} at the i-th level, respectively. Consequently, the parallelization factors of the two are min(b^2/4^i, cores) and min(b^2/4^{i+1}, cores), respectively. Therefore, the computation cost for xy is
$$\mathrm{Comp}_{xy} = \sum_{i=0}^{m-1} \frac{2^i \times b^2/4^i}{\min(b^2/4^i, \mathrm{cores})} + \sum_{i=0}^{m-1} \frac{2^i \times b^2/4^{i+1}}{\min(b^2/4^{i+1}, \mathrm{cores})} = \frac{8b^2 - 4b}{\min(b^2/4^i, \mathrm{cores})} + \frac{2b^2 - 2b}{\min(b^2/4^{i+1}, \mathrm{cores})} \qquad (5)$$
The multiply method multiplies two BlockMatrices, the computation cost of which can be derived as
$$\mathrm{Comp}_{multiply} = \sum_{i=0}^{m-1} 2^i \times \frac{n^3}{8^{i+1}} = \frac{n^3(b^2 - 1)}{6b^2} \qquad (6)$$
and the parallelization factor will be
$$\mathrm{PF}_{multiply} = \min\left(\frac{n^2}{4^{i+1}}, \mathrm{cores}\right) \qquad (7)$$
The subtract method subtracts two BlockMatrices using a map transformation. There are two subtractions in each recursion level. Therefore,
$$\mathrm{Comp}_{subtract} = \sum_{i=0}^{m-1} 2^i \times \frac{n^2}{4^{i+1}} = \frac{n^2(b-1)}{2b} \qquad (8)$$
and the parallelization factor will be
$$\mathrm{PF}_{subtract} = \min\left(\frac{n^2}{4^{i+1}}, \mathrm{cores}\right) \qquad (9)$$
The scalarMul method (as described in Algorithm 5) takes a BlockMatrix and returns another BlockMatrix using a map transformation. The map takes blocks one by one and multiplies each element of the block by the scalar. Therefore, the computation cost of scalarMul is
$$\mathrm{Comp}_{scalarMul} = \sum_{i=0}^{m-1} 2^i \times \frac{b^2}{4^{i+1}} = \frac{b}{2}(b-1) \qquad (10)$$
Again, here each block is consumed in parallel, giving a parallelization factor of
$$\mathrm{PF}_{scalarMul} = \min\left(\frac{b^2}{4^{i+1}}, \mathrm{cores}\right) \qquad (11)$$
The arrange method (as described in Algorithm 6) takes four sub-matrices of size 2^{n-1}, which represent the four quadrants of a full matrix of size 2^n, arranges them accordingly, and returns the result as a BlockMatrix. It consists of four maps, one for each separate BlockMatrix. Each map maps the block index to a different block index that provides the final position of the block in the result matrix. The computation cost and parallelization factor of these maps are the same as for scalarMul, given in Equation (10).
SPIN requires 4 xy method calls, 6 multiplications, and 2 subtractions at each recursion level. When summed up, these give Equation (1).
4.2. The distributed block-recursive LU-decomposition-based matrix inversion algorithm (presented in Algorithms 5, 6, and 7 of [10]) has a complexity in terms of wall clock execution time, where n is the matrix dimension, b is the number of splits, and cores is the actual number of physical cores available in the cluster, of
$$\mathrm{Cost}_{LU} = \frac{9n^3}{b^2} + \frac{(b-1)\left[210b^2(b-2) + 64n^2(b+1)(b^2-14)\right]}{105b^2 \cdot \min(b^2/4^i, \mathrm{cores})} + \frac{(b-1)\left[70b^2(b-2) + 8n^2(b+1)(b^2-14)\right]}{105b^2 \cdot \min(b^2/4^{i+1}, \mathrm{cores})} + \frac{(b-1)(b-2)}{105b^2 \cdot \min(b^2/4^{i+2}, \mathrm{cores})} + \frac{2n^2(b-1)\left[8n(b^2+b+6) + 7b(b-2)\right]}{21b^3 \cdot \min(n^2/4^i, \mathrm{cores})} + \frac{8n^3(b-1)(b^2+b-6)}{42b^3 \cdot \min(n^2/4^{i+1}, \mathrm{cores})} + \frac{7n^3}{8 \cdot \min(n^2/4, \mathrm{cores})} \qquad (12)$$
Proof. Liu et al. [10] have described several algorithms for distributed matrix inversion using LU decomposition. We refer to the most optimized one (stated as Algorithms 5, 6 and 7 in the paper) for the performance analysis. The core computation of the algorithm is done with 1) a call to a recursive method LU, which decomposes the input matrix recursively until the leaf nodes of the tree, where the size of the matrix reaches the block size, and 2) the computation after the LU decomposition. The matrix inversion algorithm performs 7 additional multiplications (as given in Algorithm 5 in [10]) of matrices of dimension n/2. We call this the Additional Cost, which can be obtained as follows:
$$\mathrm{Comp}_{AdditionalCost} = \frac{7 \times (n/2)^3}{\min(n^2/4, \mathrm{cores})} \qquad (13)$$
There are two primary parts of the LU method: the if part and the else part. The if part does the LU decomposition at the leaf nodes of the recursion tree, while the else part does it for the internal nodes. The if part requires 2 LU decompositions, 4 matrix inversions and 3 matrix multiplications, and there are 2^{p-q} leaf nodes in the recursion tree. Each of these operations requires O((n/b)^3) time. Therefore, the total cost of the if part is
$$\mathrm{Comp}_{leafNode} = 9 \times 2^{p-q} \times \left(\frac{n}{b}\right)^3 = \frac{9n^3}{b^2} \qquad (14)$$
The else part requires 4 multiply calls, 1 subtraction and 2 calls to the getLU method. The getLU method composes the LU of a matrix by taking 9 matrices of dimension 2^k and arranging them to return 3 matrices of size 2^{k+1}. It requires 4 multiply and 2 scalarMul methods on matrices of dimension 2^k. The recursion scheme of the LU decomposition is a little different from SPIN: here the number of LU calls at level i is 2^i − 1 instead of 2^i as in SPIN. The computation and communication costs for the methods (summarized in Table 1) can be summed up to obtain Equation (12).
EXPERIMENTS
In this section, we perform experiments to evaluate the execution efficiency of our implementation SPIN, comparing it with the distributed LU-decomposition-based inversion approach (referred to as LU from now on), and the scalability of the algorithm compared to ideal scalability. First, we select the fastest wall clock execution time among different partition sizes for each approach and compare them. Second, we conduct a series of experiments to individually evaluate the effect of partition size and matrix size on each competing approach. At last, we evaluate the scalability of our implementation.
Test Setup
All the experiments are carried out on a dedicated cluster of 3 nodes. Software and hardware specifications are summarized in Table 2. Here NA means Not Applicable.
For block-level multiplications, both implementations use JBlas [9], a linear algebra library for Java based on BLAS and LAPACK. We have tested the algorithms on matrices with increasing cardinality from (16 × 16) to (16384 × 16384). All of these test matrices have been generated randomly using the Java Random class.
Resource Utilization Plan. While running the jobs in the cluster, we customize three parameters: the number of executors, the executor memory, and the executor cores. We wanted a fair comparison among the competing approaches, and therefore ensured that jobs did not experience thrashing and that in no case did tasks fail and jobs have to be restarted. For this reason, we restricted ourselves to parameter values that provide good utilization of cluster resources while mitigating the chance of task failures. By experimentation we found that keeping the executor memory at 50 GB ensures successful execution of jobs without out-of-memory errors or task failures for all the competing approaches. This includes the small amount of overhead added to the full request to YARN for each executor, which is equal to 3.5 GB; therefore, the executor memory is 46.5 GB. Though the physical memory of each node is 132 GB, we keep only 100 GB as YARN resource-allocated memory per node. Therefore, the total physical memory for job execution is 100 GB per node, resulting in 2 executors per node and a total of 6 executors. We reserve 1 core for the operating system and Hadoop daemons; therefore, the total number of available cores is 11, which leaves 5 cores for each executor. We used these values of the run-time resource parameters in all the experiments except the scalability test, where we tested the approach with a varied number of executors.
Comparison with state-of-the-art distributed systems
In this section, we compare the performance of SPIN with LU. We report the running time of the competing approaches with increasing matrix dimension in Figure 2. We take the best (fastest) wall clock time among all the running times measured for different block sizes. It can be seen that SPIN takes the minimum amount of time for all matrix dimensions. Also, as expected, the wall clock execution time increases with the matrix dimension, non-linearly (roughly as O(n^3)). Moreover, the gap in wall clock execution time between SPIN and LU increases monotonically with the input matrix dimension. As we shall see in the next section, both LU and SPIN follow a U-shaped curve as a function of block size, which allows us to report the minimum wall clock execution time over all block sizes.
Variation with partition size
In this experiment, we examine the performance of SPIN and LU with increasing partition size for each matrix size. We report the wall clock execution time of the approaches as the partition size is increased within a particular matrix size. For each matrix size (from (4096 × 4096) to (16384 × 16384)), we increase the partition size until we observe an intuitive change in the results, as shown in Figure 3. It can be seen that both LU and SPIN follow a U-shaped curve. However, SPIN outperforms LU when they have the same partition size, for all matrix sizes. The reasons for this are manifold. First of all, LU requires 9 O((n/b)^3) operations compared to a single such operation in SPIN. For small partition sizes, where leafNode dominates the overall wall clock execution time, this cost is responsible for LU's slower performance.
Additionally, when the partition size increases, the number of recursion levels also increases, and consequently the cost of the multiply method, which is the costliest method call, increases. Though the number of recursion levels differs only slightly for any partition size, the additional matrix multiplication cost (as shown in Table 1) is enough to slow down LU's performance.
Comparison between theoretical and experimental result
In this experiment, we compare the theoretical cost of SPIN with the experimental wall clock execution time to validate our theoretical cost analysis. Figure 4 shows the comparison for three matrix sizes (from (4096 × 4096) to (16384 × 16384)), and for each matrix size with increasing partition size. As expected, both the theoretical and the experimental wall clock execution time show a U-shaped curve with increasing partition size. The reason is that, for smaller partition sizes, the block size becomes very large for large matrix sizes. As a result, the single-node matrix inversion takes most of the execution time and subdues the effect of the matrix multiplication execution time, which is processed distributively. That is why we find a large execution time at the beginning, which is also depicted in Table 3, where the experimental wall clock execution time is tabulated for the different methods used in the algorithm for a matrix of dimension 4096. It is seen that for b = 2, the leafNode cost is far greater than that of the matrix multiply method.
Later, when the partition size further increases, the leaf-node cost drops sharply, as it depends on n^3/b^2 and thus decreases with the square of the partition size. On the other hand, the number of multiplications grows with the increased number of recursion levels, and so does their effective cost, which eventually subdues the leafNode cost. As shown in Table 3, from b = 8 onwards the multiply cost becomes more and more dominant, resulting in a further increase in the wall clock execution time.
Scalability
In this section, we investigate the scalability of SPIN. For this, we generate three test cases, each containing a different set of two matrices of sizes equal to (4096 × 4096), (8192 × 8192) and (16384 × 16384). The running time vs. the number of Spark executors for these 3 pairs of matrices is shown in Figure 5. The ideal scalability line (i.e., T(n) = T(1)/n, where n is the number of executors) has been over-plotted on this figure in order to demonstrate the scalability of our algorithm. We can see that SPIN has good scalability, with a minor deviation from ideal scalability when the size of the matrix is small (i.e., for (4096 × 4096) and (8192 × 8192)).
CONCLUSION
In this paper, we have focused on the problem of distributed inversion of large matrices using the Spark framework. To make large-scale matrix inversion faster, we have implemented Strassen's matrix inversion technique, which requires six multiplications in each recursion step. We have given the detailed algorithm of the implementation, called SPIN, and also presented the details of its cost analysis along with that of the baseline approach using LU decomposition. By doing so, we discovered that the primary bottleneck of the inversion algorithm is the matrix multiplications, and that SPIN is faster because it requires fewer multiplications than the LU-based approach.
We have also performed extensive experiments on the wall clock execution time of both approaches for increasing partition size as well as increasing matrix size. The results showed that SPIN outperformed LU for all partition and matrix sizes, and that the difference increases as the matrix size grows. We also showed the agreement between the theoretical and experimental findings for SPIN, which validated our cost analysis. Finally, we showed that SPIN has good scalability with increasing matrix size.
Figure 1: Recursion tree for Algorithm 2.
Algorithm 4: Spark Algorithm for extracting a quadrant sub-matrix of a distributed matrix
function xy();
begin
  Input: PairRDD brokenRDD
  Output: BlockMatrix xy
  filter(); begin
    Input: PairRDD brokenRDD
    Output: PairRDD filteredRDD
    keep tuples with brokenRDD.tag = "Axy"
  end
  map(); begin
    Input: PairRDD filteredRDD
    Output: RDD rdd
    return filteredRDD.block
  end
  xy ← rdd.toBlockMatrix()
  return xy
end

Algorithm 5: Spark Algorithm for multiplying a scalar to a distributed matrix
function scalarMul();
begin
  Input: BlockMatrix A, double scalar, int blockSize
  Output: BlockMatrix productMat
  ARDD ← A.toRDD()
  Map(); begin
    Input: block of ARDD
    Output: block of productRDD
    product ← block.matrix.toDoubleMatrix × scalar
    block.matrix ← product.toMatrix
    return block
  end
  productMat ← productRDD.toBlockMatrix()
  return productMat
end
Figure 2: Fastest running time of the LU-based and Strassen-based inversion among the different block sizes.
Figure 3: Running time of LU and SPIN for matrix sizes (4096 × 4096), (8192 × 8192) and (16384 × 16384) with increasing partition size.
Figure 4: Theoretical and experimental running time of SPIN for matrix sizes (4096 × 4096), (8192 × 8192) and (16384 × 16384).
Figure 5: The scalability of SPIN, in comparison with ideal scalability, on matrices (4096 × 4096), (8192 × 8192) and (16384 × 16384).
Algorithm 3: Spark Algorithm for breaking a BlockMatrix
function breakMat();
begin
  Input: BlockMatrix A, int size
  Output: PairRDD brokenRDD
  ARDD ← A.toRDD()
  MapToPair(); begin
    Input: block of ARDD
    Output: tuple of brokenRDD
    ri ← block.rowIndex
    ci ← block.colIndex
    if ri/size = 0 and ci/size = 0 then tag ← "A11"
    else if ri/size = 0 and ci/size = 1 then tag ← "A12"
    else if ri/size = 1 and ci/size = 0 then tag ← "A21"
    else tag ← "A22"
    block.rowIndex ← ri % size
    block.colIndex ← ci % size
    return Tuple2(tag, block)
  end
  return brokenRDD
end
Algorithm 6: Spark Algorithm for rearranging four sub-matrices into a single matrix
function arrange();
begin
  Input: BlockMatrix C11, BlockMatrix C12, BlockMatrix C21, BlockMatrix C22, int size, int blockSize
  Output: BlockMatrix C
  C11RDD ← C11.toRDD();  C12RDD ← C12.toRDD();  C21RDD ← C21.toRDD();  C22RDD ← C22.toRDD()
  Map(); begin
    Input: block of C12RDD
    Output: block of C1
    block.colIndex ← block.colIndex + size
    return block
  end
  Map(); begin
    Input: block of C21RDD
    Output: block of C2
    block.rowIndex ← block.rowIndex + size
    return block
  end
  Map(); begin
    Input: block of C22RDD
    Output: block of C3
    block.rowIndex ← block.rowIndex + size
    block.colIndex ← block.colIndex + size
    return block
  end
  unionRDD ← C11RDD.union(C1.union(C2.union(C3)))
  C ← unionRDD.toBlockMatrix()
  return C
end
Table 1: Summary of the cost analysis of LU and SPIN
Method | Computation cost (LU) | Computation cost (SPIN) | Parallelization factor (LU) | Parallelization factor (SPIN)
leafNode | 9n^3/b^2 | n^3/b^2 | - | -
breakMat | (2/3)(b^2 - 3b + 2) | 2b^2 - 2b | min(b^2/4^i, cores) | min(b^2/4^i, cores)
xy (filter) | (2/3)(b^2 - 3b + 2) | 8b^2 - 4b | min(b^2/4^{i+1}, cores) | min(b^2/4^i, cores)
xy (map) | (1/6)(b^2 - 3b + 2) | 2b^2 - 2b | min(b^2/4^{i+2}, cores) | min(b^2/4^{i+1}, cores)
multiply (large) | (16n^3/21b^3)(b^3 - 7b + 6) | (n^3/6b^2)(b^2 - 1) | min(n^2/4^i, cores) | min(n^2/4^{i+1}, cores)
multiply communication (large) | 8n^2(b^2 - 1)(8b^2 - 112)/(105b^2) | n^2(b^2 - 1)/(6b) | min(b^2/4^i, cores) | min(b^2/4^{i+1}, cores)
multiply (small) | (8n^3/42b^3)(b^3 - 7b + 6) | - | min(n^2/4^{i+1}, cores) | -
multiply communication (small) | n^2(b^2 - 1)(8b^2 - 112)/(105b^2) | - | min(b^2/4^{i+1}, cores) | -
subtract | (2n^2/3b^2)(b^2 - 3b + 2) | n^2(b - 1)/(2b) | min(n^2/4^i, cores) | min(n^2/4^{i+1}, cores)
scalarMul | (4/3)(b^2 - 3b + 2) | (b/2)(b - 1) | min(b^2/4^i, cores) | min(b^2/4^{i+1}, cores)
arrange | - | (b/2)(b - 1) | - | min(b^2/4^{i+1}, cores)
Additional Cost | 7 × (n/2)^3 | - | min(n^2/4, cores) | -
Table 2: Summary of test setup component specifications
Component Name | Component Size | Specification
Processor | 2 | Intel Xeon 2.60 GHz
Core | 6 per processor | NA
Physical Memory | 132 GB | NA
Ethernet | 14 Gb/s | InfiniBand
OS | NA | CentOS 5
File System | NA | Ext3
Apache Spark | NA | 2.1.0
Apache Hadoop | NA | 2.6.0
Java | NA | 1.7.0 update 79
Table 3: Experimental wall clock execution time of the different methods in SPIN for a matrix of dimension 4096 (the unit of execution time is milliseconds)
Method | b = 2 | b = 4 | b = 8 | b = 16
leafNode | 43504 | 11550 | 5040 | 3980
breakMat | 178 | 441 | 901 | 1764
xy | 2913 | 1353 | 693 | 309
multiply | 7836 | 13116 | 23256 | 37968
subtract | 1412 | 1854 | 2820 | 5592
scalar | 333 | 728 | 1308 | 2450
arrange | 307 | 685 | 1510 | 3074
Total | 56483 | 29727 | 35528 | 55137
[1] Emmanuel Agullo, Henricus Bouwmeester, Jack Dongarra, Jakub Kurzak, Julien Langou, and Lee Rosenberg. 2010. Towards an Efficient Tile Matrix Inversion of Symmetric Positive Definite Matrices on Multicore Architectures. In VECPAR, Vol. 10. Springer, 129-138.
[2] Ali Al Essa and Miad Faezipour. 2017. MapReduce and Spark-Based Analytic Framework Using Social Media Data for Earlier Flu Outbreak Detection. In Industrial Conference on Data Mining. Springer, 246-257.
[3] Steven C Althoen and Renate McLaughlin. 1987. Gauss-Jordan reduction: A brief history. The American Mathematical Monthly 94, 2 (1987), 130-142.
[4] Paolo Bientinesi, Brian Gunter, and Robert A Geijn. 2008. Families of algorithms related to the inversion of a symmetric positive definite matrix. ACM Transactions on Mathematical Software (TOMS) 35, 1 (2008), 3.
[5] Adrian Burian, Jarmo Takala, and Mikko Ylinen. 2003. A fixed-point implementation of matrix inversion using Cholesky decomposition. In Circuits and Systems, 2003 IEEE 46th Midwest Symposium on, Vol. 3. IEEE, 1431-1434.
[6] Pablo Ezzatti, Enrique S Quintana-Orti, and Alfredo Remon. 2011. High performance matrix inversion on a multi-core platform with several GPUs. In Parallel, Distributed and Network-Based Processing (PDP), 2011 19th Euromicro International Conference on. IEEE, 87-93.
[7] Joseph E Gonzalez, Reynold S Xin, Ankur Dave, Daniel Crankshaw, Michael J Franklin, and Ion Stoica. 2014. GraphX: Graph Processing in a Distributed Dataflow Framework. In OSDI, Vol. 14. 599-613.
[8] KK Lau, MJ Kumar, and R Venkatesh. 1996. Parallel matrix inversion techniques. In Algorithms & Architectures for Parallel Processing, 1996. ICAPP 96. 1996 IEEE Second International Conference on. IEEE, 515-521.
[9] Linear Algebra for Java (JBlas). 2017. http://jblas.org/. [Online; accessed 30-July-2017].
[10] Jun Liu, Yang Liang, and Nirwan Ansari. 2016. Spark-based large-scale matrix inversion for big data processing. IEEE Access 4 (2016), 2166-2176.
[11] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, et al. 2016. MLlib: Machine learning in Apache Spark. The Journal of Machine Learning Research 17, 1 (2016), 1235-1241.
[12] Rahul Palamuttam, Renato Marroquín Mogrovejo, Chris Mattmann, Brian Wilson, Kim Whitehall, Rishi Verma, Lewis McGibbney, and Paul Ramirez. 2015. SciSpark: Applying in-memory distributed computing to weather event detection and tracking. In Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2020-2026.
[13] William H Press. 2007. Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press.
[14] Zhengping Qian, Xiuwei Chen, Nanxi Kang, Mingcheng Chen, Yuan Yu, Thomas Moscibroda, and Zheng Zhang. 2012. MadLINQ: large-scale distributed matrix computation for the cloud. In Proceedings of the 7th ACM European Conference on Computer Systems. ACM, 197-210.
[15] Girish Sharma, Abhishek Agarwala, and Baidurya Bhattacharya. 2013. A fast parallel Gauss Jordan algorithm for matrix inversion using CUDA. Computers & Structures 128 (2013), 31-37.
[16] Volker Strassen. 1969. Gaussian elimination is not optimal. Numerische Mathematik 13, 4 (1969), 354-356.
[17] Jingen Xiang, Huangdong Meng, and Ashraf Aboulnaga. 2014. Scalable matrix inversion using MapReduce. In Proceedings of the 23rd International Symposium on High-Performance Parallel and Distributed Computing. ACM, 177-190.
[18] Kaiqi Yang, Yubai Li, and Yijia Xia. 2013. A parallel method for matrix inversion based on the Gauss-Jordan algorithm. Journal of Computational Information Systems 9, 14 (2013), 5561-5567.
[19] Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion Stoica. 2010. Spark: Cluster computing with working sets. HotCloud 10, 10-10 (2010), 95.
| []
|
[
"Age of Information in Prioritized Random Access",
"Age of Information in Prioritized Random Access"
]
| [
"Khac-Hoang Ngo \nDepartment of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden\n",
"Giuseppe Durisi \nDepartment of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden\n",
"Alexandre Graell \nDepartment of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden\n",
"Amat \nDepartment of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden\n"
]
| [
"Department of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden",
"Department of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden",
"Department of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden",
"Department of Electrical Engineering\nChalmers University of Technology\n41296GothenburgSweden"
]
| []
| Age of information (AoI) is a performance metric that captures the freshness of status updates. While AoI has been studied thoroughly for point-to-point links, the impact of modern random-access protocols on this metric is still unclear. In this paper, we extend the recent results by Munari to prioritized random access where devices are divided into different classes according to different AoI requirements. We consider the irregular repetition slotted ALOHA protocol and analyze the AoI evolution by means of a Markovian analysis following similar lines as in Munari(2021). We aim to design the protocol to satisfy the AoI requirements for each class while minimizing the power consumption. To this end, we optimize the update probability and the degree distributions of each class, such that the probability that their AoI exceeds a given threshold lies below a given target and the average number of transmitted packets is minimized. | 10.1109/ieeeconf53345.2021.9723286 | [
"https://arxiv.org/pdf/2112.01182v1.pdf"
]
| 244,799,632 | 2112.01182 | 8fa18ab8ac3c8bf37bf8155aaf8c27d460d0e886 |
Age of Information in Prioritized Random Access
2 Dec 2021
Khac-Hoang Ngo
Department of Electrical Engineering
Chalmers University of Technology
41296GothenburgSweden
Giuseppe Durisi
Department of Electrical Engineering
Chalmers University of Technology
41296GothenburgSweden
Alexandre Graell
Department of Electrical Engineering
Chalmers University of Technology
41296GothenburgSweden
Amat
Department of Electrical Engineering
Chalmers University of Technology
41296GothenburgSweden
Age of Information in Prioritized Random Access
2 Dec 2021
Age of information (AoI) is a performance metric that captures the freshness of status updates. While AoI has been studied thoroughly for point-to-point links, the impact of modern random-access protocols on this metric is still unclear. In this paper, we extend the recent results by Munari to prioritized random access where devices are divided into different classes according to different AoI requirements. We consider the irregular repetition slotted ALOHA protocol and analyze the AoI evolution by means of a Markovian analysis following similar lines as in Munari(2021). We aim to design the protocol to satisfy the AoI requirements for each class while minimizing the power consumption. To this end, we optimize the update probability and the degree distributions of each class, such that the probability that their AoI exceeds a given threshold lies below a given target and the average number of transmitted packets is minimized.
I. INTRODUCTION
The Internet of Things (IoT) foresees a very large number of devices, which we will refer to as users, to be connected and exchange data in a sporadic and uncoordinated manner. This has led to the development of modern random access protocols [1]. In most of these protocols, the users transmit multiple copies of their packets to create time diversity, and the receiver employs successive interference cancellation (SIC) to decode. In particular, in the irregular repetition slotted ALOHA (IRSA) protocol [2], the users draw the number of copies from a degree distribution and transmit the copies in randomly chosen slots of a fixed-length frame. A common design goal is to minimize the packet loss rate (PLR), thus maximizing the chance to deliver packets to the receiver successfully.
In many IoT applications, it is becoming increasingly important to deliver packets successfully and to guarantee the timeliness of those packets simultaneously. Examples include sensor networks, vehicular tracking, and health monitoring. In these delay-sensitive applications, the packets carry critical status updates that are required to be fresh. The age of information (AoI) metric (see, e.g., [3] and references therein) has been introduced precisely to account for the freshness of the status updates. It captures the offset between the generation of a packet and its observation time. In [4], the AoI in a system where independent devices send status updates through a shared queue was analyzed. The AoI has been used as a performance metric to design status update protocols in, e.g., [5], [6]. The first analytical characterization of the AoI for a class of modern random access, namely IRSA, has been recently reported in [7].
Since IoT devices are mostly battery-limited, their power consumption should be minimized. By assuming that each packet transmission consumes a fixed amount of energy, we This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101022113.
can use the average number of transmitted packets per slot as a proxy of the power consumption. When status updates are conveyed via an IRSA protocol, the number of transmitted packets per user depends both on the user activity, i.e., on how often the user has an update to transmit, and on the degree distribution assigned to the user. This leads to a tension between minimizing the AoI and minimizing the number of packets. Too sporadic user activity leads to stale updates, but too frequent updates lead to channel congestion and update failure. Furthermore, degree distributions with low degrees lead to a low number of transmitted packets but high PLR, which results in larger AoI, while degree distributions with high degrees achieve low PLR at the cost of a larger number of transmitted packets. Therefore, the user update probability and the degree distribution need to be carefully selected.
In this paper, we consider an IoT monitoring system where users attempt to deliver timely status updates to a receiver following the IRSA protocol. We assume that the users are heterogeneous and their updates require different levels of freshness. Accordingly, users are divided into different classes, each with a different AoI requirement. Following similar lines as in [7], we analyze the AoI evolution by means of a Markovian analysis and derive the age-violation probability (AVP), i.e., the probability that the AoI exceeds a certain threshold, for each class. We study the trade-off between the AVP and the number of transmitted packets by investigating the impact of the update probability and the degree distributions. Since the PLR of IRSA and, hence, the AVP are not known in closed form, we propose an easy-to-compute PLR approximation, which leads to an accurate approximation of the AVP. Our PLR approximation is based on density evolution (DE) [2] and on existing PLR approximations in the error-floor region [8] and the waterfall region [9]. We jointly optimize the update probability and the degree distributions for each class to minimize the number of transmitted packets while guaranteeing that the AVP of each class lies below a given target. Our simulation results show that the number of transmitted packets can be significant reduced with optimized irregular degree distributions, compared to regular distributions. Our experiments also suggest that using degrees up to 3 is sufficient for a setting where there are two classes containing respectively 800 and 3200 users, the framelength is 100 slots, and the AoI of class-1 users and class-2 users exceeds a threshold 7.5×10 4 and 4.5×10 4 with probability as low as 10 −5 and 10 −3 , respectively.
II. SYSTEM MODEL AND PROBLEM FORMULATION
We consider a system with U users attempting to deliver timestamped status updates to a receiver through a wireless channel. Time is slotted and each update is transmitted in a slot.
We let the slot length be 1 without loss of generality. Each user belongs to one of K classes with different AVP requirements. Let U k be the number of users in class k, k ∈ [K]. 1 We define the fraction of class-k users as γ k = U k /U . We assume that a class-k user has a new update in each slot with probability µ k independently of the other users. We further assume that slots containing a single packet always lead to successful decoding, whereas slots containing multiple packets (or, more specifically, unresolved collisions after SIC) lead to decoding failures.
A. Irregular Repetition Slotted ALOHA
We assume that the system operates according to the IRSA protocol. Time is divided into frames of M slots and users are frame-and slot-synchronous. A user may generate more than one update during a frame, but only the latest update is transmitted in the next frame. An active user in class k sends L k identical replicas of its latest update in L k slots chosen uniformly without replacement from the M available slots. The number L k is called the degree of the transmitted packet. It follows a class-dependent probability distribution {Λ
(k) ℓ } where Λ (k) ℓ = Pr[L k = ℓ].
We write this distribution using a polynomial notation as
Λ (k) (x) = d ℓ=0 Λ (k) ℓ x ℓ where d is the maximum degree.
Note that {Λ (k) } may contain degree 0. When L k = 0, the user discards the update. Upon successfully receiving an update, the receiver is assumed to be able to determine the position of its replicas. In practice, this can be done by including in the header of the packet containing each update a pointer to the position of its replicas. The receiver employs a SIC decoder. It seeks slots containing a single packet, decodes the packet, then locates and removes the replicas. These steps are repeated until no slots with a single packet can be found.
Note that a user in class k has a new update in a frame with probability σ k = 1 − (1−µ k ) M . Therefore, the number of class-k users transmitting over a frame is a binomial random variable of parameters (U k , σ k ) with expected value U k σ k . The average channel load of class k is given by
G k = U k σ k /M. The overall average channel load is G = K k=1 G k .
The average number of packets transmitted by a class-k user per slot is
Φ k = σ kΛ (k) (1)/M, whereΛ (k) (x) denotes the first-order derivative of Λ (k) (x).
The total average number of transmitted packets per slot is given by
Φ = K k=1 U k Φ k = K k=1 G kΛ (k) (1).
We use Φ as a proxy of the total power consumption.
B. Age of Information
We define the AoI for user i at slot n as δ i (n) = n − t i (n), where t i (n) denotes the timestamp of the last received update from user i as of slot n. Since the AoIs of users in the same class are stochastically equivalent, we denote a representative of the AoIs of class-k users as δ (k) (n). The AoI grows linearly with time and is reset at the end of a frame only when a new update is successfully decoded. We are interested in the value of the AoI at the end of a generic frame j ∈ N 0 . We will refer to this quantity simply as AoI hereafter. For class k, this quantity is given by δ (k) (jM ) + M . We define the AVP as the probability that the AoI exceeds a certain threshold θ at steady state. Specifically, the AVP for class k is defined as
ζ (k) (θ) = lim j→∞ Pr[δ (k) (jM ) + M > θ].(1)
We shall see in the next section that the AoI process is ergodic Markovian, thus the limit in (1) exists. We consider the requirement that the AoI at steady state of class k exceeds a threshold θ k with probability no larger than ǫ k :
ζ (k) (θ k ) ≤ ǫ k , k ∈ [K].
(2)
C. Problem Formulation
Our goal is to design the update probabilities {µ k } and the degree distributions {Λ (k) } such that the AoI requirements in (2) are satisfied, while the number of packets Φ is minimized:
minimize {µ k ,Λ (k) (x)} K k=1 Φ subject to (2).(3)
III. AOI ANALYSIS A. Current AoI
Let P (k) denote the PLR of a class-k user. It is convenient to denote by ξ k = σ k (1 − P (k) ) the probability that the AoI
δ (k) (n) is reset. Also, let B k ∈ [M ]
denote the number of slots between the generation of a packet of a class-k user and the start of the subsequent frame, when the user can access the channel. It has probability mass function
Pr[B k = b] = µ k (1 − µ k ) b−1 /σ k ,
where the numerator is the probability that the user has generated an update for the last time b slots before the end of a frame, and the denominator is the probability that at least one update is generated during the frame. Whenever an update is successfully decoded, the current AoI is reset to
B k + M ∈ [M + 1 : 2M ].
In what follows, it will be convenient to decompose an arbitrary integer n as n = α n +M β n , where α n = n mod M and β n = ⌊n/M ⌋. Using this decomposition, we can write
δ (k) (n) = δ (k) (M β n ) + α n ,
where the first term on the righthand side captures the age at the beginning of the current frame, and the second is the offset from the start of the current frame up to the observation time n. We set n = 0 right after the reception of the first update. Thus, the initial AoI is in [M + 1 : 2M ], and δ (k) (n) ≥ M + 1, ∀n. Therefore, the evolution of the AoI of a class-k user is fully characterized by the discrete-time, discrete-valued stochastic process
Ω (k) j = δ (k) (jM ) − (M + 1), j ∈ N 0 , k ∈ [K], (4)
where j is the frame index. Since each user operates independently over successive frames, Ω (k) j is a Markovian process across j. The one-step transition probabilities q
(k) n1,n2 = Pr Ω (k) j+1 = n 2 | Ω (k) j = n 1 are given by q (k) n1,n2 = ξ k Pr[B k = n 2 + 1], for n 2 ∈ [0 : M − 1], 1 − ξ k , for n 2 = n 1 + M, 0, otherwise.(5)
To verify (5), note that Ω (k) j is reset with probability ξ k , and in this case, it is reset to a value n 2 + 1, where n 2 ∈ [0 : M − 1], with probability Pr[B k = n 2 + 1]. With probability 1 − ξ k , the variable Ω (k) j is simply incremented by the framelength M . We start with the following observation. Proposition 1. The stochastic process Ω (k) j is ergodic, and has steady-state distribution
π (k) w = ξ k (1 − ξ k ) βw Pr[B k = α w + 1], w ∈ N 0 .
Proof. The proof follows directly from the proof of the singleclass case in [7, Prop. 1].
It follows from Proposition 1 that the limit in (1) exists.
B. Age-Violation Probability
It follows from (4) and (1)
that ζ (k) (θ) = Pr[Ω (k) > θ − 2M − 1], where the random variable Ω (k) has steady-state distribution {π (k) w }.
The following result holds. Proposition 2. The AVP is given by
ζ (k) (θ) = (1 − ξ k ) β θ−2M 1 − 1−(1−µ k ) 1+α θ−2M σ k ξ k , for θ > 2M, 1,
otherwise.
Proof. The proof follows similar steps as the proof of the single-class case in [7,Prop. 3].
Example 1. Consider a system with U = 4000 users, framelength M = 100, K = 2 classes with fractions (γ 1 , γ 2 ) = (0.2, 0.8), AoI thresholds (θ 1 , θ 2 ) = (7.5×10 4 , 4.5×10 4 ), and target AVPs (ǫ 1 , ǫ 2 ) = (10 −4 , 10 −2 ). We further assume that
µ 1 = µ 2 = µ. Thus, Φ = U(1−(1−µ) M ) M U u=1 γ kΛ (k) (1)
, which increases with µ. We evaluate the AVPs for this scenario and plot them as functions of Φ in Fig. 1 for three sets of regular degree distributions, namely, Λ (1)
(x) = Λ (2) (x) ∈ {x, x 2 , x 3 }.
The PLR is computed numerically. We vary Φ by varying µ. Some remarks are in order.
• For each class, the AVP first decreases and then increases with Φ. Indeed, when µ is low, collisions are unlikely. Although the updates are successfully received with high probability, the sporadicity of the updates entails a high AoI. When µ is high, users transmit frequently, and updates fail with high probability due to collision, entailing a high AoI. Fig. 1. The AVPs ζ (1) (7.5 × 10 4 ) and ζ (2) (4.5 × 10 4 ) vs. Φ for the scenario in Example 1 with (ǫ 1 , ǫ 2 ) = (10 −4 , 10 −2 ). We consider three sets of regular degree distributions, namely,
ζ (k) (θk) Λ (1) (x) = Λ (2) (x) = x Λ (1) (x) = Λ (2) (x) = x 2 Λ (1) (x) = Λ (2) (x) = x 3Λ (1) (x) = Λ (2) (x) ∈ {x, x 2 , x 3 }.
(10 −5 , 10 −3 ) can be met with about 1.92 packets/slot. In general, distributions with low degrees can achieve mild AoI requirements with a low Φ, while higher degrees are needed to achieve more stringent requirements.
The observations in Example 1 reveal the existence of a trade-off in the choice of {µ k } and {Λ (k) } to satisfy the AoI requirements while minimizing Φ.
IV. PACKET LOSS RATE APPROXIMATION
The PLR for class-k users can be derived as [10, Eq. (2)]
$P^{(k)} = \sum_{\ell=0}^{d} \Lambda^{(k)}_\ell P_\ell,$
where $P_\ell$ is the probability that a degree-$\ell$ user (of any class) is not resolved. The probability $P_\ell$ is determined by the overall channel load $G$ and the average degree distribution $\Lambda(x) = \sum_{\ell=0}^{d} \Lambda_\ell x^\ell$ with $\Lambda_\ell = \sum_{k=1}^{K} \gamma_k \Lambda^{(k)}_\ell$, $\ell \in [0:d]$. If $\Lambda_0 > 0$, then $P_0 = 1$ and $P_\ell$, $\ell \ge 1$, is the probability that a degree-$\ell$ user is not resolved in a system with channel load $\tilde{G} = G(1-\Lambda_0)$ and degree distribution $\tilde{\Lambda}(x) = \sum_{\ell=1}^{d} \tilde{\Lambda}_\ell x^\ell$ with $\tilde{\Lambda}_\ell = \frac{1}{1-\Lambda_0}\sum_{k=1}^{K} \gamma_k \Lambda^{(k)}_\ell$. Therefore, we assume without loss of generality that $\Lambda_0 = 0$ in the remainder of the section. The PLR is not known in closed form in general, but can be computed numerically. However, since the optimization (3) requires repeated evaluation of the AVP, and thus of the PLR, simulation-based PLR computation becomes inefficient. Therefore, we seek an easy-to-compute approximation of the PLR that leads to an accurate approximation of the AVP.
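A small helper (illustrative, not from the paper) for the class-averaged degree distribution and the per-class PLR defined above; coefficient lists are assumed to have equal length.

```python
import numpy as np

def class_average(Lams, gammas):
    """Lambda_ell = sum_k gamma_k * Lambda^{(k)}_ell."""
    return np.sum([g * np.asarray(L) for g, L in zip(gammas, Lams)], axis=0)

def class_plr(Lam_k, P):
    """P^{(k)} = sum_ell Lambda^{(k)}_ell * P_ell, with P_ell the unresolved probabilities."""
    return float(np.dot(Lam_k, P))
```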
The SIC process of IRSA is equivalent to graph-based iterative erasure decoding of low-density parity-check (LDPC) codes. In the asymptotic regime where M → ∞, P ℓ can be evaluated using DE as [2]
$P_{\ell,{\rm DE}} = \lim_{i\to\infty} (\eta_i)^\ell,$
where $\eta_i$ is the probability that an edge connected to a degree-$\ell$ user remains unknown in the decoding process. It can be computed in an iterative manner as $\eta_0 = 1$, $\eta_i = 1-\exp(-G\Lambda'(\eta_{i-1}))$, where $\Lambda'(x) = {\rm d}\Lambda(x)/{\rm d}x$. As $M \to \infty$, $P_\ell = P_{\ell,{\rm DE}}$ and the PLR $P^{(k)}$ drops at a certain threshold value as the channel load $G$ decreases. That is, all but a vanishing fraction of the class-$k$ users are resolved if the channel load is below the decoding threshold. According to [10, Prop. 1], the thresholds for all classes coincide and can be obtained by means of DE as the largest value $G^*$ of $g$ such that $\nu > 1 - \exp(-g\Lambda'(\nu))$ for all $\nu \in (0, 1]$.
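The density-evolution recursion and the threshold search described above can be sketched in a few lines of Python (an illustration, not the authors' code); the iteration count, the convergence tolerance, and the bisection step are arbitrary choices.

```python
import numpy as np

def de_unresolved(G, Lam, iters=5000):
    """Run eta_i = 1 - exp(-G * Lambda'(eta_{i-1})) and return the fixed point;
    Lam is the coefficient list [Lambda_0, ..., Lambda_d]."""
    dLam = np.polynomial.polynomial.polyder(Lam)          # coefficients of Lambda'(x)
    eta = 1.0
    for _ in range(iters):
        eta = 1.0 - np.exp(-G * np.polynomial.polynomial.polyval(eta, dLam))
    return eta

def plr_de(G, Lam, ell):
    """Asymptotic PLR of a degree-ell user, P_{ell,DE} = eta^ell."""
    return de_unresolved(G, Lam)**ell

def threshold(Lam, lo=0.0, hi=1.0, tol=1e-4):
    """Largest G* such that DE drives the unresolved-edge probability to zero,
    found by bisection (a numerical proxy for the condition stated above)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_unresolved(mid, Lam) < 1e-6 else (lo, mid)
    return lo

# Regular Lambda(x) = x^3: close to the known ~0.82 threshold
# (slow convergence near G* may bias the estimate slightly low)
print(threshold([0.0, 0.0, 0.0, 1.0]))
```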
In the finite-framelength regime, the PLR is typically characterized by two regions: a waterfall (WF) region near the DE threshold where the PLR decreases sharply, and an error-floor (EF) region where the PLR flattens. In the WF region, according to [9], the PLR can be approximated based on the finite-length scaling of the frame-error rate of LDPC codes [11]. Specifically, the overall PLR $\sum_{\ell=0}^{d} \Lambda_\ell P_\ell$ (averaged over classes) can be approximated by
$$P_{\rm WF} = P_{G\to 1}\, Q\!\left(\frac{\sqrt{M}\,\big(G^* - \beta(\Lambda) M^{-2/3} - G\big)}{\sqrt{\alpha^2(\Lambda) + G(1 - MG/U)}}\right) \qquad (6)$$
where P G→1 is the PLR in the limit G → 1 computed via DE, Q(·) is the Gaussian Q-function, and {α(Λ), β(Λ)} are scaling parameters computed as specified in [12]. In the EF region, the PLR can be approximated using the method proposed in [8].
In this region, decoding failures are mainly caused by harmful structures in the corresponding bipartite graph, referred to as stopping sets. A connected bipartite graph S is a stopping set if all check nodes in S have a degree larger than one. By enumerating the stopping sets, we can approximate P ℓ by
P ℓ,EF = (U −1)! Λ ℓ S∈A v ℓ (S)c(S) M ψ(S) (U −v(S))! d j=1 M j −vj (S) Λ vj (S) j v j (S)! ,
where A is the set of considered stopping sets, v(S) and ψ(S) are the number of variable nodes and check nodes in S, respectively, v j (S) is the number of degree-j variable nodes in S, and c(S) is the number of graphs isomorphic with S. In [7], the PLR is approximated as
$P_\ell \approx P_{\rm WF} + P_{\ell,{\rm EF}} \qquad (7)$
for the single-class case.² It was shown to be accurate for degree distributions with degrees at least 3 (see, e.g., [7, Fig. 4]). These degree distributions are typically considered when the design goal is to minimize the EF or maximize the decoding threshold. In our setting, however, it is of interest to consider degree distributions with lower degrees to reduce $\Phi$. In Figs. 2(a) and 2(b), we investigate the tightness of the approximations (7) and $P_\ell \approx P_{\ell,{\rm DE}}$ for the setup in Example 1 and some degree distributions with degrees 1 and 2. Note that an accurate PLR approximation in the WF region is crucial for the computation of the AVP, whereas the AVP is insensitive to low values of the PLR in the EF region, where update sporadicity is the dominating factor. As shown in Fig. 2(a) for the degree distributions $\Lambda^{(1)}(x) = \Lambda^{(2)}(x) = 0.5x^2 + 0.5x^3$, the approximation (7) is loose in the WF region. The reason is that the finite-length scaling leading to (6) is not guaranteed to hold when the bipartite graph contains degree-2 variable nodes [11]. The situation is even worse when degree-1 users are present: $P_{\rm WF}$ is near $P_{G\to 1}$ for all channel loads. This makes the approximation (7) inaccurate, as shown for the distributions $\{\Lambda^{(1)}(x) = 0.7x + 0.3x^3,\ \Lambda^{(2)}(x) = 0.7x^2 + 0.3x^3\}$ in Fig. 2(b). On the other hand, $P_{\ell,{\rm DE}}$ is an accurate approximation of $P_\ell$ for large values of $\Phi$ corresponding to $G > G^*$, although $P_{\ell,{\rm DE}}$ is much lower than $P_\ell$ when $G < G^*$.
In Figs. 2(c) and 2(d), we show the AVP computed with different approximations of the PLRs. For both sets of degree distributions, setting P ℓ ≈ P ℓ,DE yields an accurate approximation of the AVP when G > G * . When class-1 users are present as in Fig. 2(d), we have G * = 0 and the AVP approximation obtained by setting P ℓ ≈ P ℓ,DE is accurate for all G > 0 (equivalently Φ > 0). For moderate values of G corresponding to the WF region, if there are both degree 2 and higher degrees, the approximation P ℓ ≈ P ℓ,DE leads to an optimistic approximation of the AVP while the approximation (7) is pessimistic in the WF region, as shown in Fig. 2(c). In this case, one needs to balance between these two approximations. Our experiments suggest that using P ℓ ≈ P ℓ,DE for ℓ = 2 and (7) for ℓ > 2, which leads to the dashed-dotted green line in Fig. 2(c), results in an accurate approximation of the AVP.
From the above observations, we propose the following heuristic PLR approximation. For G > G * , we set P ℓ ≈ P ℓ,DE for all ℓ. For G ≤ G * , we set P ℓ ≈ P ℓ,DE for ℓ ≤ 2 and P ℓ ≈ P WF + P ℓ,EF for ℓ > 2. We summarize the proposed approximation as
$$P_\ell \approx \begin{cases} P_{\ell,{\rm DE}}, & \text{if } \ell \le 2 \text{ or } G > G^*, \\ P_{\rm WF} + P_{\ell,{\rm EF}}, & \text{if } \ell > 2 \text{ and } G \le G^*. \end{cases} \qquad (8)$$
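A compact sketch (not from the paper) of the waterfall term (6) and the selection rule (8). The scaling parameters $\alpha(\Lambda)$, $\beta(\Lambda)$, the limit PLR $P_{G\to 1}$, and the error-floor term $P_{\ell,{\rm EF}}$ are treated as precomputed inputs, since their evaluation follows [12] and [8]; `plr_de` refers to the density-evolution sketch given earlier.

```python
import numpy as np
from scipy.stats import norm

def p_wf(G, M, U, G_star, P_G1, alpha, beta):
    """Waterfall approximation of Eq. (6); alpha, beta follow [12] and are inputs here."""
    arg = np.sqrt(M) * (G_star - beta * M**(-2 / 3) - G)
    return P_G1 * norm.sf(arg / np.sqrt(alpha**2 + G * (1 - M * G / U)))

def plr_approx(ell, G, M, U, Lam, G_star, P_G1, alpha, beta, P_ef):
    """Heuristic PLR approximation of Eq. (8); P_ef is the precomputed P_{ell,EF}."""
    if ell <= 2 or G > G_star:
        return plr_de(G, Lam, ell)          # DE branch, see the earlier sketch
    return p_wf(G, M, U, G_star, P_G1, alpha, beta) + P_ef
```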
V. NUMERICAL RESULTS
We next solve the optimization (3) for the scenario in Example 1 with different target AVPs. We let $d = 3$, i.e., $\Lambda^{(k)}(x) = \sum_{\ell=0}^{3} \Lambda^{(k)}_\ell x^\ell$. We keep the same update probability $\mu_1 = \mu_2 = \mu$ for both classes and control the relative difference between the probabilities of activating users in different classes via $(\Lambda^{(1)}_0, \Lambda^{(2)}_0)$. As $\Lambda^{(k)}_3 = 1 - \sum_{\ell=0}^{2} \Lambda^{(k)}_\ell$, the optimization variables are $\mu$ and $\{\Lambda^{(k)}_\ell\}_{k\in\{1,2\},\,\ell\in\{0,1,2\}}$. The AVP is computed as in Proposition 2 with the approximated/simulated PLR. We numerically solve (3) by means of the Nelder-Mead simplex algorithm [13], a commonly used search method for multidimensional nonlinear optimization. However, we note that this heuristic method can converge to nonstationary points and is highly sensitive to the initial values of $\{\mu, \Lambda^{(k)}\}$. We try multiple initializations by sampling the search space with a step of 0.1 and by running the optimization multiple times.
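The multi-start Nelder-Mead step can be sketched as follows (a hypothetical wrapper around SciPy, not the authors' code). The constraint handling via penalties and the grid of starting points are arbitrary choices, and `avp_of` is an assumed helper that would chain Proposition 2 with the PLR approximation (8).

```python
import itertools
import numpy as np
from scipy.optimize import minimize

M, U, GAMMAS = 100, 4000, (0.2, 0.8)
THETAS, EPSILONS = (7.5e4, 4.5e4), (1e-4, 1e-2)

def objective(z):
    """z = [mu, L0^(1), L1^(1), L2^(1), L0^(2), L1^(2), L2^(2)]; returns Phi plus
    penalties when a simplex constraint or a target AVP is violated."""
    mu, coeffs = z[0], z[1:].reshape(2, 3)
    if not (0.0 < mu < 1.0) or np.any(coeffs < 0) or np.any(coeffs.sum(axis=1) > 1):
        return 1e6                                        # infeasible point
    Lam = [np.append(c, 1.0 - c.sum()) for c in coeffs]   # append Lambda_3^(k)
    mean_deg = [np.arange(4) @ L for L in Lam]            # Lambda^(k)'(1)
    phi = U * (1 - (1 - mu)**M) / M * sum(g * d for g, d in zip(GAMMAS, mean_deg))
    penalty = sum(1e3 for k in range(2)
                  if avp_of(mu, Lam, k, THETAS[k]) > EPSILONS[k])   # assumed helper
    return phi + penalty

# Multiple initializations on a coarse grid, as described in the text
best = None
for mu0, l0 in itertools.product(np.arange(0.1, 1.0, 0.1), repeat=2):
    z0 = np.array([mu0, l0, 0.1, 0.1, l0, 0.1, 0.1])
    res = minimize(objective, z0, method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res
```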
The optimization results with approximate PLR for several different target AVPs $(\epsilon_1, \epsilon_2)$ are shown in Table I. We observe that for the mild requirement $(\epsilon_1, \epsilon_2) = (10^{-3}, 10^{-1})$, using degrees 0 and 1 is sufficient. As the requirement becomes more stringent, one needs to use an increasing fraction of degree 2 and eventually degree 3. Also, the users should be activated more frequently, as indicated by the probability $[1-(1-\mu)^M](1-\Lambda_0)$ shown in the last column of Table I. As compared to the regular distributions in Fig. 1, our optimized irregular degree distributions reduce $\Phi$ by about 8% and 20% for the requirements $(\epsilon_1, \epsilon_2) = (10^{-5}, 10^{-3})$ and $(\epsilon_1, \epsilon_2) = (10^{-4}, 10^{-2})$, respectively.
In Fig. 3, we show the AVP of the optimized distributions using approximate PLR (8) shown in Table I and compare it with the optimized distributions using simulated PLR. The latter can further reduce Φ by no more than 0.04 packets/slot. This confirms that our proposed PLR approximation is sufficiently accurate. Further experiments also show that using degrees higher than 3 is not beneficial for the considered parameters.
VI. CONCLUSION
We investigated the trade-off between the AVP and power consumption in a status-update system with multiple classes of users operating according to the IRSA protocol. Specifically, we illustrated the benefits of jointly optimizing the update probability and the degree distributions of each class to minimize the average number of transmitted packets per slot. To perform this optimization efficiently, we proposed an easy-to-compute PLR approximation, which yields an accurate approximation of the AVP. Our simulation results suggest that irregular distributions are needed, and that degrees up to 3 are sufficient for the considered setting.
Fig. 2. The PLR obtained from simulation or approximation and the corresponding AVP vs. $\Phi$ and $G$ for the scenario in Example 1, for two sets of degree distributions: $\Lambda^{(1)}(x) = \Lambda^{(2)}(x) = 0.5x^2 + 0.5x^3$ and $\{\Lambda^{(1)}(x) = 0.7x + 0.3x^3,\ \Lambda^{(2)}(x) = 0.7x^2 + 0.3x^3\}$, the latter with $G^* = 0$.
Fig. 3. The AVP vs. $\Phi$ for the optimized distributions for the scenario in Example 1, for different target AVPs, e.g., $(\epsilon_1, \epsilon_2) = (10^{-3}, 10^{-2})$. Red solid lines represent the optimized distributions with approximate PLR shown in Table I; blue dashed lines represent the optimized distributions with simulated PLR.
TABLE I. OPTIMIZED UPDATE PROBABILITY AND DEGREE DISTRIBUTIONS FOR EXAMPLE 1 WITH APPROXIMATE PLR AND DIFFERENT TARGET AVPS $(\epsilon_1, \epsilon_2)$.
We use [m : n] to denote the set of integers from m to n, and [n] = [1 : n].
In the single-class case, this means that the overall PLR is approximated by $P_{\rm WF} + \sum_{\ell=0}^{d} \Lambda_\ell P_{\ell,{\rm EF}}$. In our paper, it is more convenient to write the approximation in terms of $P_\ell$.
[1] M. Berioli, G. Cocco, G. Liva, and A. Munari, "Modern random access protocols," Foundations and Trends in Networking, vol. 10, no. 4, pp. 317-446, Nov. 2016.
[2] G. Liva, "Graph-based analysis and optimization of contention resolution diversity slotted ALOHA," IEEE Trans. Commun., vol. 59, no. 2, pp. 477-487, Feb. 2011.
[3] A. Kosta, N. Pappas, and V. Angelakis, "Age of information: A new concept, metric, and tool," Foundations and Trends in Networking, vol. 12, no. 3, pp. 162-259, Nov. 2017.
[4] R. D. Yates and S. K. Kaul, "The age of information: Real-time status updating by multiple sources," IEEE Trans. Inf. Theory, vol. 65, no. 3, pp. 1807-1827, Sep. 2019.
[5] Z. Jiang, B. Krishnamachari, X. Zheng, S. Zhou, and Z. Niu, "Timely status update in wireless uplinks: Analytical solutions with asymptotic optimality," IEEE Internet Things J., vol. 6, no. 2, pp. 3885-3898, Apr. 2019.
[6] Y. Gu, H. Chen, Y. Zhou, Y. Li, and B. Vucetic, "Timely status update in Internet of Things monitoring systems: An age-energy tradeoff," IEEE Internet Things J., vol. 6, no. 3, pp. 5324-5335, Jun. 2019.
[7] A. Munari, "Modern random access: An age of information perspective on irregular repetition slotted ALOHA," IEEE Trans. Commun., vol. 69, no. 6, pp. 3572-3585, Jun. 2021.
[8] M. Ivanov, F. Brannstrom, A. Graell i Amat, and P. Popovski, "Broadcast coded slotted ALOHA: A finite frame length analysis," IEEE Trans. Commun., vol. 65, no. 2, pp. 651-662, Feb. 2017.
[9] A. Graell i Amat and G. Liva, "Finite-length analysis of irregular repetition slotted ALOHA in the waterfall region," IEEE Commun. Lett., vol. 22, no. 5, pp. 886-889, May 2018.
[10] M. Ivanov, F. Brannstrom, A. Graell i Amat, and G. Liva, "Unequal error protection in coded slotted ALOHA," IEEE Wireless Commun. Lett., vol. 5, no. 5, pp. 536-539, Oct. 2016.
[11] A. Amraoui, A. Montanari, T. Richardson, and R. Urbanke, "Finite-length scaling for iteratively decoded LDPC ensembles," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 473-498, Feb. 2009.
[12] A. Amraoui, A. Montanari, and R. Urbanke, "Analytic determination of scaling parameters," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Seattle, WA, USA, Jul. 2006, pp. 562-566.
[13] J. A. Nelder and R. Mead, "A simplex method for function minimization," The Computer Journal, vol. 7, no. 4, pp. 308-313, Jan. 1965.
| []
|
[
"High-Field Magnetoresistance of Organic Semiconductors",
"High-Field Magnetoresistance of Organic Semiconductors"
]
| [
"G Joshi \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"M Y Teferi \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"S Jamali \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"M Groesbeck \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"J Van Tol \nNational High Magnetic Field Laboratory\nUSA\n",
"R Mclaughlin \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"Z V Vardeny \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"J M Lupton \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n\nInstitut für Experimentelle und Angewandte Physik\nUniversität Regensburg\nGermany\n",
"H Malissa \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n",
"C Boehme \nDepartment of Physics and Astronomy\nUniversity of Utah\nUTUSA\n"
]
| [
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"National High Magnetic Field Laboratory\nUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Institut für Experimentelle und Angewandte Physik\nUniversität Regensburg\nGermany",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA",
"Department of Physics and Astronomy\nUniversity of Utah\nUTUSA"
]
| []
| The magneto-electronic field effects in organic semiconductors at high magnetic fields are described by field-dependent mixing between singlet and triplet states of weakly bound charge carrier pairs due to small differences in their Landé g-factors that arise from the weak spin-orbit coupling in the material. In this work, we corroborate theoretical models for the high-field magnetoresistance of organic semiconductors, in particular of diodes made of the conducting polymer poly(3,4ethylenedioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS) at low temperatures, by conducting magnetoresistance measurements along with multi-frequency continuous-wave electrically detected magnetic resonance experiments. The measurements were performed on identical devices under similar conditions in order to independently assess the magnetic field-dependent spin-mixing mechanism, the socalled Δ mechanism, which originates from differences in the charge-carrier g-factors induced by spinorbit coupling. | 10.1103/physrevapplied.10.024008 | [
"https://arxiv.org/pdf/1804.09297v1.pdf"
]
| 102,681,908 | 1804.09297 | 24527cc2d599de061a5855cb788df14c7605bfe2 |
High-Field Magnetoresistance of Organic Semiconductors
G Joshi
Department of Physics and Astronomy
University of Utah
UTUSA
M Y Teferi
Department of Physics and Astronomy
University of Utah
UTUSA
S Jamali
Department of Physics and Astronomy
University of Utah
UTUSA
M Groesbeck
Department of Physics and Astronomy
University of Utah
UTUSA
J Van Tol
National High Magnetic Field Laboratory
USA
R Mclaughlin
Department of Physics and Astronomy
University of Utah
UTUSA
Z V Vardeny
Department of Physics and Astronomy
University of Utah
UTUSA
J M Lupton
Department of Physics and Astronomy
University of Utah
UTUSA
Institut für Experimentelle und Angewandte Physik
Universität Regensburg
Germany
H Malissa
Department of Physics and Astronomy
University of Utah
UTUSA
C Boehme
Department of Physics and Astronomy
University of Utah
UTUSA
High-Field Magnetoresistance of Organic Semiconductors
(Dated: March 15, 2018)
The magneto-electronic field effects in organic semiconductors at high magnetic fields are described by field-dependent mixing between singlet and triplet states of weakly bound charge carrier pairs due to small differences in their Landé g-factors that arise from the weak spin-orbit coupling in the material. In this work, we corroborate theoretical models for the high-field magnetoresistance of organic semiconductors, in particular of diodes made of the conducting polymer poly(3,4ethylenedioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS) at low temperatures, by conducting magnetoresistance measurements along with multi-frequency continuous-wave electrically detected magnetic resonance experiments. The measurements were performed on identical devices under similar conditions in order to independently assess the magnetic field-dependent spin-mixing mechanism, the socalled Δ mechanism, which originates from differences in the charge-carrier g-factors induced by spinorbit coupling.
The magnetic field dependencies of electronic and optoelectronic properties of organic semiconductors, such as organic magnetoresistance and organic magneto-electroluminescence, are frequently attributed to the spin dynamics in the material [1][2][3][4]. The observed functionalities in the magnetic-field dependencies are generally explained by the interplay of the hyperfine couplings between the charge carrier spins and the nuclear spins of the surrounding hydrogen nuclei which constitute a randomly varying magnetic field.
The effects of spin-orbit coupling (SOC) may dominate the magnetoresistance response at high magnetic fields [5]. In addition, at very low temperatures and high fields, thermal spin polarization affects spin statistics [6]. Whereas the latter effect arises due to incoherent spin-lattice relaxation, the first two processes are coherent in nature. Most organic semiconductors consist of light elements only, and SOC is generally very weak. Nevertheless, at sufficiently high magnetic fields, the minute effects of SOC on the g-factors of the charge-carrier pairs, the so-called Δg effect, become apparent [7]. Theoretical models that explain the observed magnetoresistance within the context of the Δg mechanism have been developed [8], but these provide no microscopic insight into the underlying SOC responsible for this spin-mixing mechanism. These models depend on too many parameters to allow a direct estimation of Δg. To date, therefore, merely fitting a magnetoresistance curve does not provide a direct substantiation of the microscopic processes involved.
Here, we present a high-field magnetoresistance study of organic diodes made of the conducting polymer poly (3,4-ethylenedioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS) [9] at liquid helium-4 temperature as well as an additional, independent direct measurement of the charge-carrier g-factors from electrically detected magnetic resonance experiments [10] performed on the same devices under similar measurement conditions. This approach provides direct quantitative access to the material parameter Δ , completely independent of the magnetoresistance effects. PEDOT:PSS is an organic conducting material which is technologically relevant due to its high room-temperature conductivity, mechanical flexibility, thermal stability, and processability [11], and is used widely in organic light-emitting diodes. At low temperatures, thin-film diode structures of PEDOT:PSS can show pronounced magnetoresistance effects [12]. In pulsed magnetic resonance experiments, pronounced coherent spin beating can be resolved, providing a clear demonstration that the magnetoconductivity is controlled by pair processes [9]. Even though PEDOT:PSS is a hole conductor, electron injection may still occur in diode structures enabling electron-hole pair processes to arise when the holes become localized at low temperatures.
The basic concept of Δg mixing is illustrated in Figure 1. A charge carrier with a Landé factor g precesses in the presence of an external magnetic field B with a frequency ω = gμ_B B/ℏ, where μ_B is the Bohr magneton; this is the case of Larmor precession. In a charge-carrier pair, typically an electron-hole pair, where the individual charge carriers experience a small difference of g-factors Δg, which arises from differences in the spin-orbit coupling that the charge-carrier species experience, the precession rates of the two spins will differ. The system will therefore undergo oscillations between singlet |S⟩ and triplet |T_0⟩ with a frequency Δω = Δg μ_B B/ℏ, which scales with Δg (i.e. with the strength of spin-orbit coupling) and the magnitude of B [13,14]. This singlet-triplet oscillation constitutes a field-dependent spin-mixing process.
The weakly coupled carrier pairs may either dissociate into free charge carriers and consequently contribute to a current which can be detected in an OMAR experiment, or recombine into a strongly bound excitonic state. The individual spins of the carriers may change through coherent precession under microwave irradiation or through incoherent spin-lattice relaxation [15].
Theoretical work by Ehrenfreund et al. [8,13,14,16,17] has provided a consistent description of the field-effect measurements in the high-field regime above a few hundred mT. In particular, the Δg mixing between |S⟩ and |T_0⟩ is found to have a pronounced effect on magnetoresistance and other magnetic field effects, such as magneto-electroluminescence and magneto-photoconductivity. The magnetic field effect generally follows a B-dependence in the form of a Lorentzian line
B²/(B² + ΔB_{1/2}²)
with a width ΔB_{1/2} = ℏ/(2μ_B Δg τ) that is inversely proportional to the product of Δg and the lifetime τ of the charge-carrier pairs [13,14]. This model describes the observed magnetoresistance phenomenologically correctly, but the resulting description depends on too many physical parameters. In particular, ΔB_{1/2} is determined by the product Δg·τ, even though the g-factor spread Δg and the decay time τ cannot be estimated independently. Clearly, the magnetoresistance in the high-field regime must originate from the Δg spread as described in the theoretical models [14], but additional, complementary measurements are needed in order to quantitatively determine Δg.
A convenient way to access the g-factors of charge carriers is electron paramagnetic resonance spectroscopy, where the sample is irradiated with microwave (MW) radiation in the presence of an external magnetic field. At resonance, when the MW matches the Zeeman precession frequency for a field strength of , spin transitions are driven between the Zeeman-split levels. We employ electrically detected magnetic resonance (EDMR) spectroscopy [10,[18][19][20][21][22][23], where the change in OLED conductivity is measured under resonant MW excitation. The experimental setup for the detection of these current changes is equivalent to that used for magnetoresistance measurements [22,24], and from the resulting spectra the respective g-factors of both charge-carrier species can be determined directly along with the spread in g-factors Δ . The spectral shapes and widths of the resonances are governed by the interplay [25] of unresolved hyperfine couplings of the charge-carrier spins to the nuclear spins of the ubiquitous hydrogen nuclei [26] on the one hand, and the small, but non-zero spin-orbit coupling effects [10,18] on the other hand. The EDMR line shape at low MW frequencies is predominantly governed by these random hyperfine fields, and differences in g-factor are largely obscured by the inhomogeneously broadened lines. The effects of spin-orbit coupling on the g-factor and the spectral line shapes are only revealed in EDMR measurements at high MW frequencies, above tens of GHz. In recent studies [10,18] we described a method of acquiring EDMR spectra at a range of high frequencies, including the use of a dedicated high-field quasi-optical millimeter-wave spectrometer operating at MW frequencies of 120, 240, and 336 GHz at the National High Magnetic Field Laboratory [27]. These results
show that the effects of spin-orbit coupling may lead not only to clearly resolved differences Δg between the g-factors of electron and hole, g_e and g_h, but also to g-strain broadening [10] and anisotropic g-tensors due to the localization of the molecular orbitals on different regions of the polymer [18]. For the particular case of PEDOT:PSS, the relatively high charge-carrier mobilities can give rise to motional narrowing [28] and isotropic effective g-tensors which are accurately described by double-Gaussian line shapes accounting for the inhomogeneous broadening arising from the distributions of both electron and hole effective g-factors [9,10,28]. Diode structures made from PEDOT:PSS are therefore ideal systems to scrutinize models of magnetoresistance through an independent evaluation of Δg from EDMR measurements. Figure 2 shows the EDMR spectrum of a device with an ITO/PEDOT:PSS/Al structure at a temperature of 4 K. Following the procedure reported in Ref. [9] and Ref. [10], we recorded the current change under a constant bias voltage of 1.2 V at a resonant MW excitation of 240 GHz [10]. While the individual Gaussian lines appear at almost the same resonance field, a small difference in g-factors Δg can be resolved at this high MW frequency. In the limit of completely uncorrelated g-factors g_e and g_h, Δg is given as
Δg_u = |⟨g_e⟩ − ⟨g_h⟩|,   (1)
i.e. the difference between the centers of mass of the individual constituent spectra corresponding to electron and hole, expressed as g-factors. This simple expression, however, does not take into account the correlations between both charge carriers, i.e. the spread of g within each charge-carrier pair. In the limit of fully correlated charge-carrier pairs, Δg is given as
Δg_c = ⟨|g_e − g_h|⟩ = ∬ |g_e − g_h| ρ(g_e) ρ(g_h) dg_e dg_h / [(∫ ρ(g_e) dg_e)(∫ ρ(g_h) dg_h)],   (2)
where the distributions of g_e and g_h, ρ(g_e) and ρ(g_h), are given by the individual Gaussian distributions making up the spectrum as indicated in Fig. 2. For strongly overlapping resonance lines such as those shown in the figure, the difference between Δg_u and Δg_c can be substantial. We evaluate Δg_u and Δg_c numerically from the measured spectra at different frequencies by performing a global fit using a double-Gaussian line shape [10,25,29] and Eqs. 1 and 2. The results are Δg_u = 1.1 × 10^-4 and Δg_c = 1.6 × 10^-3, which differ by more than one order of magnitude.
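As a numerical illustration (not from the paper), the two limits in Eqs. (1) and (2) can be evaluated for two Gaussian g-factor distributions; the centers and widths below are placeholders, not the fitted PEDOT:PSS values.

```python
import numpy as np

# Placeholder double-Gaussian parameters (centers and g-strain widths are assumptions)
ge0, gh0, se, sh = 2.0031, 2.0029, 4e-4, 4e-4
g = np.linspace(1.99, 2.01, 4001)
rho_e = np.exp(-0.5 * ((g - ge0) / se)**2)
rho_h = np.exp(-0.5 * ((g - gh0) / sh)**2)

# Eq. (1): uncorrelated limit, difference of the spectral centers of mass
dg_u = abs(np.trapz(g * rho_e, g) / np.trapz(rho_e, g)
           - np.trapz(g * rho_h, g) / np.trapz(rho_h, g))

# Eq. (2): correlated limit, mean absolute g difference over the normalized product
GE, GH = np.meshgrid(g, g, indexing="ij")
num = np.trapz(np.trapz(np.abs(GE - GH) * np.outer(rho_e, rho_h), g, axis=1), g)
dg_c = num / (np.trapz(rho_e, g) * np.trapz(rho_h, g))
print(dg_u, dg_c)   # dg_c exceeds dg_u for strongly overlapping lines
```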
To relate the spread of g-factors to magnetoresistance, we performed low-temperature high-field magnetoresistance measurements over a magnetic field range up to ±7 T in a superconducting magnet with an integrated cryostat (Janis SuperOptiMag). Using a SRS SIM928 low-noise voltage source, the PEDOT:PSS diode was biased at approximately 100 µA at constant voltage, and the change in device current was recorded with a low-pass filter setting of 30 Hz of a SR570 current amplifier while sweeping the magnetic field between -7 T and 7 T. We swept the field in both directions in order to minimize the effect of the magnet hysteresis. The measured magnetoconductance response is shown in Fig. 3(a). The steady state current at B = 0 T is 100 µA, and for simplicity, only the field-dependent change from this value is shown on the vertical axis so that the current value displayed at B = 0 T is 0 µA.
The overall functionality of the magneto-current is accurately described as a superposition of three effects. For fields exceeding a few hundred mT the Δ -dependent high-field magnetoconductance effect as described above dominates. At lower magnetic fields, below 200 mT, additional ultra-small and intermediate magnetic field effects are observed [5,12,17,30,31]. These effects are described in
Refs. [5], [17], and [31] and are described by the functionalities ΔI ∝ B²/(B² + B_0²), which is dominant at fields up to a few mT, and ΔI ∝ B²/(|B| + B_1)², which occurs up to a few hundred mT. The overall magneto-current functionality is accurately described over the entire magnetic field range by a phenomenological model taking all three effects into account [31]:
ΔI(B) = A_0 B²/(B² + B_0²) + A_1 B²/(|B| + B_1)² + A_2 B²/(B² + ΔB_{1/2}²).   (3)
In Fig. 3(a) we show a least-squares fit of this model (red curve) to the experimental data. We find best agreement for parameter values B_0 = 2.64 mT, B_1 = 199.5 mT, and ΔB_{1/2} = 10.14 T. The weighting factors for the individual contributions are A_0 = -0.042 µA, A_1 = 0.05933 µA, and A_2 = -5.536 µA. The fit residuals, i.e. the deviation of the experimental data points from the model, are shown in the lower panel and indicate that the model describes the measured magnetoconductance response accurately. We note that the half width at half maximum of the response, ΔB_{1/2}, established from the model fit exceeds the magnetic field range accessible in the experiment. The magnetoconductivity does not saturate and so a pronounced baseline of the Lorentzian-like functionality characteristic of the model is not apparent. This absence of a baseline in our magnetoconductance measurement appears to make the quantitative interpretation of the result ambiguous. This problem is illustrated in Fig. 3(b), where the results of partially constrained model fits with fixed ΔB_{1/2} values as indicated in the labels are compared to the experimental data. The calculated responses differ from each other substantially, but the agreement with the experimental dataset over the measurement range between -7 T and +7 T is excellent in all cases. From the experimentally accessible magnetic field range, a lower bound for ΔB_{1/2} of approximately 7 T can be established, but obtaining an estimate for an upper bound appears to be not possible. Extending the experimentally available magnetic field range may help to establish the baseline of the magnetoconductance response, although we note that currently available DC magnet technology (at highly specialized large-scale laboratories such as the National High Magnetic Field Laboratory in Tallahassee, Florida) is limited to fields below 50 T. From the calculated curves in Fig. 3(b) it is obvious that even this field range may well be insufficient to accurately constrain ΔB_{1/2} and the baseline of the magnetoconductance response in the case of PEDOT:PSS, since even ΔB_{1/2} = 50 T gives a reasonable description of the measured data. In order to address this problem, we developed a statistical analysis technique which allows us to determine, for a given experimentally determined magnetoresistance function, whether or not the given finite magnetic field range is sufficient to provide an unambiguous determination of the upper bound of ΔB_{1/2}: First, we calculated the coefficient of determination R² of several model fits as a function of presumed ΔB_{1/2}, which was no longer a free fit parameter. The results are shown in Fig. 3(c) (blue trace). For clarity, we plot 1 − R² on a logarithmic scale rather than R² itself, since an R² value close to unity (i.e. 1 − R² = 0) indicates a perfect fit. A minimum with 1 − R² ≈ 10^-4 is found close to ΔB_{1/2} = 10 T, in agreement with the least-squares fitting in panel (a) which gave ΔB_{1/2} = 10.14 T; the fit becomes progressively worse for smaller ΔB_{1/2} values, and exhibits a plateau around 1 − R² ≈ 10^-2 for larger values. This parametric fitting therefore appears to indicate that a value of ΔB_{1/2} ≈ 10 T yields the optimum fit, and that the deviation at larger values is small and possibly insignificant. However, this conclusion could be misleading and a consequence of the choice of the fitting region. To assess whether the optimum fit for ΔB_{1/2} ≈ 10 T is physically meaningful, we decreased the magnetic field range of the experimental data by omitting data points at higher field values and repeated the procedure for each case. The results are shown in Fig. 3(c): the green trace arises from restricting the data points to ±5 T, with red and black corresponding to ±3 T and ±1 T, respectively. The apparent minimum close to 10 T becomes gradually weaker and disappears entirely for the narrowest range of ±1 T, which indicates that the full dataset does indeed include the onset of a baseline, i.e. the onset of the inflection of the Lorentzian which distinguishes this function in this magnetic field region from a simple parabolic magnetoconductance response. Thus, the pronounced minimum of 1 − R² ≈ 10^-4 (implying R² ≈ 0.9999) validates the result of the least-squares fit in panel (a) with ΔB_{1/2} = 10.14 T. Knowing that ΔB_{1/2} = 10.14 T allows us to evaluate Eq. 1 and establish a value for the product Δg·τ of 5.6 × 10^-4 ns, although the results of Fig. 3(c) indicate that the uncertainties in this estimate may be substantial. Combined with the values of Δg established above from the EDMR experiments, we estimate τ as 5.1 ns for the extreme case of uncorrelated carrier pairs with Δg_u and 0.4 ns for correlated pairs with Δg_c. These values serve as upper and lower bounds for τ. Both values are much shorter than the spin-lattice relaxation time T_1 for this material. In Ref. [9], a spin dephasing time T_2 ≈ 300 ns was established for PEDOT:PSS by means of electrically detected spin-echo measurements.
Only the longest-lived, most stable pairs interact most strongly under resonance and therefore contribute most significantly to the EDMR signal. In contrast, it is the shortest-lived, most weakly bound pairs which dominate the non-equilibrium current in magnetoresistance experiments. However, since the effect of spin-orbit coupling in conjugated polymers is purely monomolecular in nature [18], there is no reason not to expect the same Δg in both cases. Our magneto-transport-based estimates here of the lifetime of the charge-carrier pair are comparable to the value of <1 ns given in Ref. [14] for a polythiophene-based blend material for magnetic fields >0.6 T which supports charge-transfer states. We stress that this large value in Ref. [14] was derived without direct knowledge of Δg.
In summary, we have introduced the technique of high-field EDMR to constrain the parameters for fitting magnetoresistance curves to a phenomenological model. Using PEDOT:PSS at low temperatures allows us to acquire both magnetoresistance and EDMR data of unprecedented quality, offering maximal constraints on fitting parameters. The metric of the molecular spin-orbit coupling, Δg, is estimated independently from magnetic resonance measurements on identical devices under comparable conditions.
The evaluation of the effective width of the magnetoresistance function is complicated by the fact that the measured response does not saturate over the entire field range investigated, but upper and lower bounds can be established through careful data analysis. The magnetoresistance together with the Δg distribution obtained from EDMR experiments reveal that the effective lifetime of charge-carrier pairs in this high-mobility material is surprisingly short and likely in the sub-nanosecond region.
ACKNOWLEDGMENTS
This work was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0000909. Part of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490 and the State of Florida.
. Ö Mermer, G Veeraraghavan, T L Francis, Y Sheng, D T Nguyen, M Wohlgenannt, A , Ö. Mermer, G. Veeraraghavan, T. L. Francis, Y. Sheng, D. T. Nguyen, M. Wohlgenannt, A.
. M K Köhler, M S Al-Suti, Khan, Phys. Rev. B: Condens. Matter Mater. Phys. 721Köhler, M. K. Al-Suti, and M. S. Khan, Phys. Rev. B: Condens. Matter Mater. Phys. 72, 1 (2005).
. F J Wang, H Bässler, Z Valy, Vardeny, Phys. Rev. Lett. 101236805F. J. Wang, H. Bässler, and Z. Valy Vardeny, Phys. Rev. Lett. 101, 236805 (2008).
. T D Nguyen, G Hukic-Markosian, F Wang, L Wojcik, X.-G Li, E Ehrenfreund, Z V Vardeny, Nat. Mater. 9345T. D. Nguyen, G. Hukic-Markosian, F. Wang, L. Wojcik, X.-G. Li, E. Ehrenfreund, and Z. V. Vardeny, Nat. Mater. 9, 345 (2010).
. M Reufer, M J Walter, P G Lagoudakis, A B Hummel, J S Kolb, H G Roskos, U Scherf, J M Lupton, Nat. Mater. 4340M. Reufer, M. J. Walter, P. G. Lagoudakis, A. B. Hummel, J. S. Kolb, H. G. Roskos, U. Scherf, and J. M. Lupton, Nat. Mater. 4, 340 (2005).
. Y Sheng, T D Nguyen, G Veeraraghavan, Ö Mermer, M Wohlgenannt, Phys. Rev. B. 7535202Y. Sheng, T. D. Nguyen, G. Veeraraghavan, Ö. Mermer, and M. Wohlgenannt, Phys. Rev. B 75, 35202 (2007).
. J Wang, A Chepelianskii, F Gao, N C Greenham, Nat. Commun. 31191J. Wang, A. Chepelianskii, F. Gao, and N. C. Greenham, Nat. Commun. 3, 1191 (2012).
. J M Lupton, C Boehme, Nat. Mater. 7598J. M. Lupton and C. Boehme, Nat. Mater. 7, 598 (2008).
. A J Schellekens, W Wagemans, S P Kersten, P A Bobbert, B Koopmans, Physical Review B. 8475204A. J. Schellekens, W. Wagemans, S. P. Kersten, P. A. Bobbert, and B. Koopmans, Physical Review B 84, 75204 (2011).
. K J Van Schooten, D L Baird, M E Limes, J M Lupton, C Boehme, Nat. Commun. 66688K. J. van Schooten, D. L. Baird, M. E. Limes, J. M. Lupton, and C. Boehme, Nat. Commun. 6, 6688 (2015).
. G Joshi, M Teferi, R Miller, S Jamali, D Baird, J Van Tol, H Malissa, J M Lupton, C Boehme, Unpublished). G. Joshi, M. Teferi, R. Miller, S. Jamali, D. Baird, J. Van Tol, H. Malissa, J. M. Lupton, and C. Boehme, (Unpublished) (2018).
A Elschner, S Kirchmeyer, W Lovenich, K Reuter, PEDOT: Principles and Applications of an Intrinsically Conductive Polymer. Taylors & Francis GroupA. Elschner, S. Kirchmeyer, W. Lovenich, and K. Reuter, PEDOT: Principles and Applications of an Intrinsically Conductive Polymer (Taylors & Francis Group, 2011).
. P Klemm, S Bange, H Malissa, C Boehme, J M Lupton, J. Photonics Energy. to be PublishedP. Klemm, S. Bange, H. Malissa, C. Boehme, J. M. Lupton, and (to be Published), J. Photonics Energy (2018).
. C Zhang, D Sun, C.-X Sheng, Y X Zhai, K Mielczarek, A Zakhidov, Z V Vardeny, Nat. Phys. 11427C. Zhang, D. Sun, C.-X. Sheng, Y. X. Zhai, K. Mielczarek, A. Zakhidov, and Z. V. Vardeny, Nat. Phys. 11, 427 (2015).
. A H Devir-Wolfman, B Khachatryan, B R Gautam, L Tzabary, A Keren, N Tessler, Z V Vardeny, E Ehrenfreund, Nat. Commun. 51A. H. Devir-Wolfman, B. Khachatryan, B. R. Gautam, L. Tzabary, A. Keren, N. Tessler, Z. V. Vardeny, and E. Ehrenfreund, Nat. Commun. 5, 1 (2014).
. J M Lupton, D R Mccamey, C Boehme, ChemPhysChem. 113040J. M. Lupton, D. R. McCamey, and C. Boehme, ChemPhysChem 11, 3040 (2010).
. E Ehrenfreund, Z V Vardeny, Isr. J. Chem. 52552E. Ehrenfreund and Z. V. Vardeny, Isr. J. Chem. 52, 552 (2012).
. W Wageman, P Janssen, A J Schellekens, F L Bloom, P A Bobbert, B Koopmans, SPIN. 193W. Wageman, P. Janssen, A. J. Schellekens, F. L. Bloom, P. A. Bobbert, and B. Koopmans, SPIN 1, 93 (2011).
. H Malissa, R Miller, D L Baird, S Jamali, G Joshi, M Bursch, S Grimme, J Van Tol, J M Lupton, C Boehme, Phys. Rev. B. 97161201H. Malissa, R. Miller, D. L. Baird, S. Jamali, G. Joshi, M. Bursch, S. Grimme, J. van Tol, J. M. Lupton, and C. Boehme, Phys. Rev. B 97, 161201 (2018).
. D R Mccamey, H A Seipel, S.-Y Paik, M J Walter, N J Borys, J M Lupton, C Boehme, Nat. Mater. 7723D. R. McCamey, H. A. Seipel, S.-Y. Paik, M. J. Walter, N. J. Borys, J. M. Lupton, and C. Boehme, Nat. Mater. 7, 723 (2008).
. M Kavand, D Baird, K Van Schooten, H Malissa, J M Lupton, C Boehme, Phys. Rev. B. 9475209M. Kavand, D. Baird, K. van Schooten, H. Malissa, J. M. Lupton, and C. Boehme, Phys. Rev. B 94, 75209 (2016).
. R Miller, K J Van Schooten, H Malissa, G Joshi, S Jamali, J M Lupton, C Boehme, Phys. Rev. B. 94214202R. Miller, K. J. van Schooten, H. Malissa, G. Joshi, S. Jamali, J. M. Lupton, and C. Boehme, Phys. Rev. B 94, 214202 (2016).
. D P Waters, G Joshi, M Kavand, M E Limes, H Malissa, P L Burn, J M Lupton, C Boehme, Nat. Phys. 11910D. P. Waters, G. Joshi, M. Kavand, M. E. Limes, H. Malissa, P. L. Burn, J. M. Lupton, and C. Boehme, Nat. Phys. 11, 910 (2015).
. W Baker, K Ambal, D Waters, R Baarda, H Morishita, K Van Schooten, D Mccamey, J Lupton, C Boehme, Nat. Commun. 3898W. Baker, K. Ambal, D. Waters, R. Baarda, H. Morishita, K. van Schooten, D. McCamey, J. Lupton, and C. Boehme, Nat. Commun. 3, 898 (2012).
. S Jamali, G Joshi, H Malissa, J M Lupton, C Boehme, Nano Lett. 17S. Jamali, G. Joshi, H. Malissa, J. M. Lupton, and C. Boehme, Nano Lett. 17, (2017).
. G Joshi, R Miller, L Ogden, M Kavand, S Jamali, K Ambal, S Venkatesh, D Schurig, H Malissa, J M Lupton, C Boehme, Appl. Phys. Lett. 109103303G. Joshi, R. Miller, L. Ogden, M. Kavand, S. Jamali, K. Ambal, S. Venkatesh, D. Schurig, H. Malissa, J. M. Lupton, and C. Boehme, Appl. Phys. Lett. 109, 103303 (2016).
. H Malissa, M Kavand, D P Waters, K J Van Schooten, P L Burn, Z V Vardeny, B Saam, J M Lupton, C Boehme, Science. 3451487H. Malissa, M. Kavand, D. P. Waters, K. J. van Schooten, P. L. Burn, Z. V. Vardeny, B. Saam, J. M. Lupton, and C. Boehme, Science 345, 1487 (2014).
. J Van Tol, L C Brunel, R J Wylde, Rev. Sci. Instrum. 7674101J. van Tol, L. C. Brunel, and R. J. Wylde, Rev. Sci. Instrum. 76, 74101 (2005).
. M Y Teferi, J Ogle, G Joshi, H Malissa, S Jamali, D L Baird, J M Lupton, L W Brooks, C Boehme, arXiv:1804.05139Cond-Mat.mes-Hall] 1M. Y. Teferi, J. Ogle, G. Joshi, H. Malissa, S. Jamali, D. L. Baird, J. M. Lupton, L. W. Brooks, and C. Boehme, arXiv:1804.05139 [Cond-Mat.mes-Hall] 1 (2018).
. S Stoll, A Schweiger, J. Magn. Reson. 17842S. Stoll and A. Schweiger, J. Magn. Reson. 178, 42 (2006).
. P Klemm, S Bange, A Pöllmann, C Boehme, J M Lupton, Phys. Rev. B. 95241407P. Klemm, S. Bange, A. Pöllmann, C. Boehme, and J. M. Lupton, Phys. Rev. B 95, 241407 (2017).
. P Janssen, M Cox, S H W Wouters, M Kemerink, M M Wienk, B Koopmans, Nat. Commun. 41P. Janssen, M. Cox, S. H. W. Wouters, M. Kemerink, M. M. Wienk, and B. Koopmans, Nat. Commun. 4, 1 (2013).
. W J Baker, T L Keevers, J M Lupton, D R Mccamey, C Boehme, Phys. Rev. Lett. 108267601W. J. Baker, T. L. Keevers, J. M. Lupton, D. R. McCamey, and C. Boehme, Phys. Rev. Lett. 108, 267601 (2012).
A small but nonzero difference in the charge carriers' g-factors leads to differences in their Larmor precession frequencies, and consequently to a field-dependent mixing between | ⟩ and | / ⟩. FIG. 2. High-field EDMR spectra measured at a microwave frequency. ⟩ , | F ⟩ , | ⟩ , | / ⟩ , FIG. 1. Magnetic field-dependent Δ mixing in a carrier-pair system. Charge carrier pairs can form in any of the four product states. of 240 GHz along with a doubleFIG. 1. Magnetic field-dependent Δ mixing in a carrier-pair system. Charge carrier pairs can form in any of the four product states, which are | [ ⟩, | F ⟩, and superpositions of | ⟩ and | / ⟩. A small but non- zero difference in the charge carriers' g-factors leads to differences in their Larmor precession frequencies, and consequently to a field-dependent mixing between | ⟩ and | / ⟩. FIG. 2. High-field EDMR spectra measured at a microwave frequency of 240 GHz along with a double
Gaussian line shape (red) obtained by summing the two individual Gaussian lines for the two g-values at a given MW frequency. The individual constituents (blue and green) are the calculated Gaussian line shapes for the g-factors and g-strain values obtained from the multi-frequency EDMR measurements as described in Ref. 10], whereas the experimental method is described in Ref. [18Gaussian line shape (red) obtained by summing the two individual Gaussian lines for the two g-values at a given MW frequency. The individual constituents (blue and green) are the calculated Gaussian line shapes for the g-factors and g-strain values obtained from the multi-frequency EDMR measurements as described in Ref. [10], whereas the experimental method is described in Ref. [18].
Upper panel: magneto-resistance response of a PEDOT:PSS diode at 5 K along with a fit to. FIG. 3. (a) Upper panel: magneto-resistance response of a PEDOT:PSS diode at 5 K along with a fit to
Several fits of Eq. 3 to the experimental data, with half-width half maximum of the magnetoconductance curve Δ 3/0 set to 7, 10, 20, and 50 T, respectively. All curves fit the measured data well but diverge at higher fields. (c) Coefficient of determination 0 of the model fit as a function of Δ 3/0 . We plot 1 − 0 on a logarithmic scale since 1 − 0 = 0 indicates a perfect fit. This procedure is performed for the entire dataset (blue) and for several limited magnetic-field ranges of the experimental magnetoconductance curve shown in (b). Eq, The vertical offset is adjusted as described in the text. Lower panel: residuals of the fit. (b). red curve. N. b. that the solid lines in panel c serve as guides to the eyeEq. 3 (red curve). The vertical offset is adjusted as described in the text. Lower panel: residuals of the fit. (b) Several fits of Eq. 3 to the experimental data, with half-width half maximum of the magnetoconductance curve Δ 3/0 set to 7, 10, 20, and 50 T, respectively. All curves fit the measured data well but diverge at higher fields. (c) Coefficient of determination 0 of the model fit as a function of Δ 3/0 . We plot 1 − 0 on a logarithmic scale since 1 − 0 = 0 indicates a perfect fit. This procedure is performed for the entire dataset (blue) and for several limited magnetic-field ranges of the experimental magnetoconductance curve shown in (b). (N. b. that the solid lines in panel c serve as guides to the eye
| []
|
[
"Theory of EELS in atomically thin metallic films",
"Theory of EELS in atomically thin metallic films"
]
| [
"A Rodríguez Echarri \nICFO-Institut de Ciencies Fotoniques\nThe Barcelona Institute of Science and Technology\n08860CastelldefelsBarcelonaSpain\n",
"Enok Johannes ",
"Haahr Skjølstrup \nDepartment of Materials and Production\nAalborg University\nSkjernvej 4ADK-9220Aalborg EastDenmark\n",
"Thomas G Pedersen \nDepartment of Materials and Production\nAalborg University\nSkjernvej 4ADK-9220Aalborg EastDenmark\n",
"F Javier García De Abajo \nICFO-Institut de Ciencies Fotoniques\nThe Barcelona Institute of Science and Technology\n08860CastelldefelsBarcelonaSpain\n\nICREA-Institució Catalana de Recerca i Estudis Avançats\nPasseig Lluís Companys 2308010BarcelonaSpain\n"
]
| [
"ICFO-Institut de Ciencies Fotoniques\nThe Barcelona Institute of Science and Technology\n08860CastelldefelsBarcelonaSpain",
"Department of Materials and Production\nAalborg University\nSkjernvej 4ADK-9220Aalborg EastDenmark",
"Department of Materials and Production\nAalborg University\nSkjernvej 4ADK-9220Aalborg EastDenmark",
"ICFO-Institut de Ciencies Fotoniques\nThe Barcelona Institute of Science and Technology\n08860CastelldefelsBarcelonaSpain",
"ICREA-Institució Catalana de Recerca i Estudis Avançats\nPasseig Lluís Companys 2308010BarcelonaSpain"
]
| []
| We study strongly confined plasmons in ultrathin gold and silver films by simulating electron energy-loss spectroscopy (EELS). Plasmon dispersion relations are directly retrieved from the energy-and momentum-resolved loss probability under normal incidence conditions, whereas they can also be inferred for aloof parallel beam trajectories from the evolution of the plasmon features in the resulting loss spectra as we vary the impinging electron energy. We find good agreement between nonlocal quantum-mechanical simulations based on the random-phase approximation and a local classical dielectric description for silver films of different thicknesses down to a few atomic layers. We further observe only a minor dependence of quantum simulations for these films on the confining out-of-plane electron potential when comparing density-functional theory within the jellium model with a phenomenological experimentally-fitted potential incorporating atomic layer periodicity and in-plane parabolic bands of energy-dependent effective mass. The latter shows also a small dependence on the crystallographic orientation of silver films, while the unphysical assumption of energy-independent electron mass leads to spurious features in the predicted spectra. Interestingly, we find electron band effects to be more relevant in gold films, giving rise to blue shifts when compared to classical or jellium model simulations. In contrast to the strong nonlocal effects found in few-nanometer metal nanoparticles, our study reveals that a local classical description provides excellent quantitative results in both plasmon strength and dispersion when compared to quantummechanical simulations down to silver films consisting of only a few atomic layers, thus emphasizing the in-plane nearly-free conduction-electron motion associated with plasmons in these structures. | 10.1103/physrevresearch.2.023096 | null | 209,414,990 | 1912.09414 | f8cdf7f86870919ee844e714af2ed8f7045163ad |
Theory of EELS in atomically thin metallic films
A Rodríguez Echarri
ICFO-Institut de Ciencies Fotoniques
The Barcelona Institute of Science and Technology
08860CastelldefelsBarcelonaSpain
Enok Johannes
Haahr Skjølstrup
Department of Materials and Production
Aalborg University
Skjernvej 4ADK-9220Aalborg EastDenmark
Thomas G Pedersen
Department of Materials and Production
Aalborg University
Skjernvej 4ADK-9220Aalborg EastDenmark
F Javier García De Abajo
ICFO-Institut de Ciencies Fotoniques
The Barcelona Institute of Science and Technology
08860CastelldefelsBarcelonaSpain
ICREA-Institució Catalana de Recerca i Estudis Avançats
Passeig Lluís Companys 2308010BarcelonaSpain
Theory of EELS in atomically thin metallic films
(Dated: December 20, 2019) Physics Subject Headings: EELS; Surface plasmons; Thin films; Quantum-well states; Nanophotonics; Nonlocal effects
We study strongly confined plasmons in ultrathin gold and silver films by simulating electron energy-loss spectroscopy (EELS). Plasmon dispersion relations are directly retrieved from the energy-and momentum-resolved loss probability under normal incidence conditions, whereas they can also be inferred for aloof parallel beam trajectories from the evolution of the plasmon features in the resulting loss spectra as we vary the impinging electron energy. We find good agreement between nonlocal quantum-mechanical simulations based on the random-phase approximation and a local classical dielectric description for silver films of different thicknesses down to a few atomic layers. We further observe only a minor dependence of quantum simulations for these films on the confining out-of-plane electron potential when comparing density-functional theory within the jellium model with a phenomenological experimentally-fitted potential incorporating atomic layer periodicity and in-plane parabolic bands of energy-dependent effective mass. The latter shows also a small dependence on the crystallographic orientation of silver films, while the unphysical assumption of energy-independent electron mass leads to spurious features in the predicted spectra. Interestingly, we find electron band effects to be more relevant in gold films, giving rise to blue shifts when compared to classical or jellium model simulations. In contrast to the strong nonlocal effects found in few-nanometer metal nanoparticles, our study reveals that a local classical description provides excellent quantitative results in both plasmon strength and dispersion when compared to quantummechanical simulations down to silver films consisting of only a few atomic layers, thus emphasizing the in-plane nearly-free conduction-electron motion associated with plasmons in these structures.
I. INTRODUCTION
Surface plasmons -the collective electron oscillations at material surfaces and interfaces-provide the means to concentrate and amplify the intensity of externally applied light down to nanoscale regions [1,2], where they interact strongly with molecules and nanostructures, thus becoming a powerful asset in novel applications [3] such as biosensing [2,4,5], photocatalysis [6,7], energy harvesting [8,9], and nonlinear optics [10][11][12][13].
Surface plasmons were first identified using electron energy-loss spectroscopy (EELS), starting with the prediction [14] and subsequent measurement of associated loss features in electrons scattered under grazing incidence from Al [15], Na and K [16,17], and Ag [17,18] surfaces. The main characteristics of surface plasmons in noble and simple metals were successfully explained using time-dependent density-functional theory (TD-DFT) [19] within the jellium model [20,21], while inclusion of electron band effects were required for other metals [22]. Interestingly, multipole surface plasmons were predicted * Corresponding author:[email protected] as additional resonances originating in the smooth electron density profile across metal-dielectric interfaces [22][23][24], and subsequently found in experiments performed on simple metals such as K and Na [25], but concluded to be too weak to be observed in Al [25] and Ag [26]. These studies focused on the relatively high-energy plasmons supported by planar surfaces in the short-wavelength regime. However, plasmons can hybridize with light forming surface-plasmon polaritons (SPPs) in planar surfaces, which become light-like modes at low energies, thus loosing confinement, as they are characterized by in-plane wavelengths slightly smaller than those of light and longrange penetration into the dielectric material or empty space outside the metal [27][28][29].
Highly confined plasmons can also be achieved in sharp metallic tips and closely spaced metal surfaces [30], where strong redshifts are produced due to the attractive Coulomb interaction between neighboring noncoplanar interfaces. This effect, which depends dramatically on surface morphology, can also be observed in planar systems such as ultrathin noble metal films [31,32] and narrow metal-dielectric-metal waveguides [12,31,33]. More precisely, hybridization takes place in metal films between the plasmons supported by their two interfaces, giving rise to bonding and antibonding dispersion branches that were first revealed also through EELS is self-standing aluminum foils [34]; in ultrathin films of only a few atomic layers in thickness, the antibonding plasmon dispersion is pushed close to the light line, whereas the bonding plasmon becomes strongly confined (reaching the quasistatic limit [31,32]), as experimentally corroborated through angle-resolved low-energy EELS in few-monolayer Ag films [35] and monolayer (ML) DySi 2 [36], as well as in laterally confined wires formed by In [37] and silicide [38], and even in monoatomic Au chains grown on Si(557) surfaces [39]. Additionally, graphene has been shown to support long-lived mid-infrared and terahertz plasmons [40] that can be tuned electrically [41,42] and confined vertically down to few nanometers when placed in close proximity to a planar metal surface [43,44]. While most graphene plasmon studies have been performed using far-and near-field optics setups [45][46][47], low-energy EELS has also revealed their dispersion relation in extended films [48,49]. Here, we focus instead on visible and near-infrared plasmons supported by atomically thin metal films, which have been recently demonstrated in crystalline Ag layers [32], where they also experience strong spatial confinement.
In this paper, we investigate plasmons in atomically thin noble metal films by theoretically studying EELS for electron beams either traversing them or moving parallel outside their surface. We provide quantum-mechanical simulations based on the random-phase approximation (RPA), which are found to be in excellent agreement with classical dielectric theory based on the use of frequency-dependent dielectric functions for both Ag and Au films of small thickness down to a few atomic layers. This result is in stark contrast to the strong nonlocal effects observed in metal nanoparticles of similar or even larger diameter [50,51], a result that we attribute to the predominance of in-plane electron motion associated with the low-energy plasmons of thin films, unlike the combination of in- and out-of-plane motion in higher energy SPPs.
II. THEORETICAL FORMALISM
We present the elements needed to calculate EELS probabilities in the nonretarded approximation using the linear response susceptibility to represent the metallic thin film. The latter is obtained in the RPA, starting from the one-electron wave functions of the system, which are organized as vertical quantum-well (QW) states, discretized by confinement along the out-of-plane direction and exhibiting quasi-free motion along the plane of the film. We further specify the EELS probability for electron trajectories either parallel or perpendicular with respect to the metal surfaces.
A. Calculation of EELS probabilities from the susceptibility in the nonretarded limit
The loss probability $\Gamma^{\rm EELS}(\omega)$ measured through EELS in electron microscopes must be normalized in such a way that $\int_0^\infty d\omega\,\hbar\omega\,\Gamma^{\rm EELS}(\omega)$ gives the average energy loss experienced by the electrons. Taking the latter to follow a straight-line trajectory with constant velocity vector $\mathbf{v}$ parallel to the $z$ axis and impact parameter $\mathbf{R}_0=(x_0,y_0)$, we can write [52]

$$\Gamma^{\rm EELS}(\omega)=\frac{e}{\pi\hbar\omega}\int dz\,{\rm Re}\left\{E_z^{\rm ind}(\mathbf{R}_0,z,\omega)\,e^{-i\omega z/v}\right\} \quad (1)$$

as the integral along the electron trajectory of the frequency-resolved self-induced field $E_z^{\rm ind}(\mathbf{r},\omega)=\int dt\,E_z(\mathbf{r},t)\,e^{i\omega t}$, which can be in turn calculated by solving the classical Maxwell equations with the electron point charge acting as an external source in the presence of the sample. This equation is rigorously valid within the approximations of linear response and nonrecoil (i.e., small energy loss $\hbar\omega$ compared with the electron kinetic energy $E_0$).

In the present study, we consider relatively small electron velocities $v\ll c$ and films of small thickness compared with the involved optical wavelengths. This allows us to work in the quasistatic limit and write the field $E_z^{\rm ind}(\mathbf{r},\omega)=-\partial_z\phi^{\rm ind}(\mathbf{r},\omega)$ as the gradient of a scalar potential, so Eq. (1) can be integrated by parts to yield

$$\Gamma^{\rm EELS}(\omega)=\frac{e}{\pi\hbar v}\int dz\,{\rm Im}\left\{\phi^{\rm ind}(\mathbf{R}_0,z,\omega)\,e^{-i\omega z/v}\right\}. \quad (2)$$
We can now express the induced potential in terms of the induced charge as
$$\phi^{\rm ind}(\mathbf{r},\omega)=\int d^3\mathbf{r}'\,\nu(\mathbf{r},\mathbf{r}')\,\rho^{\rm ind}(\mathbf{r}',\omega), \quad (3)$$

where $\nu(\mathbf{r},\mathbf{r}')$ is the Coulomb interaction between point charges located at positions $\mathbf{r}$ and $\mathbf{r}'$. Likewise, we write the induced charge as $\rho^{\rm ind}(\mathbf{r},\omega)=\int d^3\mathbf{r}'\,\chi(\mathbf{r},\mathbf{r}',\omega)\,\phi^{\rm ext}(\mathbf{r}',\omega)$, where $\chi(\mathbf{r},\mathbf{r}',\omega)$ is the linear susceptibility, $\phi^{\rm ext}(\mathbf{r},\omega)=\int d^3\mathbf{r}'\,\nu(\mathbf{r},\mathbf{r}')\,\rho^{\rm ext}(\mathbf{r}',\omega)$ is the external electric potential generated by the electron charge density $\rho^{\rm ext}(\mathbf{r},\omega)=-e\int dt\,\delta(\mathbf{r}-\mathbf{R}_0-\mathbf{v}t)\,e^{i\omega t}=(-e/v)\,\delta(\mathbf{R}-\mathbf{R}_0)\,e^{i\omega z/v}$, and we use the notation $\mathbf{r}=(\mathbf{R},z)$ with $\mathbf{R}=(x,y)$.
In free space one has $\nu(\mathbf{r},\mathbf{r}')=\nu_0(\mathbf{r}-\mathbf{r}')=1/|\mathbf{r}-\mathbf{r}'|$, but we are interested in retaining a general spatial dependence of $\nu(\mathbf{r},\mathbf{r}')$ in order to describe the polarization background produced in the film by interaction with everything else other than conduction electrons (see below). Combining these elements with Eq. (2), we find the loss probability

$$\Gamma^{\rm EELS}(\omega)=\frac{e^2}{\pi\hbar v^2}\int d^3\mathbf{r}\int d^3\mathbf{r}'\,w^*(\mathbf{r})\,w(\mathbf{r}')\,{\rm Im}\{-\chi(\mathbf{r},\mathbf{r}',\omega)\}, \quad (4)$$

where

$$w(\mathbf{r})=\int dz'\,\nu(\mathbf{r},\mathbf{R}_0,z')\,e^{i\omega z'/v} \quad (5)$$

is the external potential created by the electron and we have made use of the reciprocity property $\chi(\mathbf{r},\mathbf{r}',\omega)=\chi(\mathbf{r}',\mathbf{r},\omega)$ to extract the complex factors $w$ outside the imaginary part. Next, we apply this expression to calculate EELS probabilities from the RPA susceptibility. But first, for completeness, we note that the integral in Eq. (5) can be performed analytically for the bare Coulomb interaction [53], yielding

$$w(\mathbf{r})=2K_0(\omega|\mathbf{R}-\mathbf{R}_0|/v)\,e^{i\omega z/v},$$

where $K_0$ is a modified Bessel function [53], thus allowing us to write

$$\Gamma^{\rm EELS}(\omega)=\frac{4e^2}{\pi\hbar v^2}\int d^3\mathbf{r}\int d^3\mathbf{r}'\,\cos\!\left[\frac{\omega}{v}(z'-z)\right]K_0\!\left(\frac{\omega}{v}|\mathbf{R}-\mathbf{R}_0|\right)K_0\!\left(\frac{\omega}{v}|\mathbf{R}'-\mathbf{R}_0|\right){\rm Im}\{-\chi(\mathbf{r},\mathbf{r}',\omega)\}$$
for the loss probability, which we can directly apply to systems in which any background polarization is already contained in χ, or when ν is well described by the bare Coulomb interaction (e.g., in simple metals).
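For readers who want to evaluate these expressions numerically, the short Python sketch below computes the bare-Coulomb coupling potential $w(\mathbf{r})$ of Eq. (5) using the modified Bessel function $K_0$. The loss energy, velocity, and distances are arbitrary illustration values (not parameters used later in this paper), and unit prefactors are absorbed into eV-nm conventions.

```python
import numpy as np
from scipy.special import k0

HBAR_C = 197.327   # hbar*c in eV nm

def w_bare(R, z, E_loss, beta, R0=0.0):
    """Bare-Coulomb electron potential w(r) = 2 K0(omega|R - R0|/v) exp(i omega z / v),
    written with q = omega/v = E_loss / (hbar c beta) in nm^-1 (prefactor conventions omitted)."""
    q = E_loss / (HBAR_C * beta)
    return 2.0 * k0(q * np.abs(R - R0)) * np.exp(1j * q * z)

# Radial decay of the coupling for a 3.5 eV loss and v/c = 0.1 (illustrative numbers only):
R = np.linspace(0.2, 10.0, 6)   # nm
print(np.round(np.abs(w_bare(R, z=0.0, E_loss=3.5, beta=0.1)), 4))
```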
B. RPA susceptibility of thin metal films
We follow the same formalism as in Ref. [33], which is extended here to account for an energy dependence of the in-plane electron effective mass. One starts by writing $\chi(\mathbf{r},\mathbf{r}',\omega)$ in terms of the non-interacting susceptibility $\chi^0(\mathbf{r},\mathbf{r}',\omega)$ through $\chi=\chi^0\cdot(I-\nu\cdot\chi^0)^{-1}$, where we use matrix notation with spatial coordinates $\mathbf{r}$ and $\mathbf{r}'$ acting as matrix indices, so that matrix multiplication involves integration over $\mathbf{r}$, and $I(\mathbf{r},\mathbf{r}')=\delta(\mathbf{r}-\mathbf{r}')$. We further adopt the RPA by calculating $\chi^0$ as [33,54]

$$\chi^0(\mathbf{r},\mathbf{r}',\omega)=2e^2\sum_{ii'}(f_i-f_{i'})\,\frac{\psi_i(\mathbf{r})\,\psi_{i'}^*(\mathbf{r})\,\psi_i^*(\mathbf{r}')\,\psi_{i'}(\mathbf{r}')}{\hbar\omega+i\hbar\gamma-(\varepsilon_{i'}-\varepsilon_i)} \quad (6)$$
from the one-electron wave functions ψ i of energies ε i and Fermi-Dirac occupation numbers f i . Here, the factor of 2 accounts for spin degeneracy and γ is a phenomenological damping rate.
We describe metal films assuming translational invariance along the in-plane directions and parabolic electron dispersion with different effective mass $m^*_j$ for each vertical QW band $j$. This allows us to write the electron wave functions as [55]

$$\psi_i(\mathbf{r})=\varphi_j(z)\,e^{i\mathbf{k}_\parallel\cdot\mathbf{R}}/\sqrt{A},$$

where $\mathbf{k}_\parallel$ is the 2D in-plane wave vector, $A$ is the quantization area, and the state index is multiplexed as $i\to(j,\mathbf{k}_\parallel)$. Likewise, the electron energy can be separated as $\varepsilon_{j,\mathbf{k}_\parallel}=\varepsilon^\perp_j+\hbar^2k_\parallel^2/2m^*_j$, where $\varepsilon^\perp_j$ is the out-of-plane energy that signals the QW band bottom. Inserting these expressions into Eq. (6) and making the customary substitution $\sum_i\to A\sum_j\int d^2\mathbf{k}_\parallel/(2\pi)^2$ for the state sums, we find [56]

$$\chi(\mathbf{r},\mathbf{r}',\omega)=\int\frac{d^2\mathbf{Q}}{(2\pi)^2}\,\chi(Q,z,z',\omega)\,e^{i\mathbf{Q}\cdot(\mathbf{R}-\mathbf{R}')}, \quad (7)$$
which directly reflects the in-plane homogeneity of the film. We can now work in Q space, where Eq. (6) reduces, using the above assumptions for the wave functions, to
$$\chi^0(Q,z,z',\omega)=2e^2\sum_{jj'}\chi_{jj'}(Q,\omega)\,\varphi_j(z)\,\varphi_{j'}^*(z)\,\varphi_j^*(z')\,\varphi_{j'}(z') \quad (8)$$

where

$$\chi_{jj'}(Q,\omega)=\int\frac{d^2\mathbf{k}_\parallel}{(2\pi)^2}\,\frac{f_{j',|\mathbf{k}_\parallel-\mathbf{Q}|}-f_{j,k_\parallel}}{\hbar\omega+i\hbar\gamma-\varepsilon^\perp_{j'}+\varepsilon^\perp_j+\frac{\hbar^2}{2}\left(k_\parallel^2/m^*_j-|\mathbf{k}_\parallel-\mathbf{Q}|^2/m^*_{j'}\right)}, \quad (9)$$

which only depends on the modulus of $\mathbf{Q}$ due to the in-plane band isotropy. We evaluate the integral in Eq. (9) assuming zero temperature [i.e., $f_{j,k_\parallel}=\theta(E_F-\varepsilon_{j,\mathbf{k}_\parallel})$, where $E_F$ is the Fermi energy] and taking $\mathbf{Q}=(Q,0)$ without loss of generality. Incidentally, simple manipulations of the above expressions reveal a dependence on frequency and damping through $(\omega+i\gamma)^2$ that is maintained in the local limit ($Q\to0$), in contrast to $\omega(\omega+i\gamma)$ in the Drude model. The RPA formalism thus produces spectral features with roughly twice the width of the Drude model in the local limit. This problem (along with a more involved issue related to local conservation of electron number for finite attenuation) can be solved through a phenomenological prescription proposed by Mermin [57], which unfortunately becomes rather involved when applied to the present systems. As a practical and reasonably accurate solution, we proceed instead by setting $\gamma=\gamma_{\rm exp}/2$ in the above expressions (i.e., half the experimental damping rate, see Appendix A).
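As an illustration of how Eq. (9) can be evaluated in practice, the sketch below performs a brute-force zero-temperature integration over a Cartesian k-grid for a single pair of QW bands. All band parameters in it are hypothetical placeholders rather than the fitted values of Table I, and the damping follows the γ = γ_exp/2 prescription just discussed.

```python
import numpy as np

# Illustrative QW-pair parameters (eV, nm units); NOT the fitted values of Table I.
HBAR2_2ME = 0.0381            # hbar^2 / (2 m_e) in eV nm^2
E_F = 5.0                     # Fermi energy (arbitrary energy reference)
EPS_J, EPS_JP = 1.2, 2.0      # band-bottom energies of bands j and j'
M_J, M_JP = 1.0, 0.9          # in-plane effective masses (units of m_e)
GAMMA = 0.011                 # damping in eV (gamma = gamma_exp / 2 prescription)

def chi_jjp(Q, omega, nk=400, kmax=3.0):
    """Zero-temperature evaluation of Eq. (9) on a Cartesian k grid.
    Q in nm^-1, omega (i.e. hbar*omega) in eV; returns chi_{jj'}(Q, omega) in eV^-1 nm^-2."""
    kx, ky = np.meshgrid(np.linspace(-kmax, kmax, nk), np.linspace(-kmax, kmax, nk))
    dk2 = (2.0 * kmax / nk) ** 2
    e_k  = EPS_J  + HBAR2_2ME * (kx**2 + ky**2) / M_J           # epsilon_{j, k}
    e_kq = EPS_JP + HBAR2_2ME * ((kx - Q)**2 + ky**2) / M_JP    # epsilon_{j', k - Q}
    f_k  = (e_k  < E_F).astype(float)                           # theta(E_F - eps) at T = 0
    f_kq = (e_kq < E_F).astype(float)
    integrand = (f_kq - f_k) / (omega + 1j * GAMMA - (e_kq - e_k))
    return np.sum(integrand) * dk2 / (2.0 * np.pi) ** 2

print(chi_jjp(Q=0.5, omega=3.5))
```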
We obtain the out-of-plane wave functions $\varphi_j(z)$ as the eigenstates of the 1D Hamiltonian $-(\hbar^2/2m_e)\partial_{zz}+V(z)$, using the free-electron mass $m_e$ for the transversal kinetic term and two different models for the confining potential $V(z)$: (i) the self-consistent solution in the jellium (JEL) approximation within density-functional theory (DFT) [20,21]; and (ii) a phenomenological atomic-layer potential (ALP) that incorporates out-of-plane bulk atomic-layer corrugation and a surface density profile with parameters fitted to reproduce relevant experimental band structure features, such as affinity, surface state energy, and projected bulk band gap, which depend on material and crystal orientation as compiled in Ref. [62].
The JEL model corresponds to the self-consistent DFT solution for a thin slab of background potential and energy-independent effective mass m * j = m e [20,21], computed here through an implementation discussed elsewhere [63].
In the ALP model we fit $m^*_j$ to experimental data (see Table I) and consider an effective electron density $n_{\rm eff}$.

TABLE I: Parameters used to describe the parabolic dispersion of quantum wells (QWs) in Ag(111), Ag(100), and Au(111) films. We take the effective mass of each QW $j$ to linearly vary as $m^*_j/m_e=a\,\varepsilon^\perp_j+b$ with band-bottom energy $\varepsilon^\perp_j$, where the parameters $a$ and $b$ are taken to match $m_0$ at the highest occupied QW (below the SS) in the semi-infinite surface and $m^*=m_e$ at the bottom of the conduction band. The effective electron density $n_{\rm eff}$, given here relative to the bulk conduction electron density $n_0$, is required to fit the experimentally observed Fermi energy $E_F$ and SS energy.

Material    a (eV^-1)   b         m*(SS)/m_e   m_0/m_e     n_eff/n_0   E_F (eV)
Ag(111)     -0.1549     -0.5446   0.40 [58]    0.25 [59]   0.8381      -4.63 [60]
Ag(100)     -0.0817      0.2116   --           0.40 [61]   0.8710      -4.43 [62]
Au(111)     -0.1660     -0.8937   0.26 [58]    0.26 [59]   0.9443      -5.50 [60]

Upon integration over the density of states of the parabolic QW bands, we can then write the Fermi energy of an $N$-layer film as
$$E_F=\left(\sum_{j=1}^{M}m^*_j\right)^{-1}\left[\pi\hbar^2\,n_{\rm eff}\,a_s\,N+\sum_{j=1}^{M}m^*_j\,\varepsilon^\perp_j\right], \quad (10)$$

where $j=M$ is the highest partially populated QW band (i.e., $\varepsilon^\perp_M<E_F<\varepsilon^\perp_{M+1}$) and $a_s$ is the atomic interlayer spacing (i.e., the film thickness is $d=Na_s$, with $a_s=0.236$ nm for Ag(111) and Au(111), and $a_s=0.205$ nm for Ag(100)). This expression reduces to a similar one in Ref. [33] when $m^*_j$ is independent of $j$. We adjust $n_{\rm eff}$ for each type of metal surface in such a way that Eq. (10) gives the experimental bulk values of $E_F$ listed in Table I. Incidentally, although the effective mass of surface states also varies with energy [64,65], we take it as constant because of the lack of data for ultrathin Au and Ag films; this should be a reasonable approximation for films consisting of $N\ge5$ layers, where the surface state energy is already close to the semi-infinite surface level.
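The self-consistent determination of $M$ and $E_F$ from Eq. (10) amounts to a small search over the number of occupied bands. A minimal sketch of that step is given below, assuming hypothetical band-bottom energies and effective masses (not the fitted values of Table I; the energy reference is arbitrary in this example):

```python
import numpy as np

PI_HBAR2_OVER_ME = 0.2394   # pi * hbar^2 / m_e in eV nm^2

def fermi_energy(eps_perp, m_eff, n_eff, a_s, N):
    """Solve Eq. (10) for the Fermi energy of an N-layer film.
    eps_perp : QW band-bottom energies (eV), ascending.
    m_eff    : in-plane effective masses (units of m_e).
    n_eff    : effective 3D electron density (nm^-3); a_s: interlayer spacing (nm).
    The number M of occupied bands is found by requiring eps_perp[M-1] < E_F <= eps_perp[M]."""
    n_areal = n_eff * a_s * N                     # electrons per unit area (nm^-2)
    for M in range(1, len(eps_perp) + 1):
        msum = np.sum(m_eff[:M])
        EF = (n_areal * PI_HBAR2_OVER_ME + np.sum(m_eff[:M] * eps_perp[:M])) / msum
        last_band_filled = EF > eps_perp[M - 1]
        next_band_empty = (M == len(eps_perp)) or (EF <= eps_perp[M])
        if last_band_filled and next_band_empty:
            return EF, M
    raise RuntimeError("no self-consistent band filling found")

# Hypothetical 5-layer example (placeholder numbers only):
eps_perp = np.array([0.4, 1.3, 2.6, 4.1, 5.5])     # eV above an arbitrary reference
m_eff    = np.array([1.00, 0.95, 0.90, 0.85, 0.80])
print(fermi_energy(eps_perp, m_eff, n_eff=50.0, a_s=0.236, N=5))
```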
Conduction electrons interact through the bare Coulomb potential in simple metals, which in $Q$ space reduces to $\nu(Q,z,z')=(2\pi/Q)\,e^{-Q|z-z'|}$. However, polarization of inner electronic bands plays a major role in the dielectric response of Ag and Au. We describe this effect by modifying $\nu(Q,z,z')$ in order to account for the interaction between point charges in the presence of a dielectric slab of local background permittivity fitted to experimental data [66] after subtracting a Drude term representing conduction electrons (see Appendix A). We thus adopt the local response approximation for this contribution originating in localized inner electron states, whereas conduction electrons are treated nonlocally through the above RPA formalism. Similar to Eq. (7), translational symmetry in the film allows us to write

$$\nu(\mathbf{r},\mathbf{r}')=\int\frac{d^2\mathbf{Q}}{(2\pi)^2}\,\nu(Q,z,z')\,e^{i\mathbf{Q}\cdot(\mathbf{R}-\mathbf{R}')}, \quad (11)$$
where ν(Q, z, z ) is reproduced for convenience from Ref. [33] in Appendix A. We note that Eq. (11) neglects the effect of lateral atomic corrugation in this interaction (i.e., the background permittivity is taken to be homogeneous inside the film). Finally, we calculate χ(Q, z, z , ω) from the noninteracting susceptibility [Eq. (8)] and the screened interaction by discretizing both of them in real space coordinates (z, z ) and numerically performing the linear matrix algebra explained above. We obtain converged results with respect to the number of discretization points and also compared with an expansion in harmonic functions [33].
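The linear matrix algebra mentioned above reduces to solving a Dyson-type equation on the discretized z grid. A minimal sketch of this step is shown below; it uses the equivalent form χ = χ⁰ + χ⁰·ν·χ (related to χ = χ⁰(I − νχ⁰)⁻¹ by the push-through identity) and toy kernels purely to illustrate the call structure, not the actual film susceptibility.

```python
import numpy as np

def rpa_chi(chi0, nu, dz):
    """Solve the Dyson equation chi = chi0 + chi0 . nu . chi on a uniform z grid.
    chi0, nu : (nz, nz) arrays sampling chi0(Q, z, z', omega) and nu(Q, z, z').
    dz       : grid spacing; each operator product carries one factor of dz, hence dz**2."""
    nz = chi0.shape[0]
    lhs = np.eye(nz) - (chi0 @ nu) * dz**2
    return np.linalg.solve(lhs, chi0)

# Toy usage with a 3-point grid and made-up kernels (only to show the call signature):
nz, dz = 3, 0.1
chi0 = -0.01j * np.ones((nz, nz))
nu = 2.0 * np.pi * np.exp(-np.abs(np.subtract.outer(np.arange(nz), np.arange(nz))) * dz)
print(rpa_chi(chi0, nu, dz))
```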
C. EELS probability under normal incidence
Direct insertion of Eqs. (7) and (11) into Eqs. (4) and (5) leads to the result
$$\Gamma^{\rm EELS}_\perp(\omega)=\int_0^\infty dQ\,\Gamma^{\rm EELS}_\perp(Q,\omega) \quad (12)$$

with

$$\Gamma^{\rm EELS}_\perp(Q,\omega)=\frac{e^2Q}{2\pi^2\hbar v^2}\int dz\int dz'\,I^*_\perp(Q,z)\,I_\perp(Q,z')\,{\rm Im}\{-\chi(Q,z,z',\omega)\}, \quad (13)$$

where

$$I_\perp(Q,z)=\int dz'\,\nu(Q,z,z')\,e^{i\omega z'/v} \quad (14)$$

contains the external electron potential. For completeness, we note that when $\nu(Q,z,z')$ is the bare Coulomb interaction $(2\pi/Q)\,e^{-Q|z-z'|}$, Eq. (14) becomes $I_\perp(Q,z)=4\pi\,e^{i\omega z/v}/(Q^2+\omega^2/v^2)$, so Eq. (13) reduces to

$$\Gamma^{\rm EELS}_\perp(Q,\omega)=\frac{8e^2}{\hbar v^2}\,\frac{Q}{(Q^2+\omega^2/v^2)^2}\int dz\int dz'\,\cos[\omega(z-z')/v]\,{\rm Im}\{-\chi(Q,z,z',\omega)\}, \quad (15)$$

where we have used reciprocity again [i.e., $\chi(Q,z,z',\omega)=\chi(Q,z',z,\omega)$].
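Given a susceptibility χ(Q, z, z', ω) discretized on a z grid (for instance from the procedure of Sec. II B), the momentum-resolved probability of Eq. (15) reduces to a weighted double sum. The sketch below omits the overall e²/(ℏv²) unit factor, so the returned number is only proportional to Eq. (15), and it uses a made-up toy susceptibility simply to show the structure of the evaluation.

```python
import numpy as np

def gamma_perp_Q(chi_Q, z, Q, E_loss, beta=0.1):
    """Momentum-resolved loss probability of Eq. (15) from a discretized susceptibility
    chi_Q[i, j] ~ chi(Q, z_i, z_j, omega) on a uniform z grid (nm), Q in nm^-1,
    energy loss E_loss in eV. The overall e^2/(hbar v^2) factor is omitted."""
    qv = E_loss / (197.327 * beta)                   # omega/v in nm^-1
    dz = z[1] - z[0]
    phase = np.cos(qv * np.subtract.outer(z, z))     # cos[omega (z - z') / v]
    kinematic = 8.0 * Q / (Q**2 + qv**2) ** 2        # beam-coupling weight of Eq. (15)
    return kinematic * np.sum(phase * np.imag(-chi_Q)) * dz**2

# Toy call with a fabricated susceptibility on a 1.2 nm thick region:
z = np.linspace(0.0, 1.2, 25)
chi_toy = -1e-3j * np.exp(-np.subtract.outer(z, z) ** 2)
print(gamma_perp_Q(chi_toy, z, Q=0.5, E_loss=3.5))
```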
In the simulations that we present below, we compare the RPA approach just presented with classical electromagnetic calculations based on the use of a local frequency-dependent dielectric function for the metal. This configuration has been theoretically studied for a long time [67], and in particular, we use the analytical expressions derived in a previous publication for an electron normally incident on a dielectric slab [68] with the bulk contribution integrated up to a cutoff wave vector Q = 5 nm −1 .
D. EELS probability in the aloof configuration
For an electron moving parallel to the film at a distance z 0 from the metal surface, it is convenient to make the substitutions z → x, R → (y, z), and R 0 → (0, z 0 ) in Eqs. (4) and (5), so combining them with Eqs. (7) and (11), and retaining R = (x, y) in the latter, we readily obtain
$$\Gamma^{\rm EELS}_\parallel(\omega)=\frac{e^2L}{\pi^2\hbar v^2}\int_0^\infty dQ_y\int dz\int dz'\,\nu^*(Q,z,z_0)\,\nu(Q,z',z_0)\,{\rm Im}\{-\chi(Q,z,z',\omega)\}, \quad (16)$$

where $Q=\sqrt{\omega^2/v^2+Q_y^2}$ and $L$ is the electron path length. Again for completeness, when $\nu(Q,z,z')$ is the bare Coulomb interaction, Eq. (16) reduces to

$$\Gamma^{\rm EELS}_\parallel(\omega)=\frac{4e^2L}{\hbar v^2}\int_0^\infty\frac{dQ_y}{Q^2}\int dz\int dz'\,e^{-Q(|z-z_0|+|z'-z_0|)}\,{\rm Im}\{-\chi(Q,z,z',\omega)\}.$$
The above expressions can be applied to electron impact parameters z 0 both inside or outside the metal, but they can be simplified when the beam is not overlapping the conduction electron charge [see Fig. 3(a)], so that z 0 > z, z in the region inside the above integrals in which χ(Q, z, z , ω) is nonzero, and therefore, changing the variable of integration from Q y to Q, we can write
$$\Gamma^{\rm EELS}_\parallel(\omega)=\frac{2e^2L}{\pi\hbar v^2}\int_{\omega/v}^\infty dQ\,\frac{e^{-2Qz_0}}{\sqrt{Q^2-\omega^2/v^2}}\,{\rm Im}\{r_p(Q,\omega)\}, \quad (17)$$

where

$$r_p(Q,\omega)=-\frac{Q}{2\pi}\,e^{2Qz_0}\int dz\int dz'\,\nu^*(Q,z,z_0)\,\nu(Q,z',z_0)\,\chi(Q,z,z',\omega) \quad (18)$$

is the Fresnel reflection coefficient of the film for p polarization in the quasistatic limit. Incidentally, Eq. (18) is independent of the source location $z_0$ when it does not overlap the metal because $\nu(Q,z,z_0)$ then depends on $z_0$ only through a factor $e^{-Qz_0}$ (see Appendix A). Equation (17), which agrees with previous derivations from classical dielectric theory [69], reveals ${\rm Im}\{r_p(Q,\omega)\}$ as a loss function, which is used below to visualize the surface plasmon dispersion. We also provide results from a local dielectric description based on the textbook solution of the Poisson equation for the reflection coefficient [33]

$$r_p^{\rm classical}=\frac{(\epsilon^2-1)\left(1-e^{-2Qd}\right)}{(\epsilon+1)^2-(\epsilon-1)^2\,e^{-2Qd}} \quad (19)$$

for a metal film of thickness $d$ and permittivity $\epsilon$.
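To make the classical reference calculation concrete, the sketch below combines the quasistatic slab coefficient of Eq. (19), a Drude-plus-background permittivity (constant ε_b ≈ 4, cf. Appendix A), and the aloof integral of Eq. (17), after substituting Q² = ω²/v² + Q_y² to remove the integrable endpoint singularity. The numerical parameters are illustrative placeholders and do not reproduce the RPA curves discussed below.

```python
import numpy as np

HBAR_C = 197.327           # hbar*c in eV nm
ALPHA = 1.0 / 137.036      # fine-structure constant, e^2/(hbar c)

def eps_metal(E, wp=9.17, gamma=0.021, eps_b=4.0):
    """Local permittivity (eV units): constant background eps_b plus a Drude conduction term."""
    return eps_b - wp**2 / (E * (E + 1j * gamma))

def rp_film(Q, E, d):
    """Quasistatic reflection coefficient of a self-standing film of thickness d (nm), Eq. (19)."""
    eps = eps_metal(E)
    return (eps**2 - 1.0) * (1.0 - np.exp(-2.0 * Q * d)) / (
        (eps + 1.0) ** 2 - (eps - 1.0) ** 2 * np.exp(-2.0 * Q * d))

def gamma_aloof(E, z0, d, beta=0.1, nq=4000, qy_max=20.0):
    """Aloof loss probability per unit path length and energy loss [Eq. (17)], in eV^-1 nm^-1.
    The substitution Q^2 = (omega/v)^2 + Qy^2 removes the 1/sqrt endpoint singularity."""
    q_min = E / (HBAR_C * beta)                      # omega/v in nm^-1
    Qy = np.linspace(1e-6, qy_max, nq)
    Q = np.sqrt(q_min**2 + Qy**2)
    integrand = np.exp(-2.0 * Q * z0) * np.imag(rp_film(Q, E, d)) / Q
    prefactor = 2.0 * ALPHA / (np.pi * HBAR_C * beta**2)
    return prefactor * np.trapz(integrand, Qy)

# Spectrum for a 5 ML Ag(111)-like film (d = 5 x 0.236 nm) probed 0.5 nm above its surface:
energies = np.linspace(2.0, 4.5, 6)
print([round(gamma_aloof(E, z0=0.5, d=1.18), 6) for E in energies])
```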
III. RESULTS AND DISCUSSION
We show examples of the two types of confining electron potentials used in our RPA calculations for Ag films in Fig. 1(a,b), along with the resulting conduction electron charge densities. The JEL potential is smooth at the surface and describes electron spill-out and Friedel oscillations [70]. The phenomenological ALP potential further incorporates corrugations due to the atomic planes in the bulk, which result in strong oscillations of the density. The computed electron energies $\varepsilon_j$ (see Sec. II B), which correspond to the bottom points of the QW bands (i.e., for vanishing in-plane momentum), are distributed with $N$ of them below the Fermi level in a Ag(111) film consisting of $N$ monolayers [Fig. 1(c,d)]. The band structure quickly evolves toward the semi-infinite surface for a few tens of MLs in both models. Additionally, the ALP potential hosts surface states and a projected bulk gap of energies fitted to experiment [62]. We note that this gap depends on surface orientation: it is present in Ag(111) but absent in Ag(100) at the Fermi level, as revealed by photoemission measurements [?] [see also Fig. 9(a) in Appendix B]. Remarkably, despite the important differences in the details of the potentials and electron bands, both models predict a similar plasmon dispersion [Fig. 1(e,f), density plots, obtained from Eq. (18)], which is in excellent agreement with classical theory [Fig. 1(e,f), red curves, obtained from the poles of Eq. (19)]. Incidentally, we observe the response to also converge toward the semi-infinite surface limit for a few tens of atomic layers [see Fig. 10 in Appendix B]. Similar good agreement is found in the reflection coefficients of Ag films computed for different thicknesses with either of these potentials, with a square-barrier potential, or with a model potential constructed by gluing on either film side a jellium DFT potential tabulated for semi-infinite surfaces [20] [see Fig. 11 in Appendix B].
The transversal distribution of charge densities associated with thin film plasmons shows a clear resemblance when calculated using the ALP or JEL model potentials, although one can still observe substantial discrepancies between the two of them [see for example Fig. 2, where the ALP model charge appears to be smaller in magnitude]. However, this different behavior hardly reflects in the dispersion relation and plasmon strength [Fig. 1]. Interestingly, the $z$-integrated charge is nonzero, revealing that plasmons involve net charge oscillations along the in-plane directions for finite wave vector $Q$.
We conclude from these results that it is the effective number of valence electrons participating in the plasmons that determines their main characteristics, irrespective of the details of the electron wave functions and induced charge densities.
The loss function Im{r_p} provides a convenient way to represent the plasmon dispersion relation, as plasmons produce sharp features in the Fresnel reflection coefficient for p polarization. A weighted integral of this quantity over in-plane wave vectors gives the EELS probability under parallel aloof interaction [Fig. 3(a)] according to Eq. (17). However, the integration limit has a threshold at ω = Qv and the weighting factor multiplying the loss function in the integrand diverges precisely at that point. The cutoff condition ω = Qv is represented in Fig. 3(b) for different electron velocities (white lines) along with the loss function (density plot). As expected, the points of intersection with the plasmon band produce a dominant contribution that pops up as sharp peaks in the resulting EELS spectra [Fig. 3(c,d)]. An increase in electron velocity (i.e., in the slope of the threshold line) results in a redshift of the spectral peak [Fig. 3(c)], and likewise, thinner films show plasmons moving farther away from the ω = Qc light line, thus producing shifts toward higher plasmon energies in the EELS spectra for fixed electron energy. We remark that RPA and classical calculations lead to quantitatively similar results for this configuration, and the former are roughly independent of the choice of confining electron potential. The ALP model incorporates experimental information on electronic bands, which depend on crystallographic orientation (see Table I). We explore the effects of this dependence by comparing aloof EELS spectra obtained from Ag(111) and Ag(100) films in Fig. 4. In order to eliminate discrepancies arising from differences in thickness, we consider films consisting of N = 13 and N = 15 MLs, respectively, so that the thickness ratio is $(2/\sqrt{3})\times(13/15)\approx1.001$. We recall that Ag(111) displays a projected bulk gap in the electronic bands, in contrast to Ag(100) [see Fig. 9(a) in Appendix B]; as a consequence, the former supports electronic surface states, unlike the latter [62]. Despite these remarkable differences in electronic structure, the resulting spectra look rather similar, except for a small redshift of Ag(100) plasmon peaks relative to Ag(111), comparable in magnitude to those observed in semi-infinite Ag(111) and Ag(110) crystal surfaces through angle-resolved low-energy EELS [72], although the actual magnitude of the shift might also be influenced by electron confinement in our ultrathin films.
The presence of a dielectric substrate of permittivity $\epsilon_s$ is known to redshift the plasmon frequency of thin films by a factor $\sim1/\sqrt{1+\epsilon_s}$ due to the attractive image interaction [73]. This effect is observed in our calculated aloof EELS spectra, for which we obtain the combined film-substrate reflection coefficient by using a Fabry-Perot approach, as discussed elsewhere [33]. We find again excellent agreement between RPA simulations using the ALP potential and classical calculations [Fig. 5], and in fact, the resemblance between the spectral profiles obtained with both methods increases with $\epsilon_s$. In Fig. 6 we examine the way lateral dispersion of QW states affects the plasmonic properties of ultrathin Ag films when using the ALP potential. Comparison of the band structures calculated with [Fig. 6(b)] and without [Fig. 6(a)] inclusion of an energy dependence in the in-plane effective mass anticipates a clear difference between the two of them: the latter shows the same energy jumps between different bands irrespective of the electron parallel wave vector $k_\parallel$; those energy jumps will therefore be favored in the optical response, giving rise to spurious spectral features. In contrast, differences in lateral dispersion associated with the energy dependence of the effective mass (described here by fitting existing angle-resolved photoemission data [58,59,61,74,75]) should at least partially wash out those spectral features. This is clearly observed in the resulting dispersion diagrams [Fig. 6(c,d)] and aloof EELS spectra [Fig. 6(e,f)]. In particular, the dispersion relation for constant $m^*_j$ [Fig. 6(c)] reveals a complex mixture of resonances at energies above 3 eV, which we find to be strongly affected by the HOMO-LUMO gap energy (not shown); these resonances cause fine structure in the EELS spectra that disappears when a realistic energy dependence is introduced in the lateral effective mass [Fig. 6(e)].
We also analyze EELS spectra for normally impinging electron beams [Fig. 7]. The momentum- and energy-resolved EELS probability given by Eq. (13) reveals the plasmon dispersion in analogy to the loss function [cf. Figs. 3(b) and 7(b)]. But now, this quantity is directly accessible under normal incidence by recording angle- and energy-dependent electron transmission intensities, as already done in pioneering experiments for thicker Al films showing both bonding and antibonding plasmon dispersions [34].
In contrast to the aloof configuration, the transmission EELS spectra exhibit broader plasmon features [Fig. 7(c,d)], which in the thin film limit [69] are the result of weighting the loss function with a profile $Q^2/(Q^2+\omega^2/v^2)^2$ [see also Eq. (15), where an extra factor of $Q$ emerges from $\chi$ in the small-$Q$ limit], represented in Fig. 7(b) for 2.5 keV electrons and different energies $\hbar\omega$ (colored curves); these spectra indeed reveal a broad spectral overlap with the plasmon band. Again, we observe very similar results from RPA and classical descriptions, and just a minor dependence on the electron potential in the former.
We conclude by showing EELS calculations for Au(111) films in Fig. 8. This noble metal has a conduction electron density similar to that of Ag, but the Au d-band is closer to the Fermi energy, therefore producing large screening ($\epsilon_b\sim9$ in the plasmonic region) compared with Ag ($\epsilon_b\sim4$, see Fig. 12 in Appendix B). This causes a shift of the high-$Q$ surface plasmon asymptote down to $\omega_s\approx2.5$ eV. Additionally, damping is also stronger (more than three times larger than in Ag, see Appendix A), which results in broader spectral features [cf. Fig. 8 for Au and Figs. 3(c,d) and 7(c,d)]. Interestingly, we observe significant blue shifts in the plasmon spectral features when using the ALP potential as compared with both jellium DFT and classical models. This effect could originate in a more substantial role played by the electronic band structure in Au(111) because the projected bulk gap extends further below the Fermi level, and additionally, the surface state band is also more deeply bound [see Fig. 9(b) in Appendix B]. This is consistent with the general dependence of the optical surface conductivity on Fermi momentum $k_F$ and velocity $v_F$: in the Drude model for graphene and the two-dimensional electron gas, this quantity is proportional to $k_Fv_F$ and the surface plasmon frequency scales as $\propto\sqrt{k_Fv_F}$; the situation is more complicated in our thin films because they have multiple 2D bands crossing the Fermi level, but the presence of a deeper gap in Au(111) indicates that the effective band-averaged value of $k_Fv_F$ (i.e., with $k_F$ defined by the crossing of each QW at the Fermi level and $v_F$ as the slope of the parabolic dispersion at that energy) is larger than in Ag surfaces, characterized by the presence of shallower bands near $E_F$; we thus expect an increase in Drude weight, and consequently, a plasmon blue shift, in Au(111) relative to Ag; this argument is reinforced by the small effective mass of surface states in Au(111) compared with Ag(111), which also pushes up their associated $v_F$. In summary, the plasmon blue shifts observed in Au(111) when using the realistic ALP potential seem to have a physical origin, although more sophisticated first-principles simulations might be needed to conclusively support this finding.
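The $\sqrt{k_Fv_F}$ scaling invoked in this argument is the textbook long-wavelength result for a two-dimensional electron gas, which the following short sketch makes explicit (the values of $\hbar k_Fv_F$ used here are arbitrary and are not band-averaged values for Au or Ag):

```python
import numpy as np

E2 = 1.440  # e^2 in eV nm (Gaussian units, e^2 = alpha * hbar * c)

def omega_2deg(Q, hbar_kF_vF):
    """Long-wavelength 2D electron-gas plasmon energy (eV):
    hbar*omega = sqrt(e^2 * (hbar kF vF) * Q), with Q in nm^-1 and hbar*kF*vF in eV.
    Illustrates the Drude-weight (kF vF) scaling discussed in the text."""
    return np.sqrt(E2 * hbar_kF_vF * Q)

# Doubling the band-averaged kF*vF raises the plasmon energy by sqrt(2) at fixed Q:
Q = 0.5  # nm^-1
print(omega_2deg(Q, 2.0), omega_2deg(Q, 4.0))
```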
IV. CONCLUSION
In summary, we have shown that a local classical dielectric model predicts reasonably well the intensities and dispersion relations of plasmons in ultrathin silver films when compared to quantum-mechanical simulations based on the RPA with different potentials used to simulate the conduction one-electron wave functions. We attribute the small effect of nonlocality in the plasmonic response of these films to the fact that their associated electron motion takes place along in-plane directions, in contrast to metal nanoparticles with a similar size as the film thickness here considered (i.e., electron surface scattering is unavoidable in such particles, thus introducing important nonlocal effects). We confirm this agreement between classical and quantum simulations in Ag films down to a few atomic layers in thickness [33,63], consistent with previous smooth-interface hydrodynamic theory [76]. Additionally, our quantum RPA simulations are relatively insensitive to the details of the confining electron potential, so similar results are obtained when using either a smooth jellium DFT model or a phenomenological potential that incorporates atomic-layer corrugation to fit relevant elements of the electronic band structure. In particular, the latter produces results that are rather independent of the crystallographic orientation of the film. Nonetheless, it is important to introduce the correct energy dependence of the in-plane effective mass in the phenomenological potential model, as otherwise spurious features show up in the calculated plasmon spectra. Although these potentials lead to substantially different plasmon charge distributions, spatial integration gives rise to similar plasmon dispersion relations. Interestingly, band effects described in the ALP potential model are more significant in Au, where they produce plasmon blue shifts relative to the predictions of classical and jellium DFT simulations; we attribute this different behavior in Au(111) relative to Ag(111) and Ag(100) to the fact that the former surface exhibits a projected bulk gap that extends further below the Fermi level, and additionally, this gives rise to more deeply bound surface states. We remark that EELS provides the means to access the dispersion relations of strongly confined plasmons in ultrathin metal films, which are too far from the light line to be measured by means of optical techniques.
We reproduce for convenience a previously reported expression [33] for the screened interaction, used here to account for background polarization in a self-standing metal film of thickness $d$ and background permittivity $\epsilon_b$ contained in the $0<z<d$ region:
$$\nu(Q,z,z')=\nu^{\rm dir}(Q,z,z')+\nu^{\rm ref}(Q,z,z'),$$

where

$$\nu^{\rm dir}(Q,z,z')=\frac{2\pi}{Q}\,e^{-Q|z-z'|}\times\begin{cases}1, & z,z'\le0\ \text{ or }\ z,z'>d\\ 1/\epsilon_b, & 0<z,z'\le d\\ 0, & \text{otherwise}\end{cases}$$

and

$$\nu^{\rm ref}(Q,z,z')=\frac{(2\pi/Q)}{(\epsilon_b+1)^2-(\epsilon_b-1)^2e^{-2Qd}}\times\begin{cases}
(1-\epsilon_b^2)\left(e^{2Qd}-1\right)e^{-Q(z+z')}, & d<z,z'\\
2\left[(\epsilon_b+1)e^{-Q(z-z')}+(\epsilon_b-1)e^{-Q(z+z')}\right], & 0<z'\le d<z\\
4\epsilon_b\,e^{-Q(z-z')}, & z'\le0\ \text{and}\ d<z\\
2\left[(\epsilon_b+1)e^{Q(z-z')}+(\epsilon_b-1)e^{-Q(z+z')}\right], & 0<z\le d<z'\\
(1/\epsilon_b)\left\{(\epsilon_b^2-1)\left[e^{-Q(z+z')}+e^{-Q(2d-z-z')}\right]+(\epsilon_b-1)^2\left[e^{-Q(2d+z-z')}+e^{-Q(2d-z+z')}\right]\right\}, & 0<z,z'\le d\\
2\left[(\epsilon_b+1)e^{-Q(z-z')}+(\epsilon_b-1)e^{-Q(2d-z-z')}\right], & z'\le0<z\le d\\
4\epsilon_b\,e^{Q(z-z')}, & z\le0\ \text{and}\ d<z'\\
2\left[(\epsilon_b+1)e^{Q(z-z')}+(\epsilon_b-1)e^{-Q(2d-z-z')}\right], & z\le0<z'\le d\\
(1-\epsilon_b^2)\left(1-e^{-2Qd}\right)e^{Q(z+z')}, & z,z'\le0
\end{cases}$$
For completeness, we illustrate the dramatic effects of interband processes in Fig. 13 in Appendix B by comparing calculations obtained for Ag films using either screened or bare Coulomb interactions.
FIG. 1: RPA description of plasmons in atomically thin Ag(111) films. (a,b) Effective confining potential for conduction electrons across a 10 ML film. The conduction charge density is shown as shaded areas. (c,d) Electronic energies as a function of film thickness expressed as the number of (111) atomic layers (blue dots). Red curves and green dots represent the Fermi energy and the surface states (SSs). (e,f) Loss function Im{r_p} calculated in the RPA [color plot, Eq. (18)], compared with the plasmon dispersion relation in the local Drude dielectric model (red curves). Left (right) panels are calculated in the jellium (ALP) model.
FIG. 2: Plasmon charge density across a thin 10 ML Ag(111) film. We plot the real (solid curves) and imaginary (dashed curves) parts of the induced charge density ρ^ind as calculated in the RPA for excitation by a source placed to the left of the film at the plasmon energies ℏω = 3.54 eV and ℏω = 3.47 eV corresponding to a parallel wave vector Q = 0.5 nm⁻¹ in the ALP (blue) and JEL (orange) models, respectively.
FIG. 3: Aloof EELS in thin Ag(111) films. (a) Scheme showing an electron moving parallel to a N = 5 ML Ag(111) metal film at a distance z_0 from its upper surface. (b) Dispersion diagram showing Im{r_p} calculated in the ALP model for the film shown in (a). White solid lines correspond to ω = vQ for different velocities v, while the dashed horizontal line shows the classical high-Q asymptotic surface-plasmon energy ω_s ≈ 3.7 eV. (c,d) EELS probability per unit of path length for z_0 = 0.5 nm calculated using different models [see legend in (c)] for (c) different electron kinetic energies E_0 with fixed N = 5 and (d) different N's with E_0 = 2.5 keV.
FIG. 4: Plasmon dependence on crystallographic surface orientation: Ag(111) and Ag(100) films. We compare EELS spectra calculated in the ALP model under the same conditions as in Fig. 3(c,d) for N = 13 ML Ag(111) and N = 15 ML Ag(100) films (thickness ratio differing by < 0.1%).
FIG. 5: Substrate-induced plasmon shift. We show EELS spectra for 2.5 keV electrons calculated in either the ALP model or the local classical description under the same conditions as in Fig. 3 for a Ag(111) film consisting of N = 5 MLs supported on a planar dielectric substrate of permittivity ε_s as indicated by labels.
FIG. 6: The role of the electron effective mass. (a,b) In-plane parabolic QW bands of a N = 10 ML Ag(111) film in the ALP model with (a) constant and (b) energy-dependent effective mass (m*_j = m_e and m*_j = (a ε⊥_j + b) m_e, respectively, see Table I). The surface state bands (blue curves) have a mass 0.4 m_e. Solid (dashed) curves represent bands that are occupied (unoccupied) at k_∥ = 0. The Fermi level is shown as a horizontal red line. (c,d) Loss function Im{r_p} under the conditions of (a,b), respectively. (e,f) EELS probability under parallel aloof interaction at a distance z_0 = 0.5 nm for two different electron energies corresponding to the ω = Qv lines shown in (c,d) and different film thicknesses (see labels), calculated in the ALP model with constant (dashed curves) and energy-dependent (solid curves) electron effective mass.
FIG. 7: EELS in thin Ag(111) films under normal incidence. (a) Scheme showing an electron normally traversing a N = 5 ML Ag(111) metal film. (b) Momentum- and energy-resolved EELS probability Γ^EELS_⊥(Q, ω) [Eq. (13)] calculated for E_0 = 2.5 keV electrons (v/c ≈ 0.1) in the ALP model for the film shown in (a). Colored solid curves show Q²/(Q² + ω²/v²)² profiles as a function of Q for different energy losses ℏω = 2, 3, and 5 eV, while the dashed horizontal line indicates ω_s. (c,d) EELS probability calculated using different models [see legend in (d)] for (c) different electron kinetic energies E_0 with fixed N = 5 and (d) different N's with E_0 = 2.5 keV.
FIG. 8: EELS spectra for gold Au(111) films. We consider (a,c) aloof and (b,d) normal trajectories for either (a,b) fixed electron energy (E_0 = 2.5 keV) and varying film thickness (N = 5-30 MLs) or (c,d) fixed N = 5 and varying electron energy. Calculations for the same models as in Fig. 3 are presented. The plasmon dispersion is shown for N = 10 MLs using the ALP model in the inset of (b).
FIG. 9: ALP model calculations similar to those of Fig. 1(d), but for Ag(100) and Au(111) films.
FIG. 10: ALP model calculations for (a-d) the loss function Im{r_p} of Ag(111) films of different thickness N and (b) the resulting EELS spectra of normally incident 2.5 keV electrons.
FIG. 11: Dependence of the RPA response on model potential. We show (a) the binding conduction electron energies, (b) the confining potential, and (c) the conduction electron density for a N = 10 ML Ag(111) film, as well as (d-i) the reflection coefficient r_p of Ag(111) films of different thickness N for either (d,f,h) fixed photon energy ℏω as a function of parallel wave vector Q or (e,g,i) fixed Q as a function of ℏω. We calculate r_p in the RPA and consider different confining electron potentials, as indicated by the upper labels: JEL and ALP, defined in the main text; LK, a superposition of the parametrized jellium DFT potential for semi-infinite surfaces taken from Lang and Kohn [Phys. Rev. B 1, 4555 (1970)] for a one-electron radius r_s = 3 a.u., adopted for each of the film surfaces and glued by hand at the film center; and FBM, a square-well finite-barrier model potential. Only the ALP incorporates an energy dependence of the lateral effective mass, while the rest of the models assume m*_j = m_e.
FIG. 12: Dielectric function ε(ω) of Ag and Au taken from Johnson and Christy [Phys. Rev. B 6, 4370 (1972)] and background permittivity ε_b(ω) = ε(ω) + ω_p²/[ω(ω + iγ_exp)] obtained by subtracting a Drude term with parameters ω_p = 9.17 eV and γ_exp = 21 meV for Ag, and ω_p = 9.06 eV and γ_exp = 71 meV for Au. We take ε_b = 4 and ε_b = 9.5 for Ag and Au in the ℏω < 0.6 eV region, which is not covered in the above reference.
FIG. 13: Effect of background screening. We show dispersion diagrams (color plots) of the loss function Im{r_p} of N = 10 ML Ag(111) films as calculated in the RPA for ALP (left) and JEL (center) model potentials when the background permittivity ε_b is obtained from optical data (upper plots, see Fig. 12) or set to a constant value ε_b = 4 (middle) or ε_b = 1 (bottom). We find ε_b = 4 to represent approximately well the background screening in silver over the plasmonic spectral region, whereas ε_b = 1 gives rise to unrealistic blue shifts. These conclusions are maintained when examining aloof EELS spectra (right plots, calculated for 2.5 keV electrons passing at a distance of 0.5 nm from the metal surface).
Acknowledgments

This work has been supported in part by the ERC (Advanced Grant 789104-eNANO), the Spanish MINECO (MAT2017-88492-R and SEV2015-0522), the Catalan CERCA Program, the Fundació Privada Cellex, and the QUSCOPE center sponsored by the Villum Foundation.

Appendix A: Background screened interaction

We introduce the effect of interband polarization in the plasmonic spectral region of noble metals through a dielectric slab of permittivity $\epsilon_b(\omega)=\epsilon(\omega)+\omega_p^2/[\omega(\omega+i\gamma_{\rm exp})]$, that is, the local dielectric function of the bulk metal $\epsilon(\omega)$ from which we subtract a classical bulk Drude term representing the contribution of conduction electrons. In practice, we take $\epsilon(\omega)$ from measured optical data [66] and use parameters $\omega_p=9.17$ eV and $\gamma_{\rm exp}=21$ meV for Ag, and $\omega_p=9.06$ eV and $\gamma_{\rm exp}=71$ meV for Au. The resulting $\epsilon_b(\omega)$ is plotted in Fig. 12 in Appendix B. Incidentally, as we explain in Sec. II B, we set the damping parameter to $\gamma=\gamma_{\rm exp}/2$ in the RPA formalism in order to fit the experimental plasmon width. Following previous work [19], we take the background dielectric slab to have a thickness $d=Na_s$, where $N$ is the number of atomic layers and $a_s$ is the interlayer spacing, so that it extends symmetrically a distance $a_s/2$ outside the outer atomic plane on each side of the film.
. D J Bergman, M I Stockman, Phys. Rev. Lett. 9027402D. J. Bergman and M. I. Stockman, Phys. Rev. Lett. 90, 027402 (2003).
. H Xu, E J Bjerneld, M Käll, L Börjesson, Phys. Rev. Lett. 834357H. Xu, E. J. Bjerneld, M. Käll, and L. Börjesson, Phys. Rev. Lett. 83, 4357 (1999).
. A Polman, Science. 322868A. Polman, Science 322, 868 (2008).
. J N Anker, W P Hall, O Lyandres, N C Shah, J Zhao, R P Van Duyne, Nat. Mater. 7442J. N. Anker, W. P. Hall, O. Lyandres, N. C. Shah, J. Zhao, and R. P. Van Duyne, Nat. Mater. 7, 442 (2008).
. D Rodrigo, O Limaj, D Janner, D Etezadi, F J García De Abajo, V Pruneri, H Altug, Science. 349165D. Rodrigo, O. Limaj, D. Janner, D. Etezadi, F. J. García de Abajo, V. Pruneri, and H. Altug, Science 349, 165 (2015).
. Z W Seh, S Liu, M Low, S.-Y Zhang, Z Liu, A Mlayah, M.-Y. Han, Adv. Mater. 242310Z. W. Seh, S. Liu, M. Low, S.-Y. Zhang, Z. Liu, A. Mlayah, and M.-Y. Han, Adv. Mater. 24, 2310 (2012).
. C Clavero, Nat. Photon. 895C. Clavero, Nat. Photon. 8, 95 (2014).
. H A Atwater, A Polman, Nat. Mater. 9205H. A. Atwater and A. Polman, Nat. Mater. 9, 205 (2010).
. C F Guo, T Sun, F Cao, Q Liu, Z Ren, Light Sci. Appl. 3161C. F. Guo, T. Sun, F. Cao, Q. Liu, and Z. Ren, Light Sci. Appl. 3, e161 (2014).
. M Danckwerts, L Novotny, Phys. Rev. Lett. 9826104M. Danckwerts and L. Novotny, Phys. Rev. Lett. 98, 026104 (2007).
R W Boyd, Nonlinear optics. Academic Press3rd edR. W. Boyd, Nonlinear optics (Academic Press, Amster- dam, 2008), 3rd ed.
. A R Davoyan, I V Shadrivov, Y S Kivshar, Opt. Express. 1621209A. R. Davoyan, I. V. Shadrivov, and Y. S. Kivshar, Opt. Express 16, 21209 (2008).
. S Palomba, L Novotny, Phys. Rev. Lett. 10156802S. Palomba and L. Novotny, Phys. Rev. Lett. 101, 056802 (2008).
. R H Ritchie, Phys. Rev. 106R. H. Ritchie, Phys. Rev. 106, 874 (1957).
. C J Powell, J B Swan, Phys. Rev. 115869C. J. Powell and J. B. Swan, Phys. Rev. 115, 869 (1959).
. K D Tsuei, E W Plummer, P J Feibelman, Phys. Rev. Lett. 632256K. D. Tsuei, E. W. Plummer, and P. J. Feibelman, Phys. Rev. Lett. 63, 2256 (1989).
. M Rocca, L Yibing, F B De Mongeot, U Valbusa, Phys. Rev. B. 5214947M. Rocca, L. Yibing, F. B. de Mongeot, and U. Valbusa, Phys. Rev. B 52, 14947 (1995).
. J Daniels, Z. Physik. 203235J. Daniels, Z. Physik 203, 235 (1967).
. A Liebsch, Phys. Rev. B. 4811317A. Liebsch, Phys. Rev. B 48, 11317 (1993).
. N D Lang, W Kohn, Phys. Rev. B. 14555N. D. Lang and W. Kohn, Phys. Rev. B 1, 4555 (1970).
. N D Lang, W Kohn, Phys. Rev. B. 73541N. D. Lang and W. Kohn, Phys. Rev. B 7, 3541 (1973).
. J M Pitarke, V M Silkin, E V Chulkov, P M Echenique, Rep. Prog. Phys. 701J. M. Pitarke, V. M. Silkin, E. V. Chulkov, and P. M. Echenique, Rep. Prog. Phys. 70, 1 (2007).
. J F Dobson, G H Harris, J. Phys. C. 21729J. F. Dobson and G. H. Harris, J. Phys. C 21, L729 (1988).
. A Eguiluz, S C Ying, J J Quinn, Phys. Rev. B. 112118A. Eguiluz, S. C. Ying, and J. J. Quinn, Phys. Rev. B 11, 2118 (1975).
. K D Tsuei, E W Plummer, A Liebsch, K Kempa, P Bakshi, Phys. Rev. Lett. 6444K. D. Tsuei, E. W. Plummer, A. Liebsch, K. Kempa, and P. Bakshi, Phys. Rev. Lett. 64, 44 (1990).
. A Liebsch, W L Schaich, Phys. Rev. B. 5214219A. Liebsch and W. L. Schaich, Phys. Rev. B 52, 14219 (1995).
H Raether, Springer Tracks in Modern Physics. BerlinSpringer-Verlag111H. Raether, Surface Plasmons on Smooth and Rough Sur- faces and on Gratings, vol. 111 of Springer Tracks in Modern Physics (Springer-Verlag, Berlin, 1988).
. S A Maier, Plasmonics: Fundamentals and Applications. SpringerS. A. Maier, Plasmonics: Fundamentals and Applications (Springer, New York, 2007).
. W L Barnes, A Dereux, T W Ebbesen, Nature. 424824W. L. Barnes, A. Dereux, and T. W. Ebbesen, Nature 424, 824 (2003).
. R A Álvarez-Puebla, L M Liz-Marzán, F J García De Abajo, J. Phys. Chem. Lett. 12428R. A. Álvarez-Puebla, L. M. Liz-Marzán, and F. J. García de Abajo, J. Phys. Chem. Lett. 1, 2428 (2010).
. E N Economou, Phys. Rev. 182539E. N. Economou, Phys. Rev. 182, 539 (1969).
. Z M El-Fattah, V Mkhitaryan, J Brede, L Fernández, C Li, Q Guo, A Ghosh, A R Echarri, D Naveh, F Xia, ACS Nano. 137771Z. M. Abd El-Fattah, V. Mkhitaryan, J. Brede, L. Fer- nández, C. Li, Q. Guo, A. Ghosh, A. R. Echarri, D. Naveh, F. Xia, et al., ACS Nano 13, 7771 (2019).
. A R Echarri, J D Cox, F J García De Abajo, Optica. 6630A. R. Echarri, J. D. Cox, and F. J. García de Abajo, Optica 6, 630 (2019).
. R Vincent, J Silcox, Phys. Rev. Lett. 311487R. Vincent and J. Silcox, Phys. Rev. Lett. 31, 1487 (1973).
. F Moresco, M Rocca, T Hildebrandt, M Henzler, Phys. Rev. Lett. 832238F. Moresco, M. Rocca, T. Hildebrandt, and M. Henzler, Phys. Rev. Lett. 83, 2238 (1999).
. E P Rugeramigabo, T Nagao, H Pfnür, Phys. Rev. B. 78155402E. P. Rugeramigabo, T. Nagao, and H. Pfnür, Phys. Rev. B 78, 155402 (2008).
. H V Chung, C J Kubber, G Han, S Rigamonti, D Sánchez-Portal, D Enders, A Pucci, T Nagao, Appl. Phys. Lett. 96243101H. V. Chung, C. J. Kubber, G. Han, S. Rigamonti, D. Sánchez-Portal, D. Enders, A. Pucci, and T. Nagao, Appl. Phys. Lett. 96, 243101 (2010).
. E P Rugeramigabo, C Tegenkamp, H Pfnür, T Inaoka, T Nagao, Phys. Rev. B. 81165407E. P. Rugeramigabo, C. Tegenkamp, H. Pfnür, T. Inaoka, and T. Nagao, Phys. Rev. B 81, 165407 (2010).
. T Nagao, S Yaginuma, T Inaoka, T Sakurai, Phys. Rev. Lett. 97116802T. Nagao, S. Yaginuma, T. Inaoka, and T. Sakurai, Phys. Rev. Lett. 97, 116802 (2006).
. G X Ni, A S Mcleod, Z Sun, L Wang, L Xiong, K W Post, S S Sunku, B.-Y Jiang, J Hone, C R Dean, Nature. 557530G. X. Ni, A. S. McLeod, Z. Sun, L. Wang, L. Xiong, K. W. Post, S. S. Sunku, B.-Y. Jiang, J. Hone, C. R. Dean, et al., Nature 557, 530 (2018).
. Z Fei, A S Rodin, G O Andreev, W Bao, A S Mcleod, M Wagner, L M Zhang, Z Zhao, M Thiemens, G Dominguez, Nature. 48782Z. Fei, A. S. Rodin, G. O. Andreev, W. Bao, A. S. McLeod, M. Wagner, L. M. Zhang, Z. Zhao, M. Thiemens, G. Dominguez, et al., Nature 487, 82 (2012).
. J Chen, M Badioli, P Alonso-González, S Thongrattanasiri, F Huth, J Osmond, M Spasenović, A Centeno, A Pesquera, P Godignon, Nature. 48777J. Chen, M. Badioli, P. Alonso-González, S. Thongrat- tanasiri, F. Huth, J. Osmond, M. Spasenović, A. Cen- teno, A. Pesquera, P. Godignon, et al., Nature 487, 77 (2012).
. M B Lundeberg, Y Gao, R Asgari, C Tan, B V Duppen, M Autore, P Alonso-González, A Woessner, K Watanabe, T Taniguchi, Science. 8935004M. B. Lundeberg, Y. Gao, R. Asgari, C. Tan, B. V. Duppen, M. Autore, P. Alonso-González, A. Woessner, K. Watanabe, T. Taniguchi, et al., Science 89, 035004 (2017).
. D Iranzo, S Nanot, E J C Dias, I Epstein, C Peng, D K Efetov, M B Lundeberg, R Parret, J Osmond, J.-Y Hong, Science. 360291D. Alcaraz Iranzo, S. Nanot, E. J. C. Dias, I. Epstein, C. Peng, D. K. Efetov, M. B. Lundeberg, R. Parret, J. Osmond, J.-Y. Hong, et al., Science 360, 291 (2018).
. L Ju, B Geng, J Horng, C Girit, M Martin, Z Hao, H A Bechtel, X Liang, A Zettl, Y R Shen, Nat. Nanotech. 6630L. Ju, B. Geng, J. Horng, C. Girit, M. Martin, Z. Hao, H. A. Bechtel, X. Liang, A. Zettl, Y. R. Shen, et al., Nat. Nanotech. 6, 630 (2011).
. A N Grigorenko, M Polini, K S Novoselov, Nat. Photon. 6749A. N. Grigorenko, M. Polini, and K. S. Novoselov, Nat. Photon. 6, 749 (2012).
. D N Basov, M M Fogler, F J García De Abajo, Science. 3541992D. N. Basov, M. M. Fogler, and F. J. García de Abajo, Science 354, aag1992 (2016).
. Y Liu, R F Willis, K V Emtsev, T Seyller, Phys. Rev. B. 78201403Y. Liu, R. F. Willis, K. V. Emtsev, and T. Seyller, Phys. Rev. B 78, 201403 (2008).
. Y Liu, R F Willis, Phys. Rev. B. 8181406Y. Liu and R. F. Willis, Phys. Rev. B 81, 081406(R) (2010).
U Kreibig, M Vollmer, Optical Properties of Metal Clusters. BerlinSpringer-VerlagU. Kreibig and M. Vollmer, Optical Properties of Metal Clusters (Springer-Verlag, Berlin, 1995).
. J A Scholl, A L Koh, J A Dionne, Nature. 483421J. A. Scholl, A. L. Koh, and J. A. Dionne, Nature 483, 421 (2012).
. F J García De Abajo, Rev. Mod. Phys. 82209F. J. García de Abajo, Rev. Mod. Phys. 82, 209 (2010).
I S Gradshteyn, I M Ryzhik, Table of Integrals, Series, and Products. LondonAcademic PressI. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, London, 2007).
L Hedin, S Lundqvist, Solid State Physics. D. T. Frederick Seitz and H. EhrenreichAcademic Press23Solid State PhysicsL. Hedin and S. Lundqvist, in Solid State Physics, edited by D. T. Frederick Seitz and H. Ehrenreich (Academic Press, 1970), vol. 23 of Solid State Physics, pp. 1-181.
. O Keller, Phys. Rev. B. 33990O. Keller, Phys. Rev. B 33, 990 (1986).
. L Marušić, M Šunjić, Phys. Scripta. 63336L. Marušić and M. Šunjić, Phys. Scripta 63, 336 (2001).
. N D Mermin, Phys. Rev. B. 12362N. D. Mermin, Phys. Rev. B 1, 2362 (1970).
. F Reinert, G Nicolay, S Schmidt, D Ehm, S Hüfner, Phys. Rev. B. 63115415F. Reinert, G. Nicolay, S. Schmidt, D. Ehm, and S. Hüfner, Phys. Rev. B 63, 115415 (2001).
. V M Silkin, J M Pitarke, E V Chulkov, P M Echenique, Phys. Rev. B. 72115435V. M. Silkin, J. M. Pitarke, E. V. Chulkov, and P. M. Echenique, Phys. Rev. B 72, 115435 (2005).
. R Paniago, R Matzdorf, G Meister, A Goldmann, Surf. Sci. 336113R. Paniago, R. Matzdorf, G. Meister, and A. Goldmann, Surf. Sci. 336, 113 (1995).
. A Garcia-Lekue, J M Pitarke, E V Chulkov, A Liebsch, P M Echenique, Phys. Rev. B. 6845103A. Garcia-Lekue, J. M. Pitarke, E. V. Chulkov, A. Lieb- sch, and P. M. Echenique, Phys. Rev. B 68, 045103 (2003).
. E Chulkov, V Silkin, P Echenique, Surf. Sci. 437330E. Chulkov, V. Silkin, and P. Echenique, Surf. Sci. 437, 330 (1999).
. E J H Skjølstrup, T Søndergaard, T G Pedersen, Phys. Rev. B. 99155427E. J. H. Skjølstrup, T. Søndergaard, and T. G. Pedersen, Phys. Rev. B 99, 155427 (2019).
. D Popović, F Reinert, S Hüfner, V G Grigoryan, M Springborg, H Cercellier, Y Fagot-Revurat, B Kierren, D Malterre, Phys. Rev. B. 7245419D. Popović, F. Reinert, S. Hüfner, V. G. Grigoryan, M. Springborg, H. Cercellier, Y. Fagot-Revurat, B. Kier- ren, and D. Malterre, Phys. Rev. B 72, 045419 (2005).
The scattering of light and other electromagnetic radiation: physical chemistry: a series of monographs. M Kerker, Academic Press16M. Kerker, The scattering of light and other electromag- netic radiation: physical chemistry: a series of mono- graphs, vol. 16 (Academic Press, 2013).
. P B Johnson, R W Christy, Phys. Rev. B. 64370P. B. Johnson and R. W. Christy, Phys. Rev. B 6, 4370 (1972).
. E Kröger, Z. Phys. 216115E. Kröger, Z. Phys. 216, 115 (1968).
. F J García De Abajo, A Rivacoba, N Zabala, N Yamamoto, Phys. Rev. B. 69155420F. J. García de Abajo, A. Rivacoba, N. Zabala, and N. Yamamoto, Phys. Rev. B 69, 155420 (2004).
. F J García De Abajo, ACS Nano. 711409F. J. García de Abajo, ACS Nano 7, 11409 (2013).
N W Ashcroft, N D Mermin, Solid State Physics. PhiladelphiaHarcourt College PublishersN. W. Ashcroft and N. D. Mermin, Solid State Physics (Harcourt College Publishers, Philadelphia, 1976).
. A Goldman, V Dose, G Borstel, Phys. Rev. B. 321971A. Goldman, V. Dose, and G. Borstel, Phys. Rev. B 32, 1971 (1985).
. S Suto, K.-D Tsuei, E W Plummer, E Burstein, Phys. Rev. Lett. 632590S. Suto, K.-D. Tsuei, E. W. Plummer, and E. Burstein, Phys. Rev. Lett. 63, 2590 (1989).
. F J García De Abajo, Acs Photon, 1135F. J. García de Abajo, ACS Photon. 1, 135 (2014).
. M A Mueller, T M , T.-C Chiang, Phys. Rev. B. 415214M. A. Mueller and T. M. T.-C. Chiang, Phys. Rev. B 41, 5214 (1990).
. I Matsuda, T Tanikawa, S Hasegawa, H W Yeom, K Tono, T Ohta, J. Surf. Sci. Nanotechnol. 2169I. Matsuda, T. Tanikawa, S. Hasegawa, H. W. Yeom, K. Tono, and T. Ohta, e-J. Surf. Sci. Nanotechnol. 2, 169 (2004).
. C David, F J García De Abajo, ACS Nano. 89558C. David and F. J. García de Abajo, ACS Nano 8, 9558 (2014).
| []
|
[
"Fault friction under thermal pressurization during large coseismic-slip Part II: Expansion to the model of frictional slip",
"Fault friction under thermal pressurization during large coseismic-slip Part II: Expansion to the model of frictional slip"
]
| [
"Alexandros Stathas \nInstitut de Recherche en Génie Civil et Mécanique\nUMR CNRS 6183)\nEcole Centrale de Nantes\nNantesFrance\n",
"Ioannis Stefanou \nInstitut de Recherche en Génie Civil et Mécanique\nUMR CNRS 6183)\nEcole Centrale de Nantes\nNantesFrance\n"
]
| [
"Institut de Recherche en Génie Civil et Mécanique\nUMR CNRS 6183)\nEcole Centrale de Nantes\nNantesFrance",
"Institut de Recherche en Génie Civil et Mécanique\nUMR CNRS 6183)\nEcole Centrale de Nantes\nNantesFrance"
]
| []
| In Stathas and Stefanou (2022) we presented the frictional response of a bounded fault gouge under large coseismic slip. We did so by taking into account the evolution of the Principal Slip Zone (PSZ) thickness, using a Cosserat micromorphic continuum model for the description of the fault's mechanical response. The numerical results obtained differ significantly from those predicted by the established model of thermal pressurization during slip on a mathematical plane (see Mase and Smith (1987); Rice (2006a); Platt et al. (2014a), among others). These differences prompt us to reconsider the basic assumptions of a stationary strain localization on an unbounded domain present in the original model. We depart from these assumptions, extending the model to incorporate different strain localization modes, temperature and pore fluid pressure boundary conditions. The resulting coupled linear thermo-hydraulic problem leads to a Volterra integral equation for the determination of fault friction. We solve the Volterra integral equation by application of a spectral collocation method (see Tang et al. (2008)), using Gauss-Chebyshev quadrature for the integral evaluation. The obtained solution allows us to gain significant understanding of the detailed numerical results of Part I. We investigate the influence of a traveling strain localization inside the fault gouge considering isothermal, drained boundary conditions for the bounded and unbounded domain respectively. We compare our results to the ones available in Lachenbruch (1980); Lee and Delaney (1987); Mase and Smith (1987) and Rice (2006a). Our results establish that when a stationary strain localization profile is applied on a bounded domain, the boundary conditions lead to a steady state, where total strength regain is achieved. In the case of a traveling instability such a steady state is not possible and the fault only regains part of its frictional strength, depending on the seismic slip velocity and the traveling velocity of the shear band. In this case frictional oscillations increasing the frequency content of the earthquake are also developed. Our results suggest a reappraisal of the role of thermal pressurization as a frictional weakening mechanism.
"https://arxiv.org/pdf/2205.00316v1.pdf"
]
| 248,496,190 | 2205.00316 | 0f6878b8b7ec9fb0589cd301494c71b90284fbd4 |
Fault friction under thermal pressurization during large coseismic-slip Part II: Expansion to the model of frictional slip
Alexandros Stathas
Institut de Recherche en Génie Civil et Mécanique
UMR CNRS 6183)
Ecole Centrale de Nantes
NantesFrance
Ioannis Stefanou
Institut de Recherche en Génie Civil et Mécanique
UMR CNRS 6183)
Ecole Centrale de Nantes
NantesFrance
Fault friction under thermal pressurization during large coseismic-slip Part II: Expansion to the model of frictional slip
strain localization, traveling instability, traveling waves, thermal pressurization, spectral method, Green's kernel
In Stathas and Stefanou (2022) we presented the frictional response of a bounded fault gouge under large coseismic slip. We did so by taking into account the evolution of the Principal Slip Zone (PSZ) thickness, using a Cosserat micromorphic continuum model for the description of the fault's mechanical response. The numerical results obtained differ significantly from those predicted by the established model of thermal pressurization during slip on a mathematical plane (see Mase and Smith (1987); Rice (2006a); Platt et al. (2014a), among others). These differences prompt us to reconsider the basic assumptions of a stationary strain localization on an unbounded domain present in the original model. We depart from these assumptions, extending the model to incorporate different strain localization modes, temperature and pore fluid pressure boundary conditions. The resulting coupled linear thermo-hydraulic problem leads to a Volterra integral equation for the determination of fault friction. We solve the Volterra integral equation by application of a spectral collocation method (see Tang et al. (2008)), using Gauss-Chebyshev quadrature for the integral evaluation. The obtained solution allows us to gain significant understanding of the detailed numerical results of Part I. We investigate the influence of a traveling strain localization inside the fault gouge considering isothermal, drained boundary conditions for the bounded and unbounded domain respectively. We compare our results to the ones available in Lachenbruch (1980); Lee and Delaney (1987); Mase and Smith (1987) and Rice (2006a). Our results establish that when a stationary strain localization profile is applied on a bounded domain, the boundary conditions lead to a steady state, where total strength regain is achieved. In the case of a traveling instability such a steady state is not possible and the fault only regains part of its frictional strength, depending on the seismic slip velocity and the traveling velocity of the shear band. In this case frictional oscillations increasing the frequency content of the earthquake are also developed. Our results suggest a reappraisal of the role of thermal pressurization as a frictional weakening mechanism.
Introduction
The results of Part I (see Stathas and Stefanou, 2022), concerning the influence of the weakening mechanism of thermal pressurization, diverge spectacularly from the expected behavior based on the model of Mase and Smith (1987); Rice (2006a). Furthermore, we note that these results indicate that the divergence takes place long before the completion of the seismic slip δ. This holds true for the range of commonly observed seismic slip velocities $\dot{\delta}\in\{0.1\sim1\}$ m/s and seismic slip displacements δ ∈ {0.1 ∼ 1} m (see Harbord et al. (2021); Rempe et al. (2020)). In this follow-up paper, Part II, we investigate the reasons for this divergence between the theoretical results, as well as its implications for the appreciation of thermal pressurization as one of the main weakening mechanisms during coseismic slip. Our investigation leads us to extend the existing model of slip on a mathematical plane by relaxing its key assumptions.
In Figure 1 we compare the frictional response of the micromorphic model used in Part I (see Stathas and Stefanou, 2022) with the response of the established model for the limiting cases of uniform shear (Lachenbruch, 1980) and shear on a mathematical plane (Mase and Smith, 1984; Rice, 2006a). In particular, the two limiting responses of the established model depend on the width of the accumulating strain localization inside the fault gouge, which we call the Principal Slip Zone (PSZ). They are characterized, respectively, by: a) uniform slip across the fault gouge (see Lachenbruch (1980)), and b) localization of slip on a mathematical plane (see Lee and Delaney (1987); Mase and Smith (1987); Rempel (2006); Rice (2006a)). We note that while at the initial stages of slip (see inset of Figure 1) the response of the micromorphic model lies inside the envelope of the limiting cases, at larger values of slip it diverges, presenting frictional regain and the initiation of frictional oscillations. These results come in contrast to the strictly monotonous behavior predicted by the limiting cases of uniform slip and slip on a mathematical plane.
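For orientation, the two limiting curves of the established model shown in Figure 1 follow from closed-form solutions quoted in the works cited above: exponential weakening for adiabatic, undrained, uniform shear (Lachenbruch, 1980) and the exp-times-erfc solution for slip on a mathematical plane at constant rate (Mase and Smith, 1987; Rice, 2006a). The sketch below evaluates both forms; all material parameters are order-of-magnitude placeholders and are not the values used to produce Figure 1.

```python
import numpy as np
from scipy.special import erfc

# Placeholder parameters (order-of-magnitude values only).
f_fric  = 0.6        # friction coefficient
sigma0  = 126.0e6    # initial effective normal stress (Pa)
rho_c   = 2.7e6      # volumetric heat capacity (Pa/K)
Lam     = 0.3e6      # undrained pressurization factor dp/dT (Pa/K)
alpha_hy, alpha_th = 6.0e-6, 0.7e-6   # hydraulic / thermal diffusivities (m^2/s)
V       = 1.0        # slip rate (m/s)
h       = 1.0e-3     # gouge thickness for the uniform-shear limit (m)

def tau_uniform(delta):
    """Adiabatic, undrained, uniform shear over thickness h (Lachenbruch, 1980):
    exponential weakening with characteristic slip rho_c*h/(f*Lambda)."""
    return f_fric * sigma0 * np.exp(-f_fric * Lam * delta / (rho_c * h))

def tau_plane(delta):
    """Slip on a mathematical plane at constant rate V (Mase and Smith, 1987; Rice, 2006a)."""
    L_star = (4.0 / f_fric**2) * (rho_c / Lam)**2 * (np.sqrt(alpha_hy) + np.sqrt(alpha_th))**2 / V
    x = delta / L_star
    return f_fric * sigma0 * np.exp(x) * erfc(np.sqrt(x))

delta = np.array([1e-3, 1e-2, 1e-1, 1.0])   # slip (m)
print(tau_uniform(delta) / (f_fric * sigma0))   # normalized friction, uniform-shear limit
print(tau_plane(delta) / (f_fric * sigma0))     # normalized friction, slip-on-a-plane limit
```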
We note here that the limiting cases are predicted by the established model of thermal pressurization under three important assumptions (see Lachenbruch (1980); Mase and Smith (1987); Rice (2006a)): First, the thickness of the yielding region, which corresponds to the PSZ, coincides with the fault gouge. Prescribing the thickness, and therefore the shape, of the plastic strain profile essentially decouples the mechanical and thermo-hydraulic components of the coupled THM problem (see Mase and Smith (1984)). Secondly, the variability between the thermal and hydraulic parameters of the gouge and the surrounding rock is assumed to be small, thus the thermo-hydraulic boundaries of the coupled THM problem lie at infinity. In essence, the change of hydrothermal parameters between the fault gouge and the surrounding rock is neglected. Lastly, the position of the heat source due to the dissipation inside the PSZ remains stationary inside the fault and coincides with the position of the fault gouge.
These assumptions, however, are not representative of observations. We know from laboratory experiments and in situ observations that the fault gouge has a finite thickness of the order of some millimeters, and it does not deform in a uniform manner (see Myers and Aydin (2004); Brantut et al. (2008)). In fact, inside the fault gouge, the principal slip zone (PSZ) is a region of finite thickness of the order of some micrometers, depending on the geomaterial's microstructure (see Muhlhaus and Vardoulakis (1988); Sibson (1977)). In this configuration the fault gouge and the region that accumulates the majority of the plastic deformation inside it, the PSZ, do not coincide. Furthermore, one needs to acknowledge that the frictional response inside the fault gouge is dependent on the ratio of thermal to hydraulic diffusivity of the fault gouge and the surrounding rock. In particular, we know from the works of Aydin (2000); Passelègue et al. (2014); Yao et al. (2016) that the hydraulic and thermal diffusivity of the gouge is smaller than that of the surrounding rock by 1 to 2 orders of magnitude. This large difference between the parameters of the fault system needs to be accounted for. Finally, there is experimental evidence of fault gouges that are thicker than expected according to the existing models of Platt et al. (2014a) and Rice (2006a), and of closely adjacent fault gouges, see Nicchio et al. (2018), whose existence can be linked to the possibility that the position of the PSZ is not stationary, rather it is traveling inside the fault gouge, possibly expanding the latter in the process.
There is also theoretical evidence considering the possibility of a traveling PSZ inside the fault gouge. In this case the preferred mode of strain localization might not be of the divergence kind described in Rice (1975); rather, it can be a "flutter" type instability, corresponding to a traveling strain localization profile (PSZ). According to the Lyapunov theory of stability (see Lyapunov (1893); Brauer and Nohel (1969)), a traveling strain localization (PSZ) is manifested by the appearance of a Lyapunov exponent with a non-zero imaginary part. The transition from a stationary instability of divergence type to a flutter traveling instability is called a Hopf bifurcation. For more details we refer to section ?? of Part I, Stathas and Stefanou (2022), where we have shown numerically that, for stress states common in faults, traveling instabilities are present in the linear stability analysis for Cosserat continua under strain softening and apparent softening due to multiphysical couplings. In the broader context of a classical continuum under hydraulic couplings, Benallal and Comi (2003) have shown that traveling instabilities are also present. It is not yet clear if this is the case for a classical continuum under THM couplings (see Benallal (2005)).
In this paper we provide an explanation concerning the differences between the numerical results of Part I (see Stathas and Stefanou (2022)) and the frictional response predicted by the limit cases of the classical model, Lachenbruch (1980); Mase and Smith (1987); Rice (2006a). To this end we expand the classical model of thermal pressurization described in Rice (2006a), and extend its applicability to cases of bounded fault gouges and traveling strain localization modes of the PSZ. We will use the same thermal, hydraulic and geometric parameters for the fault gouge as in the model of Part I, Stathas and Stefanou (2022). Next, we will collapse the PSZ, where yielding and frictional heating take place, onto a mathematical plane by employing the same formalism used in Lee and Delaney (1987); Mase and Smith (1987); Rice (2006a). We further assume that the yield (dissipation) obeys a Coulomb friction law with the Terzaghi normal effective stress. The mechanical behavior of the layer outside the yielding plane is ignored and for the purposes of this model it can be considered as rigid. This allows us to avoid solving a BVP for the mechanical part, which significantly simplifies the problem (cf. Stathas and Stefanou (2022)).
The decision to collapse the PSZ onto a mathematical plane can be justified based on the results of Part I (see sections ??, ?? and Figures ??, ??). These lead us to the observation that it is the hydraulic and thermal parameters of the fault that mainly affect thermal pressurization. We note, however, that the Cosserat radius, which is a parameter connected with the grain size and the material properties of the granular medium, is still an indispensable internal length for the numerical analyses of Part I because: a) it assures the mesh independence of the numerical results, and b) it provides a finite localization width over which frictional heating takes place. However, for the analyses performed in this part (Part II), the introduction of the Dirac delta distribution prescribing the profile of the plastic strain rate, thus decoupling the mechanical and thermo-hydraulic components of the problem of thermal pressurization, allows us to overcome the problem of incorporating the microstructure into the model, considerably simplifying the analysis. This allows us to elaborate on the effect of the boundary conditions on the frictional response. Furthermore, this simplification allows us to gain further insight into the problem, because the mechanisms responsible for the principal characteristics of the response of the micromorphic model described in Part I (restrengthening, frictional oscillations) can be isolated and investigated separately, corroborating the numerical results of Part I.

This paper is structured as follows: In section 2 we present the basic equations of the classical model of thermal pressurization (see Mase and Smith (1987, 1984); Rice (2006a)) and our proposed expansion to the cases of bounded fault gouges and a traveling PSZ, elaborating further on their differences. Our extended model leads to a Volterra integral equation of the second kind, which cannot be solved analytically as in the case of Rice (2006a). In section 3, we solve the Volterra integral equation of the second kind by applying a Spectral Collocation Method with Lagrange basis functions (SCML), based on the work of Evans et al. (1981); Elnagar and Kazemi (1996); Tang et al. (2008). This is a general spectral method and can handle the challenging task of integrating the Volterra equation under different assumptions of boundary conditions and traveling strain localization modes, when other analytical approaches such as the Laplace transform, the Adomian decomposition method and Taylor series expansion fail (see Wazwaz, 2011; Boyd, 2006). Having described our model and the solution procedure, we present in section 4 a series of applications showcasing the differences with the analyses in Rice (2006a). The applications include the frictional responses of: (a) a stationary PSZ on a bounded isothermal drained domain, (b) a moving PSZ on an unbounded isothermal drained domain, and (c) a moving PSZ on a bounded isothermal drained domain.
The original solution in Rice (2006a) is obtained as a special case of the more general solutions presented here and is taken as reference (see Figure 3).
Finally, in the conclusions we discuss the implications of our results concerning the introduction of a traveling PSZ inside the fault gouge. Our results are important as they describe better the underlying physical process of seismic slip. Moreover, a traveling PSZ naturally enhances the frictional response with oscillations, which in turn can enhance the ground acceleration spectra with higher frequencies, as observed in nature, Aki (1967); Brune (1970). Furthermore, our results are valuable in the context of experiments for the description of the weakening behavior due to thermal pressurization (see Badt et al. (2020)), for controlling the transition from steady to unsteady slip and for the nucleation of an earthquake (see Rice (1973); Viesca and Garagash (2015)). They are also central in earthquake control, as they provide bounds for the apparent friction coefficient with slip and slip-velocity, enabling modern control strategies (see Stefanou (2019); Stefanou and Tzortzopoulos (2020); Tzortzopoulos (2021)).
2. Thermal pressurization model of slip on a plane
Problem statement
We already discussed in the introduction that the current model of thermal pressurization, shown in Figure 2, assumes that yielding is constrained on a mathematical plane inside the domain, which is modeled based on the Coulomb friction criterion (see equation (3) below). This plane will also be called the yielding plane in the following. Contrary to Mase and Smith (1987); Rice (2006a), the yielding plane is not considered stationary inside the domain. Instead its position u(t) is allowed to change with a velocity $\dot{u}(t) = v(t)$. Furthermore, we will not consider only isothermal drained boundary conditions lying at infinity. In particular, we will also take into account the case where the fault gouge is bounded under isothermal drained boundary conditions lying at y = 0 and y = H. In the yielding region inside the layer, heat is produced due to dissipation, D. The thermal source is then described by the plastic work rate at the position of the failure plane, namely:
$$D = \tau(t)\,\dot{\gamma}^p(y,t). \qquad (1)$$
In the above formula, friction τ(t), which is the main unknown of the problem, is independent of position y, due to equilibrium considerations along the height of the fault gouge ($\partial\tau/\partial y = 0$ in the absence of inertia, see also Rice (2006a)). The term $\dot{\gamma}^p(y,t)$ is the plastic strain rate inside the fault gouge. In the established model of thermal pressurization this term is prescribed with the help of a Dirac distribution stationed at the plane of symmetry, y = 0 (see Mase and Smith (1984); Rice (2006a)). Here we expand the term $\dot{\gamma}^p(y,t)$ to account for a traveling PSZ at position y = u(t) as follows:
$$\dot{\gamma}^p(y,t) = V(t)\,\delta_{Dirac}\big(y - u(t)\big). \qquad (2)$$
In the case of u(t) = 0, no traveling can take place and the stationary condition of Mase and Smith (1987); Rice (2006a) is recovered. In the model of Rice (2006a) the author considers that the shear rate V (t) applied at the boundaries of the fault gouge is constant V (t) = V . We adopt this assumption although seismic slip rate during coseismic slip may vary significantly (see Rempe et al. (2020)). The equations of the established model Mase and Smith (1987); Rice (2006a) are then written as follows:
$$\tau(t) = f\big(\sigma_n - P_{max}(t)\big), \quad \text{on the yielding plane}, \qquad (3)$$
$$\frac{\partial \Delta T}{\partial t} = c_{th}\,\frac{\partial^2 \Delta T}{\partial y^2} + \frac{1}{\rho C}\,\tau(t)\,V\,\delta_{Dirac}\big(y-u(t)\big), \qquad (4)$$
$$\frac{\partial \Delta P}{\partial t} = c_{hy}\,\frac{\partial^2 \Delta P}{\partial y^2} + \Lambda\,\frac{\partial \Delta T}{\partial t}, \qquad (5)$$
$$\Delta T\big|_{y=0,H} = \Delta P\big|_{y=0,H} = 0, \qquad (6)$$
$$\Delta T(y,0) = \Delta P(y,0) = 0, \qquad P(y,t) = \Delta P(y,t) + P_0, \qquad (7)$$
where f is the friction coefficient, c_th, c_hy are the thermal and hydraulic diffusivities of the layer (taken equal for the fault gouge and the fault walls), ρC is the specific heat density of the layer, V is the shearing rate of the layer, assumed here to be constant, and Λ = λ/β is the thermal pressurization coefficient (see Table 1). The symbol $(\cdot)\big|_{\alpha}$ indicates the value of the temperature and pressure fields at position α of the model, while P_0 is the ambient value of the pore fluid pressure at the boundaries of the fault gouge. We note that if we set the boundary conditions at infinity (i.e. y → ±∞) the boundary assumptions of Mase and Smith (1987) and Rice (2006a) are recovered.
We note here, that prescribing the position of the yielding plane y = u(t) implies that the position of P max is known, and coincides with the position of the thermal load. Thus the above model is valid if the position of the maximum pressure P max (t) and the yielding plane coincide. In this case, because the yielding position is prescribed and the plastic strain profile known, the mechanical behavior is decoupled and the resulting coupled thermo-hydraulic problem described above is linear.
Applying the pore fluid pressure solution (see also equation (A.8) in Appendix A) to the failure criterion finally results in the following Volterra integral equation of the second kind for the determination of the layer's frictional response under constant shearing rate (see Rice (2006a); Wazwaz (2011)):
$$\tau(t) = f(\sigma_n - P_0) - \frac{f\Lambda V}{\rho C\,(c_{hy}-c_{th})}\int_0^t \tau(t')\, G\big(y,t;y',t',c_{hy},c_{th}\big)\Big|_{y=y'}\,dt', \qquad (8)$$
where $G(y,t;y',t',c_{hy},c_{th})$ is the kernel of the integral equation, which we present further in section 2.2 (see also Cole et al. (2010)). The kernel indicates the influence of the thermal load applied at position y' and time t' on the pore fluid pressure observed at position y and time t. Throughout our analysis we make the assumption that the maximum value of the pore fluid pressure, P_max(t), at observation time t lies at the point of application of the thermal load y'. This assumption is then verified numerically. Hence, the position of observation of P_max(t) is y = y', and the kernel $G(y,t;y',t',c_{hy},c_{th})$ needs to be evaluated at y = y'.
We note that the frictional response is dependent on the strain localization mode and the boundary conditions applied at the fault gouge. The former influences the form of the thermal load as a function of time and position, while the latter influences the form of the kernel of the coupled linear thermo-hydraulic problem at hand. For the purposes of our analyses we will consider the cases of: 1) an unbounded fault gouge under (a) a stationary PSZ, described in Rice (2006a), and (b) a traveling PSZ at a constant velocity v; and 2) a bounded fault gouge under (a) a stationary PSZ, and (b) a traveling PSZ whose position is a periodic function of time, i.e. y' = u(t') (see equation (2) and section 4). The periodic movement of the PSZ is justified on the basis of the numerical analyses presented previously in Part I (see Stathas and Stefanou (2022), Figures ??, ??). We present the relevant Green's function kernels in section 2.2. In order to solve the resulting modified Volterra integral equation (8), we have employed the collocation quadrature method described in Tang et al. (2008), as explained in section 3.
Having defined the differences between the classical and the extended model of thermal pressurization described in this section, we comment further on the differences between our linear extended model and the one used in the fully nonlinear analyses of Part I, Stathas and Stefanou (2022). In particular, in Part I, a micromorphic model together with THM couplings was used for the determination of the PSZ thickness during coseismic slip. The application of a micromorphic continuum leads to a finite thickness for the PSZ, which guarantees mesh objectivity of the numerical results. Because the thickness of the PSZ is finite, the thermal load applied inside the PSZ is distributed over the PSZ thickness. Furthermore, the finite thickness of the PSZ is a crucial part of the mechanism explaining the appearance of a traveling PSZ inside the fault gouge, as we have argued in Part I. We further note that the yield criterion employed in the analyses of Part I was a Drucker-Prager yield criterion, while here we make use of a Mohr-Coulomb yield criterion. The use of the Mohr-Coulomb criterion allows us to describe the friction τ(t) with the help of the normal stress σ_n to the yielding plane, instead of the combination of normal stresses required in the case of the Drucker-Prager criterion.
Cases of Interest
We consider four cases for the loading and boundary conditions concerning the evaluation of the fault friction during coseismic slip. We first separate between stationary and traveling modes of strain localization and then we further discriminate between unbounded and bounded domains in order to cover all possible cases. The separation of the fault's frictional response into these categories leads to four different expressions for the Green's function kernel G (y, t; y , t , c hy , c th ) in equation (8).
Here we provide the analytical expressions for the kernels to be substituted into equation (8).
In naming the Green's function kernels we use the subscript naming conventions of Cole et al. (2010). Namely, for diffusion in 1D line segment domains the label Xαβ is adopted, where α, β refer to the left (y = 0) and right (y = H) boundaries of the domain, respectively. They can take the values 0 or 1, indicating an unbounded or a bounded domain, respectively, under homogeneous Dirichlet boundary conditions.
We begin by introducing the Green's function kernels of the unbounded X00 and the bounded X11 cases in the case of a 1D diffusion equation under homogeneous Dirichlet boundary conditions.
For the unbounded case we use:
$$G_{X00}(y,t;y',t',c) = \frac{1}{2\sqrt{\pi c\,(t-t')}}\exp\left(-\frac{(y-y')^2}{4c\,(t-t')}\right). \qquad (9)$$
Similarly for the bounded case we use:
$$G_{X11}(y,t;y',t',c) = \frac{2}{H}\sum_{m=1}^{\infty}\exp\left(-\frac{m^2\pi^2 c\,(t-t')}{H^2}\right)\sin\left(\frac{m\pi y}{H}\right)\sin\left(\frac{m\pi y'}{H}\right). \qquad (10)$$
We note here that c can be either c_th or c_hy depending on the diffusion problem in question. The kernels $G_{X\alpha\beta}(y,t;y',t',c_{hy},c_{th})$ of the pore fluid pressure diffusion problem, driven by the impulsive frictional heat source, for the given strain localization modes and boundary conditions are given by:
• Stationary mode of strain localization
• Unbounded domain, α = 0, β = 0, y' = 0 (see Rice, 2006a):
$$G_{X00}(y,t;0,t',c_{hy},c_{th}) = c_{hy}\,G_{X00}(y,t;0,t',c_{hy}) - c_{th}\,G_{X00}(y,t;0,t',c_{th}). \qquad (11)$$
• Bounded domain, α = 1, β = 1, y' = H/2:
$$G_{X11}(y,t;H/2,t',c_{hy},c_{th}) = c_{hy}\,G_{X11}(y,t;H/2,t',c_{hy}) - c_{th}\,G_{X11}(y,t;H/2,t',c_{th}). \qquad (12)$$
• Traveling mode of strain localization
• Unbounded domain, α = 0, β = 0, y' = u(t'):
$$G_{X00}(y,t;y',t',c_{hy},c_{th}) = c_{hy}\,G_{X00}(y,t;u(t'),t',c_{hy}) - c_{th}\,G_{X00}(y,t;u(t'),t',c_{th}). \qquad (13)$$
• Bounded domain, periodic trajectory in time, α = 1, β = 1, y' = u(t'):
$$G_{X11}(y,t;y',t',c_{hy},c_{th}) = c_{hy}\,G_{X11}(y,t;u(t'),t',c_{hy}) - c_{th}\,G_{X11}(y,t;u(t'),t',c_{th}). \qquad (14)$$
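To make these kernel definitions concrete, a minimal Python sketch of equations (9)-(14) is given below. The function names, the truncation order of the Fourier series and the default gouge height are illustrative assumptions, not quantities fixed by the text.

```python
import numpy as np

def g_x00(y, t, yp, tp, c):
    """Free-space 1D diffusion kernel, equation (9)."""
    dt = t - tp
    return np.exp(-(y - yp) ** 2 / (4.0 * c * dt)) / (2.0 * np.sqrt(np.pi * c * dt))

def g_x11(y, t, yp, tp, c, H=1e-3, n_modes=200):
    """Bounded-domain kernel with homogeneous Dirichlet conditions, equation (10).
    H defaults to a 1 mm gouge; n_modes truncates the Fourier series."""
    m = np.arange(1, n_modes + 1)
    dt = t - tp
    series = (np.exp(-(m * np.pi) ** 2 * c * dt / H ** 2)
              * np.sin(m * np.pi * y / H) * np.sin(m * np.pi * yp / H))
    return 2.0 / H * series.sum()

def pressure_kernel(y, t, y_source, tp, c_hy, c_th, H=1e-3, bounded=False):
    """Combined thermo-hydraulic kernel of equations (11)-(14).
    y_source is the source position y': a constant gives the stationary cases
    (11)-(12), while y_source = u(t') gives the traveling cases (13)-(14)."""
    if bounded:
        return (c_hy * g_x11(y, t, y_source, tp, c_hy, H)
                - c_th * g_x11(y, t, y_source, tp, c_th, H))
    return c_hy * g_x00(y, t, y_source, tp, c_hy) - c_th * g_x00(y, t, y_source, tp, c_th)
```

For the stationary bounded case the source position would be taken at mid-height, y_source = H/2, consistently with equation (12).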
Methods for the numerical solution of linear Volterra integral equations of the second kind
The solution of linear integral equations of the second kind can be sought with a variety of different analytical and numerical methods. From an analytical standpoint, these include methods from operational calculus, namely the Laplace, Fourier or Z-transform (see Churchill, 1972; Brown et al., 2009; Mavaleix-Marchessoux et al., 2020), the use of Taylor expansions for the integrand inside the integral, and the method of Adomian decomposition (see Wazwaz, 2011; Evans et al., 1981). The case of a stationary yielding mathematical plane described in Rice (2006a) can be, and has been, solved analytically, making use of the Laplace transform. These methods depend on the convolution property of the integral in the integral equation to transform it into a simpler algebraic equation. The challenge then lies in the inversion of the relation obtained in the auxiliary (frequency) domain back to the time domain. However, as the complexity of the Green's function kernels and the loading function increases due to the introduction of boundary conditions and different assumptions concerning the trajectory of the shear band along the fault gouge, such an inversion is not always possible analytically. We are then forced to use numerical methods for the solution of the above Volterra integral equation.
The above analytical methods also have their numerical counterparts, with the use of the Discrete Fourier Transform (DFT) being a central part of most numerical solution procedures. However, use of the DFT is most efficient when the integral equation to be solved has the form of a convolution. This is not always the case in our problem. For instance, the kernel described in equation (14) has terms in (t, t') that do not involve only their difference (t − t'), and therefore its use in equation (8) results in the integral term not being a convolution. In order to handle the above difficulty, we will make use of another class of numerical methods called spectral collocation methods, which solve the integral equation (8) directly in the time domain. These methods are conceptually easy to use and, since no inversion is required, they are able to handle very general cases of Green's function kernels and loading functions.
In what follows, we will make use of the Spectral Collocation Method with Lagrange basis functions (SCML) for the numerical solution of the integral equation (8) (see Tang et al., 2008;Elnagar and Kazemi, 1996, and section 3.1). The SCML method will be shown to handle both the bounded and unbounded domains and the cases of stationary vs traveling strain localization.
Collocation method
We begin by normalizing equation (8). We choose the following normalization parameters: $t_0 = H^2/c_{th}$, $\tau_0 = f(\sigma_n - p_0)$, $y_0 = H$, $r_c = c_{hy}/c_{th}$. The normalized equation is then given by:
$$\bar{\tau}(\bar t) = 1 - \frac{f\Lambda V}{\rho C}\,\frac{H}{c_{th}(r_c-1)}\int_0^{\bar t}\bar{\tau}(\bar t')\,\bar G\big(\bar y,\bar t;\bar y',\bar t'\big)\Big|_{\bar y=\bar y'}\,d\bar t', \qquad (15)$$
where $\bar\tau = \tau/\tau_0$, $\bar t = t/t_0$, $\bar t' = t'/t_0$, $\bar y = y/y_0$, $\bar y' = y'/y_0$, and $\bar G(\bar y,\bar t;\bar y',\bar t')$
is the normalized Green's function kernel given by:
• In the unbounded case:
$$\bar G_{X00}(\bar y,\bar t;\bar y',\bar t') = \frac{1}{2}\left[\frac{r_c}{\sqrt{\pi r_c(\bar t-\bar t')}}\exp\left(-\frac{(\bar y-\bar y')^2}{4 r_c(\bar t-\bar t')}\right) - \frac{1}{\sqrt{\pi(\bar t-\bar t')}}\exp\left(-\frac{(\bar y-\bar y')^2}{4(\bar t-\bar t')}\right)\right] \qquad (16)$$
• In the bounded case:
$$\bar G_{X11}(\bar y,\bar t;\bar y',\bar t') = 2\sum_{m=1}^{\infty}\left[r_c\exp\big(-(m\pi)^2 r_c(\bar t-\bar t')\big) - \exp\big(-(m\pi)^2(\bar t-\bar t')\big)\right]\sin(m\pi\bar y)\sin(m\pi\bar y') \qquad (17)$$
Based on the work of Tang et al. (2008), we apply a spectral collocation method for the calculation of the frictional response described by equation (15). Spectral methods allow for the evaluation of the solution in the whole domain of the problem, yielding an exponential degree of convergence (see Tang et al. (2008)). The principle of the method is the substitution of the unknown function $\bar\tau(\bar t)$ inside the integral equation by a series of polynomials that constitute a polynomial basis. We then opt for the minimization of the residual between the exact and the approximate solution at specific collocation points inside the problem's domain. Here we use the Chebyshev orthogonal polynomials of the first kind (see Trefethen (2019)). Because the Chebyshev polynomials of the first kind constitute a basis in the interval [-1, 1], we transform the integral equation (15) to lie in this interval (see Appendix C). The integral equation then reads:
$$U(z) = 1 - \frac{f\Lambda V}{\rho C}\,\frac{H}{c_{th}(r_c-1)}\,\frac{\bar T}{2}\int_{-1}^{z} U(s)\,\bar G\!\left(\bar y,\frac{\bar T}{2}(z+1);\bar y',\frac{\bar T}{2}(s+1)\right)\Big|_{\bar y=\bar y'}\,ds, \qquad (18)$$
where we note that $U(z) = \bar\tau\big(\tfrac{\bar T}{2}(z+1)\big)$. In the previous equation we performed a change of the integration variable from $\bar t' \in [0, \tfrac{\bar T}{2}(z+1)]$ to $s \in [-1, z]$, so that the unknown function U(s) inside the integral remains in the same form as U(z) outside the integral. Next, we choose to approximate the unknown function in equation (18) (i.e. the frictional response) by its Lagrange interpolation, i.e.:
$$U(\sigma) \approx \sum_{j=0}^{N} U(z_j)\,F_j(\sigma) \qquad (19)$$
The Lagrange interpolation allows a function to be approximated as a linear combination of the Lagrange cardinal polynomials $F_j(\sigma)$, with weights $U(z_j)$ corresponding to the values of the function at specific points $z_j$. The Lagrange cardinal polynomials have the property that $F_m(z_n) = \delta_{mn}$, where $\delta_{mn}$ is the Kronecker symbol. We choose to express the Lagrange polynomials with the help of the Chebyshev polynomials of the first kind, and we choose the set of approximation nodes $z_j$ to correspond to the extrema of the Chebyshev polynomial of the first kind of degree N (see Trefethen (2019)). In this case the interpolating polynomial is written as follows:
$$U(\sigma) \approx {\sum_{j=0}^{N}}''\, U(z_j)\,P_j(\sigma), \qquad (20)$$
$$P_j(\sigma) = \begin{cases} \dfrac{(-1)^j/(\sigma - z_j)}{\displaystyle{\sum_{k=0}^{N}}''\,(-1)^k/(\sigma - z_k)}, & \sigma \neq z_k \ \text{for all } k, \\[3mm] 2, & \sigma = z_j \ \text{and } (j = 0 \ \text{or } j = N), \\ 1, & \sigma = z_j \ \text{and } j \neq 0, N, \\ 0, & \sigma = z_k, \ k \neq j, \end{cases} \qquad (21)$$
$${\sum_{j=0}^{N}}''\,(\cdot)_j = \sum_{j=0}^{N}(\cdot)_j - \frac{(\cdot)_0 + (\cdot)_N}{2} \qquad (22)$$
where the barycentric formula involving the modified sum ${\sum}''_{j=0}^{N}(\cdot)$ is used for the cardinal polynomials and the interpolation (see Trefethen (2019)). By making use of the barycentric formula in equation (21) we are able to evaluate the cardinal polynomials fast and with a smaller error than other conventional approaches (see Trefethen (2019); Tang et al. (2008)). We note that the Lagrange interpolation polynomial at the selection of Chebyshev points $\{z_i\}$ stays unaffected by Runge's phenomenon. Runge's phenomenon is the observation that high-degree Lagrangian interpolation on equidistant grids leads to a large error in the approximation at points that do not belong to the set of interpolation nodes. The effect is more pronounced near the boundaries of the interpolation domain.
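A minimal sketch of this barycentric evaluation is given below. As a bookkeeping choice, the end-point halving of the modified sum (22) is absorbed directly into the barycentric weights, which is algebraically equivalent to the case-wise definition (21) but keeps the usual Kronecker property P_j(z_k) = δ_jk.

```python
import numpy as np

def chebyshev_extrema(N):
    """Extrema of the degree-N Chebyshev polynomial of the first kind in [-1, 1]."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def cardinal(j, sigma, z):
    """Barycentric evaluation of the j-th Lagrange cardinal polynomial at sigma,
    for the Chebyshev extreme nodes z (weights (-1)^k, halved at the end points)."""
    N = len(z) - 1
    w = (-1.0) ** np.arange(N + 1)
    w[0] *= 0.5
    w[N] *= 0.5
    diff = sigma - z
    hit = np.isclose(diff, 0.0)
    if hit.any():
        # at a node the cardinal polynomial reduces to the Kronecker delta
        return 1.0 if hit[j] else 0.0
    return (w[j] / diff[j]) / np.sum(w / diff)
```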
For the numerical evaluation of the integral in equation (18) the Clenshaw-Curtis quadrature will be used, since it is compatible with the interpolation nodes used. We note here that the choice of the interpolation nodes $z_j$ (extrema of the degree-N Chebyshev polynomial of the first kind) leads to quadrature weights of positive sign, which reduces the error of the summation. If equidistant points were used in a quadrature rule of high order (N > 7), this would lead to quadrature weights of alternating sign, increasing the integration error (see Quarteroni et al. (2007)). We transform once again the integral of equation (18) from $s \in [-1, z]$ to $\theta \in [-1, 1]$ in order to apply the appropriate quadrature rule for integration. The new integral equation reads:
$$U(z) = 1 - \frac{f\Lambda V}{\rho C}\,\frac{H}{c_{th}(r_c-1)}\,\frac{\bar T}{2}\,\frac{z+1}{2}\int_{-1}^{1} U\big(s(z,\theta)\big)\,\bar G\!\left(\bar y,\frac{\bar T}{2}(z+1);\bar y',\frac{\bar T}{2}\big(s(z,\theta)+1\big)\right)\Big|_{\bar y=\bar y'}\,d\theta, \qquad (23)$$
The discretized form of equation (23) for the Clenshaw-Curtis quadrature scheme is given by:
$$U(z_i) = 1 - a\,\bar t_i\,{\sum_{j=0}^{N}}''\, U(z_j)\sum_{p=0}^{N} P_j(s_{ip})\,\bar G\big(\bar y,\bar t_i;\bar y',\bar t_{ip}\big)\Big|_{\bar y=\bar y'}\, w_p, \qquad (24)$$
where,
$$s_{ip} = s(z_i,\theta_p), \qquad \bar t_i = \frac{z_i+1}{2}\,\bar T, \qquad \bar t_{ip} = \frac{\bar T}{2}\,(s_{ip}+1), \qquad a = \frac{f\Lambda V}{\rho C}\,\frac{H}{2\,c_{th}(r_c-1)}.$$
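The quadrature nodes and weights $w_p$ can be generated, for instance, with the explicit trigonometric formula for the Clenshaw-Curtis rule sketched below; the paper does not specify which construction it uses, so this particular formula is an assumption.

```python
import numpy as np

def clenshaw_curtis(N):
    """Nodes x_k = cos(k*pi/N) and weights w_k of the (N+1)-point
    Clenshaw-Curtis rule on [-1, 1] (explicit trigonometric formula)."""
    k = np.arange(N + 1)
    theta = k * np.pi / N
    x = np.cos(theta)
    w = np.zeros(N + 1)
    for i, th in enumerate(theta):
        s = 0.0
        for j in range(1, N // 2 + 1):
            b = 1.0 if 2 * j == N else 2.0
            s += b * np.cos(2.0 * j * th) / (4.0 * j ** 2 - 1.0)
        c = 1.0 if i in (0, N) else 2.0
        w[i] = c / N * (1.0 - s)
    return x, w
```

All weights are positive and sum to 2, which is the property invoked above for keeping the summation error small.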
Finally, by adopting the indicial notation with summation over repeated indices our system is written as:
$$(\delta_{ij} + \mathcal{A}_{ij})\,U_j(z_j) = g_i, \qquad (25)$$
where $g_i = 1$ and $\mathcal{A}_{ij}$ is given by:
$$\mathcal{A}_{ij} = A_{ij} - B_{ij}, \qquad (26)$$
$$A_{ij} = a\,\bar t_i\sum_{p=0}^{N} P_j(s_{ip})\,\bar G\big(\bar y,\bar t_i;\bar y',\bar t_{ip}\big)\Big|_{\bar y=\bar y'}\, w_p, \qquad (27)$$
$$B_{ij} = \begin{cases} A_{i0}/2, & j = 0, \\ A_{iN}/2, & j = N, \\ 0, & j \neq 0 \ \text{and} \ j \neq N, \end{cases} \qquad (28)$$
or in matrix form:
$$(\mathbf{I} + \boldsymbol{\mathcal{A}})\,\mathbf{U} = \mathbf{G}, \qquad (29)$$
We can then solve the algebraic system to find the interpolation coefficients U j of the numerical solution.
Due to the properties of the Lagrange polynomials the coefficients U i are also the values of the numerical solution at the specific times t i .
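As an illustration of how the pieces fit together, the sketch below assembles and solves the system (25)-(29) for a stationary PSZ at mid-height of a bounded gouge, using a truncated form of the normalized kernel (17). It reuses the clenshaw_curtis and cardinal helpers sketched above, and all numerical values (N, the final time, the coefficient a_coef, r_c, M) are placeholders chosen for illustration, not the values of Table 1; the finite truncation of the Fourier series also regularizes the weak singularity of the kernel as the inner time approaches the collocation time.

```python
import numpy as np

# illustrative (placeholder) normalized parameters -- not the Table 1 values
N      = 64      # degree of the Chebyshev interpolation
T_bar  = 50.0    # final normalized time t_final * c_th / H^2
a_coef = 5.0     # stands for f*Lambda*V*H / (rho*C*c_th*(r_c - 1))
r_c    = 10.0    # diffusivity ratio c_hy / c_th
M      = 50      # truncation of the Fourier series in the bounded kernel
y_bar  = 0.5     # stationary PSZ at mid-height of the gouge

def g_bar_x11(dt):
    """Truncated normalized bounded kernel, equation (17), evaluated at y = y'."""
    m = np.arange(1, M + 1)
    s2 = np.sin(m * np.pi * y_bar) ** 2
    return 2.0 * np.sum((r_c * np.exp(-(m * np.pi) ** 2 * r_c * dt)
                         - np.exp(-(m * np.pi) ** 2 * dt)) * s2)

z, w = clenshaw_curtis(N)                 # collocation nodes and quadrature weights
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    t_i  = 0.5 * T_bar * (z[i] + 1.0)                    # collocation time
    s_ip = 0.5 * (z[i] + 1.0) * z + 0.5 * (z[i] - 1.0)   # map [-1, 1] -> [-1, z_i]
    t_ip = 0.5 * T_bar * (s_ip + 1.0)
    kern = np.array([g_bar_x11(max(t_i - tp, 0.0)) for tp in t_ip])
    coef = 0.25 * a_coef * T_bar * (z[i] + 1.0)          # coefficient of equation (23)
    for j in range(N + 1):
        P_j = np.array([cardinal(j, s, z) for s in s_ip])
        # the barycentric cardinal already absorbs the end-point halving of eq. (22)
        A[i, j] = coef * np.sum(kern * P_j * w)

U = np.linalg.solve(np.eye(N + 1) + A, np.ones(N + 1))   # system (29)
# U[i] approximates the normalized friction tau/tau_0 at time 0.5*T_bar*(z[i] + 1)
```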
Applications
In this section we present the evolution of the frictional strength τ(t) for the different cases of loading and boundary conditions described in section 2.2. The values of the fault gouge properties, considered homogeneous along its height, are given in Table 1.

Stationary strain localization on an unbounded domain

Carslaw and Jaeger (1959); Mase and Smith (1987) and Andrews (2005) present temperature field solutions for stationary distributed thermal loads. Later, in Lee and Delaney (1987), the authors used the above temperature solutions to derive the pressure solution fields ΔP(y,t) of the coupled pore fluid pressure equation.
In the work of Rice (2006a); Rempel and Rice (2006) the authors introduce a methodology for the determination of the coupled frictional response of a fault gouge under constant shear rate. The results for the stationary instability on an infinite domain have already been derived in Rice (2006b) for yielding on a mathematical plane, and further expanded in the case of distributed yield in Rempel and Rice (2006). In this case, a closed form analytical solution is possible:
$$\tau(\delta) = f(\sigma_n - p_0)\,\exp\!\left(\frac{\delta}{L^{*}}\right)\operatorname{erfc}\!\left(\sqrt{\frac{\delta}{L^{*}}}\right), \quad \text{where } L^{*} = \frac{4}{f^2}\left(\frac{\rho C}{\Lambda}\right)^{2}\frac{\big(\sqrt{c_{hy}} + \sqrt{c_{th}}\big)^{2}}{\dot{\delta}}.$$
The derived solution is recognized as the Hermite polynomial of degree -1.
We note that this solution is dependent on the seismic slip rate $\dot{\delta}$ (see the dimensions of $L^{*}$). The dependence of the fault friction on the seismic slip rate $\dot{\delta}$ (velocity weakening) has been shown in experiments (see Badt et al., 2020; Harbord et al., 2021; Rempe et al., 2020, among many others). In order to demonstrate the efficiency of the SCML method, we use the above analytical solution as a benchmark for comparison. In Figure 3 we present the numerical results of slip on a stationary mathematical plane. To showcase further the accuracy of our results, we present the calculated temperature ΔT(y,t) and pressure ΔP(y,t) fields, computed with the method of Gaussian quadrature at the already computed Chebyshev nodes for the time domain, on a uniform spatial grid around the position of strain localization. The results of Figure 4 indicate that at all times the pressure maximum coincides with the position of the strain localization, as expected from the analytical solution. This corroborates the accuracy and precision of our results.
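For reference, the closed-form benchmark can be evaluated directly as in the short sketch below; scipy's scaled complementary error function erfcx gives exp(x)·erfc(√x) without overflow, and the parameter values are placeholders chosen only to illustrate the velocity dependence, not the Table 1 values.

```python
import numpy as np
from scipy.special import erfcx  # erfcx(y) = exp(y**2) * erfc(y)

def rice_friction(delta, f, sigma_n, p0, rhoC, Lam, c_hy, c_th, slip_rate):
    """Closed-form tau(delta) for slip on a stationary plane in an unbounded gouge
    (Rice, 2006a): tau = f*(sigma_n - p0) * exp(delta/L) * erfc(sqrt(delta/L))."""
    L = 4.0 / f ** 2 * (rhoC / Lam) ** 2 * (np.sqrt(c_hy) + np.sqrt(c_th)) ** 2 / slip_rate
    return f * (sigma_n - p0) * erfcx(np.sqrt(delta / L))

# illustrative (placeholder) parameters, SI units
delta = np.linspace(0.0, 1.0, 200)                      # slip in m
tau = rice_friction(delta, f=0.5, sigma_n=196e6, p0=66e6, rhoC=2.8e6,
                    Lam=0.34e6, c_hy=1e-5, c_th=1e-6, slip_rate=1.0)
```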
Stationary strain localization on a bounded domain
When the yielding region (PSZ) is wholly contained on a mathematical plane, one might assume that the true boundaries of the fault gouge play little role in the evolution of the phenomenon, simulating the fault gouge region as an infinite layer. However, the validity of this model depends heavily on the pressure and temperature diffusion characteristic times in comparison to the total evolution time of the seismic slip. In essence, the question is: does the phenomenon evolve so fast that the boundaries do not play a role in the overall frictional response? This is a valid question, considering that in experiments and in the majority of numerical simulations we need to assign some kind of boundary conditions to the problem in question. We address this question by investigating the case of a stationary strain localization (point thermal source) in the middle of a bounded domain representing the fault gouge, with the linear Volterra integral equation of the second kind (15). We do so by applying the new form of the kernel $G_{X11}(y,t;y',t',c_{hy},c_{th})$, which takes into account the boundary conditions of coseismic slip, pressure and temperature discussed in Part I, Stathas and Stefanou (2022). Namely, the domain of the fault gouge was assumed to have a width of H = 1 mm. We remind also that the boundary conditions correspond to an isothermal (ΔT(0,t) = ΔT(H,t) = 0), drained (ΔP(0,t) = ΔP(H,t) = 0) case. In order to solve equation (15) for the new kind of boundary conditions, we need to derive the new expressions for the Green's function kernels of the thermal diffusion and coupled pore fluid pressure diffusion equations on the bounded domain. The expression for the bounded Green's function kernel of the heat diffusion equation under Dirichlet boundary conditions, equation (17), can be found by applying the method of separation of variables according to Cole et al. (2010).
Equation (17) is termed the long co-time Green's function kernel. A mathematically equivalent short co-time solution can be constructed making use of the Green's kernel defined for the infinite domain case via the method of images; however, its form is significantly more complicated than equation (17) and is not convenient for the numerical procedures used in this paper. Namely, the short co-time solution is best suited for studying transient diffusion at the very start of the phenomenon: for fast timescales only a few terms are needed for the short co-time series to converge to the expected degree of accuracy. However, for large timescales after the initiation of the phenomenon the long co-time solution converges faster, i.e. using fewer terms in the sum. Furthermore, the long co-time solution has a simpler form and can be integrated numerically faster, i.e. with fewer machine operations, than the short co-time one.
Next, we need to obtain the Green's function for the coupled pore fluid pressure diffusion equation. This is done by solving the coupled pressure differential equation on the bounded domain, using the method of separation of variables. We note that the two diffusion problems (thermal and coupled pore fluid pressure) are bounded by Dirichlet boundary conditions on the same domain and, therefore, their Fourier expansions belong to the same Sturm-Liouville problem. This allows us to express, for the first time in the literature, the Green's function kernel of the coupled temperature-pressure diffusion system on a bounded domain due to an impulsive thermal load. Full derivation details are shown in Appendix B, where we prove that the kernel in question can be given in a manner similar to the original expression for the infinite domain case found in Lee and Delaney (1987).
Next, we apply the kernel of equation (17) in equation (15). Using the SCML method, the values of friction at specific values of time t and seismic slip displacement δ can be derived for different seismic slip velocities $\dot{\delta}$. The results of such an analysis are presented in Figure 5.
We note here that contrary to the results obtained in the case of the infinite layer in Rice (2006a); Rempel and Rice (2006), where the frictional response decreases monotonically (see also Figures 3, 4), in the case of the stationary thermal load on a bounded layer the frictional response is eventually influenced by the boundaries of the domain (see Figures 6, 7). Since the conditions on the boundaries are constant in time and the frictional source provides heat to the layer at a rate that is bounded by a constant ($\frac{1}{\rho C}\tau(t)\dot{\delta} \leq \frac{1}{\rho C}\tau_0\dot{\delta} = M$), the temperature field will eventually reach a steady state. This in turn means that at the later stages of the phenomenon the temperature profile will remain constant in time, and therefore its rate of change $\partial\Delta T/\partial t$ will become zero. Consequently, the phenomenon of thermal pressurization will cease, leading to a rapid pore fluid pressure decrease due to the diffusion at the boundaries. As a result the pore fluid pressure will return to its ambient value, and therefore friction will regain its initial value too.

Figure 5: τ − δ response of the layer for different applied slip velocities $\dot{\delta}$. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. Due to the existence of a steady state the fault recovers all of the strength lost due to thermal pressurization at the beginning of the coseismic slip.

Figure 6: Comparison of the τ − δ response of the layer for an applied slip velocity $\dot{\delta}$ = 1000 mm/s. We observe that the influence of the boundaries becomes important from the early stages of coseismic slip (δ ≈ 10 mm). In the bounded case, due to the existence of a steady state, the fault tends to recover all of its strength lost to thermal pressurization at the beginning of the phenomenon. Namely, for a typical value of coseismic slip δ = 1000 mm, the fault has recovered more than half of its initial frictional strength.
It is important to note here that, as we show in Figure 6, frictional regain happens well inside the time and coseismic slip margins observed in nature during the evolution of the earthquake phenomenon. Of course, frictional regain depends on the height of the layer. Namely, as the height of the layer increases, the stress drop due to thermal pressurization at the initial stages becomes larger and the fault gouge recovers its frictional strength more slowly and at later stages of slip. However, the height of the fault gouge H = 1 mm corresponds to typical values from fault observations around the globe (see Myers and Aydin, 2004; Rice, 2006a; Sibson, 2003; Sulem et al., 2004, among others). Furthermore, based on the significantly higher hydraulic, and to a lesser extent thermal, diffusivities of the surrounding damaged zone (see Part I; Aydin, 2000; Tanaka et al., 2007), we conclude that the assumption of isothermal drained conditions at the boundaries of the fault gouge is, as a first approximation, also justified. We note in particular that for a mature fault gouge, the hydraulic permeability and thermal conductivity of the fault gouge (subscript f) are considerably smaller than those of the surrounding damaged zone (subscript d). Therefore, the a priori assumption that an infinite layer describes adequately well the fault gouge during seismic slip should, in our opinion, be revised.
Next, we provide in Figure 7 the numerical field solutions for the change in temperature and pressure in a bounded domain of height H = 1 mm, under constant seismic slip rate $\dot{\delta}$ = 1 m/s. In the bounded domain, the fields of temperature and pressure will reach the steady state, while the maximum pore fluid pressure coincides with the position of the stationary strain localization. However, the steady state reached now is one where full frictional regain takes place. Therefore, the predicted temperature field at the steady state is not applicable, since other weakening mechanisms will take place (e.g. thermal decomposition of minerals will start at 900 °C, see Sulem and Famin (2009); Sulem and Stefanou (2016)). The role of the boundary conditions at the fault gouge level becomes very important.

Figure 7: Temperature ΔT and pore fluid pressure ΔP fields along the height of the layer for shearing velocity $\dot{\delta}$ = 1 m/s, at different times during the analysis. The numerical solution is consistent with the analytical observation that the position of ΔP_max coincides with the position of the stationary strain localization. The arrows indicate the evolution course of the maxima of each field. The pressure field initially increases before subsiding when the temperature field progressively reaches steady state and thermal pressurization ceases. The shaded area indicates a range of temperatures (ΔT ≥ 800 °C) that is prohibitively large inside the fault gouge, since it corresponds to melting of the gouge material. Moreover, at ΔT ≥ 600 °C chemical decomposition of minerals will start to take place inside the gouge, antagonizing the weakening mechanism of thermal pressurization.
Traveling mode of strain localization
In the available literature, Rice (2006a,b) and the subsequent works Rempel and Rice (2006); Platt et al. (2014b); Rice et al. (2014a), one of the main assumptions is that the principal slip zone (PSZ), which is described by the profile of the plastic strain rate (localized on a mathematical plane or distributed over a wider zone), remains stationed in the same place during shearing of the infinite layer. In this work we depart from this assumption, by assuming that the principal slip zone is traveling inside the fault gouge. Two cases will be discussed: the first concerns the implications of a traveling shear band inside the infinite layer, while the second focuses on a moving shear band inside the bounded layer. The difference between a stationary and a moving shear band is that in the second case a steady state for the temperature ΔT(y,t) and pressure ΔP(y,t) fields is not possible (i.e. their rates of change cannot become zero, $\partial\Delta T/\partial t \neq 0$, $\partial\Delta P/\partial t \neq 0$), since the profile of temperature constantly changes due to the thermal load constantly moving around the domain. This ensures that thermal pressurization never ceases. Thus, the value of the residual friction τ_res depends on the fault gouge's thermal and hydraulic properties (c_th, c_hy), the coseismic slip velocity $\dot{\delta}$, and the traveling velocity of the shear band v. This has serious implications for the frictional response of the layer during shearing. More specifically, as the load does not stay stationary, thermal pressurization does not have enough time to act by increasing the pore fluid pressure. Therefore, according to the Mohr-Coulomb yield criterion, friction does not vanish as in the case of Rice (2006a). Instead, friction reaches a residual value τ_res different from zero. This is central for the dissipated energy (see Andrews, 2005; Kanamori and Brodsky, 2004b; Kanamori and Rivera, 2006, among others) and the control of the fault transition from steady to unsteady seismic slip.
Traveling mode of strain localization in the unbounded domain.
Here we consider the shearing of a fault gouge whose boundaries are taken at infinity. In what follows, we distinguish between the seismic slip velocity $\dot{\delta}$ and the velocity of the traveling shear band v(t). In Figure 8, we consider the PSZ (moving point heat source) to travel inside the fault gouge with a velocity v = 50 mm/s. Different values for the rate of coseismic slip $\dot{\delta}$ are taken into account. The shear band velocity v is in agreement with observations from the numerical results of Part I, Stathas and Stefanou (2022). Contrary to the results obtained in the case of a stationary strain localization studied in Rice (2006a), our results indicate the existence of a lower bound in the frictional strength, τ_res, dependent on the rate of seismic slip $\dot{\delta}$ (see Figure 9).

Figure 8: τ − δ response of the layer for different applied slip velocities $\dot{\delta}$. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. Higher seismic slip rates correspond to lower residual values for friction.
In Figure 8, we observe that an increase in the seismic slip velocity $\dot{\delta}$ leads to a decrease of the frictional plateau. Since the plateau reached in these cases is other than the initial friction value corresponding to the ambient pore fluid pressure, we conclude that thermal pressurization is still present in the model's response. This is true since the profile of temperature changes continuously, due to the yielding plane moving at a constant velocity v. This forces the maximum temperature, T_max, to move in the same way. Thus, the rate of change of the temperature field, $\partial\Delta T/\partial t$, which is the cause of thermal pressurization, does not vanish.

Figure 9: Comparison of the τ − δ frictional response between a moving and a stationary strain localization (PSZ) in an unbounded domain. The assumption of a traveling strain localization leads to a plateau of non-zero residual friction τ_res, contrary to the solution of Rice (2006a), which is based on a stationary PSZ.
In Figure 10, we plot the frictional response of the fault for a given seismic slip velocity $\dot{\delta}$ = 1 m/s, treating the shear band velocity v as a parameter. We notice that the slower moving shear bands force the fault to faster and larger frictional strength drops, before they eventually reach a plateau. This is consistent with the observations made in Rice (2006a): the stationary shear band, which presents an infinite negative slope at the start of the slip δ and tends asymptotically to zero as δ increases, can be treated as a special case of the model of a traveling localization mode as the shear band velocity tends to zero (v = 0).
In Figure 11, we present the evolution with time of the temperature ΔT(y,t) and pressure increase ΔP(y,t) fields, in the region of the unbounded domain covered by the traveling strain localization mode. We note that in this case the traveling localization mode leads to a distribution of the thermal load inside the domain, which, since thermal pressurization remains constant, leads to significantly lower values of temperature inside the domain. We note that the frictional response shown in Figure 8 is consistent with the pressure increase ΔP(y,t) inside the domain, while the temperature and pressure fronts coincide with the prescribed position of the traveling strain localization (thermal load).
Traveling mode of strain localization in the bounded domain.
In this section we investigate the frictional response of the layer of height H = 1 mm, when the plastic strain localization (PSZ) travels inside a predefined region with a width h = 0.6 mm, as shown in Figure 12. This region has the same width as the plastified region predicted by our numerical model in Part I, Stathas and Stefanou (2022) (see Figure ??). Based on the numerical results of Part I, we apply a periodic mode of traveling strain localization, with a constant velocity v = 30 mm/s. We prescribe the trajectory of the yielding plane, whose position u(t) is given by a triangle pulse train:
$$u(t) = \frac{H}{2} + \frac{h}{2}\,Tr(v\,t), \qquad (30)$$
where H is the height of the layer, h is the width of the plastified region, v is the velocity of the strain localization and Tr(·) is the unit-amplitude triangle wave periodic function. The period is given by T = 2h/v. The resulting linear Volterra integral equation of the second kind is solved numerically by making use of the spectral collocation method of section 3. We observe in Figure 13 that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. The frictional response presents oscillations due to the periodic movement of the strain localization. Since the strain localization is constantly moving, a steady state is not possible for the fields of temperature and pressure ($\partial\Delta T/\partial t \neq 0 \rightarrow \partial\Delta P/\partial t \neq 0$). This means that the friction presents a residual value, τ_res, which is lower than the fully recovered value of the stationary bounded case.

Figure 13: τ − δ response of the bounded layer for different applied slip velocities $\dot{\delta}$, under a periodic traveling localization mode. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. As the periodic traveling localization mode is constantly moving, a steady state is not possible. This means that the friction presents an oscillating residual value lower than the fully recovered value of the stationary bounded case.
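To make the prescribed trajectory concrete, a small sketch of equation (30) is given below; the specific unit-amplitude triangle wave used here is one possible realization of Tr(·), chosen so that u(t) sweeps the band [H/2 − h/2, H/2 + h/2] with period 2h/v.

```python
import numpy as np

def psz_position(t, H=1e-3, h=0.6e-3, v=30e-3):
    """Periodic PSZ trajectory u(t) of equation (30): a triangle pulse train of
    period 2h/v, centered at mid-height and sweeping a band of width h (SI units)."""
    period = 2.0 * h / v
    phase = (t % period) / period              # in [0, 1)
    tri = 4.0 * np.abs(phase - 0.5) - 1.0      # unit-amplitude triangle wave in [-1, 1]
    return H / 2.0 + (h / 2.0) * tri

# example: sample the trajectory over one second of slip
t = np.linspace(0.0, 1.0, 1000)
u = psz_position(t)   # stays within [0.2 mm, 0.8 mm] for the default values
```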
Assuming the material parameters c_th, c_hy and the height of the layer H constant, characteristics such as the oscillation amplitude A, the circular frequency ω and the residual value of friction τ_res are controlled by three parameters: the thickness h of the prescribed region the PSZ is allowed to travel inside the layer, the velocity v of the traveling PSZ, and the seismic slip rate $\dot{\delta}$ applied at the fault gouge.
In Figure 14, we investigate the influence of the shearing velocity $\dot{\delta}$ and the velocity of the traveling shear band v on the frictional response of a fault gouge with height H = 1 mm. We note that the period of oscillations in the frictional response depends on the velocity with which the shear band travels inside the fault gouge. For the range of applied traveling shear band velocities 30 − 50 mm/s the minima and maxima of the frictional response τ − δ are not affected.

Figure 14: τ − δ response of the bounded layer for different ratios of the strain localization velocity v to the applied coseismic slip rate $\dot{\delta}$. We note that for the same ratio the period of oscillation remains the same. The period of oscillations depends on the height of the layer H and the velocity of the strain localization.
In Figure 15, we present a comparison between the friction developed during shearing of a bounded fault gouge and the model of slip on a stationary mathematical plane presented in section 4.1.1 and in Rice (2006a). In the bounded fault gouge, the seismic slip velocity is given by $\dot{\delta}$ = 1000 mm/s. We further consider the shear band to travel with a velocity v = 30 mm/s inside a predefined region of height h = 0.6 mm. We note that the two responses differ. The periodic movement of the yielding plane (thermal load) inside the layer leads to frictional oscillations. This happens because the yielding plane moves towards the isothermal drained boundaries that function as heat and pressure sinks. Namely, the crests of the oscillations correspond to the time instances the load approaches the fault gouge boundaries, while troughs correspond to the times the PSZ is closer to the middle of the layer. We note here that the average friction inside the layer, τ_ave, is increasing due to the diffusion of pressure and temperature at the boundaries of the fault gouge. We note also that the oscillatory movement of the PSZ moves excess heat and pressure towards the boundaries of the fault gouge, leading to a ventilation phenomenon that further enhances the recovery of frictional strength. It is likely that removing the invariance along the slip direction would lead to vortices and other convective phenomena inside the layer (see Griffani et al., 2013; Miller et al., 2013; Rognon et al., 2015). However, 2D and 3D phenomena inside the fault gouge are not explored here. The results obtained in Figures 13, 14, 15 present a qualitative agreement with those of Part I, Stathas and Stefanou (2022). The difference in the values is due to the assumption of a Dirac load in this paper, in order to preserve the equilibrium inside the band. Assuming a distribution of the yield rate $\dot{\gamma}^p$ that is not singular while respecting the equilibrium conditions along the layer, as is the case for the Cosserat continuum, would allow for higher minima in the frictional response, because of the distributed thermal load over the finite thickness of the yielding region. This leads to more efficient diffusion at the initial stages of thermal pressurization.
In Figure 16 we present the fields of temperature ΔT(y,t) and pore fluid pressure increase ΔP(y,t) during shearing of the bounded fault gouge, with coseismic slip rate $\dot{\delta}$ = 1 m/s, assuming a strain localization mode traveling with a velocity of v = 30 mm/s. We note that along the bounded fault gouge the pore fluid pressure increase might become negative. This is acceptable as long as the total pore fluid pressure does not become negative (ΔP(y,t) > −P_0). This is a characteristic that also exists in our fully nonlinear numerical analyses on the bounded domain (see Stathas and Stefanou (2022), Figure ??).

Figure 16: Temperature ΔT and pore fluid pressure ΔP fields along the height of the layer for shearing velocity $\dot{\delta}$ = 1 m/s, at different times during the analysis. Because the thermal load moves inside the domain closer to the sinks at the boundaries, temperature reaches markedly smaller values than in the stationary case. We note that the change in the pressure field presents negative values, leading to regions of smaller pressure than the initial P_0 (P(y,t) = P_0 + ΔP(y,t)). This coincides with the numerical analyses presented in Part I, Stathas and Stefanou (2022).
Conclusions
In this paper a series of numerical results has been obtained for the coupled thermal and pore fluid pressure diffusion equations. We follow the methodology developed in Rice (2006a); Rempel and Rice (2006), and we expand it to the cases of bounded domains and moving thermal loads resulting from traveling (flutter) instabilities on a Cauchy continuum (see Rice, 2006a; Benallal and Comi, 2003; Benallal, 2005; Rice et al., 2014b; Platt et al., 2014a).
To handle the resulting integral equations the SCML method was applied (see Elnagar and Kazemi, 1996; Tang et al., 2008). The method can handle the weakly singular kernels that appear in the unbounded case and in the case of a stationary thermal load on the bounded domain. The method also generalizes to the case of a periodic traveling strain localization inside the bounded domain, which is in accordance with the numerical results of Part I, Stathas and Stefanou (2022).
It is found that, contrary to the case of a stationary thermal load on an unbounded domain described in Rice (2006a), taking into account the existence of the boundary conditions at the edges of the fault gouge plays an important role in the frictional evolution of the fault for a range of values of the seismic slip velocities commonly observed during earthquake events. Namely, for a seismic slip δ of 1 m under a seismic slip velocity $\dot{\delta}$ = 1 m/s, the influence of the boundaries becomes important after the first 0.1 m of slip. It is shown that under the influence of homogeneous Dirichlet conditions on the bounded domain, a steady state is reached for the temperature field, which in turn implies that the effects of thermal pressurization progressively attenuate until it completely ceases. In this case the temperature rise inside the fault gouge is well above the melting point of the fault gouge material. The apparent scarcity of pseudotachylytes and the absence of widespread melting observations in faults (see Brantut et al. (2008); Kanamori and Brodsky (2004a); Rice (2006a)), however, indicate that other frictional weakening mechanisms will become prevalent, such as chemical decomposition of minerals (see Sulem and Famin, 2009). Furthermore, the effects of a moving thermal load corresponding to a traveling strain localization (flutter instability) inside the fault gouge were examined under both unbounded and bounded boundary conditions. In both cases, the traveling strain localization mode showed the existence of a plateau in the frictional strength of the fault, τ_res (see Figures 8, 13).
In the case of the traveling load on the unbounded domain, the fact that the load changes its position constantly leads to a non-zero rate of change of the temperature field ($\partial\Delta T(y,t)/\partial t \neq 0$) and a constant influence of the thermal pressurization term on the pore fluid pressure profile. Moreover, because the thermal load changes its position, temperature does not have time to accumulate in one point and provoke a pressure increase that eliminates fault friction. Instead, fault friction reaches a non-zero plateau (see Figure 8). This is an important result since it directly influences the dissipation energy produced during seismic slip.
Moreover, we examined the influence of the velocity of the strain localization (moving thermal load) on the frictional evolution. Based on our analyses, we established that faster traveling shear bands have a smoother stress drop at the first stages of the analysis and reach a higher plateau of frictional strength, see Figure 10. When the velocity of the shear band tends to zero we retrieve the solution described in Rice (2006a), as expected.
Next, a traveling instability was applied to a bounded domain with homogeneous Dirichlet boundary conditions. Again the results show that the frictional strength of the fault reaches a plateau and is not fully recovered as in the case of a stationary instability (see Figure 13). The reason is the change of the position of the thermal load during the analysis and the subsequent change of the temperature profile, leading to a non-attenuating thermal pressurization phenomenon. Again the plateau reached differs based on the traveling velocity of the shear band v, which ranges in the order of 20 ∼ 50 mm/s according to the numerical analyses of Part I, Stathas and Stefanou (2022). In this case, it is shown that, in contrast to the case of a stationary thermal load on the bounded domain, the fault never recovers entirely its frictional strength since the effects of thermal pressurization never cease.
The results presented above clearly show a strong dependence of the fault's frictional behavior on both the fault gouge boundary conditions and the strain localization mode (traveling or stationary PSZ) introduced into the medium. These results can be used as a preliminary model in order to evaluate qualitatively the results obtained by numerical analyses taking into account the microstructure of the fault gouge material, where discerning between the effects of the different mechanisms affecting the frictional response of a fault undergoing thermal pressurization is more involved. The results of the fully non-linear numerical analyses with the Cosserat micromorphic continuum of Part I agree qualitatively with the results of the linear model of this paper. This indicates that the driving cause behind the obtained results is the diffusion from the thermal and hydraulic couplings. The microstructure follows to a lesser extent. Its use in the solution of the BVP presented in Part I (see Stathas and Stefanou (2022)) is required in order for the dissipation and the meta-stable frictional response of the fault gouge to be calculated correctly, excluding mesh dependency from the numerical results.
In conclusion, our results show that for typical values of seismic slip δ and seismic slip velocity $\dot{\delta}$, the effects of the boundaries of the fault gouge cannot be ignored. This means that those effects need to be accounted for in both numerical analyses and laboratory experiments. The influence of different kinds of boundary conditions needs to be studied. The introduction of a traveling (flutter-type) strain localization mode is an important aspect of our model. Its presence increases the frequency content of the earthquake and it prevents the bounded fault gouge from fully recovering its frictional shear strength due to the diffusion at the boundaries. Furthermore, it contributes to keeping the temperatures inside the fault gouge smaller than in the stationary cases. The existence of oscillations and the reduction of the peak residual frictional strength are also important in understanding the transition from stable to unstable seismic slip and subsequent fault nucleation (see Rempel and Rice, 2006; Rice, 1973, 2006a; Viesca and Garagash, 2015, among others). Furthermore, the existence of non-zero upper and lower bounds in the fault's frictional behavior (τ_min, τ_res) has serious implications for any attempt at controlling the transition from stable (aseismic) to unstable (coseismic) slip (see Stefanou, 2019; Stefanou and Tzortzopoulos, 2020; Tzortzopoulos, 2021).
where q(y, t) is the unknown function (e.g. the temperature T (y, t)), f i , i = 1, 2 are the values of the general Robin boundary conditions with coefficients (a i , b i , i = 1, 2), I(y) is the initial condition and g(y, t) is the loading function (here related to frictional dissipation). We denote by c the diffusivity and by k the conductivity of the material.
where $C = \frac{f \Lambda \dot{\delta}}{\rho C (c_{hy} - c_{th})}$. Due to the concentrated nature of the thermal load (Dirac distribution), the integral equation (A.9) can be brought to its final form:

$\tau(t) = f(\sigma_n - p_0) - C \int_0^t \tau(t')\, G^{*}(y, t; y', t', c_{hy}, c_{th})\Big|_{y = y'(t')}\, dt'$   (A.10)
The above integral equation is a linear Volterra integral equation of the second kind (Wazwaz, 2011). We note here that this equation is valid only at the position of the yielding plane, which has to coincide with the position of the maximum pressure inside the layer (y = y'(t')). This has been proven to hold true for the cases present on the unbounded domain (see Appendix ??). In the case of a bounded fault gouge under the influence of a traveling PSZ (thermal load) this is true only in regions away from the boundary. Nevertheless, the difference between the position of the traveling thermal load and that of the P_max is small (see also Figure 16).
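To make the structure of equation (A.10) concrete, the short sketch below solves a generic linear Volterra integral equation of the second kind by a simple product trapezoidal rule on a uniform time grid. The kernel and the constant loading used here are placeholders (an exponential kernel standing in for G*, a unit loading), not the actual thermo-hydraulic kernel of the paper, and all function names are ours.

    import numpy as np

    def solve_volterra_second_kind(kernel, g, t_grid, C=1.0):
        # Solve tau(t) = g(t) - C * int_0^t kernel(t, s) tau(s) ds
        # on a uniform grid by the product trapezoidal rule.
        n = len(t_grid)
        h = t_grid[1] - t_grid[0]
        tau = np.empty(n)
        tau[0] = g(t_grid[0])
        for i in range(1, n):
            w = np.full(i + 1, h)          # trapezoidal weights
            w[0] = w[-1] = 0.5 * h
            K = kernel(t_grid[i], t_grid[:i + 1])
            # move the unknown tau(t_i) to the left-hand side and solve for it
            rhs = g(t_grid[i]) - C * np.dot(w[:-1] * K[:-1], tau[:i])
            tau[i] = rhs / (1.0 + C * w[-1] * K[-1])
        return tau

    # illustrative kernel and loading only
    kernel = lambda t, s: np.exp(-(t - s))
    g = lambda t: 1.0
    t = np.linspace(0.0, 2.0, 201)
    tau = solve_volterra_second_kind(kernel, g, t, C=0.5)

The same "march forward in time and solve for the newest unknown" structure underlies the spectral collocation approach of Appendix C, which replaces the low-order quadrature with Chebyshev machinery.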
Isolating each eigenfunction sin λ_n y we arrive at the following first-order linear differential equations involving the unknown coefficient $\bar{p}_n(t)$ and the loading coefficient $\bar{T}_n(t)$ for each particular component of the solution series expansion:

$\frac{\partial \bar{p}_n(t)}{\partial t} + c_{hy}\,\lambda_n^2\,\bar{p}_n(t) = \frac{2\Lambda}{H \rho C}\,\frac{\partial \bar{T}_n(t)}{\partial t}, \quad t \ge 0.$   (B.5)

In dimensionless form, the integral equation is given as:

$\bar{\tau}(\bar{t}) = 1 - C \int_0^{\bar{t}} \bar{\tau}(\bar{t}')\,\bar{G}^{*}(\bar{y}, \bar{t}; \bar{y}', \bar{t}')\Big|_{\bar{y} = \bar{y}'}\, d\bar{t}', \quad \bar{t} \in [0,\, T c_{th}/H^2],$   (C.1)

where $\bar{T} = T c_{th}/H^2$ is the final normalized time, and $\bar{\tau}(\bar{t})$ is the unknown function. We begin by performing a change of variables from $\bar{t} \in [0, \bar{T}]$ to $\bar{z} \in [-1, 1]$. The change of variables reads:

$\bar{t} = \bar{T}\,\frac{1 + \bar{z}}{2}, \qquad \bar{z} = \frac{2\bar{t}}{\bar{T}} - 1.$
The Volterra integral equation can then be written:

$U(\bar{z}) = 1 - C \int_0^{\bar{T}\frac{1+\bar{z}}{2}} \bar{\tau}(\bar{t}')\,\bar{G}^{*}\big(\bar{y}, \bar{T}\tfrac{1+\bar{z}}{2}; \bar{y}', \bar{t}'\big)\Big|_{\bar{y} = \bar{y}'}\, d\bar{t}', \quad \bar{z} \in [-1, 1],$   (C.2)
where $U(\bar{z}) = \bar{\tau}\big(\bar{T}\tfrac{1+\bar{z}}{2}\big)$. In order for the collocation method solution to converge exponentially we require that both the integral equation (C.2) and the integral inside it are expressed over the same interval $[-1, 1]$. The rescaled kernel reads $K(\bar{y}, \bar{z}; \bar{y}', s) = \frac{\bar{T}}{2}\,\bar{G}^{*}\big(\bar{y}, \tfrac{\bar{T}}{2}(\bar{z}+1); \bar{y}', \tfrac{\bar{T}}{2}(s+1)\big)$. Next, we set the N + 1 collocation points $\bar{z}_i \in [-1, 1]$ and corresponding weights $\omega_i$ according to the Clenshaw-Curtis quadrature formula. The integral equation (C.4) then becomes:

$U_i + C\,\frac{1 + \bar{z}_i}{2} \sum_{p=0}^{N} K(\bar{y}, \bar{z}_i; \bar{y}', s(\bar{z}_i, \theta_p))\Big|_{\bar{y} = \bar{y}'}\, U(s(\bar{z}_i, \theta_p))\,\omega_p = 1, \quad i \in [0, N]$   (C.6)
In order to apply the collocation method according to the Clenshaw-Curtis quadrature, we express the solution U (s(z i , θ p )) with the help of Lagrange interpolation polynomials P j (s(z i , θ p )) as a series:
$U(s(\bar{z}_i, \theta_p)) \approx \sum_{j=0}^{N} U_j\, P_j(s(\bar{z}_i, \theta_p)),$
$U_i + \frac{1 + \bar{z}_i}{2} \sum_{j=0}^{N} U_j \sum_{p=0}^{N} K(\bar{y}, \bar{z}_i; \bar{y}', s(\bar{z}_i, \theta_p))\Big|_{\bar{y} = \bar{y}'}\, P_j(s(\bar{z}_i, \theta_p))\,\omega_p = 1, \quad i \in [0, N]$   (C.7)
where $P_j(s(\bar{z}_i, \theta_p))$, $j = 0, \ldots, N$, have been defined in the main text (see equations (21) and (22)). In order to ensure an exponential degree of convergence, we choose the set of Gauss-Chebyshev quadrature points for the numerical evaluation of the integral, $\{\theta_j\}_{j=0}^{N}$, to coincide with the set of collocation points $\{\bar{z}_j\}_{j=0}^{N}$, where the integral equation is evaluated. Rearranging the terms and applying Einstein's summation over repeated indices yields the system of algebraic equations:
$(\delta_{ij} + A_{ij})\, U_j = g(\bar{z}_i),$   (C.8)

where $A_{ij} = \frac{1 + \bar{z}_i}{2} \sum_{p=0}^{N} K(\bar{y}, \bar{z}_i; \bar{y}', s(\bar{z}_i, \theta_p))\Big|_{\bar{y} = \bar{y}'}\, P_j(s(\bar{z}_i, \theta_p))\,\omega_p$, $g(\bar{z}_i) = 1$, and $U_j$ are the unknown quantities. Because Lagrange interpolation was assumed, the interpolation coefficients $U_j$ calculated at each $\bar{z}_j$ are also the values of the interpolation at $\bar{z}_j$.
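A compact numerical sketch of this construction for a regular kernel is given below: Clenshaw-Curtis points and weights are generated, the matrix A_ij is assembled with Lagrange cardinal polynomials through the collocation points, and the resulting linear system of the form (C.8) is solved. The kernel is an illustrative exponential rather than the thermo-hydraulic kernel Ḡ*, the constant multiplying the integral term is kept as an explicit parameter, and all names are ours.

    import numpy as np

    def clencurt(n):
        # Clenshaw-Curtis nodes and quadrature weights on [-1, 1] (n + 1 points)
        theta = np.pi * np.arange(n + 1) / n
        x = np.cos(theta)
        w = np.zeros(n + 1)
        v = np.ones(n - 1)
        if n % 2 == 0:
            w[0] = w[n] = 1.0 / (n**2 - 1)
            for k in range(1, n // 2):
                v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
            v -= np.cos(n * theta[1:-1]) / (n**2 - 1)
        else:
            w[0] = w[n] = 1.0 / n**2
            for k in range(1, (n - 1) // 2 + 1):
                v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
        w[1:-1] = 2.0 * v / n
        return x, w

    def cardinal(nodes, j, s):
        # Lagrange cardinal polynomial through the collocation nodes, evaluated at s
        p = np.ones_like(s, dtype=float)
        for m, xm in enumerate(nodes):
            if m != j:
                p *= (s - xm) / (nodes[j] - xm)
        return p

    def solve_volterra_collocation(kernel, n=16, C=1.0):
        # Collocation solution of U(z) = 1 - C * int_{-1}^{z} K(z, s) U(s) ds
        z, w = clencurt(n)
        A = np.zeros((n + 1, n + 1))
        for i, zi in enumerate(z):
            s = 0.5 * (1.0 + zi) * (z + 1.0) - 1.0   # map the nodes to s in [-1, z_i]
            Kv = kernel(zi, s)
            for j in range(n + 1):
                A[i, j] = 0.5 * (1.0 + zi) * np.sum(Kv * cardinal(z, j, s) * w)
        U = np.linalg.solve(np.eye(n + 1) + C * A, np.ones(n + 1))
        return z, U

    z, U = solve_volterra_collocation(lambda z, s: np.exp(-(z - s)), n=16, C=0.5)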
$\int_{-1}^{1} \frac{T_i(\bar{z})\, T_j(\bar{z})}{\sqrt{1 - \bar{z}^2}}\, d\bar{z} = 1$ for $i = j$ and $0$ for $i \neq j$,   (C.9)

Moreover, due to the change in the evaluation set, $\{\bar{z}_i\}$, the formula for the calculation of the Lagrange interpolation is given by:

$U(s(\bar{z}, \theta_p)) = \sum_{j=0}^{N} U(\bar{z}_j)\, F_j(s(\bar{z}, \theta_p))$   (C.10)

where $F_j(s(\bar{z}, \theta_p))$ are the Lagrange cardinal polynomials. We note that the formula for the Lagrange cardinal polynomials changes due to the change of the interpolation nodes $\{\bar{z}_j\}_{j=0}^{N}$. Taking advantage of the orthogonality condition, the cardinal polynomials $F_j(s(\bar{z}, \theta_p))$ are given by:

$F_j(s(\bar{z}_i, \theta_p)) = \sum_{p=0}^{N} \alpha_{p,j}\, T_p(s(\bar{z}_i, \theta_p)),$   (C.11)

where, again due to orthogonality, $\alpha_{p,j}$ is given by:

$\alpha_{p,j} = T_p(\bar{z}_j)\,\omega_j / \gamma_p,$   (C.12)

$\gamma_p = \pi$ for $p = 0$ and $\gamma_p = \pi/2$ for $p \neq 0$.   (C.13)
The final discretized form of the integral equation (15) is then given by:
$U_i + \frac{1 + \bar{z}_i}{2} \sum_{j=0}^{N} U_j \sum_{p=0}^{N} K(\bar{y}, \bar{z}_i; \bar{y}', s(\bar{z}_i, \theta_p))\Big|_{\bar{y} = \bar{y}'}\, F_j(s(\bar{z}_i, \theta_p))\,\sqrt{1 - \theta_p^2}\;\omega_p = 1, \quad i \in [0, N].$   (C.14)
We note here that the term $\sqrt{1 - \theta_p^2}$ accounts for the weight function present in the orthogonality condition.
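As a small illustration of this point, the sketch below uses Gauss-Chebyshev nodes and weights (for the weight 1/√(1−x²)) and inserts the √(1−θ_p²) factor so as to approximate an ordinary integral; the integrand is a placeholder chosen only so the result can be checked in closed form.

    import numpy as np

    def gauss_chebyshev(n):
        # nodes (zeros of T_{n+1}) and weights for int_{-1}^{1} f(x) / sqrt(1 - x^2) dx
        p = np.arange(n + 1)
        theta = np.cos((2 * p + 1) * np.pi / (2 * (n + 1)))
        w = np.full(n + 1, np.pi / (n + 1))
        return theta, w

    # ordinary integral of f over [-1, 1]: insert the sqrt(1 - theta^2) factor
    f = lambda x: np.exp(x)
    theta, w = gauss_chebyshev(20)
    approx = np.sum(w * np.sqrt(1.0 - theta**2) * f(theta))
    exact = np.exp(1.0) - np.exp(-1.0)
    print(approx, exact)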
Figure 1: Comparative normalized friction τ̄ - displacement δ results. The purple-square curve presents the frictional response of the established thermal pressurization model in the case of slip on a mathematical plane under isothermal drained boundary conditions lying at infinity.

Figure 2: The established model of thermal pressurization: the values of pressure and temperature are prescribed at infinity. The bodies outside the fault gouge (red color) are considered rigid. Deformation localizes on a mathematical plane and the PSZ coincides with the fault gouge.

Figure 3: Left: τ - δ response of the layer for different slip velocities δ̇ applied. Due to the constant isothermal drained conditions at the boundary near infinity the solution tends asymptotically to the zero steady-state solution. For different values of the velocity δ̇, the analytical solution is presented by a continuous line and the numerical solution by the triangle markers. The numerical solution obtained by the SCLM method coincides with the analytical curves.

Figure 4: Temperature ∆T and pore fluid pressure ∆P fields along the height of the layer for shearing velocity δ̇ = 1 m/s, at different times during the analysis. The numerical solution is consistent with the analytical observation that the position of ∆P_max coincides with the position of the stationary strain localization.

Figure 10: Frictional response τ - δ of the layer for different velocities v of the traveling PSZ. For low traveling velocities the response tends to the behavior of stationary slip on a mathematical plane. As the traveling velocity increases, the drop in friction becomes smaller.

Figure 11: Temperature ∆T and pore fluid pressure ∆P fields along the height of the layer for shearing velocity δ̇ = 1 m/s, at different times during the analysis. The numerical solution is consistent with the analytical observation that the position of ∆P_max coincides with the position of the stationary strain localization.

Figure 12: Schematic representation of a fault gouge of height H = 1 mm, under seismic slip δ. The PSZ (red line) is allowed to travel in a region of thickness h = 0.6 mm according to the numerical results of Part I, Stathas and Stefanou (2022). The PSZ is moving periodically inside the region h with velocity v.

Figure 15: Comparison of the τ - δ frictional response between a moving periodic strain localization on a bounded domain and a stationary strain localization (PSZ) on an unbounded domain. The influence of the boundary conditions is noticeable from the initial stages of the coseismic slip (δ ≈ 10 mm).
(C.2) and the integral inside (C.2) are expressed inside the same interval $[-1, 1]$. To do this, first we change the integration bounds from $\bar{t}' \in [0, \bar{T}\frac{1+\bar{z}}{2}]$ to $s \in [-1, \bar{z}]$:

$U(\bar{z}) = 1 - C \int_{-1}^{\bar{z}} K(\bar{y}, \bar{z}; \bar{y}', s)\Big|_{\bar{y} = \bar{y}'}\, U(s)\, ds, \quad \bar{z} \in [-1, 1],$   (C.3)
(C.3) must hold at each $\bar{z}_i$:

$U(\bar{z}_i) = 1 - C \int_{-1}^{\bar{z}_i} K(\bar{y}, \bar{z}_i; \bar{y}', s)\Big|_{\bar{y} = \bar{y}'}\, U(s)\, ds, \quad i \in [0, N],$   (C.4)

The main hindrance in solving equation (C.4) accurately is the calculation of the integral with variable integration bounds. For small values of $\bar{z}_i$, the quadrature provides little information for $U(s)$. We handle this difficulty by yet another variable change, where we transfer the integration variable $s \in [-1, \bar{z}_i]$ to $\theta \in [-1, 1]$ via the transformation:
Parameters   Values   Properties        Parameters   Values   Properties
f            0.5      -                 Λ            2.216    MPa/°C
σ_n          200      MPa               ρC           2.8      MPa/°C
P_0          66.67    MPa               c_hy         10       mm²/s
H            1        mm                c_th         1        mm²/s

Table 1: Material parameters of a mature fault at the seismogenic depth (see Rice, 2006b; Rattez et al., 2018).

4.1. Stationary strain localization mode
4.1.1. Stationary strain localization on an unbounded domain
The solutions for the temperature field on an infinite layer under a stationary point source thermal load
were first derived in
AcknowledgmentsThe authors would like to acknowledge the support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement no. 757848 CoQuake).Appendix A. The coupled Thermo-hydraulic problem and is solution Appendix A.1. Problem description We have already discussed that we are interested in the limiting case where the position and profile of the PSZ can be prescribed inside the fault gouge. Knowing the form of the profile of the shear plastic strainrate inγ p (y, t) (equation(2)), the two way coupled problem in the form of temperature and pressure diffusion equations is given by: where ∆P (y, t) is the unknown pressure difference between the fault gouge layer and the boundaries, while P 0 = P (y, 0) is the initial pore fluid pressure, that is kept constant at the boundaries of the fault gouge (drained boundary conditions).We note that the above formulations are also valid in the case of an unbounded domain considering H → ±∞. The pressure problem affects also the temperature BVP through the value of shear stress (fault friction), τ (t), in the yielding region. According to the Mohr-Coulomb yield criterion, subtracting the initial ambient pore fluid pressure P 0 we get:We note here that once we know the form of the plastic strain-rate profileγ p (y, t) as in equation (A.1) the only unknown is the fault friction τ (t). We can find the solution of the temperature equation T (y, t) in terms of the unknown fault friction τ (t) and replace into the pressure equation, which can then also be described as an unknown function of friction. Finally, we can define the value of fault friction τ (t) by inserting the pressure increase solution ∆P (y, t) into the material equation (A.3) and solving for τ (t). The above equations have constant coefficients and since the loading is prescribed (based on the unknown τ (t)), the system has been transformed to a one-way coupled set of linear differential 1D diffusion equations of the form:∂q(y, t) ∂t = c ∂ 2 q(y, t) ∂x 2 + 1 k g(y, t), a 1 ∂u ∂n 1 y=0 + b 1 q y=0 (t) = f 1 (t), t > 0, a 2 ∂q ∂n 2 y=H + b 2 u y=H (t) = f 2 (t), t > 0, q(y, 0) = I(y), (A.4)Appendix A.2. Fundamental solutionWe can find the solution to the above BVP by application of the Green's theorem, which for the general diffusion case in 1D reads (seeCole et al. (2010)):where G(y, t; y , t , c) is the appropriate Green's function. The first two terms correspond to the initial condition I(y, 0) and the loading term g(y, t) respectively. The terms α, k represent the diffusivity and the conductivity of the unknown quantity q(y, t) respectively. The third term is important for non homogeneous Neumann and Robin boundary conditions while the fourth term refers to non homogeneous Dirichlet boundary conditions. In what follows the last two terms in equation (A.5) are omitted due to the existence of homogeneous Dirichlet boundary conditions in the problems of temperature and pressure difference diffusion at hand.Applying the solution in terms of the Green's function (A.5) to problems (A.1),(A.2) we obtain the solution in terms of the Green's function specific to each diffusion problem.where c th , k T are the thermal diffusivity, conductivity pair and c hy , k H are their hydraulic counterparts. 
Similarly (g T , g H ) are the loading functions, while G(y, t; y , t , c) is the Green's function kernel for the thermal (c = c th ) and pressure (c = c hy ) diffusion problems respectively.In the case of the coupled pressure problem (A.2) with the temperature as a loading function, we are interested in rewriting the system's response with the help of the dissipative loading ( 1 ρC τ (t)γ p ) of the temperature equation (A.1). This way we can connect the pressure response P (y, t) to the fault friction τ (t) which is the main unknown. We can do this by replacing in the expression of T (y, t) in the pressure diffusion equation (A.2) the temperature impulse response of equation (A.1) due to a impulsive (Dirac) thermal load. This way the response obtained from the pressure diffusion equation is a Green's function kernel that contains the influence of an impulse thermal load (see Appendix Appendix B for detailed derivation in the cases of 1) a bounded domain for a stationary impulsive thermal load and 2) an unbounded domain subjected to a moving impulsive thermal load). The pressure solution can then be written as:where G (y, t; y , t , c hy , c th ) is the Green's function kernel of the pressure equation (A.2) containing the influence of an impulse thermal load from the temperature equation (A.1).Having found the pressure solution P (y, t) as a function of g T we can then replace (A.8) into the material description equation (A.3). For the case of 1D shear τ under constant normal load σ n that we will consider throughout this paper, the material law is transformed into the integral equation:Appendix B. Derivation of the coupled pore fluid pressure diffusion kernel.In this appendix we derive the coupled pore fluid pressure diffusion kernel for the cases of a bounded domain subjected to a stationary Dirac load and an unbounded domain under a moving Dirac load. Our procedure follows the discussion inLee and Delaney (1987)where the same problem was solved for a stationary Dirac thermal load on an unbounded domain.Appendix B.1. Stationary thermal load, coupled pore fluid pressure Green's kernel for a bounded domain.In the case of the bounded domain we proceed by applying the method of separation of variables and then expanding the solution to a Fourier series. We note here that the coupled system of pressure and temperature diffusion equations have the same form of linear partial differential operators and boundary conditions and therefore their solution belongs to the same space of Sturn-Liouville problems. We note here that the homogeneous pressure diffusion partial deifferential equation on the above bounded domain has the same boundary conditions. Therefore, the pore fluid pressure solution can be written with the same eigenfunctions as above. Replacing the pore fluid pressure eigenfunction expansion ∆P (x, t) = P (y, t) − P 0 = ∞ i=np n sin nπx H into the coupled pressure diffusion partial differential equation,we obtain: Applying the inverse of the Laplace transform gives us:Finally, in the series expansion ∆P (y, t) = ∞ n=1p n (t) sin λ n y we move the summation under the integral sign and we obtain:sin λ n y sin λ n y dy dt .(B.9)We recognize the term in the second line of equation (B.9) as the Green's function kernel of the coupled pressure diffusion partial differential equation. This expression has the added advantage that the influence of the thermal load on the pressure ∆P (y, t) = P (y, t) − P 0 solution is straightforward. 
Noticing that for a general diffusion problem on a bounded domain under homogeneous Dirichlet boundary conditions the Green's function kernel is given by: The Green's function kernel of the coupled pressure differential equation on the bounded domain is then given as:G X11 (y, t; y , t , c th , c hy ) = c hy G X11 (y, t; y , t , c hy ) − c th G X11 (y, t; y , t , c th ) c hy − c th . (B.11)Finally, the pressure solution can be given as:This result agrees with the formula provided inLee and Delaney (1987);Rice (2006a)for the unbounded domain.Appendix B.2. Moving thermal load, coupled pore fluid pressure Green's kernel for a unbounded domain.Here, we present the derivation of the Green's function kernel of the coupled pressure diffusion equation for an unbounded domain under moving thermal load. Note here, that the Green's function kernel is independent of the type of loading (stationary or moving), it depends on the kind of the differential operator and the boundary conditions. What differs here in the form of the Green's function kernel is the velocity dependence, since we want to connect the pressure evolution not with the stationary Green's function but with the moving Dirac thermal load, that can be written as g(y, t) =δ ρC τ (t)δ(y − vt).In essence we need to only prescribe the velocity dependence of x = f (v, t ) in the Green's function kernel for the unbounded domain under Dirichlet conditions G X00 (y, t; y , t , c hy , c th ). We provide a full description and then compare the results. The coupled system of temperature and pore fluid pressure diffusion equations in the unbounded domain is given by:∆T (y, 0) = 0, lim ∆T (y, t) y=−∞,y=∞ = 0, ∆P (y, 0) = 0, lim ∆P (y, t) y=−∞,x=∞ = 0 (B.13)To account for the moving load we perform a change of variables on the original system (B.13), setting ξ = y − vt, η = t so that we attach a frame of reference to the moving load. In this case and by suitable application of the chain rule we can write:Applying a Fourier transform in space and a Laplace transform in time on the system of partial differential equations (B.14) we obtain:Solving the above algebraic system (B.15) we obtain:Inverting the Laplace and then the Fourier transform yields:dt .By inspection we note that these are the same expressions as the ones presented in (9), where y was replaced by y = vt and c = c th , or c = c hy respectively.Appendix C. Collocation MethodologyAppendix C.1. Regular kernels In order to apply the collocation methodology to the linear Volterra integral equation of the second kind,, we make use of the collocation methodology described inTang et al. (2008). The integral equationAppendix C.2. Singular kernelsWhen the kernel of the integral equation(15)involves a singularity (see equation(16)), we cannot use the Clenshaw-Curtis quadrature rule in its original form because the quadrature requires the values of the function at a position, where the kernel evaluates to infinity. For this reason a different quadrature strategy needs to be implemented. Here, based on the work ofTang et al. (2008)we apply the Gauss-Chebyshev quadrature rule. This quadrature rule involves the values of the function at the zeros of the N -th degree Chebyshev polynomial of the first kind {z i }. The quadrature can then be successfully calculated, because the new set of integration points, {z i }, does not involve the ends of the interval [-1,1]. 
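The bounded-domain kernels discussed above (equations (B.10)-(B.11)) can be evaluated directly from their eigenfunction series. The sketch below implements the standard Dirichlet diffusion kernel on 0 ≤ y ≤ H as a truncated sine series (assumed here to be the explicit form of the X11 kernel of Cole et al. (2010)) and combines the hydraulic and thermal kernels into the coupled pressure kernel of (B.11). The truncation length is an illustrative choice and the diffusivities follow Table 1; all names are ours.

    import numpy as np

    H, c_th, c_hy = 1.0, 1.0, 10.0    # mm, mm^2/s (Table 1)

    def G_X11(y, t, yp, tp, c, n_terms=200):
        # Dirichlet Green's function on [0, H] as a truncated eigenfunction series
        n = np.arange(1, n_terms + 1)
        lam = n * np.pi / H
        return (2.0 / H) * np.sum(np.sin(lam * y) * np.sin(lam * yp)
                                  * np.exp(-c * lam**2 * (t - tp)))

    def G_star_X11(y, t, yp, tp):
        # coupled pressure kernel of eq. (B.11): weighted difference of the
        # hydraulic and thermal diffusion kernels
        return (c_hy * G_X11(y, t, yp, tp, c_hy)
                - c_th * G_X11(y, t, yp, tp, c_th)) / (c_hy - c_th)

    print(G_star_X11(y=0.5, t=0.2, yp=0.5, tp=0.0))

A kernel of this form, convolved in time with the frictional heat source, is what enters the Volterra equation (A.10) in the bounded-domain case.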
However, since the Chebyshev polynomials of the first kind were used, we need to take into account the specific weight function $w(\bar{z}) = 1/\sqrt{1 - \bar{z}^2}$ under which the Chebyshev polynomials of the first kind are orthogonal on the interval [-1,1], namely:
Scaling law of seismic spectrum. K Aki, 10.1029/jz072i004p01217doi:10. 1029/jz072i004p01217Journal of Geophysical Research. 72Aki, K., 1967. Scaling law of seismic spectrum. Journal of Geophysical Research 72, 1217-1231. doi:10. 1029/jz072i004p01217.
Rupture dynamics with energy loss outside the slip zone. D J Andrews, 10.1029/2004JB003191Journal of Geophysical Research: Solid Earth. 110Andrews, D.J., 2005. Rupture dynamics with energy loss outside the slip zone. Journal of Geophysical Research: Solid Earth 110, 1-14. doi:10.1029/2004JB003191.
Fractures, faults, and hydrocarbon entrapment, migration and flow. Marine and petroleum geology. A Aydin, 17Aydin, A., 2000. Fractures, faults, and hydrocarbon entrapment, migration and flow. Marine and petroleum geology 17, 797-814.
Thermal Pressurization Weakening in Laboratory Experiments. N Z Badt, T E Tullis, G Hirth, D L Goldsby, 10.1029/2019JB018872doi:10.1029/ 2019JB018872Journal of Geophysical Research: Solid Earth. 125Badt, N.Z., Tullis, T.E., Hirth, G., Goldsby, D.L., 2020. Thermal Pressurization Weakening in Lab- oratory Experiments. Journal of Geophysical Research: Solid Earth 125, 1-21. doi:10.1029/ 2019JB018872.
On localization modes in coupled thermo-hydro-mechanical problems. A Benallal, 10.1016/j.crme.2005.05.005Comptes Rendus Mécanique. 333Benallal, A., 2005. On localization modes in coupled thermo-hydro-mechanical problems. Comptes Rendus Mécanique 333, 557-564. doi:https://doi.org/10.1016/j.crme.2005.05.005.
Perturbation growth and localization in fluid-saturated inelastic porous media under quasi-static loadings. A Benallal, C Comi, 10.1016/S0022-5096(02)00143-6doi:10. 1016/S0022-5096(02Journal of the Mechanics and Physics of Solids. 51Benallal, A., Comi, C., 2003. Perturbation growth and localization in fluid-saturated inelastic porous media under quasi-static loadings. Journal of the Mechanics and Physics of Solids 51, 851-899. doi:10. 1016/S0022-5096(02)00143-6.
chebyshev and fourier spectral methods. J Boyd, DoverNew YorkBoyd, J., 2006. 2000. chebyshev and fourier spectral methods. Dover, New York. .
High-velocity frictional properties of a clay-bearing fault gouge and implications for earthquake mechanics. N Brantut, A Schubnel, J N Rouzaud, F Brunet, T Shimamoto, 10.1029/2007JB005551Journal of Geophysical Research: Solid Earth. 113Brantut, N., Schubnel, A., Rouzaud, J.N., Brunet, F., Shimamoto, T., 2008. High-velocity frictional properties of a clay-bearing fault gouge and implications for earthquake mechanics. Journal of Geo- physical Research: Solid Earth 113, 1-18. doi:10.1029/2007JB005551.
The Qualitative Theory of Ordinary Differential Equations: An Introduction. F Brauer, J Nohel, Dover PublicationsNew YorkBrauer, F., Nohel, J., 1969. The Qualitative Theory of Ordinary Differential Equations: An Introduction. Dover Publications, New York.
Complex variables and applications. J W Brown, R V Churchill, McGraw-Hill Higher EducationBostonBrown, J.W., Churchill, R.V., et al., 2009. Complex variables and applications. Boston: McGraw-Hill Higher Education.
Tectonic stress and the spectra of seismic shear waves from earthquakes. J N Brune, 10.1029/jb075i026p04997J Geophys Res. 75Brune, J.N., 1970. Tectonic stress and the spectra of seismic shear waves from earthquakes. J Geophys Res 75, 4997-5009. doi:10.1029/jb075i026p04997.
Conduction of heat in solids. H S Carslaw, J C Jaeger, Clarendon PressTechnical ReportCarslaw, H.S., Jaeger, J.C., 1959. Conduction of heat in solids. Technical Report. Clarendon Press.
Operational mathematics. R V Churchill, Churchill, R.V., 1972. Operational mathematics .
Heat conduction using Greens functions. K Cole, J Beck, A Haji-Sheikh, B Litkouhi, CRC PressCole, K., Beck, J., Haji-Sheikh, A., Litkouhi, B., 2010. Heat conduction using Greens functions. CRC Press.
A cosserat breakage mechanics model for brittle granular media. N A Collins-Craft, I Stefanou, J Sulem, I Einav, Journal of the Mechanics and Physics of Solids. 103975Collins-Craft, N.A., Stefanou, I., Sulem, J., Einav, I., 2020. A cosserat breakage mechanics model for brittle granular media. Journal of the Mechanics and Physics of Solids , 103975.
Chebyshev spectral solution of nonlinear Volterra-Hammerstein integral equations. G N Elnagar, M Kazemi, 10.1016/S0377-0427(96)00098-2doi:10.1016/ S0377-0427Journal of Computational and Applied Mathematics. 7696Elnagar, G.N., Kazemi, M., 1996. Chebyshev spectral solution of nonlinear Volterra-Hammerstein in- tegral equations. Journal of Computational and Applied Mathematics 76, 147-158. doi:10.1016/ S0377-0427(96)00098-2.
Iterative solution of Volterra integral equations using Clenshaw-Curtis quadrature. G A Evans, J Hyslop, A P Morgan, 10.1016/0021-9991(81)90199-6doi:10.1016/ 0021-9991Journal of Computational Physics. 4081Evans, G.A., Hyslop, J., Morgan, A.P., 1981. Iterative solution of Volterra integral equations us- ing Clenshaw-Curtis quadrature. Journal of Computational Physics 40, 64-76. doi:10.1016/ 0021-9991(81)90199-6.
How rotational vortices enhance transfers. D Griffani, P Rognon, B Metzger, I Einav, Physics of Fluids. 2593301Griffani, D., Rognon, P., Metzger, B., Einav, I., 2013. How rotational vortices enhance transfers. Physics of Fluids 25, 093301.
Fault friction during simulated seismic slip pulses. C Harbord, N Brantut, E Spagnuolo, G D Toro, Harbord, C., Brantut, N., Spagnuolo, E., Toro, G.D., 2021. Fault friction during simulated seismic slip pulses .
The physics of earthquakes. H Kanamori, E E Brodsky, 10.1088/0034-4885/67/8/r03Reports on Progress in Physics. 67Kanamori, H., Brodsky, E.E., 2004a. The physics of earthquakes. Reports on Progress in Physics 67, 1429-1496. URL: https://doi.org/10.1088%2F0034-4885%2F67%2F8%2Fr03, doi:https://doi. org/10.1088/0034-4885/67/8/r03.
The physics of earthquakes. H Kanamori, E E Brodsky, 10.1088/0034-4885/67/8/R03Reports on Progress in Physics. 67Kanamori, H., Brodsky, E.E., 2004b. The physics of earthquakes. Reports on Progress in Physics 67, 1429-1496. doi:10.1088/0034-4885/67/8/R03.
Energy partitioning during an earthquake. H Kanamori, L Rivera, 10.1029/170GM03Geophysical Monograph Series. 170Kanamori, H., Rivera, L., 2006. Energy partitioning during an earthquake. Geophysical Monograph Series 170, 3-13. doi:10.1029/170GM03.
Efficient computation of chebyshev polynomials in computer algebra. W Koepf, Computer Algebra Systems: A Practical GuideKoepf, W., 1999. Efficient computation of chebyshev polynomials in computer algebra. Computer Algebra Systems: A Practical Guide , 79-99.
Frictional heating, fluid pressure, and the resistance to fault motion. A H Lachenbruch, 10.1029/JB085iB11p06097Journal of Geophysical Research: Solid Earth. 85Lachenbruch, A.H., 1980. Frictional heating, fluid pressure, and the resistance to fault motion. Journal of Geophysical Research: Solid Earth 85, 6097-6112. doi:https://doi.org/10.1029/JB085iB11p06097.
Frictional heating and pore pressure rise due to a fault slip. T C Lee, P T Delaney, 10.1111/j.1365-246X.1987.tb01647.xGeophysical Journal of the Royal Astronomical Society. 88Lee, T.C., Delaney, P.T., 1987. Frictional heating and pore pressure rise due to a fault slip. Geophysical Journal of the Royal Astronomical Society 88, 569-591. doi:10.1111/j.1365-246X.1987.tb01647.x.
On the problem of stability of motion. Stability of Motion. A Lyapunov, 30Lyapunov, A., 1893. On the problem of stability of motion. Stability of Motion 30, 123-127.
Pore-fluid pressures and frictional heating on a fault surface. C W Mase, L Smith, 10.1007/BF00874618Pure and Applied Geophysics PAGEOPH. 122Mase, C.W., Smith, L., 1984. Pore-fluid pressures and frictional heating on a fault surface. Pure and Applied Geophysics PAGEOPH 122, 583-607. doi:10.1007/BF00874618.
Effects of frictional heating on the thermal, hydrologic, and mechanical response of a fault. C W Mase, L Smith, 10.1029/JB092iB07p06249Journal of Geophysical Research. 92Mase, C.W., Smith, L., 1987. Effects of frictional heating on the thermal, hydrologic, and mechanical response of a fault. Journal of Geophysical Research 92, 6249-6272. doi:10.1029/JB092iB07p06249.
A fast BEM procedure using the Ztransform and high-frequency approximations for large-scale 3D transient wave problems. D Mavaleix-Marchessoux, M Bonnet, S Chaillat, B Leblé, International Journal for Numerical Methods in Engineering. Mavaleix-Marchessoux, D., Bonnet, M., Chaillat, S., Leblé, B., 2020. A fast BEM procedure using the Z- transform and high-frequency approximations for large-scale 3D transient wave problems. International Journal for Numerical Methods in Engineering .
Eddy viscosity in dense granular flows. T Miller, P Rognon, B Metzger, I Einav, Physical review letters. 11158002Miller, T., Rognon, P., Metzger, B., Einav, I., 2013. Eddy viscosity in dense granular flows. Physical review letters 111, 058002.
The thickness of shear hands in granular materials. H B Muhlhaus, I Vardoulakis, 10.1680/geot.1988.38.2.331bGéotechnique. 38Muhlhaus, H.B., Vardoulakis, I., 1988. The thickness of shear hands in granular materials. Géotechnique 38, 331-331. doi:10.1680/geot.1988.38.2.331b.
The evolution of faults formed by shearing across joint zones in sandstone. R Myers, A Aydin, Journal of Structural Geology. 26Myers, R., Aydin, A., 2004. The evolution of faults formed by shearing across joint zones in sandstone. Journal of Structural Geology 26, 947-966.
Development of cataclastic foliation in deformation bands in feldspar-rich conglomerates of the rio do peixe basin, ne brazil. M A Nicchio, F C Nogueira, F Balsamo, J A Souza, B R Carvalho, F H Bezerra, 10.1016/j.jsg.2017.12.013Journal of Structural Geology. 107Nicchio, M.A., Nogueira, F.C., Balsamo, F., Souza, J.A., Carvalho, B.R., Bezerra, F.H., 2018. Develop- ment of cataclastic foliation in deformation bands in feldspar-rich conglomerates of the rio do peixe basin, ne brazil. Journal of Structural Geology 107, 132-141. doi:https://doi.org/10.1016/j.jsg. 2017.12.013.
The influence of ambient fault temperature on flashheating phenomena. F X Passelègue, D L Goldsby, O Fabbri, Geophysical Research Letters. 41Passelègue, F.X., Goldsby, D.L., Fabbri, O., 2014. The influence of ambient fault temperature on flash- heating phenomena. Geophysical Research Letters 41, 828-835.
Stability and localization of rapid shear in fluid-saturated fault gouge: 2. Localized zone width and strength evolution. J D Platt, J W Rudnicki, J R Rice, 10.1002/2013JB010711Journal of Geophysical Research: Solid Earth. 119Platt, J.D., Rudnicki, J.W., Rice, J.R., 2014a. Stability and localization of rapid shear in fluid-saturated fault gouge: 2. Localized zone width and strength evolution. Journal of Geophysical Research: Solid Earth 119, 4334-4359. doi:10.1002/2013JB010711.
Stability and localization of rapid shear in fluid-saturated fault gouge: 2. localized zone width and strength evolution. J D Platt, J W Rudnicki, J R Rice, 10.1002/2013JB010711Journal of Geophysical Research: Solid Earth. 119Platt, J.D., Rudnicki, J.W., Rice, J.R., 2014b. Stability and localization of rapid shear in fluid-saturated fault gouge: 2. localized zone width and strength evolution. Journal of Geophysical Research: Solid Earth 119, 4334-4359. doi:https://doi.org/10.1002/2013JB010711.
. A Quarteroni, R Sacco, F Saleri, Numerical Mathematics Texts in Applied Mathematics. Quarteroni, A., Sacco, R., Saleri, F., 2007. Numerical Mathematics Texts in Applied Mathematics.
The importance of Thermo-Hydro-Mechanical couplings and microstructure to strain localization in 3D continua with application to seismic faults. Part I: Theory and linear stability analysis. H Rattez, I Stefanou, J Sulem, 10.1016/j.jmps.2018.03.004doi:10.1016/j.jmps.2018.03.004Journal of the Mechanics and Physics of Solids. 115Rattez, H., Stefanou, I., Sulem, J., 2018. The importance of Thermo-Hydro-Mechanical couplings and microstructure to strain localization in 3D continua with application to seismic faults. Part I: Theory and linear stability analysis. Journal of the Mechanics and Physics of Solids 115, 54-76. URL: https://doi.org/10.1016/j.jmps.2018.03.004, doi:10.1016/j.jmps.2018.03.004.
Influence of Effective Stress and Pore Fluid Pressure on Fault Strength and Slip Localization in Carbonate Slip Zones. M Rempe, G Di Toro, T M Mitchell, S A Smith, T Hirose, J Renner, 10.1029/2020JB019805Journal of Geophysical Research: Solid Earth. 125Rempe, M., Di Toro, G., Mitchell, T.M., Smith, S.A., Hirose, T., Renner, J., 2020. Influence of Effective Stress and Pore Fluid Pressure on Fault Strength and Slip Localization in Carbonate Slip Zones. Journal of Geophysical Research: Solid Earth 125. doi:10.1029/2020JB019805.
The effects of flash-weakening and damage on the evolution of fault strength and temperature. Earthquakes: Radiated energy and the physics of faulting. A Rempel, 170Rempel, A., 2006. The effects of flash-weakening and damage on the evolution of fault strength and temperature. Earthquakes: Radiated energy and the physics of faulting 170, 263-270.
Thermal pressurization and onset of melting in fault zones. A W Rempel, J R Rice, 10.1029/2006JB004314Journal of Geophysical Research: Solid Earth. 111Rempel, A.W., Rice, J.R., 2006. Thermal pressurization and onset of melting in fault zones. Journal of Geophysical Research: Solid Earth 111. doi:10.1029/2006JB004314.
The growth of slip surfaces in the progressive failure of over-consolidated clay. J R Rice, 10.1098/rspa.1973.0040Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences. 332Rice, J.R., 1973. The growth of slip surfaces in the progressive failure of over-consolidated clay. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 332, 527-548. doi:10.1098/rspa.1973.0040.
On the stability of dilatant hardening for saturated rock masses. J R Rice, 10.1029/jb080i011p01531Journal of Geophysical Research. 80Rice, J.R., 1975. On the stability of dilatant hardening for saturated rock masses. Journal of Geophysical Research 80, 1531-1536. doi:10.1029/jb080i011p01531.
Heating and weakening of faults during earthquake slip. J R Rice, 10.1029/2005JB004006Journal of Geophysical Research: Solid Earth. 111Rice, J.R., 2006a. Heating and weakening of faults during earthquake slip. Journal of Geophysical Research: Solid Earth 111, 1-29. doi:10.1029/2005JB004006.
Heating and weakening of faults during earthquake slip. J R Rice, 10.1029/2005JB004006Journal of Geophysical Research: Solid Earth. 111Rice, J.R., 2006b. Heating and weakening of faults during earthquake slip. Journal of Geophysical Research: Solid Earth 111. doi:https://doi.org/10.1029/2005JB004006.
Stability and localization of rapid shear in fluid-saturated fault gouge: 1. linearized stability analysis. J R Rice, J W Rudnicki, J D Platt, 10.1002/2013JB010710Journal of Geophysical Research: Solid Earth. 119Rice, J.R., Rudnicki, J.W., Platt, J.D., 2014a. Stability and localization of rapid shear in fluid-saturated fault gouge: 1. linearized stability analysis. Journal of Geophysical Research: Solid Earth 119, 4311- 4333. doi:https://doi.org/10.1002/2013JB010710.
Stability and localization of rapid shear in fluid-saturated fault gouge: 1. Linearized stability analysis. J R Rice, J W Rudnicki, J D Platt, 10.1002/2013JB010710Journal of Geophysical Research: Solid Earth. 119Rice, J.R., Rudnicki, J.W., Platt, J.D., 2014b. Stability and localization of rapid shear in fluid-saturated fault gouge: 1. Linearized stability analysis. Journal of Geophysical Research: Solid Earth 119, 4311- 4333. doi:10.1002/2013JB010710.
A circulation-based method for detecting vortices in granular materials. P Rognon, T Miller, I Einav, Granular Matter. 17Rognon, P., Miller, T., Einav, I., 2015. A circulation-based method for detecting vortices in granular materials. Granular Matter 17, 177-188.
Fault rocks and fault mechanisms. R Sibson, Journal of the Geological Society. 133Sibson, R., 1977. Fault rocks and fault mechanisms. Journal of the Geological Society 133, 191-213.
Thickness of the Seismic Slip Zone. R H Sibson, 10.1785/0120020061doi:10.1785/0120020061Bulletin of the Seismological Society of America. 93Sibson, R.H., 2003. Thickness of the Seismic Slip Zone. Bulletin of the Seismological Society of America 93, 1169-1178. URL: https://doi.org/10.1785/0120020061, doi:10.1785/0120020061.
Fault friction under thermal pressurization during large coseismic-slip part i: Numerical analyses. A Stathas, I Stefanou, Stathas, A., Stefanou, I., 2022. Fault friction under thermal pressurization during large coseismic-slip part i: Numerical analyses. --, -.
Controlling Anthropogenic and Natural Seismicity: Insights From Active Stabilization of the Spring-Slider Model. I Stefanou, 10.1029/2019JB017847doi:10. 1029/2019JB017847Journal of Geophysical Research: Solid Earth. 124Stefanou, I., 2019. Controlling Anthropogenic and Natural Seismicity: Insights From Active Stabilization of the Spring-Slider Model. Journal of Geophysical Research: Solid Earth 124, 8786-8802. doi:10. 1029/2019JB017847.
Control instabilities and incite slow-slip in generalized burridgeknopoff models. I Stefanou, G Tzortzopoulos, arXiv:2008.03755arXiv:2008.03755Stefanou, I., Tzortzopoulos, G., 2020. Control instabilities and incite slow-slip in generalized burridge- knopoff models. arXiv:2008.03755 URL: https://arxiv.org/abs/2008.03755, arXiv:2008.03755.
Thermal decomposition of carbonates in fault zones: Slip-weakening and temperature-limiting effects. J Sulem, V Famin, 10.1029/2008jb006004doi:10.1029Journal of Geophysical Research: Solid Earth. 114Sulem, J., Famin, V., 2009. Thermal decomposition of carbonates in fault zones: Slip-weakening and temperature-limiting effects. Journal of Geophysical Research: Solid Earth 114, 1-14. doi:10.1029/ 2008jb006004.
Thermal and chemical effects in shear and compaction bands. J Sulem, I Stefanou, 10.1016/j.gete.2015.12.004doi:10.1016/j.gete.2015.12.004Geomechanics for Energy and the Environment. 6Sulem, J., Stefanou, I., 2016. Thermal and chemical effects in shear and compaction bands. Geomechanics for Energy and the Environment 6, 4-21. URL: http://dx.doi.org/10.1016/j.gete.2015.12.004, doi:10.1016/j.gete.2015.12.004.
Stability analysis of undrained adiabatic shearing of a rock layer with cosserat microstructure. J Sulem, I Stefanou, E Veveakis, 10.1007/s10035-010-0244-1Granular Matter. 13Sulem, J., Stefanou, I., Veveakis, E., 2011. Stability analysis of undrained adiabatic shearing of a rock layer with cosserat microstructure. Granular Matter 13, 261-268. doi:https://doi.org/10.1007/ s10035-010-0244-1.
Experimental characterization of the thermo-poro-mechanical properties of the aegion fault gouge. J Sulem, I Vardoulakis, H Ouffroukh, M Boulon, J Hans, Comptes Rendus Geoscience. 336Sulem, J., Vardoulakis, I., Ouffroukh, H., Boulon, M., Hans, J., 2004. Experimental characterization of the thermo-poro-mechanical properties of the aegion fault gouge. Comptes Rendus Geoscience 336, 455-466.
Thermal properties across the chelungpu fault zone and evaluations of positive thermal anomaly on the slip zones: Are these residuals of heat from faulting?. H Tanaka, W Chen, K Kawabata, N Urata, Geophysical Research Letters. 34Tanaka, H., Chen, W., Kawabata, K., Urata, N., 2007. Thermal properties across the chelungpu fault zone and evaluations of positive thermal anomaly on the slip zones: Are these residuals of heat from faulting? Geophysical Research Letters 34.
On spectral methods for volterra integral equations and the convergence analysis. T Tang, X Xu, J Cheng, Journal of Computational Mathematics. Tang, T., Xu, X., Cheng, J., 2008. On spectral methods for volterra integral equations and the conver- gence analysis. Journal of Computational Mathematics , 825-837.
Approximation Theory and Approximation Practice. L N Trefethen, Extended Edition. SIAMTrefethen, L.N., 2019. Approximation Theory and Approximation Practice, Extended Edition. SIAM.
Controlling earthQuakes in the laboratory using pertinent fault stimulating techniques. G Tzortzopoulos, Tzortzopoulos, G., 2021. Controlling earthQuakes in the laboratory using pertinent fault stimulating techniques.
Stability and bifurcation of undrained, plane rectilinear deformations on watersaturated granular soils. I Vardoulakis, International journal for numerical and analytical methods in geomechanics. 9Vardoulakis, I., 1985. Stability and bifurcation of undrained, plane rectilinear deformations on water- saturated granular soils. International journal for numerical and analytical methods in geomechanics 9, 399-414.
Deformation of water-saturated sand: I. uniform undrained deformation and shear banding. I Vardoulakis, Géotechnique. 46Vardoulakis, I., 1996a. Deformation of water-saturated sand: I. uniform undrained deformation and shear banding. Géotechnique 46, 441-456.
Deformation of water-saturated sand: Ii. effect of pore water flow and shear banding. I Vardoulakis, Géotechnique. 46Vardoulakis, I., 1996b. Deformation of water-saturated sand: Ii. effect of pore water flow and shear banding. Géotechnique 46, 457-472.
| []
|
[
"Extended gaussian ensemble solution and tricritical points of a system with long-range interactions",
"Extended gaussian ensemble solution and tricritical points of a system with long-range interactions"
]
| [
"Rafael B Frigori \nUniversidade Tecnológica Federal do Paraná\n2191, 85902-040Rua XV de Novembro, ToledoCEP, PRBrazil\n",
"Leandro G Rizzi \nDepartamento de Física e Matemática\nFFCLRP\nUniversidade de São Paulo\nAvenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil\n",
"Nelson A Alves \nDepartamento de Física e Matemática\nFFCLRP\nUniversidade de São Paulo\nAvenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil\n"
]
| [
"Universidade Tecnológica Federal do Paraná\n2191, 85902-040Rua XV de Novembro, ToledoCEP, PRBrazil",
"Departamento de Física e Matemática\nFFCLRP\nUniversidade de São Paulo\nAvenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil",
"Departamento de Física e Matemática\nFFCLRP\nUniversidade de São Paulo\nAvenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil"
]
| []
| The gaussian ensemble and its extended version theoretically play the important role of interpolating ensembles between the microcanonical and the canonical ensembles. Here, the thermodynamic properties yielded by the extended gaussian ensemble (EGE) for the Blume-Capel (BC) model with infinite-range interactions are analyzed. This model presents different predictions for the first-order phase transition line according to the microcanonical and canonical ensembles. From the EGE approach, we explicitly work out the analytical microcanonical solution. Moreover, the general EGE solution allows one to illustrate in details how the stable microcanonical states are continuously recovered as the gaussian parameter γ is increased. We found out that it is not necessary to take the theoretically expected limit γ → ∞ to recover the microcanonical states in the region between the canonical and microcanonical tricritical points of the phase diagram. By analyzing the entropy as a function of the magnetization we realize the existence of unaccessible magnetic states as the energy is lowered, leading to a breaking of ergodicity. | 10.1140/epjb/e2010-00161-y | [
"https://arxiv.org/pdf/0910.0500v2.pdf"
]
| 118,638,587 | 0910.0500 | 2015bd529eaf307f943b7724452be882fe79061a |
Extended gaussian ensemble solution and tricritical points of a system with long-range interactions
21 May 2010
Rafael B Frigori
Universidade Tecnológica Federal do Paraná
2191, 85902-040Rua XV de Novembro, ToledoCEP, PRBrazil
Leandro G Rizzi
Departamento de Física e Matemática
FFCLRP
Universidade de São Paulo
Avenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil
Nelson A Alves
Departamento de Física e Matemática
FFCLRP
Universidade de São Paulo
Avenida Bandeirantes3900, 14040-901Ribeirão PretoSPBrazil
Extended gaussian ensemble solution and tricritical points of a system with long-range interactions
21 May 2010. PACS numbers: 05.20.Gg, 05.50.+q, 05.70.Fh, 65.40.Gd. Keywords: gaussian ensemble, ensemble inequivalence, Blume-Capel model, negative specific heat, nonconcave entropy
The gaussian ensemble and its extended version theoretically play the important role of interpolating ensembles between the microcanonical and the canonical ensembles. Here, the thermodynamic properties yielded by the extended gaussian ensemble (EGE) for the Blume-Capel (BC) model with infinite-range interactions are analyzed. This model presents different predictions for the first-order phase transition line according to the microcanonical and canonical ensembles. From the EGE approach, we explicitly work out the analytical microcanonical solution. Moreover, the general EGE solution allows one to illustrate in details how the stable microcanonical states are continuously recovered as the gaussian parameter γ is increased. We found out that it is not necessary to take the theoretically expected limit γ → ∞ to recover the microcanonical states in the region between the canonical and microcanonical tricritical points of the phase diagram. By analyzing the entropy as a function of the magnetization we realize the existence of unaccessible magnetic states as the energy is lowered, leading to a breaking of ergodicity.
I. INTRODUCTION
The canonical and grand-canonical ensembles approximate the microcanonical ensemble in the limit of infinitely large number of particles, where surface effects and fluctuations can be disregarded with respect to the bulk mean values [1,2]. However, if the system sizes are not large enough compared to the range of interactions, or even in the presence of long-range forces, this inherent expectation changes dramatically. Although such nonextensive systems can be appropriately described by models in a volume-dependent scaling manner [3], the non-additive character still remains. As a matter of fact, the lack of additivity can be noticed for astrophysical objects, where gravitational interaction is responsible for a nonnegligible contribution from particles at large distances [4]. Thus, one realizes that most of the systems in nature can be encompassed in a class that can be designated non-additive for what concerns energy and entropy. The existence of a fundamental ensemble, the microcanonical one, seems to meet a consensus, while others, in particular the canonical and grand-canonical ones, are taken as its approximations [1,5,6]. In cases where full consistency of statistical ensembles holds for systems that undergo phase transitions, it is found that finite size scaling relations still place the microcanonical approach as the fundamental one [7].
There are many examples of systems whose equilibrium properties are not equivalent in both microcanonical and canonical ensembles. Differences in the thermodynamic features have been verified analytically for systems with long-range interactions [8][9][10][11]. These examples show that the nonequivalence appears where the canonical ensemble presents a phase diagram with a first-order transition line. Actually, necessary and sufficient conditions for equivalence of ensembles can be formally stated [12]. Thus, apart from the expected difference in the intermediate values of extensive thermodynamic quantities when one works with finite systems, the nonadditive property also sets striking differences in the thermodynamic limit, leading to different phase diagrams [13,14]. Such nonequivalence has its counterpart in the nonconcavity of the entropy as a function of energy, S = S(E) [8,15,16]. This may result in uncommon features at first-order phase transitions like temperature discontinuity and negative specific heat in the microcanonical ensemble [8,12,15,17-19]. In turn, the inverse thermodynamic temperature β = 1/T(E) = ∂S/∂E (we take the Boltzmann constant k_B = 1) is not a monotonic function of the energy and the equilibrium value E(β) may be a multivalued function of β [1,8,15].
An alternative ensemble, the gaussian ensemble [20][21][22][23][24], was introduced to deal with systems that exchange energy with a finite reservoir. This contrasts with the canonical ensemble, where the system is in thermal contact with a huge heat reservoir and the energy exchange is controlled by the temperature of the reservoir, which defines the average energy of the system. On the other hand, in the limit of no energy exchange with the reservoir, the system is isolated and thus has fixed energy. This is the microcanonical point of view, whose experimental situation resembles a system in contact with a fictitious reservoir of extremely small size where the energy exchange can be disregarded. The gaussian ensemble has also been described as a regularization procedure for the microcanonical ensemble [25]. Later on, Johal et al. [26] redefined the assumptions characterizing the gaussian ensemble to describe the thermodynamic properties of a system also in contact with a finite reservoir. This led to an extended version of the former gaussian ensemble. The extended gaussian ensemble (EGE) presents a smooth interpolation between its limiting behaviors, corresponding to the microcanonical and canonical ensembles.
This work explores the EGE as a working ensemble for a system where the ensemble nonequivalence has been demonstrated, the Blume-Capel model. We explicitly work out the analytical microcanonical solution from this ensemble. By means of the general EGE solution we are able to illustrate in detail how the stable microcanonical states are continuously recovered as the gaussian parameter γ is increased. We investigate the EGE behavior in the region of the phase diagram where one observes a first-order phase transition line in the canonical ensemble but of second-order type in the microcanonical description. Then, we point out how the EGE identifies the canonical and microcanonical tricritical points. Moreover, we call attention to the broken ergodicity found in this model.
The EGE formulation encompasses a natural extension of Statistical Mechanics to include non-additive systems. The relation between the EGE and Tsallis statistics has been described in Refs. [26,27], with the Tsallis parameter q being related to the parameter γ in the EGE. The theoretical background characterizing this ensemble is presented in Section 2, where we briefly review some thermodynamic relations that are γ-dependent. The EGE solution of the mean-field BC model is carried out in Sec. 3 and is confronted at the thermodynamic level with the usual solutions in the canonical and microcanonical ensembles. The main conclusions about the effectiveness of the EGE in determining thermodynamic properties are summarized in Sec. 4.
II. EXTENDED GAUSSIAN ENSEMBLE
The canonical ensemble describes thermal properties of a system in thermal equilibrium with a heat reservoir. A new insight has been obtained when the reservoir is finite and possibly small. To this end, let a be a system with energy E and entropy S, and b a reservoir with energy E b and entropy S b , which exchanges energy with a. As a consequence, the energy of the system is allowed to fluctuate. Both systems form an isolated system with total energy E t = E + E b and total entropy S t . Equilibrium is reached when the total entropy S t (E) is a maximum. The system itself and its heat bath can be considered subsystems of an isolated system where E fluctuates around its mean value U . Thus, for fixed external parameters like total energy E t and number of particles in the system, the most probable energy U is such that the expansion of the reservoir entropy S b around its equilibrium value E t − U can be written up to the second order as
$S_b(E_b) = S_b(E_t - U) + \frac{dS_b}{dE_b}\Big|_{E_t - U} (U - E) + \frac{1}{2} \frac{d^2 S_b}{dE_b^2}\Big|_{E_t - U} (U - E)^2 + \cdots.$   (1)
Because the derivatives depend on the reservoir thermodynamic properties, one defines [26]
$\frac{dS_b}{dE_b}\Big|_{E_t - U} = \alpha,$   (2)
and

$\frac{1}{2} \frac{d^2 S_b}{dE_b^2}\Big|_{E_t - U} = -\gamma.$   (3)
In the case of an infinite reservoir, one would be working with the canonical ensemble and α would thus be identified with the inverse thermodynamic temperature, α = 1/T. This is because in the canonical ensemble approach the temperature T of the reservoir is a fixed parameter that determines the mean energy of the system. The effect of an infinite reservoir with constant temperature yields γ = 0 and vanishing higher-order derivatives in Eq. (1); otherwise, those terms should be taken into account. The EGE is defined by the condition γ ≠ 0 and probability density
$P_{\gamma,\alpha}(E) = \frac{\rho(E)\, e^{-\alpha E - \gamma (E - U)^2}}{Z_\gamma(U, \alpha)},$   (4)
where Z γ (U, α) stands for the normalization constant, which is the corresponding partition function in EGE [26,28,29], with density of states ρ(E) and parameters γ, α, and the dependent one U = U (α, γ). Actually, the extended gaussian ensemble is a particular case in a class of general functions g(E) [28,29]; the quadratic form g(E) = γ(E − U ) 2 is just a convenient choice. The probability density in Eq. (4) can be used to write the average energy of the system,
$U = \int E\, P_{\gamma,\alpha}(E)\, dE.$   (5)
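As a purely illustrative sketch of Eqs. (4)-(5), the code below computes the self-consistent mean energy U for a toy density of states with a concave entropy (not the BC model); the entropy, system size, and parameter values are ours. For this concave toy the γ-dependence of U is negligible, as expected; appreciable differences between the limiting ensembles only arise when the entropy is nonconcave.

    import numpy as np

    # toy density of states rho(E) ~ exp(N * s(E/N)) with a concave entropy
    N = 100
    E = np.linspace(0.01, 0.99, 2000) * N
    e = E / N
    log_rho = N * (-e * np.log(e) - (1.0 - e) * np.log(1.0 - e))

    def mean_energy(alpha, gamma, U0=50.0, n_iter=500):
        # iterate U = <E> with the weight of Eq. (4); subtract the max to avoid overflow
        U = U0
        for _ in range(n_iter):
            logw = log_rho - alpha * E - gamma * (E - U)**2
            w = np.exp(logw - logw.max())
            U_new = float(np.sum(E * w) / np.sum(w))
            if abs(U_new - U) < 1e-10:
                break
            U = U_new
        return U

    print(mean_energy(alpha=0.5, gamma=0.0))    # canonical limit (gamma = 0)
    print(mean_energy(alpha=0.5, gamma=0.05))   # finite-gamma EGE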
Let us also introduce the extended thermodynamic potential analogous to the one in the canonical approach, Φ γ (U, α) = −ln Z γ (U, α). From here, the derivative at fixed value γ can be obtained,
$\left(\frac{\partial \Phi_\gamma}{\partial \alpha}\right)_{\gamma} = U,$   (6)
which parallels that of the usual canonical approach. The average energy U can be found self-consistently by means of Eq. (5), which recovers the usual canonical ensemble result for γ = 0, or from Eq. (6), as describing the equilibrium average energy with fixed parameters γ and α. In this paper we follow a kind of inverse problem: U will be set as an input parameter that must, in conjunction with the variational problem of minimization of the extended thermodynamic potential, satisfy stability conditions for some (unknown) temperature 1/α, which is U-dependent. The extended heat capacity has also been introduced [23,26],
C γ = −α 2 (∂U /∂α) γ .
The usual canonical ensemble deals with homogeneous configurations in equilibrium as a function of intensive variables like temperature. The canonical averages always produce smooth distributions of mean values, as in the case of heat capacity, when averaged over fluctuations. In contrast to the canonical heat capacity, the extended heat capacity may present negative values when γ > 0. Negative values of C γ (U ) require that (∂U/∂α) γ > 0. Thus, van der Waals loops can be seen in this formalism. This sort of behavior has been observed in typical caloric curves, temperature versus mean energies, for systems with thermodynamic first-order phase transition, a forbidden phenomenon in the canonical picture. These features are illustrated in Fig. 2 for the Blume-Capel model. Thus, the standard homogeneous thermodynamics given by the canonical ensemble is not suited to describe first-order phase transitions. On the other hand, the stability condition of a system is related to the homogeneous temperature that defines thermal equilibrium with the huge reservoir. The EGE includes the possibility of a rather small heat bath, thus allowing for the appearance of inhomogeneous configurations in the system for finite γ, which results into a weakened version for the constraint of constant energy that defines the microcanonical ensemble.
The extended entropy can be obtained by the Legendre-Fenchel (LF) transform of the extended canonical thermodynamic potential Φ γ (α) as [26,28,29]
$S_\gamma(U) = \alpha \left(\frac{\partial \Phi_\gamma}{\partial \alpha}\right)_{\gamma} + \gamma \left(\frac{\partial \Phi_\gamma}{\partial \gamma}\right)_{\alpha} - \Phi_\gamma.$   (7)
From this transform and Eq. (6), it follows that α(U) = ∂S_γ/∂U. Notice that the above relations recover the canonical results in the limit γ → 0. In this case, one has the standard Legendre transform S(U) = βU − Φ(β), where Φ(β) = lim_{γ→0} Φ_γ(α = β) corresponds to the canonical potential and U is the equilibrium mean energy, U = ∂Φ(β)/∂β. It is well known that the standard Legendre transform of Φ(β) always produces a concave function of U. Therefore, nonequivalence between microcanonical and canonical ensembles appears when the microcanonical entropy is a nonconcave function of U in some energy range, as shown in Fig. 2b. In that case of nonequivalence, S(U) can be named as just the canonical entropy: S_can = βU_can(β) + ln Z_can(β). On the other hand, the limit γ → ∞ corresponds to the microcanonical case. This can be seen as lim_{γ→∞} √(γ/π) Z_γ(U, α) through the use of the Dirac delta sequence in the gaussian form [25]. For finite γ one obtains an intermediary thermal description between the known limiting ensembles.
III. EXTENDED GAUSSIAN SOLUTION OF THE MEAN-FIELD BC MODEL
The Blume-Capel model is a spin-1 Ising model [30,31] and was introduced to describe phase separation in magnetic systems. It is a particular case of the Blume-Emery-Griffiths model [32] aimed at describing the critical behavior of He 3 -He 4 mixtures with different concentrations. Here we consider its mean field version,
$H(S) = \Delta \sum_{i=1}^{N} S_i^2 - \frac{J}{2N} \left( \sum_{i=1}^{N} S_i \right)^2,$   (8)
where $S_i = 0, \pm 1$. The couplings J > 0 and ∆ are the exchange and crystal-field interactions, respectively. The BC model represents a simple generalization of the spin-1/2 Ising model, but with a rich phase diagram in the (∆/J, T/J) plane. It exhibits a first-order transition line, a tricritical point, and a second-order transition line. The critical properties of the BC model can be determined analytically in both the microcanonical [8] and canonical ensembles [32,33]. It has been demonstrated that these ensembles do not yield the same phase diagram for the first-order critical line [8]. Here, it is useful to introduce the order parameters magnetization $M = \sum_{i=1}^{N} S_i = N_+ - N_-$ and its second moment, the quadrupole moment $Q = \sum_{i=1}^{N} S_i^2 = N_+ + N_-$, where $N_+$ and $N_-$ are, respectively, the number of sites with up and down spins. If $N_0$ is defined as the total number of zero spins, then $N = N_+ + N_- + N_0$ is the total number of spins in the system.
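For concreteness, the following few lines evaluate the order parameters and the corresponding energy of Eq. (8) for a random spin-1 configuration; the system size and couplings are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    N, J, Delta = 20, 1.0, 0.5
    S = rng.choice([-1, 0, 1], size=N)        # a spin-1 configuration

    M = int(S.sum())                          # magnetization, N+ - N-
    Q = int((S**2).sum())                     # quadrupole moment, N+ + N-
    E = Delta * Q - J * M**2 / (2.0 * N)      # Hamiltonian, Eq. (8)
    print(M, Q, E)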
The extended gaussian partition function
Z_\gamma(U,\alpha) = \sum_{\{S\}} e^{-\alpha H(S) - \gamma [H(S) - U]^2} , \qquad (9)
can be analytically solved in terms of its order parameters M and Q. To this end, the so-called Hubbard-Stratonovich (HS) transformation
e^{-b x^2/2} = \frac{1}{\sqrt{2\pi b}} \int_{-\infty}^{+\infty} dy \, e^{-y^2/(2b) - i x y} , \qquad (10)
is applied to the gaussian term in Eq. (9) with the choices b = 2γ and x = H(S) − U . It turns out that
Z_\gamma(U,\alpha) = \sum_{\{S\}} \frac{1}{\sqrt{4\pi\gamma}}\, e^{-\alpha U} \qquad (11)
\times \int_{-\infty}^{+\infty} dy \, e^{-\frac{y^2}{4\gamma} - (iy+\alpha)\left(\Delta \sum_{i=1}^{N} S_i^2 - U\right)} \, e^{(iy+\alpha)\left(\sqrt{\frac{J}{2N}} \sum_{i=1}^{N} S_i\right)^2} . \qquad (12)
By making use of another HS transformation, the extended gaussian-partition function becomes
Z_\gamma(U,\alpha) = \frac{1}{\sqrt{4\pi\gamma}} \sum_{\{S\}} e^{-\alpha U} \int_{-\infty}^{+\infty} dy \left(\frac{iy+\alpha}{\pi}\right)^{1/2} e^{-\frac{y^2}{4\gamma} - (iy+\alpha)\left(\Delta \sum_{i=1}^{N} S_i^2 - U\right)} \int_{-\infty}^{+\infty} dz \, e^{-(iy+\alpha) z^2 + 2 (iy+\alpha) z \sqrt{\frac{J}{2N}} \sum_{i=1}^{N} S_i} . \qquad (13)
Since S_i ∈ {0, ±1}, it follows that
\sum_{\{S\}} e^{-(iy+\alpha)\Delta \sum_i S_i^2 + 2(iy+\alpha) z \sqrt{\frac{J}{2N}} \sum_i S_i} = \left[ 1 + e^{-(iy+\alpha)\Delta} \left( e^{2(iy+\alpha) z \sqrt{\frac{J}{2N}}} + e^{-2(iy+\alpha) z \sqrt{\frac{J}{2N}}} \right) \right]^N \qquad (14)
= \sum_{N_0=0}^{N} \sum_{N_+=0}^{N-N_0} \frac{N!}{N_0!\, N_+!\, N_-!} \, e^{-(iy+\alpha)\Delta (N-N_0)} \, e^{-2(iy+\alpha) z \sqrt{\frac{J}{2N}} (N-N_0-N_+)} \, e^{2(iy+\alpha) z \sqrt{\frac{J}{2N}} N_+} , \qquad (15)
where the last result is obtained by applying the binomial expansion twice to the result in Eq. (14). Now, placing the order parameters M and Q in Eq. (15) and inserting this result into Eq. (13), one obtains
Z_\gamma(U,\alpha) = \frac{1}{\sqrt{4\pi\gamma}}\, e^{-\alpha U} \sum_{N_0=0}^{N} \sum_{N_+=0}^{N-N_0} \frac{N!}{N_0!\, N_+!\, N_-!} \int_{-\infty}^{+\infty} dy \left(\frac{iy+\alpha}{\pi}\right)^{1/2} e^{-\frac{y^2}{4\gamma} - (iy+\alpha)(\Delta Q - U)} \int_{-\infty}^{+\infty} dz \, e^{-(iy+\alpha) z^2 + 2 (iy+\alpha) \sqrt{\frac{J}{2N}} M z} . \qquad (16)
This expression can be integrated by gaussian formulas to produce
Z_\gamma(U,\alpha) = \sum_{N_0=0}^{N} \sum_{N_+=0}^{N-N_0} \frac{N!}{N_0!\, N_+!\, N_-!} \, e^{-\alpha\left(\Delta Q - \frac{J}{2N} M^2\right) - \gamma\left(\Delta Q - U - \frac{J}{2N} M^2\right)^2} . \qquad (17)
The solution for this ensemble naturally brings forth the counting factor for the number of microscopic states corresponding to the macrostate defined by M and Q. These order parameters indeed define the energy E of a configuration given by the Hamiltonian in Eq. (8),
E = \Delta Q - \frac{J}{2N} M^2 . \qquad (18)
Hence, it is convenient to write explicitly the extended partition function as a function of those order parameters,
Z_\gamma(U,\alpha) = \sum_{Q=0}^{N} \sum_{M=-Q}^{Q} \frac{N!}{(N-Q)! \left[\frac{1}{2}(Q+M)\right]! \left[\frac{1}{2}(Q-M)\right]!} \, e^{-\alpha\left(\Delta Q - \frac{J M^2}{2N}\right) - \gamma\left(\Delta Q - \frac{J M^2}{2N} - U\right)^2} \qquad (19)
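As a purely illustrative aid (added here, not part of the original work), the finite-N sum in Eq. (19) can be evaluated directly on a computer. The sketch below is a brute-force evaluation; the parameter values (∆/J = 0.462407, J = 1) are the ones used later in the text, and the example call is only a suggestion.

# Python sketch: brute-force evaluation of the extended Gaussian partition
# function of Eq. (19) for a small Blume-Capel system.
import math

def log_multiplicity(N, Q, M):
    # ln of the counting factor N! / [(N-Q)! ((Q+M)/2)! ((Q-M)/2)!]
    Np, Nm, N0 = (Q + M) // 2, (Q - M) // 2, N - Q
    return (math.lgamma(N + 1) - math.lgamma(N0 + 1)
            - math.lgamma(Np + 1) - math.lgamma(Nm + 1))

def Z_gamma(N, U, alpha, gamma, Delta=0.462407, J=1.0):
    Z = 0.0
    for Q in range(N + 1):
        for M in range(-Q, Q + 1):
            if (Q + M) % 2:                      # (Q +/- M)/2 must be integers
                continue
            E = Delta * Q - J * M**2 / (2.0 * N)  # Eq. (18)
            Z += math.exp(log_multiplicity(N, Q, M)
                          - alpha * E - gamma * (E - U)**2)
    return Z

# example call: Z_gamma(N=60, U=0.33 * 0.462407 * 60, alpha=3.0, gamma=0.1)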
Before studying the thermodynamic features presented by this ensemble as a function of finite γ, it is important to show explicitly the limiting microcanonical behavior of this ensemble.
A. Microcanonical limit and negative response functions
Let us first explore the limit γ → ∞ to obtain the microcanonical ensemble. Since the extended partition function is required to be well behaved in this limit, the sum over Q must converge to a dominant value of Q such that ∆Q − JM²/(2N) − U = 0. This is nothing but the microcanonical constraint on the energy E.
Next, the thermodynamic limit N → ∞ is studied. Here, it is convenient to work with the intensive quantities q = Q/N and m = M/N. Let us also define K = J/(2∆) and ε = U/(∆N), as in Ref. [8]. Equation (18) now reads ε = q − Km² and becomes a constraint equation for the average energy ε as γ → ∞.
For large N, one can evaluate the microcanonical partition function Z(ε, α), where Z(ε, α) = lim_{γ→∞} Z_γ(ε, α), as a variational problem. To this end, we consider the saddle-point solution, Z(ε, α) ≈ e^{−Nϕ(ε,α,m)}, where m is such that the thermodynamic potential
\phi(\varepsilon, \alpha, m) = \varepsilon \alpha \Delta + \left[\, q \ln\frac{\sqrt{q^2 - m^2}}{2(1-q)} + \frac{m}{2}\ln\frac{q+m}{q-m} + \ln(1-q) \right] , \qquad (20)
is minimized for each average energy per site ε. To obtain ϕ(ε, α, m), we also applied the Stirling approximation for large N to Z(ε, α) in Eq. (19). The above expression, Eq. (20), was kept as a function of q and m in order to recognize that the term inside the brackets is the correct microcanonical entropy −s_micro(ε, m) obtained in [8] as a function of the mean energy ε and the mean magnetization m. The entropy is an even function of m and a nonconcave function of the independent variables m and ε, as shown in Figs. 1 and 2b, respectively. This fact has striking consequences for the response functions, the specific heat and the specific susceptibility.
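The entropy written above can also be explored numerically. The sketch below is an illustration added here (not the authors' code): it evaluates s_micro(ε, m) with the constraint q = ε + Km², maximizes over m on a grid, and obtains the microcanonical temperature of Eq. (24) by a finite difference. The grids and step sizes are arbitrary choices of this sketch.

# Python sketch: microcanonical entropy of Eq. (20) with q = eps + K m^2,
# maximized over m, and the temperature of Eq. (24) via finite differences.
import numpy as np

DELTA = 0.462407                  # Delta/J used in the text (J = 1)
K = 1.0 / (2.0 * DELTA)           # K = J / (2 Delta)

def s_micro_of_m(eps, m):
    q = eps + K * m * m           # microcanonical constraint eps = q - K m^2
    if not (abs(m) < q < 1.0):
        return -np.inf            # outside the physical domain
    return -(q * np.log(np.sqrt(q * q - m * m) / (2.0 * (1.0 - q)))
             + 0.5 * m * np.log((q + m) / (q - m)) + np.log(1.0 - q))

M_GRID = np.linspace(1e-6, 0.999, 4000)

def s_micro(eps):
    return max(s_micro_of_m(eps, m) for m in M_GRID)

def temperature(eps, h=1e-5):
    beta = (s_micro(eps + h) - s_micro(eps - h)) / (2.0 * h) / DELTA   # Eq. (24)
    return 1.0 / beta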
Here, we remark that the microcanonical entropy is not always an analytic function. As a consequence, gaps may develop in the magnetization for some values of ε, as illustrated in Figure 1 for the coupling ∆/J = 0.462407. This means that the system presents ranges of disconnected magnetization as a function of ε and ∆/J. Thus, one cannot move continuously from one domain of magnetization to any other, leading to the so-called microcanonical ergodicity breaking [9], which is not related to any phase transition. The condition for inaccessible magnetization states can be easily determined from the expression for the entropy. One finds that those gaps start at ε = ∆/(2J) and widen as the energy ε is lowered. Now, back to the EGE approach, it is worth mentioning that we are not evaluating a Laplace integral of the usual canonical partition function: the extended thermodynamic potential per site ϕ_γ results from a modified partition function Z_γ,
\phi_\gamma(\varepsilon, \alpha) = -\lim_{N\to\infty} \frac{1}{N} \ln Z_\gamma(\varepsilon, \alpha) , \qquad (21)
which transforms nonequilibrium states of the canonical ensemble into equilibrium states of the extended ensemble. Here, the dependence of ϕ γ on m has been omitted because we are already assuming that the minimization in m has been accomplished. As emphasized, the nonequivalence of ensembles (microcanonical and canonical) is a consequence of the anomalous behavior of the microcanonical entropy characterized by the existence of convex parts in s micro (ε, m). The nonconcavity of the entropy function means that the system contains several energy-dependent equilibrium states, revealed in the microcanonical ensemble, which do not have their counterpart in the temperature-dependent equilibrium states in the canonical description. Thus, the new term in γ turns such points ε(T ) into equilibrium points in the extended ensemble [28,29,34,35]. The usual thermodynamic potential ϕ(ε, α) is given by the minimization procedure
\phi(\varepsilon, \alpha) = \min_m \phi(\varepsilon, \alpha, m) , \qquad (22)
where the dependence of ϕ on ε is always kept to show that ϕ(α) is calculated at the equilibrium value that minimizes this potential.
In the present ensemble, the LF transform (7) of ϕ γ (ε, α, m), where ε, α and m are independent variables, produces the correct s micro (ε) as follows,
s_{\rm micro}(\varepsilon) = \min_\alpha \max_m \left\{ \lim_{\gamma\to\infty} s_\gamma(\varepsilon, \alpha, m) \right\} , \qquad (23)
where lim_{γ→∞} s_γ stands for εα∆ − ϕ(ε, α, m) in this model. From this result one recovers the known thermodynamic behavior. Figure 2 contains our calculations for the microcanonical temperature T(ε), the shifted entropy s̃(ε) = s_micro(ε) − (A + Bε), the specific heat, the susceptibility and the determinant of the curvature of s_micro, as functions of ε for ∆/J = 0.462407. This value of ∆/J is in the canonical first-order phase transition region but in the microcanonical second-order phase transition region. Since ε = U/(∆N), one obtains
\frac{1}{T(\varepsilon)} = \frac{1}{\Delta} \frac{\partial s_{\rm micro}}{\partial \varepsilon} \equiv \beta(\varepsilon) . \qquad (24)
The horizontal dashed line in Fig. 2a indicates the temperature T_can ≃ 0.330666 obtained by canonical methods [32,33]. It connects the point T(ε_a) to T(ε_b), where ε_a ≃ 0.328959 and ε_b ≃ 0.330646 are read from Fig. 2a. The width δε = 0.001687 is the specific latent heat of the first-order phase transition seen in the canonical ensemble. Fig. 2b shows s_micro(ε) shifted by the canonical entropy s(ε) = A + Bε, where A ≃ 0.401447 and B ≃ 1.398397 are such that s(ε_a) = s_micro(ε_a) and s(ε_b) = s_micro(ε_b). This subtraction allows one to highlight the so-called convex intruder in the specific entropy [2,5].
The point c, as indicated in Fig. 2a, corresponds to the energy ε_c where the minimum of the shifted entropy occurs. Points d and e in Fig. 2b signal the energy range (ε_d, ε_e) where the entropy is nonconcave, with ε_d ≃ 0.3297040 and ε_e ≃ 0.3303532. In Fig. 2a, we have the corresponding temperature T(ε_d) ≃ 0.33074967 as the maximum temperature in that energy range. Figure 2c shows the specific heat
c(\varepsilon) = \frac{d\varepsilon}{dT(\varepsilon)} = \left[ -\frac{s_{mm}}{T^2 \, d(\varepsilon, m)} \right]_m , \qquad (25)
where d(ε, m) is the determinant of the curvature of s micro (ε),
d(\varepsilon, m) = \frac{1}{\Delta^2} \det \begin{pmatrix} s_{\varepsilon\varepsilon} & s_{\varepsilon m} \\ s_{m\varepsilon} & s_{mm} \end{pmatrix} , \qquad (26)
where the notations s εε , s εm and s mm refer respectively to the second derivatives ∂ 2 s micro /∂ε 2 , ∂ 2 s micro /∂ε∂m and ∂ 2 s micro /∂m 2 . This determinant addresses the stability conditions around the stationary points m and ε [1,18]. Figure 2d presents the corresponding magnetic susceptibility
\chi_{\rm micro}(\varepsilon, m) = -\frac{s_{\varepsilon\varepsilon}/\Delta^2}{d(\varepsilon, m)} . \qquad (27)
The nonconcavity of the microcanonical entropy in ε and m renders a negative region for the specific heat and magnetic susceptibility.
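Continuing the numerical sketch introduced above (same illustrative conventions; finite differences are crude near the domain boundaries), the curvature determinant of Eq. (26) and the response functions of Eqs. (25) and (27) can be evaluated at the maximizing magnetization, taking the quoted expressions at face value.

# Python sketch (continuation): second derivatives of s_micro(eps, m), the
# determinant d(eps, m) of Eq. (26), the specific heat of Eq. (25) and the
# susceptibility of Eq. (27), evaluated at the equilibrium magnetization.
def second_derivatives(eps, m, h=1e-4):
    s = s_micro_of_m
    s_ee = (s(eps + h, m) - 2.0 * s(eps, m) + s(eps - h, m)) / h**2
    s_mm = (s(eps, m + h) - 2.0 * s(eps, m) + s(eps, m - h)) / h**2
    s_em = (s(eps + h, m + h) - s(eps + h, m - h)
            - s(eps - h, m + h) + s(eps - h, m - h)) / (4.0 * h * h)
    return s_ee, s_mm, s_em

def response_functions(eps):
    m_eq = max(M_GRID, key=lambda m: s_micro_of_m(eps, m))   # equilibrium m
    s_ee, s_mm, s_em = second_derivatives(eps, m_eq)
    d = (s_ee * s_mm - s_em**2) / DELTA**2                   # Eq. (26)
    T = temperature(eps)
    c = -s_mm / (T**2 * d)                                   # Eq. (25)
    chi = -(s_ee / DELTA**2) / d                             # Eq. (27)
    return c, chi, d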
B. Finite γ and extended thermodynamic potential
For finite γ, one obtains different equilibrium properties. As we are going to show, full equivalence with the microcanonical ensemble is achieved at finite γ only for ∆/J between the canonical and the microcanonical tricritical points. On the microcanonical first-order transition line, one needs γ → ∞ for such a full recovery of the microcanonical results.
The analytical solution for the extended thermodynamic potential is analogously obtained following the procedure leading to Eq. (20),
\phi_\gamma(\varepsilon, \alpha, m, q) = q \ln\frac{\sqrt{q^2 - m^2}}{2(1-q)} + \frac{m}{2}\ln\frac{q+m}{q-m} + \ln(1-q) + \gamma \Delta^2 (q - K m^2 - \varepsilon)^2 + \alpha \Delta (q - K m^2) . \qquad (28)
The basic problem concerning nonequivalent ensembles is that the true s_micro cannot be obtained as an LF transform of the free energy ϕ(β). Here, the application of the extended LF transform to ϕ_γ yields the extended entropy s_γ, which can be read from Eq. (28), s_γ = α∆(q − Km²) − ϕ_γ. This entropy is now a concave function of ε. The extended inverse temperature reads α = (∂s_γ/∂ε)/∆. It characterizes stationary points analogously to the physical inverse temperature and is γ dependent. Now, let us evaluate the equilibrium points of ϕ_γ(ε, α, m, q). Notice that the microcanonical constraint on the specific quantities is not enforced here; the variables ε, m and q are treated as independent. The term in γ can be seen as a constraint equation, leading to the microcanonical ensemble only for γ → ∞.
It was verified that all solutions of ∂ϕ γ /∂α = ε∆, as expressed by Eq. (6), and ∂ϕ γ /∂m = 0, ∂ϕ γ /∂q = 0, for a fixed ε, are only the ones given, for example, for T (ε) in Fig. 2a. However, those solutions are not stable for all γ. Since the analytical expression ϕ γ comes from the saddle-point approximation in Eq. (19), one needs to study the stability of those EGE solutions as a function of m and q. To this end, the determinant of the Hessian matrix,
d(m, q) = \det \begin{pmatrix} \dfrac{\partial^2 \phi_\gamma}{\partial m^2} & \dfrac{\partial^2 \phi_\gamma}{\partial m\, \partial q} \\ \dfrac{\partial^2 \phi_\gamma}{\partial q\, \partial m} & \dfrac{\partial^2 \phi_\gamma}{\partial q^2} \end{pmatrix} ,
is analyzed in the T versus ε plane as a function of γ. This amounts to exploring which points {m, q} minimize ϕ γ for fixed T and ε, and satisfy the condition d(m, q) ≥ 0. Figure 3 shows the lines of stable points for different values of γ. Notice that the figure for γ = 0 corresponds to the canonical results, but we have not included the Maxwell construction. All presented states are the stable ones in the canonical ensemble. This procedure selects solutions T (ε) for energies where the entropy is a concave function. The gap in ε corresponds to the region where one observes negative values in the specific heat and in the susceptibility. As γ increases, one recovers the microcanonical solution. In fact, for sufficiently large γ, s γ (ε) becomes entirely concave and continuous on ε ∈ (ε d , ε e ) [29,36],
\frac{\partial^2 s_\gamma}{\partial \varepsilon^2} < 0 . \qquad (29)
The addition of the term in γ to the usual Legendre transform changes the energy range where the nonconcavity of the canonical entropy is observed. How this energy range is reduced as γ increases can be easily evaluated from Eq. (29). This implies the following condition on γ,
\gamma > -\frac{1}{2\Delta T^2(\varepsilon)\, c(\varepsilon)} .
But in view of the specific heat c(ε) < 0 for ε ∈ (ε_d, ε_e), one obtains γ > 0 in this range, as exhibited in the inset of Fig. 4. Figure 4 shows the behavior of γ̄ ≡ −1/[2∆T²(ε)c(ε)] for energies outside that range, too. This figure highlights the minimum value of γ needed to achieve equivalence with the microcanonical ensemble for ε ∈ (ε_d, ε_e). Full equivalence in this energy range is reached when γ ≃ 4950 for the example with coupling ∆/J = 0.462407. Negative values for γ have been considered in [27] to enhance Monte Carlo sampling.
Here, a negative γ converts microcanonical stable states at ε into unstable ones when ε < ε_a or ε > ε_b. Figure 5 shows, for all values of ∆/J between the canonical and microcanonical tricritical points, the minimum γ needed to recover the exact microcanonical solution. From the canonical approach, a first-order phase transition starts at ∆/J ≃ 0.46209812, but from a microcanonical analysis the true first-order transition starts at ∆/J ≃ 0.46240788. The EGE approach distinguishes these transition regions through finite values of γ that recover the full thermodynamic features of this model when ∆/J lies between the two tricritical points.
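A rough numerical illustration of the threshold discussed above can be obtained from the same sketch as before (again an added illustration; the coarse grids only reproduce the order of magnitude of the quoted γ ≃ 4950, and the window boundaries are the ε_d, ε_e values given in the text).

# Python sketch (continuation): gamma_bar(eps) = -1 / (2 Delta T(eps)^2 c(eps));
# its maximum over the nonconcave window (eps_d, eps_e) estimates the minimum
# gamma needed to restore equivalence with the microcanonical ensemble.
def gamma_bar(eps):
    c, chi, d = response_functions(eps)
    return -1.0 / (2.0 * DELTA * temperature(eps)**2 * c)

eps_window = np.linspace(0.32975, 0.33030, 25)     # inside (eps_d, eps_e)
gamma_min_estimate = max(gamma_bar(e) for e in eps_window)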
IV. CONCLUSIONS
In conclusion, the analysis of the BC model shows how the stable states present in the microcanonical approach but not found in the canonical one can be obtained from the EGE. This approach leads to analytical expressions for the extended free energy and entropy in a simple way, and quantifies the nonequivalence of ensembles between the tricritical points. The EGE formulation exhibits negative specific heat, like the microcanonical one, in the canonical first-order phase transition region. This also happens between the tricritical points, where γ is finite. As a consequence, this observation may open a way of finding tricritical points in systems where analytical solutions cannot be obtained. Thus, an appropriate Monte Carlo method based on the EGE should be preferable to the standard one, where sampling relies on the Boltzmann weight.
V. ACKNOWLEDGEMENT
The authors acknowledge support by FAPESP and CAPES (Brazil).
The canonical tricritical point occurs at (∆/J, T/J) ≃ (0.46209812, 1/3), which gives rise to the first-order transition line for larger values of ∆/J. The microcanonical solution identifies the tricritical point at (∆/J, T/J) ≃ (0.46240788, 0.33034383).
FIG. 1: Entropy s_micro(ε, m) for some values of ε with ∆/J = 0.462407. Gaps in the magnetization correspond to inaccessible states.
FIG. 2: Microcanonical behavior of the BC model with ∆/J = 0.462407. (a) Microcanonical temperature as a function of the average energy ε. The horizontal dashed line corresponds to the canonical critical temperature of the transition. (b) The shifted microcanonical entropy s̃(ε) = s_micro(ε) − (A + Bε). The subtraction is performed to visualize the nonconcavity of the entropy in relation to the linear function joining s_micro(ε_a) to s_micro(ε_b). (c) Specific heat c(ε). It presents two poles located at the zeros of the determinant d(ε, m), where m stands for the value that maximizes the entropy at ε. Those poles can also be read off from the T(ε) behavior in (a). c(ε) becomes negative in between those poles. (d) Specific susceptibility χ(ε). It presents two poles, again placed at the zeros of d(ε, m), and becomes negative between them. (e) Behavior of the determinant d(ε, m) as a function of ε. The vertical dashed lines signal the zeros of d(ε, m).
Finally, Fig. 2e depicts the behavior of the determinant d(ε, m) as a function of ε, with m evaluated at the microcanonical equilibrium condition. The zeros of this determinant indicate the region where the response functions attain negative values. Their locations are represented by vertical dashed lines. Those negative values of the canonical observables occur inside the convex region related to the phase separation at the first-order thermodynamic transition.
FIG. 3: EGE temperatures for some values of γ with ∆/J = 0.462407.
FIG. 4: γ̄ ≡ −1/[2∆T²(ε)c(ε)] presents positive values for ε ∈ (ε_d, ε_e), the energy range of nonconcave entropy. Negative values of γ̄ occur at energies outside this range. Here ∆/J = 0.462407.
FIG. 5: Minimum γ needed to recover the full microcanonical solution. The vertical dashed lines signal the canonical (∆/J ≃ 0.46209812) and microcanonical (∆/J ≃ 0.46240788) tricritical points.
[1] D.H.E. Gross, Microcanonical Thermodynamics: Phase Transitions in Small Systems, Lecture Notes in Physics, Vol. 66, World Scientific, Singapore, 2001.
[2] D.H.E. Gross, Phys. Rep. 279, 119 (1997).
[3] M. Kac, G.E. Uhlenbeck, P.C. Hemmer, J. Math. Phys. 4, 216 (1963).
[4] T. Padmanabhan, Phys. Rep. 188, 285 (1990); P.H. Chavanis, Int. J. Mod. Phys. B20, 3113 (2006).
[5] D.H.E. Gross, E.V. Votyakov, Eur. Phys. J. B15, 115 (2000).
[6] D.H.E. Gross, Phys. Chem. Chem. Phys. 4, 863 (2002).
[7] M. Kastner, M. Promberger, J. Stat. Phys. 103, 893 (2001).
[8] J. Barré, D. Mukamel, S. Ruffo, Phys. Rev. Lett. 87, 030601 (2001).
[9] D. Mukamel, S. Ruffo, N. Schreiber, Phys. Rev. Lett. 95, 240604 (2005).
[10] J. Barré, F. Bouchet, T. Dauxois, S. Ruffo, J. Stat. Phys. 119, 677 (2005).
[11] A. Campa, S. Ruffo, H. Touchette, Physica A385, 233 (2007).
[12] R.S. Ellis, K. Haven, B. Turkington, J. Stat. Phys. 101, 999 (2000).
[13] L. Casetti, M. Kastner, Phys. Rev. Lett. 97, 100602 (2006).
[14] M. Kastner, O. Schnetz, J. Stat. Phys. 122, 1195 (2006).
[15] L. Casetti, M. Kastner, Physica A384, 318 (2007).
[16] F. Bouchet, J. Barré, J. Stat. Phys. 118, 1073 (2005).
[17] S. Ruffo, Eur. Phys. J. B64, 355 (2008).
[18] J. Barré, D. Mukamel, S. Ruffo, in Dynamics and Thermodynamics of Systems with Long-Range Interactions, Lecture Notes in Physics, Vol. 602, p. 45 (Springer, 2002).
[19] R.S. Ellis, H. Touchette, B. Turkington, Physica A335, 518 (2004).
[20] J.H. Hetherington, J. Low Temp. Phys. 66, 145 (1987).
[21] J.H. Hetherington, D.R. Stump, Phys. Rev. D35, 1972 (1987).
[22] D.R. Stump, J.H. Hetherington, Phys. Lett. B188, 359 (1987).
[23] M.S.S. Challa, J.H. Hetherington, Phys. Rev. Lett. 60, 77 (1988).
[24] M.S.S. Challa, J.H. Hetherington, Phys. Rev. A38, 6324 (1988).
[25] J. Lukkarinen, J. Phys. A: Math. Gen. 32, 287 (1999).
[26] R.S. Johal, A. Planes, E. Vives, Phys. Rev. E68, 056113 (2003).
[27] T. Morishita, M. Mikami, J. Chem. Phys. 127, 034104 (2007).
[28] M. Costeniuc, R.S. Ellis, H. Touchette, B. Turkington, J. Stat. Phys. 119, 1283 (2005).
[29] M. Costeniuc, R.S. Ellis, H. Touchette, B. Turkington, Phys. Rev. E73, 026105 (2006).
[30] M. Blume, Phys. Rev. 141, 517 (1966).
[31] H.W. Capel, Physica (Amsterdam) 32, 966 (1966); 33, 295 (1967); 37, 423 (1967).
[32] M. Blume, V.J. Emery, R.B. Griffiths, Phys. Rev. A4, 1071 (1971).
[33] D. Mukamel, M. Blume, Phys. Rev. A10, 610 (1974).
[34] M. Costeniuc, R.S. Ellis, H. Touchette, Phys. Rev. E74, 010105(R) (2006).
[35] H. Touchette, M. Costeniuc, R.S. Ellis, B. Turkington, Physica A365, 132 (2006).
[36] M. Costeniuc, R.S. Ellis, H. Touchette, B. Turkington, Prob. Geom. Integr. Syst. 55, 131 (2007).
| []
|
[
"Sequentially estimating the dynamic contact angle of sessile saliva droplets in view of SARS-CoV-2",
"Sequentially estimating the dynamic contact angle of sessile saliva droplets in view of SARS-CoV-2"
]
| [
"Sudeep R Bapat Id *[email protected] \nDepartment of Operations Management and Quantitative Techniques\nIndian Institute of Management\nIndoreIndia\n"
]
| [
"Department of Operations Management and Quantitative Techniques\nIndian Institute of Management\nIndoreIndia"
]
| []
| Estimating the contact angle of a virus infected saliva droplet is seen to be an important area of research as it presents an idea about the drying time of the respective droplet and in turn of the growth of the underlying pandemic. In this paper we extend the data presented by Balusamy, Banerjee and Sahu ["Lifetime of sessile saliva droplets in the context of SARS-CoV-2," Int. J. Heat Mass Transf. 123, 105178 (2021)], where the contact angles are fitted using a newly proposed half-circular wrapped-exponential model, and a sequential confidence interval estimation approach is established which largely reduces both time and cost with regards to data collection. | 10.1371/journal.pone.0261441 | null | 236,447,784 | 2107.12857 | 222aa7c3ac53429cea1992e9a66c312474a87a49 |
Sequentially estimating the dynamic contact angle of sessile saliva droplets in view of SARS-CoV-2
Sudeep R Bapat Id *[email protected]
Department of Operations Management and Quantitative Techniques
Indian Institute of Management
IndoreIndia
Sequentially estimating the dynamic contact angle of sessile saliva droplets in view of SARS-CoV-2
RESEARCH ARTICLE
Estimating the contact angle of a virus infected saliva droplet is seen to be an important area of research as it presents an idea about the drying time of the respective droplet and in turn of the growth of the underlying pandemic. In this paper we extend the data presented by Balusamy, Banerjee and Sahu ["Lifetime of sessile saliva droplets in the context of SARS-CoV-2," Int. J. Heat Mass Transf. 123, 105178 (2021)], where the contact angles are fitted using a newly proposed half-circular wrapped-exponential model, and a sequential confidence interval estimation approach is established which largely reduces both time and cost with regards to data collection.
Introduction
SARS-CoV-2 (the virus which causes COVID-19) has severely impacted more than 200 countries worldwide, with over 180 million cases by the end of June 2021. The spread of this virus was so fast and devastating that the World Health Organization declared the outbreak a Public Health Emergency of International Concern on 30 January 2020, and a global pandemic on 11 March 2020. The spread of such respiratory diseases is largely caused by respiratory saliva droplets of an infected person during coughing, sneezing or even moist speaking. A recent reference paper in this regard is by [1]. Understanding the lifetime of such droplets is hence an important area of research, which can be addressed by studying the fluid dynamics of such droplets in air. One may refer to [2], who analyze the flow physics of virus-laden respiratory droplets, or [3], who analyze the likelihood of survival of a virus-laden droplet on a solid surface. Further, it has been observed that such respiratory droplets tend to have longer lifetimes upon coming in contact with a surface, depending on its properties. [4] studied the physico-chemical characteristics of evaporating respiratory fluid droplets and found that a typical saliva droplet also contains NaCl, mucin (a protein) and a certain surfactant in fixed amounts. In addition to the droplet composition, the evaporation rate of a droplet also depends on environmental conditions and factors such as temperature, relative humidity, droplet volume and the contact angle which the droplet makes with the surface. A specific analysis was carried out in [3], where the authors examined the drying time of a deposited droplet at two different temperatures, namely 25°C and 40°C, which represent an air-conditioned room and a summer afternoon, respectively. The contact angle and humidity were set at 30° and 50%. Studying the drying time of a droplet plays an important role, as it is closely related to the survival of the droplet and in turn to the growth of the pandemic. [5] tested this hypothesis using droplets suspended in air, whereas [3] compared the growth of infection with the drying time in different cities globally. They verified that for a 5 nL droplet, a higher drying time corresponds to a higher pandemic growth rate. Hence, when a droplet evaporates slowly, the chance of survival of the virus is enhanced.
Specifically, the initial contact angle, which measures the angle that a droplet makes with the surface, plays a big role in determining its lifetime. Different contact angles are predominant on different surfaces, i.e., droplets on glass, wood, stainless steel, cotton or the touchscreen of a smartphone tend to make angles varying from 5° to 95°. It is also intuitive that a contact angle cannot exceed 180°. Fig 1 contains pictorial representations of two different droplets making different angles with the surface. The left image shows a water droplet on cloth, making a high contact angle due to the hydrophobic property of the cloth, whereas the image on the right shows a water droplet on a lotus leaf, again making a high contact angle. Both images are borrowed from Wikipedia under the license CC BY-SA 3.0.
A dynamic contact angle is the one measured as the droplet changes its size while moving quickly over the surface. One may again refer to [3] or [1] for more details. However, measuring such contact angles (initial or dynamic) involves considerable effort and cost, as it has to be carried out using heavy apparatus. Some of the existing methods for contact angle determination include the sessile droplet method, where the angle is measured using a "contact angle goniometer"; the pendant drop method, which is used to measure angles for pendant drops; the dynamic sessile drop method, which is similar to the sessile drop method but requires the drop to be modified; and the single-fiber meniscus method, where the shape of the meniscus on the fiber is directly imaged through a high-resolution camera. One may refer to [6] for an overview of other techniques. Hence, to estimate a dynamic contact angle of a droplet, a reduction in the number of observations required to carry out the estimation is highly beneficial. In this paper, we thus introduce an appropriate sequential estimation technique. Now, since the aim of this paper is to estimate a certain contact angle, it makes more sense to apply a circular model rather than a usual linear one to the data at hand. Literature on such models is vast and ever expanding. A few other examples where a circular model is appropriate involve orientations of the major axis of termite mounds, the angles of slope of different sedimentary layers of an exposed rock face, or the walking directions of long-legged desert ants. In all these examples, the observations are either directions or angles measured in degrees or radians. Such observations are often measured either clockwise or counter-clockwise from some reference direction, usually called the zero direction. Over the years, a usual technique to design new circular distributions has been to wrap a linear distribution over a full circle. However, as seen before, since the contact angle of any droplet is necessarily less than 180°, an adjusted model capable of taking values only on half a circle seems more appropriate. In this context, we introduce a new model called the half-circular wrapped-exponential distribution to model our data. In general, a few notable books covering circular models which one can refer to are [7-10], among others.
Data modeling and analysis
The particular dataset analyzed for this experiment is a pseudo dataset, an extended version of the one borrowed from [1], and consists of the temporal variations of the dynamic contact angles in degrees (simply called contact angles from now on) of the droplet normalized with the initial contact angle, θ/θ_0. The particular setting used for this experiment is as follows: the relative humidity (RH) is controlled at 50%, the initial droplet volume (V_0) is 10 nL, the molality of the saliva (M) is 0.154 mol/kg, the temperature (T) is 30°C, the surfactant parameter (C) is 10 and the initial contact angle (θ_0) is 50°. One may refer to Fig 2a in [1] for a pictorial description of the dataset. As we did not have access to the actual observations, we adopted the following approach: for brevity alone, we only focused on the curve representing RH = 50%. Using an online tool, we extracted the (x, y) coordinates for each of its 20 observations. We converted these normalized contact angles to actual contact angles (θ) and finally translated those into radians. Table 1 lists all these observations for convenience. Now, to include more observations in the analysis, we first assumed a functional relationship between "time" and "contact angle" (CA), fitted several polynomial regression models and picked the following third-order model, which fitted best with an R² value of 0.9613,
CA = 0.985 - 8.45\times 10^{-3}\,\text{time} + 2.34\times 10^{-5}\,\text{time}^2 - 2.05\times 10^{-8}\,\text{time}^3 . \qquad (1)
Fig 2 contains a scatterplot of the raw data (a) and the fitted polynomial regression model superimposed on it (b). We then assumed a vector of times ranging over 5-300 seconds with a jump of 1 second in between, and predicted the contact angles according to the above model. Thus, our final pseudo dataset consists of 296 observations according to our construction.
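For readers who wish to reproduce this construction, the following short sketch (written in Python for illustration; the paper's own analysis was carried out in R) refits the cubic model to the Table 1 observations and generates the 296-point pseudo dataset.

# Python sketch: cubic fit, cf. Eq. (1), to the 20 extracted observations of Table 1
# and prediction of contact angles on a 5-300 s grid (296 pseudo observations).
import numpy as np

time_obs = np.array([10, 25, 55, 58.75, 66.25, 77.25, 83.15, 88.75, 100, 118.75,
                     137.5, 175, 212.5, 250, 287.5, 325, 381.25, 437.5, 493.75, 550])
ca_obs = np.array([0.811, 0.794, 0.689, 0.654, 0.593, 0.471, 0.436, 0.379, 0.261,
                   0.218, 0.157, 0.109, 0.052, 0.035, 0.034, 0.031, 0.028, 0.026,
                   0.023, 0.020])

coeffs = np.polyfit(time_obs, ca_obs, deg=3)   # third-order model
time_grid = np.arange(5, 301)                  # 5, 6, ..., 300 seconds
pseudo_ca = np.polyval(coeffs, time_grid)      # the 296 predicted contact angles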
A half-circular wrapped-exponential model for the contact angles
For a start, Fig 3 shows a pictorial distribution of our pseudo data placed around a circle. Purposefully, we have stacked the closely lying observations for better visualization, and as one can observe, all the observations lie entirely between 0 and π/2 radians. As seen before, wrapping a linear density over a circle is a suitable choice to model such observations. In this case, since the curve seen in Fig 2 shows an exponential decline, it makes sense to choose some of the lifetime distributions and wrap them around a circle. Now, as discussed before, since any contact angle of a droplet is always less than π radians, it makes more sense to fit a distribution which takes values only on a semicircle. In the literature, not many such distributions have been proposed. One such example is the half-circular distribution introduced by [11], who converted a Gamma distribution to a half-circular one and fitted it to the angle which measures the posterior corneal curvature of an eye. In a similar spirit, we now introduce a half-circular wrapped-exponential (HCWE) distribution with parameter λ. An
intuitive construction is through the following transformation: X_w = X (mod π), where X is a linear exponential random variable with pdf f(x) = λe^{−λx}, x > 0, λ > 0. Interestingly, another easy construction is to simply truncate X over [0, π). Its pdf, cdf and characteristic functions are as follows,
f_w(\theta) = \frac{\lambda e^{-\lambda\theta}}{1 - e^{-\pi\lambda}}, \quad \theta \in [0, \pi) \qquad (2)
F_w(\theta) = \frac{1 - e^{-\lambda\theta}}{1 - e^{-\pi\lambda}}, \quad \theta \in [0, \pi) \qquad (3)
\varphi_p = \frac{1}{1 - i p/\lambda}, \quad p = 0, \pm 1, \pm 2, \ldots \qquad (4)
Consequently, the mean direction happens to be,
\mu_0 = \tan^{-1}\left(\frac{1}{\lambda}\right), \quad \lambda > 0 \qquad (5)
Now, for a comparison, we also fit several other wrapped distributions to the data, namely the wrapped-exponential by [12], the transmuted wrapped-exponential by [13] and the wrapped-Lindley by [14]. For completeness, we also fit a von Mises distribution, which is one of the most widely used circular models. Table 2 contains the log-likelihood values and the AICs for these five models. As one can observe, the half-circular wrapped-exponential model fits better than the others. It is also seen to be a significant fit with a p-value of 0.18 from the Kolmogorov-Smirnov test, and the estimated λ value equals 3.69. On using Eq (5), the estimated mean direction equals 0.2646 radians. Fig 4 contains a set of goodness of fit plots for the HCWE(λ) distribution. All these fits and plots were carried out using the "circular" and "fitdistrplus" packages in R. Now, since in practice the value of λ will be unknown, we develop a sequential fixed-width confidence interval to estimate λ, which in turn will give us an estimate of the mean direction μ_0 of the contact angle and, thereby, a fair idea about the drying time of the droplet.
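A compact way to reproduce the HCWE fit is sketched below (Python used for illustration only, not the authors' R workflow; the parametrization is the truncated-exponential form of Eq. (2), and the reported λ̂ ≈ 3.69 is not re-derived here).

# Python sketch: maximum-likelihood fit of the HCWE density of Eq. (2) -- an
# exponential truncated to [0, pi) -- and the mean direction of Eq. (5).
import numpy as np
from scipy.optimize import minimize_scalar

def hcwe_negloglik(lam, theta):
    n = len(theta)
    return -(n * np.log(lam) - lam * np.sum(theta)
             - n * np.log(1.0 - np.exp(-np.pi * lam)))

def fit_hcwe(theta):
    res = minimize_scalar(hcwe_negloglik, bounds=(1e-3, 50.0),
                          args=(np.asarray(theta),), method="bounded")
    lam_hat = res.x
    return lam_hat, np.arctan(1.0 / lam_hat)   # (lambda_hat, mean direction mu_0)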
A sequential fixed-width confidence interval
In general, a sequential rule consists of identifying a stopping variable, which determines the optimal sample size to be used in the experiment. This technique largely reduces the number of observations needed for the inference part, which proves to be beneficial as it reduces both time and cost. Literature on sequential estimation methodologies is vast and still being explored. In particular, a few recent works aimed at finding appropriate confidence intervals include, [15], who developed a general sequential fixed-accuracy confidence interval, [16], who looked at constructing bounded length intervals, [17,18] who constructed fixed-accuracy intervals for parameters under an inverse Gaussian and bivariate exponential models or [19], who derived fixed-accuracy intervals for the reliability parameter of an exponential distribution.
To summarize, a fixed-width interval (FWI) aims at simultaneously controlling the width of the interval (say, d) and the confidence level (1 − α). Such an interval is clearly symmetric around the parameter. It turns out that no fixed-sample-size procedure can tackle this problem, and one has to resort to a sequential setup. However, a certain drawback of this method is that, even when the parameter is strictly positive, the lower bound of a FWI can assume negative values. A fix is to construct a fixed-accuracy interval (FAI), which assumes a fixed accuracy value (say, d). A FAI happens to be symmetric around the log of the parameter. An introductory paper on this approach is [15]. Even in this case it may happen that, if the parameter space is bounded (say, from above by U), a FAI may contain bounds which cross U. Hence, [15] came up with a bounded-length fixed-accuracy interval (BLFAI) as a fix. In our case, we aim at constructing a fixed-width interval, as outlined next.
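For orientation only, the two interval shapes can be contrasted as below; the multiplicative form shown for the fixed-accuracy interval is the common choice in this literature and is an assumption of this sketch rather than a quotation from [15].

# Python sketch: fixed-width versus fixed-accuracy interval shapes for an estimate lam_hat.
def fixed_width_interval(lam_hat, d):
    # symmetric around lam_hat; the lower bound can become negative
    return (lam_hat - d, lam_hat + d)

def fixed_accuracy_interval(lam_hat, d):
    # d > 1; symmetric around log(lam_hat), so both bounds stay positive
    return (lam_hat / d, lam_hat * d)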
Let θ 1 , θ 2 , . . . be the dynamic contact angles of a droplet, measured using a suitable technique. Then, for some pre-fixed width d, a confidence interval of λ takes the following form,
I_n = \{\lambda : \lambda \in [\hat{\lambda}_n - d, \hat{\lambda}_n + d]\}, \qquad (6)
where \hat{\lambda}_n is the MLE of λ, which is consistent and asymptotically normal with the following representation,
\sqrt{n}\,(\hat{\lambda}_n - \lambda) \xrightarrow{D} N(0, s^2_{\hat{\lambda}_n}), \qquad (7)
where s^2_{\hat{\lambda}_n} is the variance of the MLE and \xrightarrow{D} stands for convergence in distribution. Now, for I_n to include λ with a pre-fixed coverage probability 1 − α, the required fixed sample size can be found as follows,
P(\hat{\lambda}_n - d \le \lambda \le \hat{\lambda}_n + d) = 1 - \alpha \;\Longrightarrow\; n \ge n^*_d = \left(\frac{z_{\alpha/2}}{d}\right)^2 s^2_{\hat{\lambda}_n}, \qquad (8)
where z_{α/2} is the upper 100(α/2)% point of a standard normal distribution. Since n^*_d is an unknown quantity, we now propose the following sequential methodology: we first fix an integer m (> 1), often called the "pilot sample size", and obtain a pilot sample θ_1, θ_2, ..., θ_m from the HCWE(λ) density given in Eq (2). We then collect one additional observation at every stage, until sampling is terminated according to the following stopping rule:
N = \inf\left\{ n \ge m : n \ge \left(\frac{z_{\alpha/2}}{d}\right)^2 \hat{s}^2_{\hat{\lambda}_n} \right\}, \qquad (9)
where \hat{s}^2_{\hat{\lambda}_n} is the estimated variance of the MLE. We then have a final set of observations θ_1, θ_2, ..., θ_N and estimate λ using the interval,
I_N = [\hat{\lambda}_N - d, \hat{\lambda}_N + d] = [L_N, U_N] \;(\text{say}). \qquad (10)
The stopping variable N from Eq (9) follows properties such as asymptotic first-order efficiency and asymptotic consistency. We leave out the proofs for brevity. One may refer to Theorem 3.1 of [17]. Finally, we estimate the mean direction μ 0 using an interval,
J_N = \left[ \tan^{-1}\!\left(\frac{1}{U_N}\right),\; \tan^{-1}\!\left(\frac{1}{L_N}\right) \right]. \qquad (11)
We now outline a stepwise procedure to tackle a practical problem through the above methodology.
Step 1: For a certain specific liquid droplet, observe the contact angles over equally spaced time intervals and note down the first m angles (θ 1 , θ 2 , . . ., θ m ) over the first m time points t 1 , t 2 , . . ., t m .
Step 2: After t m , collect observations (i.e. observe contact angles) one-at-a-time according to the stopping rule given in Eq (9).
Step 3: Once the stopping rule is executed, observe the value of N, find out an interval for λ as per Eq (10) and ultimately find a subsequent interval for the mean direction μ 0 according to Eq (11).
Step 4: Using the interval for μ_0, find a rough interval for the average drying time of the droplet by predicting from the following inverted polynomial regression model (R² = 0.98), obtained by treating "time" as the response and "contact angle" as the predictor:
\text{time} = 266.96 - 872.293\,CA + 1329.892\,CA^2 - 763.05\,CA^3 .
Hence, for our complete pseudo data, \hat{\lambda}_n = 3.69, \hat{\mu}_0 = 0.2646 and the estimated drying time equals 115.13 seconds. We now apply the above procedure to our observed pseudo data with a small adjustment: we first randomize the entire data, sample 250 observations, and sort them. This kind of approach gives a good representation of the actual data in every simulation. We consider several fixed values of d ranging from 0.05 to 0.6 over roughly equally spaced intervals. We fix the pilot sample size m = 5 and the significance level α = 0.05. After implementing the sequential rule of Eq (9) with a particular choice of d, we obtain the confidence interval for λ and in turn report the interval for μ_0, and finally an interval for the average drying time of the droplet. Since the procedure has to be solved numerically, all the analyses were again carried out using the "fitdistrplus" package in R.
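The stopping rule of Eqs. (9)-(11) can be mimicked with a few lines of code. The sketch below (again Python, not the authors' R implementation) estimates the asymptotic variance from the Fisher information of the truncated exponential; that variance formula is an assumption of this illustration rather than an expression quoted in the paper, and fit_hcwe is reused from the earlier sketch.

# Python sketch: purely sequential fixed-width interval of Eqs. (9)-(11) applied
# to a stream of contact angles (in radians).
from scipy.stats import norm

def hcwe_asym_var(lam):
    # inverse per-observation Fisher information of the exponential truncated to [0, pi)
    t = np.exp(-np.pi * lam)
    return 1.0 / (1.0 / lam**2 - np.pi**2 * t / (1.0 - t)**2)

def sequential_fwi(theta_stream, d, alpha=0.05, m=5):
    z = norm.ppf(1.0 - alpha / 2.0)
    sample = list(theta_stream[:m])
    k = m
    while True:
        lam_hat, _ = fit_hcwe(sample)
        if k >= (z / d)**2 * hcwe_asym_var(lam_hat) or k >= len(theta_stream):
            break
        sample.append(theta_stream[k])
        k += 1
    L, U = lam_hat - d, lam_hat + d                                   # Eq. (10)
    return k, (L, U), (np.arctan(1.0 / U), np.arctan(1.0 / L))        # Eq. (11)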
A few takeaway points from Table 3 are the following: as one increases d, naturally, the width of the desired interval increases and, as a result, fewer observations are required to achieve the prescribed confidence level (α = 0.05 in this case). Also, for increasing N, the intervals for the drying time widen and are seen to approach the actual estimated drying time of 115.13 seconds. But, of course, a larger sample size comes with a cost and hence one needs to strike a proper balance.
Conclusion
In this paper we have established a sequential confidence interval methodology to estimate the dynamic contact angle of a sessile saliva drop. This will help researchers and practitioners to build an idea about the growth of the pandemic, either in general or by focusing on specific countries. Since a contact angle has to be measured using heavy-duty apparatus, a sequential rule also appears to be beneficial by offering a reduction in time and cost. We introduced a new circular model, called the half-circular wrapped-exponential distribution, to model the angles, which can only spread over half a circle. This new model was seen to fit better than some of the existing ones in the literature. Depending on the width d of the interval fixed by the experimenter, the mean contact angle of the droplet was seen to be between 0.41 and 0.56 radians, or 23.49 and 32.08 degrees. On the other hand, the drying time of the saliva droplet was seen to be between 61 and 80 seconds.
Fig 1. Water droplets making contact angles greater than 90° on two different surfaces. Both images are borrowed from Wikipedia under the license CC BY-SA 3.0. (a) Water drop on cloth. (b) Water drop on a lotus leaf.
Fig 2. Temporal variations of the contact angles: (a) plot of the raw data; (b) superimposed polynomial model.
Fig 3. Raw circular plot of the pseudo data.
Fig 4. Goodness of fit plots for the HCWE(λ) density on the pseudo data.
Table 1. Extracted dataset containing the temporal variations of the contact angles (in radians).

Time (sec)   CA      Time (sec)   CA      Time (sec)   CA
10           0.811   88.75        0.379   287.5        0.034
25           0.794   100          0.261   325          0.031
55           0.689   118.75       0.218   381.25       0.028
58.75        0.654   137.5        0.157   437.5        0.026
66.25        0.593   175          0.109   493.75       0.023
77.25        0.471   212.5        0.052   550          0.020
83.15        0.436   250          0.035
Table 2. Comparing model fits to the pseudo data.

Model                                Log-likelihood   AIC
von Mises                            -36.72           77.44
wrapped-exponential                  90.75            -179.50
half-circular wrapped-exponential    92.56            -181.52
transmuted wrapped-exponential       -7.01            18.02
wrapped-Lindley                      89.94            -177.88
Acknowledgments
The author sincerely thanks the anonymous referee and the Editor for their valuable comments and suggestions.
Author Contributions
1. Balusamy S, Banerjee S, Sahu KC. Lifetime of sessile saliva droplets in the context of SARS-CoV-2. Int J Heat Mass Transf. 2021; (123).
2. Mittal R, Ni R, Seo JH. The flow physics of COVID-19. J Fluid Mech. 2020; (894).
3. Bhardwaj R, Agrawal A. Likelihood of survival of coronavirus in a respiratory droplet deposited on a solid surface. Phys Fluids. 2020; 32(6).
4. Vejerano EP, Marr LC. Physico-chemical characteristics of evaporating respiratory fluid droplets. Interface. 2018; 15(139).
5. Chaudhuri S, Basu S, Kabi P, Unni VR, Saha A. Modeling ambient temperature and relative humidity sensitivity of respiratory droplets and their role in determining growth rate of COVID-19 outbreaks. Phys Fluids. 2020; 32.
6. Albert E, Tegze B, Hajnal Z, Zámbó D, Szekrényes DP, Déak A, et al. Robust contact angle determination for needle-in-drop type measurements. ACS Omega. 2019; 4:18465-18471.
7. Mardia KV, Jupp PE. Directional statistics. 2nd Ed. New York: Wiley.
8. Rao JS, Sengupta A. Topics in circular statistics. New York: World Scientific.
9. Rao AVD, Girija SVS. Angular statistics. Boca Raton: CRC Press.
10. Ley C, Verdebout T. Applied directional statistics. Boca Raton: CRC Press.
11. Rambli A, Mohamed I, Shimizu K, Ramli N. A half-circular distribution on a circle. Sains Malay. 2019; 48(4):887-892.
12. Rao JS, Kozubowski TJ. New families of wrapped distributions for modeling skew circular data. Comm Stat Theo Meth. 2004; 33(9):2059-2074.
13. Yilmaz A, Biçer C. A new wrapped exponential distribution. Math Sci. 2018; (12):285-293.
14. Joshi S, Jose KK. Wrapped Lindley distribution. Comm Stat Theo Meth. 2018; 47(5):1013-1021.
15. Banerjee S, Mukhopadhyay N. A general sequential fixed-accuracy confidence interval estimation methodology for a positive parameter: Illustrations using health and safety data. Ann Inst Stat Math. 2016; (68):541-571.
16. Mukhopadhyay N, Banerjee S. Purely sequential and two-stage bounded-length confidence intervals for the Bernoulli parameter with illustrations from health studies and ecology. In: Choudhary PK et al. (eds.), Ordered Data Analysis, Modeling and Health Research Methods. Springer Proceedings in Mathematics & Statistics 149.
17. Bapat SR. On purely sequential estimation of an inverse Gaussian mean. Metrika. 2018; (81):1005-1024.
18. Bapat SR. Purely sequential fixed accuracy confidence intervals for P(X > Y) under bivariate exponential models. Am J Math Manag Sci. 2018; (37):386-400.
19. Khalifeh A, Mahmoudi E, Chaturvedi A. Sequential fixed-accuracy confidence intervals for the stress-strength reliability parameter for the exponential distribution: two-stage sampling procedure. Comput Stat. https://doi.org/10.1007/s00180-020-00957-5
| []
|
[
"HW ± /HZ + 0 and 1 jet at NLO with the POWHEG BOX interfaced to GoSam and their merging within MiNLO",
"HW ± /HZ + 0 and 1 jet at NLO with the POWHEG BOX interfaced to GoSam and their merging within MiNLO"
]
| [
"Gionata Luisoni [email protected] ",
"Paolo Nason [email protected] ",
"Carlo Oleari [email protected] ",
"Francesco Tramontano [email protected] ",
"\nMax-Planck Institut für Physik\nINFN\nFöhringer 6, Sezione di Milano Bicocca, Piazza della Scienza 3D-80805, 20126Munich, MilanoGermany, Italy\n",
"\nMilano-Bicocca and INFN\nUniversità di\nSezione di Milano-Bicocca Piazza della Scienza 320126MilanoItaly\n",
"\nFederico II\" and INFN, Sezione di Napoli\nComplesso di Monte Sant'Angelo, via Cintia\nUniversità di Napoli \"\n80126NapoliItaly\n"
]
| [
"Max-Planck Institut für Physik\nINFN\nFöhringer 6, Sezione di Milano Bicocca, Piazza della Scienza 3D-80805, 20126Munich, MilanoGermany, Italy",
"Milano-Bicocca and INFN\nUniversità di\nSezione di Milano-Bicocca Piazza della Scienza 320126MilanoItaly",
"Federico II\" and INFN, Sezione di Napoli\nComplesso di Monte Sant'Angelo, via Cintia\nUniversità di Napoli \"\n80126NapoliItaly"
]
| []
| We present a generator for the production of a Higgs boson H in association with a vector boson V = W or Z (including subsequent V decay) plus zero and one jet, that can be used in conjunction with general-purpose shower Monte Carlo generators, according to the POWHEG method, as implemented within the POWHEG BOX framework.We have computed the virtual corrections using GoSam, a program for the automatic construction of virtual amplitudes. In order to do so, we have built a general interface of the POWHEG BOX to the GoSam package. With this addition, the construction of a POWHEG generator within the POWHEG BOX is now fully automatized, except for the construction of the Born phase space. Our HV + 1 jet generators can be run with the recently proposed MiNLO method for the choice of scales and the inclusion of Sudakov form factors. Since the HV production is very similar to V production, we were able to apply an improved MiNLO procedure, that was recently used in H and V production, also in the present case. This procedure is such that the resulting generator achieves NLO accuracy not only for inclusive distributions in HV + 1 jet production but also in HV production, i.e. when the associated jet is not resolved, yielding a further example of matched calculation with no matching scale. | 10.1007/jhep10(2013)083 | [
"https://arxiv.org/pdf/1306.2542v2.pdf"
]
| 119,211,671 | 1306.2542 | 83821431f01856e25ad65a19ff3c39545537ef0d |
HW ± /HZ + 0 and 1 jet at NLO with the POWHEG BOX interfaced to GoSam and their merging within MiNLO
1 Oct 2013
Gionata Luisoni [email protected]
Paolo Nason [email protected]
Carlo Oleari [email protected]
Francesco Tramontano [email protected]
Max-Planck Institut für Physik
INFN
Föhringer 6, Sezione di Milano Bicocca, Piazza della Scienza 3D-80805, 20126Munich, MilanoGermany, Italy
Milano-Bicocca and INFN
Università di
Sezione di Milano-Bicocca Piazza della Scienza 320126MilanoItaly
Federico II" and INFN, Sezione di Napoli
Complesso di Monte Sant'Angelo, via Cintia
Università di Napoli "
80126NapoliItaly
HW ± /HZ + 0 and 1 jet at NLO with the POWHEG BOX interfaced to GoSam and their merging within MiNLO
1 Oct 2013. Preprint typeset in JHEP style - PAPER VERSION. Keywords: QCD, Hadronic Colliders, Higgs boson
We present a generator for the production of a Higgs boson H in association with a vector boson V = W or Z (including subsequent V decay) plus zero and one jet, that can be used in conjunction with general-purpose shower Monte Carlo generators, according to the POWHEG method, as implemented within the POWHEG BOX framework.We have computed the virtual corrections using GoSam, a program for the automatic construction of virtual amplitudes. In order to do so, we have built a general interface of the POWHEG BOX to the GoSam package. With this addition, the construction of a POWHEG generator within the POWHEG BOX is now fully automatized, except for the construction of the Born phase space. Our HV + 1 jet generators can be run with the recently proposed MiNLO method for the choice of scales and the inclusion of Sudakov form factors. Since the HV production is very similar to V production, we were able to apply an improved MiNLO procedure, that was recently used in H and V production, also in the present case. This procedure is such that the resulting generator achieves NLO accuracy not only for inclusive distributions in HV + 1 jet production but also in HV production, i.e. when the associated jet is not resolved, yielding a further example of matched calculation with no matching scale.
Introduction
Higgs boson production in association with a vector boson (HV production from now on) is an interesting channel for Higgs boson studies at the LHC. On one hand, it seems to be the only available channel to study the Higgs branching to bb, or to set limits on the Higgs branching into invisible particles. In particular, in the HV process with the Higgs boson decaying into a bb pair, the CMS experiment has reported an excess of events over the background of 2.2 standard deviations that is consistent with a Higgs boson [1] in the 7 TeV data. The ATLAS experiment is not reporting any excess, but is setting a limit above 1.9 standard deviations from the Standard Model prediction [2]. The CDF and D0 Collaborations have reported evidence for an excess of events, at the 3.1 standard deviation level, in the search for the Standard Model Higgs boson in the HV process with the Higgs decaying to bb [3,4,5]. Looking for invisible Higgs boson decays in HZ associated production, the ATLAS experiment is setting 95% confidence level limits on a 125 GeV Higgs boson decaying invisibly with a branching fraction larger than 65%. Searches for W H → W W W ( * ) have also been carried out by both ATLAS [6] and CMS [7], and the Higgs boson decay into ττ pairs has also been studied [8].
A POWHEG [9] generator for the HV has been presented in ref. [10], and is often used for simulating the signal in the experimental analysis. It is developed within the HERWIG++ framework [11].
The aim of this paper is twofold:
1. to present generators for the HV and HV + 1 jet processes in the POWHEG BOX [12,13] framework, which produces next-to-leading order+parton shower (NLO+PS) event generators. In the following we will refer to these generators as HV and HVJ, respectively. These generators can be interfaced with any parton shower compliant with the Les Houches Interface for User Processes [14,15], like PYTHIA [16], Pythia8 [17], HERWIG [18] and HERWIG++ [11].
2. To illustrate a new interface of the POWHEG BOX to the GoSam [19] package, that allows for the automatic generation of the virtual amplitudes. In order to achieve this, the POWHEG BOX interface to MadGraph4 of ref. [20] was extended to produce also a file that can be passed to GoSam in order to generate the virtual amplitudes. Using this new tool, the generation of all matrix elements is performed automatically, and one only needs to supply the Born phase space in order to build a POWHEG process.
In our HV + 1 jet generators, we apply the improved version of the MiNLO procedure [21] discussed in ref. [22]. In [22] it was shown that, by applying this procedure to the NLO production of a color-neutral object in association with one jet, one can reach NLO accuracy for quantities that are inclusive in the production of the color-neutral system, i.e. when the associated jet is not resolved. In the present case, the HV j process can be viewed as the production of a virtual vector boson, that decays into the HV pair, accompanied by one jet. Since vector-boson production was explicitly considered in ref [22], the same MiNLO procedure used in that context can be transported and applied to the present case.
The HVJ+MiNLO generator that we build can then replace the HV generator, since it has the same NLO accuracy, and in addition it is NLO accurate in the production of the hardest jet. We are then able to produce a matched calculation, with no matching scale, without actually merging different samples. In addition to this, it turns out that it is possible to extend the precision of our HVJ+MiNLO generator so as to reach next-to-next-to-leading order+parton shower (NNLO+PS) accuracy for inclusive HV distributions. This can be achieved following the procedure outlined in ref. [22], i.e. by rescaling the HVJ+MiNLO results with the NNLO calculation for HV production, already available in the literature [23]. We recall that higher accuracy for the production of an associated jet is particularly useful in contexts where an extra jet is vetoed, like in the search for invisible Higgs boson decays, or, in general, when events are also classified according to the number of jets. In this paper, we will not pursue the extension of our calculation to NNLO+PS accuracy, postponing a phenomenological study of this to a future publication.
The organization of the paper is the following: in sec. 2 we describe the new GoSam -POWHEG BOX interface. In sec. 3 we give more details about the virtual contribution and in sec. 4 we briefly review the MiNLO procedure for the present case. In sec. 5 we compare the improved HVJ+MiNLO outputs with the HV ones, and discuss a few phenomenological results. Finally in sec. 6 we summarize our findings. Instructions on how to generate a new virtual code using the GoSam -POWHEG BOX interface and how to drive the GoSam program are collected in Appendixes A and B.
The GoSam -POWHEG BOX interface
The code for the calculation of the one-loop corrections to HV j production has been generated using GoSam interfaced to the POWHEG BOX. GoSam is a python framework, coupled to a template system, for the automatic generation of fortran95 code for the evaluation of virtual amplitudes. The one-loop virtual corrections are evaluated using algebraic expressions of D-dimensional amplitudes based on Feynman diagrams. The diagrams are initially generated using QGRAF [24]. GoSam allows one to select the relevant diagrams, which are processed with FORM [25], using the SPINNEY [26] package. The processing amounts to organizing the numerical computation of the amplitudes in terms of their numerators, which are functions of the loop momentum. Finally, the manipulated algebraic expressions of the one-loop numerators are optimized and converted into fortran95 code (1) and merged into the code generated by GoSam.
When integrating over the phase space, the virtual matrix elements are evaluated with SAMURAI [28] or, alternatively, with Golem95 [29]. The first program evaluates the amplitude using integrand-reduction methods [30] extended to D dimensions [31], whereas the latter allows one to compute the same amplitude by evaluating tensor integrals. The interplay of the two reduction strategies is used to guarantee the highest speed of the produced codes, while keeping the precision required for good numerical stability of the results. For the evaluation of the scalar one-loop integrals, QCDLoop [32,33], OneLOop [34] or Golem95C [29] can be used.
Binoth-Les-Houches-Accord interface
GoSam has an interface to generic external Monte Carlo (MC) programs based on the Binoth-Les-Houches-Accord (BLHA) [35], which sets the standard for the communication between an MC program and a general One-Loop Program (OLP). We have developed an interface based on the BLHA for the POWHEG BOX MC program as well, and the computation we present here is its first application. In the following we explain the basic features of this interface.
The communication between MC and OLP in the BLHA has two separate stages: a pre-running phase and a running-time phase [35].
1. In the pre-running phase, during code generation, the MC writes an order file which contains all the basic information about the amplitudes that should be generated and computed by the OLP. Among these are the powers of the strong and electromagnetic couplings, the type of corrections (i.e. whether the OLP should generate QCD, QED or EW loop corrections), information on the helicity and color treatment (average, sum, . . . ), and finally the full list of the partonic subprocesses for which the virtual one-loop amplitudes are required. The OLP reads the order file and generates the needed amplitudes, together with the code to evaluate them. Furthermore, information on the generated code is written by the OLP into a contract file, which is read in again by the MC at every run. The structure of the contract file is similar to that of the order file: the requests of the MC, contained in the order file, are either confirmed by an "OK" label or rejected. This makes it possible to verify that the requests contained in the order file can be satisfied and that the codes are coherent, or to detect potential problems due to a misunderstanding of the MC requests by the OLP. On top of this, the list of partonic processes is rewritten in the contract file with a numerical label which uniquely identifies each subprocess. This label will be used at running time by the MC to obtain the amplitude of a specific subprocess from the OLP. A schematic example of these two files is sketched after this list.
2. At running time, the MC reads in the contract file once and then communicates with the OLP via two standard subroutines: the first one is responsible for the initialization of the OLP, whereas the second one is called for the evaluation of the virtual corrections of a specific partonic subprocess at a given phase-space point. As a reply, the OLP provides an array containing the coefficients of the poles and the finite part of the virtual contribution.
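For orientation, the two files exchanged in the pre-running phase look schematically as follows. This is only an illustrative sketch: the keyword names, the power assignments and the subprocess labels shown here are not taken from the actual files of this paper and are fixed in practice by the BLHA standard [35] and by the codes themselves.

    # order file written by the MC (schematic, illustrative values)
    CorrectionType           QCD
    IRregularisation         CDR
    AlphasPower              1
    AlphaPower               3
    # requested partonic subprocesses (PDG codes), "initial -> final"
    2 -2 -> 23 25 21
    21 2 -> 23 25 2

    # contract file returned by the OLP (schematic)
    CorrectionType           QCD | OK
    AlphasPower              1   | OK
    2 -2 -> 23 25 21             | 1 0
    21 2 -> 23 25 2              | 1 1

In the contract file, each confirmed subprocess line carries the numerical label (here 0 and 1) that the MC later uses to request that amplitude at run time.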
We have extended the already existing interface [20] of the POWHEG BOX to MadGraph4 in order to write an order file and read a contract file. Together with the order file, GoSam needs a further input card: this is a file, called gosam.rc, where the user can specify further details on the generation of the virtual contributions. Among these are, for example, the number of available CPUs or the treatment of classes of gauge-invariant diagrams.
This setup allows for a completely automated generation of all the matrix elements needed by the POWHEG BOX for the computation of the QCD corrections to any Standard Model process. The limitation is, of course, the computing power of present-day computers.
In Appendix A the reader can find a detailed description of the steps needed for the generation of a new process, whereas a list of options for the GoSam input card can be found in Appendix B.
The virtual corrections
In this section we would like to comment on the structure of the virtual diagrams contributing to HV j production. Typical tree-level diagrams for HW j and HZj production are illustrated in fig. 1. Similar ones can be drawn for the real-radiation contributions. In all these contributions, the Higgs boson is radiated off the vector boson. In fact, we consider all quarks to be massless, with the only exception of the top quark, which runs in fermionic loops.
The virtual corrections can be separated into three different classes:
(a) In the first class, we can accommodate the one-loop Higgs-Strahlung-type diagrams, with no closed fermionic loop. A sample of diagrams belonging to this class is depicted in fig. 2. These diagrams are similar to the virtual diagrams for H/W/Z production, with the addition of an extra parton in the final state.

(b) A sample of diagrams belonging to the second class of virtual corrections is illustrated in fig. 3. In this figure we have plotted Higgs-Strahlung-type diagrams in which a quark loop is present, and the Z boson couples to the internal quark. No such diagrams are present for W production, since the flavour running in the loop must be conserved. These contributions vanish by charge-conjugation invariance (Furry's theorem) when they couple to a vector current. For axial currents, they cancel in pairs of up-type and down-type quarks, because they have opposite axial coupling, as long as the loops of different flavours can be considered massless. Thus, the up-quark contribution cancels against the down-quark one, and, since we treat the charm as massless, its contribution cancels against the strange one. Only the difference between the diagrams with a massive top-quark and a massless bottom-quark loop survives.

(c) In the last class, we have the Feynman diagrams where the Higgs boson couples directly to the massive top-quark loop. A sample of this type of diagrams is illustrated in figs. 4 and 5.

The Feynman diagrams belonging to the three classes are fully implemented in the POWHEG BOX code, and they are computed if the massivetop flag is set to 1 in the input file. The contributions of the diagrams belonging to classes (b) and (c) are, in general, very small. For example, in HZj production, with the setup described in sec. 5 and with transverse-momentum cuts on jets of 20 GeV, the total NLO cross section obtained by keeping only the virtual diagrams belonging to class (a) is 5.187(4) fb, while keeping all the virtual diagrams it is 5.254(4) fb. This behavior is reflected in more exclusive quantities, such as the transverse momentum and rapidity of the HZ pair or of the H and Z bosons. In fig. 6, we compare the rapidity distributions of the HZ system (left plot) and of the Z boson (right plot), obtained by including the virtual diagrams with the top-quark loop and by neglecting them. In fig. 7 we show a similar comparison for the transverse-momentum distributions of the HZ pair and of the Higgs boson.

In all the several observables that we have examined, we find differences of the order of 1-2%, with the exception of distributions related to the HZ transverse momentum (or to the leading-jet p_T), which display a slightly larger difference, increasing with the transverse momentum. Observe that, in this case, our generator would still include correctly the effect of the diagrams of classes (b) and (c): the MiNLO correction affects their contribution only at small transverse momentum, where they are negligible (see sec. 4 for more details).

Since the contributions of the diagrams belonging to classes (b) and (c) are at the level of a few percent for inclusive and for typical more exclusive distributions, the default behavior of the POWHEG BOX is to neglect them, i.e. the default value of the massivetop flag is 0. We can then apply in a straightforward way the improved MiNLO procedure to HV j production, as illustrated in sec. 4. We would like to point out that the diagrams belonging to classes (b) and (c) not only have a small impact on the cross sections, but they also contribute to the differential cross section with terms that are finite down to zero transverse momentum of the jet, i.e. they do not have the diverging behavior of the diagrams belonging to class (a).
The MiNLO procedure
The application of the MiNLO procedure to HV j production is fully analogous to the case of the V j generator presented in ref. [22], and based on ref. [21], if we keep only the virtual diagrams that belong to class (a), i.e. if the production mechanism is a Higgs-Strahlung one
$$pp \to V^{*} j\,, \qquad \text{with} \quad V^{*} \to HV \to H\, l_1 l_2\,, \tag{4.1}$$
with no top-quark loop involved.
In the MiNLO method, the NLO inclusive cross section for the computation of the underlying Born kinematics (the so-called $\bar B$ function in the POWHEG jargon) is modified with the inclusion of the Sudakov form factor and with the use of appropriate scales for the couplings, according to the formula
$$\bar B = \alpha_S(q_T)\,\Delta^2(M_{V^*}, q_T)\left[\, B\,\Big(1 - 2\Delta^{(1)}(M_{V^*}, q_T)\Big) + V + \int d\Phi_{\mathrm{rad}}\, R \,\right], \tag{4.2}$$
where $M_{V^*}$ is the virtuality of the vector boson before the Higgs-boson emission, i.e. $M_{V^*}^2 = (p_{l_1} + p_{l_2} + p_H)^2$, where $p_{l_1}$ and $p_{l_2}$ are the momenta of the leptons into which the $V$ boson decays, and $p_H$ is the Higgs boson momentum. The transverse momentum of $V^*$ is indicated with $q_T$. In eq. (4.2) we have stripped away one power of $\alpha_S$ from the Born ($B$), the virtual ($V$) and the real ($R$) contributions, and we have written it explicitly in front, with its scale dependence. The scale at which the remaining power of $\alpha_S$ in $R$, $V$ and $\Delta^{(1)}$ is evaluated, and the factorization scale used in the evaluation of the parton distribution functions, is again $q_T$. The Sudakov form factor $\Delta$ is given by
$$\Delta(Q, q_T) = \exp\left\{-\int_{q_T^2}^{Q^2} \frac{dq^2}{q^2}\left[A\big(\alpha_S(q^2)\big)\,\log\frac{Q^2}{q^2} + B\big(\alpha_S(q^2)\big)\right]\right\}, \tag{4.3}$$
and
$$\Delta(Q, q_T) = 1 + \Delta^{(1)}(Q, q_T) + \mathcal{O}\big(\alpha_S^2\big) \tag{4.4}$$
is the expansion of ∆ in powers of α S . The functions A and B have a perturbative expansion in terms of constant coefficients
$$A(\alpha_S) = \sum_{i=1}^{\infty} A_i\, \alpha_S^i\,, \qquad B(\alpha_S) = \sum_{i=1}^{\infty} B_i\, \alpha_S^i\,. \tag{4.5}$$
In the improved MiNLO approach, only the coefficients $A_1$, $A_2$, $B_1$ and $B_2$ are needed in order to have NLO accuracy also in inclusive HV distributions. Their values for the case at hand are given by [36,37,38]
$$A_1 = \frac{C_F}{2\pi}\,, \qquad A_2 = \frac{C_F}{4\pi^2}\,K\,, \qquad B_1 = -\frac{3\,C_F}{4\pi}\,, \qquad K = \left(\frac{67}{18} - \frac{\pi^2}{6}\right) C_A - \frac{5}{9}\, n_f\,, \tag{4.6}$$
$$B_2 = \frac{1}{2\pi^2}\left[\left(\frac{\pi^2}{4} - \frac{3}{16} - 3\zeta_3\right) C_F^2 + \left(\frac{11}{36}\pi^2 - \frac{193}{48} + \frac{3}{2}\zeta_3\right) C_F C_A + \left(\frac{17}{24} - \frac{\pi^2}{18}\right) C_F\, n_f\right] + 4\zeta_3\, (A_1)^2\,, \tag{4.7}$$
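As a quick numerical cross-check of the coefficients above, one can evaluate them directly. The following sketch assumes $n_f = 5$ light flavours and uses the grouping of eq. (4.7) as reconstructed here; it is only an illustration and is not part of the generator code:

    import math

    CF, CA, nf = 4.0/3.0, 3.0, 5          # QCD colour factors; nf = 5 is an assumption
    zeta3 = 1.2020569031595943

    A1 = CF/(2*math.pi)                   # ~ 0.212
    K  = (67.0/18.0 - math.pi**2/6.0)*CA - 5.0/9.0*nf
    A2 = CF*K/(4*math.pi**2)              # ~ 0.117
    B1 = -3.0*CF/(4*math.pi)              # ~ -0.318
    B2 = (1.0/(2*math.pi**2))*((math.pi**2/4 - 3.0/16 - 3*zeta3)*CF**2
          + (11.0/36*math.pi**2 - 193.0/48 + 1.5*zeta3)*CF*CA
          + (17.0/24 - math.pi**2/18)*CF*nf) + 4*zeta3*A1**2

    print(A1, A2, B1, B2)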
The $\mathcal{O}(\alpha_S)$ expansion of the Sudakov form factor in eq. (4.4) is then given by
$$\Delta^{(1)}(Q, q_T) = \alpha_S\left[-\frac{1}{2}\, A_1 \log^2\frac{q_T^2}{Q^2} + B_1 \log\frac{q_T^2}{Q^2}\right]. \tag{4.8}$$
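Equation (4.8) can be checked by expanding eq. (4.3) to first order in the strong coupling, keeping $\alpha_S$ fixed (a minimal sketch; the running of the coupling only affects higher orders):
$$\Delta(Q,q_T) = 1-\alpha_S\int_{q_T^2}^{Q^2}\frac{dq^2}{q^2}\left[A_1\log\frac{Q^2}{q^2}+B_1\right]+\mathcal{O}(\alpha_S^2)
= 1-\alpha_S\left[\frac{1}{2}A_1\log^2\frac{Q^2}{q_T^2}+B_1\log\frac{Q^2}{q_T^2}\right]+\mathcal{O}(\alpha_S^2),$$
where we used $\int_{q_T^2}^{Q^2}\frac{dq^2}{q^2}\log\frac{Q^2}{q^2}=\frac{1}{2}\log^2\frac{Q^2}{q_T^2}$. Since $\log\frac{Q^2}{q_T^2}=-\log\frac{q_T^2}{Q^2}$, this reproduces $\Delta^{(1)}$ in eq. (4.8).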
Following the reasoning in ref. [22], one can show that events generated according to eq. (4.2), i.e. with our HVJ-MiNLO generator, are NLO accurate for distributions that are inclusive in HV production, and retain NLO accuracy for distributions inclusive in HV + 1 jet production as well.
Implementation and plots
In this section we discuss and compare results obtained using the HV and the HVJ-MiNLO generator, as implemented in the POWHEG BOX.
In our study we have generated 5 million events both for the HV j and for the HV samples, where V is a W −, a W + or a Z boson, which decays leptonically. The conclusions drawn for associated W + production are similar to those for W − production. For this reason, in the following, we will show only results for W − production.
The samples were generated for the LHC running at 8 TeV, with M_H = 125 GeV and Γ_H = 4.03 MeV, and with the Higgs boson virtuality distributed according to a fixed-width Breit-Wigner function. In addition, we have restricted the Higgs boson and V boson virtualities to the range 10 GeV-1 TeV. This range can be set by the user via the powheg.input file. A minimum transverse-momentum cut of 260 MeV has been applied to the jet in the HV j sample in the generation of the underlying Born kinematics, in order to avoid the Landau pole in the strong coupling constant. The factorization scale for the HV POWHEG generators has been set to M_H + M_V. The renormalization and factorization scales for the HVJ+MiNLO generators have been set according to the procedure discussed in sec. 4. In our study we have used the CT10 parton distribution function set [39], but any other set can be used equivalently [40,41]. The shower has been completed using the PYTHIA shower Monte Carlo program, although it is equally easy to interface our results to HERWIG, HERWIG++ and Pythia8. Jets have been reconstructed using the anti-k_T jet algorithm, as implemented in the fastjet package [42,43], with R = 0.5.
The HV code is run without using the hfact flag, which can be used to separate the real contribution into a singular and a finite part. In the present case, since radiative corrections are modest, we do not expect a large sensitivity to this parameter, as is instead observed in Higgs boson production in gluon fusion [44,45]. In the following, the HVJ generator is always run in the MiNLO mode.

HW − → H l − ν̄_l production

Since the improved MiNLO prescription applied here achieves NLO accuracy for observables inclusive in HV production, we begin by showing results for the most inclusive quantity, i.e. the total cross section. In tabs. 1 and 2 we collect the results for the total cross sections obtained with the HVJ-MiNLO and the HV programs, both at full NLO level, for different scale combinations. The scale variation in the HVJ-MiNLO results is obtained by multiplying the factorization scale and each of the several renormalization scales that appear in the procedure by the scale factors K_F and K_R, respectively, where
$$(K_R, K_F) = (0.5, 0.5),\ (0.5, 1),\ (1, 0.5),\ (1, 1),\ (2, 1),\ (1, 2),\ (2, 2). \tag{5.1}$$
The Sudakov form factor is also changed according to the prescription described in ref. [22]. For ease of visualization, in fig. 8 we have plotted the maximum and minimum values of the HVJ-MiNLO (red) and HV (black) cross sections, in solid and dashed lines respectively. We have also plotted the central-scale cross sections in dotted lines. Notice that we expect agreement only up to terms of higher order in α_S, since the HVJ-MiNLO results include terms of higher order, and also since the meaning of the scale choice is different in the two approaches. For similar reasons, we do not expect the scale-variation bands to be exactly the same in the two approaches. From the tables and the figure, it is clear that the standard HV NLO+PS results and the HVJ-MiNLO ones are fairly consistent: the HVJ-MiNLO independent scale variation is in general larger than the HV one, and it shrinks if a symmetric scale variation is performed, as illustrated in the last two columns of the tables. In general, the HVJ-MiNLO central values are 2% smaller than the HV ones. As already pointed out in ref. [22], comparing the full independent scale variation in the HVJ-MiNLO and in the HV approaches does not seem to be totally fair. In fact, in the HV case there is no renormalization-scale dependence at LO, while there is such a dependence in HVJ-MiNLO. It was shown in ref. [22], for the case of W production at LO, that an independent scale variation corresponds at least in part to a symmetric scale variation in the MiNLO formula. It is thus not surprising that the MiNLO independent scale variation is so much larger than the HV one also at NLO. If we limit ourselves to symmetric scale variations, the MiNLO and the HV results are more consistent, although the HV scale-variation band is extremely small.

Turning now to less inclusive quantities, we plot in fig. 9 the rapidity distribution of the HW system obtained with the HW and HWJ-MiNLO generators. We recall that this quantity is predicted at NLO by both generators, and in fact the agreement is very good. The uncertainty band of the HW generator is shown on the left, while that of the HWJ-MiNLO generator is shown on the right.
In fig. 10 we show another inclusive quantity, i.e. the charged-lepton transverse momentum from the W − decay. Also in this case we find perfect agreement between the two generators. In figs. 11 and 12 we compare the HW and HWJ-MiNLO generators for the transverse momentum of the HW system. In this case we do observe small differences, which are however perfectly acceptable if we remember that this distribution is only computed at leading order by the HW generator, while it is computed with NLO accuracy by the HWJ-MiNLO generator. It can also be noted that the uncertainty band for the HW generator is uniform, while it depends upon the transverse momentum for the HWJ-MiNLO one. In fact, the uniformity of the scale-variation band in the HW case is well understood: in POWHEG, the scale uncertainty manifests itself only in the $\bar B$ function, while the shape of the transverse-momentum distribution is totally insensitive to it.
The transverse momentum of the second jet computed with the HWJ-MiNLO generator, compared with the pure NLO result, is plotted in fig. 13. In this plot, MiNLO plays no role, but the POWHEG formalism is still in place. In fact, the NLO prediction for the second jet has a diverging behavior at low transverse momenta, which is tamed in the POWHEG BOX generator by the Sudakov form factor. Thus, the two results differ considerably from each other, especially at low transverse momenta.

HZ → H e + e − production

Conclusions similar to those for HW (j) can be drawn for HZ(j) associated production. For this reason, we refrain from commenting on figs. 14-18, which show the same physical quantities shown previously, but for HZ production.
Conclusions
In this paper we presented new POWHEG BOX generators for HV and HV + 1 jet production, with all spin correlations from the vector-boson decay into leptons correctly included. The codes for the Born and the real contributions were computed using the existing interface to MadGraph4, while the code for the virtual amplitude was computed using a new interface to GoSam. This interface allows for the automatic generation of the virtual code for generic Standard Model processes. With the addition of this new tool, the generation of all matrix elements in the POWHEG BOX package is performed automatically, and one only needs to supply the Born phase space in order to build a POWHEG process.
We have applied the recently proposed MiNLO procedure to the POWHEG HV j generator, in order to have a generator that is NLO accurate not only for inclusive distributions in HV j production (as a POWHEG process is) but also in HV production, i.e. when the associated jet is not resolved. Together with the H/W/Z production generators described in ref. [22], this is a further example of a matched calculation with no matching scale.
We have found very good agreement between the HVJ+MiNLO results and the HV ones, for HV inclusive distributions, while there are clearly differences in the less inclusive distributions, where the HV code has at most leading-order+parton-shower accuracy, while the HVJ one reaches next-to-leading order accuracy.
We point out that, using our HVJ-MiNLO generator, it is actually possible to construct an NNLO+PS generator, simply by reweighting the transverse-momentum integral of the cross section to the one computed at the NNLO level. We postpone a phenomenological study of this method to a future publication.
Acknowledgments

We acknowledge several fruitful discussions with the other members of the GoSam collaboration. The work of G.L. was supported by the Alexander von Humboldt Foundation, in the framework of the Sofja Kovaleskaja Award Project "Advanced Mathematical Methods for Particle Physics", endowed by the German Federal Ministry of Education and Research. G.L. would like to thank the University of Milano-Bicocca for support and hospitality during the early stages of the work, and the CERN PH-TH Department for partial support and hospitality during later stages of the work.

A. Generation of a new process using GoSam within the POWHEG BOX

In order to generate a new process in the POWHEG BOX using GoSam, the user has to install QGRAF [24] and FORM [25], in addition to the GoSam package.

The first step in the generation of the code for a new process is to create a directory under the main POWHEG BOX directory, and to work from inside this folder, from where all the following script files have to be executed. We will refer to this directory as the process folder. For a complete generation of a new process the following basic steps are needed:
• Generate the tree-level amplitudes and the related code using MadGraph4 [20]. After having copied a MadGraph input card (proc_card.dat) into the process folder, it is sufficient to run from there the BuildMad.sh script contained in the MadGraphStuff folder distributed within the POWHEG BOX. Among the many files generated, this will automatically produce an order file for GoSam.
• Generate the one-loop amplitudes by running the script BuildGS.sh contained in the GoSamStuff folder distributed within the POWHEG BOX with the tag virtual.
The script looks for a GoSam input card within the process folder. If no card is found, the user is asked if the template one should be used instead. This command generates all the needed code for the evaluation of the virtual amplitude.
• Generate the interface of the virtual code to the POWHEG BOX. This is done by running again the script BuildGS.sh, this time with the tag interface. This replaces the files init_couplings.f and virtual.f with new ones, containing calls to set the values of the physical parameters and the initialization of the GoSam-generated virtual amplitudes.
The parameters are passed by the POWHEG BOX using the function OLP_OPTION, which is not part of the BLHA standard, but is described in the GoSam manual [46]. The new file virtual.f is instead constructed using the information contained in the contract file, in such a way that the partonic-subprocess label assigned by GoSam does not need to be read from the contract file at every run.
• Since the evaluation of the virtual amplitude at running time is performed with SAMURAI and Golem95, both distributed in the GoSam-contrib package, the last step consists in producing a standalone version of everything needed to compute the one-loop amplitudes. This can be achieved by executing the script BuildGS.sh one last time with the tag standalone. The entire code generated by GoSam, together with the code contained in the GoSam-contrib package, is copied into the directory GoSamlib, which is then ready to be compiled together with the rest of the code. The last three steps can be executed all together by running BuildGS.sh with the tag allvirt.
If the generation is successful, the only work left to the user is to provide an appropriate Born phase-space generator in the Born_phsp.f file.
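Schematically, the whole sequence run from inside the process folder reads as follows. The relative paths shown here are only indicative and assume the folder layout of the POWHEG BOX distribution described above:

    # inside the process folder, with proc_card.dat (and optionally gosam.rc) already in place
    ../MadGraphStuff/BuildMad.sh            # tree-level amplitudes + order file for GoSam
    ../GoSamStuff/BuildGS.sh virtual        # one-loop amplitudes
    ../GoSamStuff/BuildGS.sh interface      # regenerate init_couplings.f and virtual.f
    ../GoSamStuff/BuildGS.sh standalone     # copy everything into GoSamlib
    # or, equivalently, replace the last three calls by
    ../GoSamStuff/BuildGS.sh allvirt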
B. The GoSam input card
In this section, we describe the main options of the template GoSam input card available in the directory GoSamStuff/Templates of the POWHEG BOX distribution. Further input options for GoSam can be found in the online manual [46]. There are two categories of input options: one related to the characteristics of the physics process and one more related to the computer code.
B.1 Physics option
We list here the main options related to the physics of the processes. For further options we refer to the GoSam manual.
model: the first thing to choose is the model needed. By default, GoSam offers the choice among the following three models: sm, smdiag and smehc, which refer to the Standard Model, the Standard Model with diagonal CKM matrix and the Standard Model with effective Higgs-gluon-gluon coupling respectively.
one: to simplify the algebraic expressions of the virtual amplitudes, a list of parameters can be set algebraically to one using this tag. It will not be possible to change the value of such a parameter in the generated code, since the latter will no longer contain a variable for it. Due to the normalization conventions used between GoSam and the POWHEG BOX, the virtual amplitudes are always returned stripped of the strong coupling constant g_s and the electromagnetic coupling e, which are therefore set to one using this tag.
zero: this tag is similar to the previous one and allows to set parameters equal to zero. It is useful to set to zero the desired quark and lepton masses as well as the resonance widths.
symmetries: this tag specifies some further symmetries in the calculation of the amplitudes. The information is used when the list of helicities is generated. Possible values are:
• flavour: does not allow for flavour changing interactions. When this option is set, fermion lines are assumed not to mix.
• family: allows for flavour-changing interactions only within the same family. When this option is set, fermion lines 1-6 are assumed to mix only within families. This means that, e.g., a quark line connecting an up with a down quark would be considered, while an up-bottom one would not.
• lepton: means for leptons what "flavour" means for quarks.
• generation: means for leptons what "family" means for quarks.
Furthermore it is possible to fix the helicity of particles. This can be done using the command %<n>=<h>, where <n> stands for a PDG number and <h> for a helicity. For example, %23=+- specifies the helicity of all Z bosons to be "+" and "-" only (no "0" polarisation).
qgraf.options: this is a list of options to be passed to QGRAF. For the complete set of possible options we refer to the QGRAF manual [24]. Customary options which are used are onshell, notadpole, nosnail.
filter.module, filter.lo, filter.nlo: these tags can impose user-defined filters on the LO and NLO diagrams, passed to GoSam via a python function. Some ready-to-use filter functions are provided by default by GoSam; a complete list can be found in the GoSam online manual. Nevertheless, further filters can be constructed by combining existing ones. Ideally, these filters are defined in a separate file, which we call filter.py. The tag filter.module can be used to set the PATH to the file containing the filter. As an example, for the processes presented in this paper we want to neglect diagrams in which the Higgs boson couples directly to the massless fermions: in a file called filter.py we define a filter that selects diagrams which do not contain this vertex, and in the gosam.rc card we then add the lines
    filter.module=filter.py
    filter.lo= no hff
    filter.nlo= no hff
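The explicit python code of the no hff filter is not reproduced above. Purely as an illustration of how such a filter file is organized, a hypothetical sketch could look as follows; the diagram-inspection helper is a placeholder, and the actual vertex accessors must be taken from the GoSam manual [46]:

    # filter.py -- hypothetical sketch of a diagram filter for GoSam (not the actual code used here)

    def contains_hff_vertex(diagram):
        """Placeholder: should return True if the diagram has a Higgs-fermion-fermion vertex.
        The real test must use the diagram/vertex accessors documented in the GoSam manual."""
        raise NotImplementedError

    def no_hff(diagram):
        """Accept only diagrams that contain no Higgs-fermion-fermion vertex."""
        return not contains_hff_vertex(diagram)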
extensions: this tag can be used to list a set of options to be applied in the generation of the one-loop code. Among them are the name of the code that will be used in the computation of the diagrams at running time (usually SAMURAI and Golem95), whether numerical polarization vectors for external massless vector bosons should be used, and whether the code should be generated with the option that the Monte Carlo program controls the numbering of the files containing unstable points. Among the possible extensions useful in conjunction with the POWHEG BOX there is:

• samurai: use SAMURAI for the reduction.

PSP check: this tag switches the detection of unstable phase-space points on and off, and can take the values true or false.

PSP verbosity: the verbosity of the PSP check can be set here. The possible values are:

• verbosity = 0: no output;

• verbosity = 1: bad points are written in a file stored in the folder BadPoints, which is automatically created in the process folder;

• verbosity = 2: output whenever the rescue system is used, with comments about the success of the rescue.
PSP chk threshold1, PSP chk threshold2: tags to set the thresholds used to declare whether a point is unstable or not. These thresholds are integers, indicating the number of digits of precision which are required. The first threshold acts on the result given by SAMURAI. If a phase-space point does not fulfill the required precision, it is recomputed using Golem95. The second threshold acts on the result from Golem95. If the Golem95 result also fails to fulfill the required accuracy, the phase-space point and some further information are written in the BadPoints folder, provided the verbosity flag is set.
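The thresholds above count decimal digits of agreement. Purely as an illustration (this is not the actual GoSam implementation, and the two values compared below are hypothetical evaluations of the same finite part), such a digit-agreement test can be sketched as:

    import math

    def agreement_digits(a, b):
        """Approximate number of decimal digits on which two evaluations of the same quantity agree."""
        if a == b:
            return 16                              # double-precision limit
        return -math.log10(abs(a - b) / max(abs(a), abs(b)))

    # hypothetical finite parts of the same virtual amplitude from two independent evaluations
    finite_first, finite_second = 1.234568, 1.237102
    threshold = 4                                  # plays the role of PSP chk threshold1
    if agreement_digits(finite_first, finite_second) < threshold:
        print("unstable point: recompute it, or write it to the BadPoints folder")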
PSP chk kfactor: a further threshold on the K-factor of the virtual amplitude can be set using this tag. If the value is set negative, this tag has no effect and all K-factor values are accepted.
diagsum: to increase the speed of the evaluation of the virtual matrix elements, GoSam can add diagrams which share identical loop-propagators and differ only in the external tree-structure attached to the loop, before the algebraic reduction takes place. This option can be switched on and off by setting this tag to true or false.
abbrev.level: for an optimal computational speed, during the preparation of the one-loop code GoSam groups identical algebraic structures into abbreviations. This tag allows one to set at which level these abbreviations should be defined. Possible values are helicity, group and diagram. The helicity level is most indicated for easy processes with a small number of diagrams. If the extension formopt is used, abbrev.level must be set to diagram.
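To fix ideas, an illustrative gosam.rc fragment combining some of the tags described above might look as follows. The values, and in particular the parameter names set to zero, are hypothetical placeholders; the exact spelling of the tags and the admissible values must be checked against the GoSam manual [46]:

    model=smdiag
    one=gs,e
    zero=mU,mD,mC,mS,mB
    symmetries=family,lepton
    qgraf.options=onshell,notadpole,nosnail
    filter.module=filter.py
    filter.lo= no hff
    filter.nlo= no hff
    diagsum=true
    abbrev.level=diagram
    extensions=samurai
    PSP check=true
    PSP verbosity=1
    PSP chk threshold1=4
    PSP chk threshold2=3
    PSP chk kfactor=-1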
B.2 Computer option
Contrary to a standalone use of GoSam, within the POWHEG BOX framework a separate installation of the gosam-contrib package is not necessary, since this is contained in the POWHEG BOX distribution.
The following tags allow to set some options for the external programs:
qgraf.bin: the location of the QGRAF executable.
form.bin: the location of the FORM executable. Usually the user can choose between the standard version (form) and the multi-threaded version called tform.
form.threads: when using the multi-thread version of FORM, the number of threads to be used can be set here.
form.tempdir: the path to the folder where FORM saves temporary files can be set using this tag.
Figure 1: A sample of leading-order Feynman diagrams for HW j and HZj production.

Figure 2: A sample of one-loop Higgs-Strahlung diagrams, with no closed fermionic loop.

Figure 3: A sample of one-loop Higgs-Strahlung diagrams, with massless and massive closed fermionic loops.

Figure 4: A sample of virtual diagrams involving a massive top-quark loop, where the Higgs boson couples directly to the top quark.

Figure 5: A sample of virtual diagrams involving a massive top-quark loop, where the Higgs boson couples directly to the top quark.

Figure 6: NLO rapidity distributions of the HZ pair (left plot) and of the Z boson (right plot), in HZj production. The red curves were obtained by using the full set of virtual diagrams, including the Feynman graphs containing a top-quark loop. The blue curves were computed neglecting the diagrams belonging to classes (b) and (c).

Figure 7: NLO transverse-momentum distributions of the HZ pair (left plot) and of the H boson (right plot), in HZj production. The labels are as in fig. 6.

Figure 8: Total cross-section variation for HVJ-MiNLO (solid red) and HV (dashed black). The maximum and minimum values for the total cross section are taken from tabs. 1 and 2. The total cross section with central scales is drawn in dotted lines.

Figure 9: Comparison between the HW+PYTHIA result and the HWJ-MiNLO+PYTHIA result for the HW − rapidity distribution at the LHC at 8 TeV. The left plot shows the 7-point scale-variation band for the HW generator, while the right plot shows the HWJ-MiNLO 7-point band.

Figure 10: Comparison between the HW+PYTHIA result and the HWJ-MiNLO+PYTHIA result for the rapidity distribution of the charged lepton from the W − decay, at the LHC at 8 TeV. The left plot shows the 7-point scale-variation band for the HW generator, while the right plot shows the HWJ-MiNLO 7-point band.

Figure 11: Comparison between the HW+PYTHIA result and the HWJ-MiNLO+PYTHIA result for the HW − transverse-momentum distribution. The bands are obtained as in fig. 9.

Figure 12: Same as fig. 11 for a different p_T^{HW} range.

Figure 13: Comparison between the HWJ-MiNLO+PYTHIA and the NLO HWJ result for the transverse momentum of the second-hardest jet, at the LHC at 8 TeV, in two different p_T ranges. The plots show the 7-point scale-variation band for the HWJ generator.

Figure 14: Comparison between the HZ+PYTHIA result and the HZJ-MiNLO+PYTHIA result for the HZ rapidity distribution at the LHC at 8 TeV. The left plot shows the 7-point scale-variation band for the HZ generator, while the right plot shows the HZJ-MiNLO 7-point band.

Figure 15: Comparison between the HZ+PYTHIA result and the HZJ-MiNLO+PYTHIA result for the rapidity distribution of the electron from the Z decay, at the LHC at 8 TeV. The left plot shows the 7-point scale-variation band for the HZ generator, while the right plot shows the HZJ-MiNLO 7-point band.

Figure 16: Comparison between the HZ+PYTHIA result and the HZJ-MiNLO+PYTHIA result for the HZ transverse-momentum distribution. The bands are obtained as in fig. 14.

Figure 17: Same as fig. 16 for a different p_T^{HZ} range.

Figure 18: Comparison between the HZJ-MiNLO+PYTHIA and the NLO HZJ result for the transverse momentum of the second-hardest jet, at the LHC at 8 TeV, in two different p_T ranges. The plots show the 7-point scale-variation band for the HZJ generator.
Table 1: Total cross section for HW − → H l − ν̄_l at the 8 TeV LHC, obtained with the HWJ-MiNLO and the HW programs, at NLO level, for different scale combinations. The maximum and minimum of the cross sections are highlighted.

HZ → H e + e − production: total cross sections in fb at the LHC, 8 TeV

(K_R, K_F)   | (1, 1)     | (1, 2)     | (2, 1)     | (1, 1/2)   | (1/2, 1)   | (1/2, 1/2) | (2, 2)
HZJ-MiNLO    | 12.818(9)  | 12.478(7)  | 12.97(1)   | 12.93(2)   | 12.659(9)  | 13.14(2)   | 12.684(9)
HZ           | 13.0979(4) | 12.9304(4) | 13.1705(5) | 13.3002(5) | 13.0501(4) | 13.2559(4) | 12.9986(4)

Table 2: Total cross section for HZ → H e + e − at the 8 TeV LHC, obtained with the HZJ-MiNLO and the HZ programs, at NLO level, for different scale combinations. The maximum and minimum of the cross sections are highlighted.
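For convenience, the following sketch extracts the central value, the full 7-point envelope and the symmetric-variation envelope from the HZJ-MiNLO row of tab. 2. The pairing of the values with the (K_R, K_F) combinations follows the column order of the table as reconstructed above, and is therefore an assumption of this sketch:

    # HZJ-MiNLO total cross sections (fb) from tab. 2, keyed by (K_R, K_F)
    sigma = {(1.0, 1.0): 12.818, (1.0, 2.0): 12.478, (2.0, 1.0): 12.97,
             (1.0, 0.5): 12.93,  (0.5, 1.0): 12.659, (0.5, 0.5): 13.14,
             (2.0, 2.0): 12.684}

    central = sigma[(1.0, 1.0)]
    full_env = (min(sigma.values()), max(sigma.values()))           # 7-point envelope
    sym = [sigma[k] for k in [(1.0, 1.0), (0.5, 0.5), (2.0, 2.0)]]  # symmetric variations only
    sym_env = (min(sym), max(sym))

    print(f"central = {central} fb, 7-point envelope = {full_env}, symmetric envelope = {sym_env}")

As expected from the discussion in sec. 5, the symmetric envelope is narrower than the full independent-variation envelope.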
(1) This phase in GoSam can be performed by FORM (version 4.0 or higher) or through the JAVA program Haggies [27].
CMS Collaboration, Search for the standard model Higgs boson produced in association with W or Z bosons, and decaying to bottom quarks for HCP 2012, CMS-PAS-HIG-12-044.

ATLAS Collaboration, G. Aad et al., Search for the Standard Model Higgs boson produced in association with a vector boson and decaying to a b-quark pair with the ATLAS detector, Phys.Lett. B718 (2012) 369-390, [arXiv:1207.0210].

CDF and D0 Collaborations, T. Aaltonen et al., Evidence for a particle produced in association with weak bosons and decaying to a bottom-antibottom quark pair in Higgs boson searches at the Tevatron, Phys.Rev.Lett. 109 (2012) 071804, [arXiv:1207.6436].

CDF Collaboration, T. Aaltonen et al., Combined search for the standard model Higgs boson decaying to a bb pair using the full CDF data set, Phys.Rev.Lett. 109 (2012) 111802, [arXiv:1207.1707].

D0 Collaboration, V. M. Abazov et al., Combined search for the standard model Higgs boson decaying to bb using the D0 Run II data set, Phys.Rev.Lett. 109 (2012) 121802, [arXiv:1207.6631].

ATLAS Collaboration, Search for the Associated Higgs Boson Production in the WH → WWW(*) → lνlνlν Decay Mode Using 4.7 fb−1 of Data Collected with the ATLAS Detector at √s = 7 TeV, ATLAS-CONF-2012-078.

CMS Collaboration, Search for SM Higgs in WH to WWW to 3l3ν, CMS-PAS-HIG-13-009.

CMS Collaboration, Search for the standard model Higgs boson decaying to tau pairs produced in association with a W or Z boson, CMS-PAS-HIG-12-051.

P. Nason, A new method for combining NLO QCD with shower Monte Carlo algorithms, JHEP 11 (2004) 040, [hep-ph/0409146].

K. Hamilton, P. Richardson, and J. Tully, A Positive-Weight Next-to-Leading Order Monte Carlo Simulation for Higgs Boson Production, JHEP 04 (2009) 116, [arXiv:0903.4345].

M. Bahr et al., Herwig++ Physics and Manual, Eur. Phys. J. C58 (2008) 639-707, [arXiv:0803.0883].

S. Frixione, P. Nason, and C. Oleari, Matching NLO QCD computations with Parton Shower simulations: the POWHEG method, JHEP 11 (2007) 070, [arXiv:0709.2092].

S. Alioli, P. Nason, C. Oleari, and E. Re, A general framework for implementing NLO calculations in shower Monte Carlo programs: the POWHEG BOX, JHEP 06 (2010) 043, [arXiv:1002.2581].

E. Boos, M. Dobbs, W. Giele, I. Hinchliffe, J. Huston, et al., Generic user process interface for event generators, hep-ph/0109068.

J. Alwall, A. Ballestrero, P. Bartalini, S. Belov, E. Boos, et al., A Standard format for Les Houches event files, Comput.Phys.Commun. 176 (2007) 300-304, [hep-ph/0609017].

T. Sjostrand, S. Mrenna, and P. Z. Skands, PYTHIA 6.4 Physics and Manual, JHEP 0605 (2006) 026, [hep-ph/0603175].

T. Sjostrand, S. Mrenna, and P. Z. Skands, A Brief Introduction to PYTHIA 8.1, Comput.Phys.Commun. 178 (2008) 852-867, [arXiv:0710.3820].

G. Corcella, I. Knowles, G. Marchesini, S. Moretti, K. Odagiri, et al., HERWIG 6.5 release note, hep-ph/0210213.

G. Cullen, N. Greiner, G. Heinrich, G. Luisoni, P. Mastrolia, et al., Automated One-Loop Calculations with GoSam, Eur.Phys.J. C72 (2012) 1889, [arXiv:1111.2034].

J. M. Campbell, R. K. Ellis, R. Frederix, P. Nason, C. Oleari, et al., NLO Higgs Boson Production Plus One and Two Jets Using the POWHEG BOX, MadGraph4 and MCFM, JHEP 1207 (2012) 092, [arXiv:1202.5475].

K. Hamilton, P. Nason, and G. Zanderighi, MINLO: Multi-Scale Improved NLO, JHEP 1210 (2012) 155, [arXiv:1206.3572].

K. Hamilton, P. Nason, C. Oleari, and G. Zanderighi, Merging H/W/Z + 0 and 1 jet at NLO with no merging scale: a path to parton shower + NNLO matching, JHEP 1305 (2013) 082, [arXiv:1212.4504].

G. Ferrera, M. Grazzini, and F. Tramontano, Associated WH production at hadron colliders: a fully exclusive QCD calculation at NNLO, Phys.Rev.Lett. 107 (2011) 152003, [arXiv:1107.1164].

P. Nogueira, Automatic Feynman graph generation, J.Comput.Phys. 105 (1993) 279-289.

J. Kuipers, T. Ueda, J. Vermaseren, and J. Vollinga, FORM version 4.0, Comput.Phys.Commun. 184 (2013) 1453-1467, [arXiv:1203.6543].

G. Cullen, M. Koch-Janusz, and T. Reiter, Spinney: A Form Library for Helicity Spinors, Comput.Phys.Commun. 182 (2011) 2368-2387, [arXiv:1008.0803].

T. Reiter, Optimising Code Generation with haggies, Comput.Phys.Commun. 181 (2010) 1301-1331, [arXiv:0907.3714].

P. Mastrolia, G. Ossola, T. Reiter, and F. Tramontano, Scattering AMplitudes from Unitarity-based Reduction Algorithm at the Integrand-level, JHEP 1008 (2010) 080, [arXiv:1006.0710].

G. Cullen, J. P. Guillet, G. Heinrich, T. Kleinschmidt, E. Pilon, et al., Golem95C: A library for one-loop integrals with complex masses, Comput.Phys.Commun. 182 (2011) 2276-2284, [arXiv:1101.5595].

G. Ossola, C. G. Papadopoulos, and R. Pittau, Reducing full one-loop amplitudes to scalar integrals at the integrand level, Nucl. Phys. B763 (2007) 147-169, [hep-ph/0609007].

R. K. Ellis, W. Giele, and Z. Kunszt, A Numerical Unitarity Formalism for Evaluating One-Loop Amplitudes, JHEP 0803 (2008) 003, [arXiv:0708.2398].

G. van Oldenborgh, FF: A Package to evaluate one loop Feynman diagrams, Comput.Phys.Commun. 66 (1991) 1-15.

R. K. Ellis and G. Zanderighi, Scalar one-loop integrals for QCD, JHEP 0802 (2008) 002, [arXiv:0712.1851].

A. van Hameren, OneLOop: For the evaluation of one-loop scalar functions, Comput.Phys.Commun. 182 (2011) 2427-2438, [arXiv:1007.4716].

T. Binoth, F. Boudjema, G. Dissertori, A. Lazopoulos, A. Denner, et al., A Proposal for a standard interface between Monte Carlo tools and one-loop programs, Comput.Phys.Commun. 181 (2010) 1612-1622, [arXiv:1001.1307].

J. Kodaira and L. Trentadue, Summing Soft Emission in QCD, Phys.Lett. B112 (1982) 66.

C. Davies and W. J. Stirling, Nonleading Corrections to the Drell-Yan Cross-Section at Small Transverse Momentum, Nucl.Phys. B244 (1984) 337.

C. Davies, B. Webber, and W. J. Stirling, Drell-Yan Cross-Sections at Small Transverse Momentum, Nucl.Phys. B256 (1985) 413.

H.-L. Lai, M. Guzzi, J. Huston, Z. Li, P. M. Nadolsky, et al., New parton distributions for collider physics, Phys.Rev. D82 (2010) 074024, [arXiv:1007.2241].

A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Parton distributions for the LHC, Eur. Phys. J. C63 (2009) 189-285, [arXiv:0901.0002].

R. D. Ball, V. Bertone, S. Carrazza, C. S. Deans, L. Del Debbio, et al., Parton distributions with LHC data, Nucl.Phys. B867 (2013) 244-289, [arXiv:1207.1303].

M. Cacciari and G. P. Salam, Dispelling the N^3 myth for the k_t jet-finder, Phys.Lett. B641 (2006) 57-61, [hep-ph/0512210].

M. Cacciari, G. P. Salam, and G. Soyez, The anti-k_T jet clustering algorithm, JHEP 04 (2008) 063, [arXiv:0802.1189].

S. Alioli, P. Nason, C. Oleari, and E. Re, NLO Higgs boson production via gluon fusion matched with shower in POWHEG, JHEP 0904 (2009) 002, [arXiv:0812.0578].

S. Dittmaier, C. Mariotti, G. Passarino, R. Tanaka, et al., Handbook of LHC Higgs Cross Sections: 2. Differential Distributions, arXiv:1201.3084.
On the dynamical Rayleigh-Taylor instability in compressible viscous flows without heat conductivity

Fei Jiang (College of Mathematics and Computer Science, Fuzhou University, 350108 Fuzhou, China; Institute of Applied Physics and Computational Mathematics, 100088 Beijing, China)
Song Jiang (Institute of Applied Physics and Computational Mathematics, 100088 Beijing, China)

* Corresponding author: Fei Jiang ([email protected]; +86-18305950592)

Keywords: Compressible Navier-Stokes equations; steady solutions; Rayleigh-Taylor instability; instability in the Hadamard sense

Preprint (arXiv:1403.5016), submitted March 21, 2014

Abstract
We investigate the instability of a smooth Rayleigh-Taylor steady-state solution to compressible viscous flows without heat conductivity in the presence of a uniform gravitational field in a bounded domain Ω ⊂ R 3 with smooth boundary ∂Ω. We show that the steady-state is linearly unstable by constructing a suitable energy functional and exploiting arguments of the modified variational method. Then, based on the constructed linearly unstable solutions and a local wellposedness result of classical solutions to the original nonlinear problem, we further reconstruct the initial data of linearly unstable solutions to be the one of the original nonlinear problem and establish an appropriate energy estimate of Gronwall-type. With the help of the established energy estimate, we show that the steady-state is nonlinearly unstable in the sense of Hadamard by a careful bootstrap argument. As a byproduct of our analysis, we find that the compressibility has no stabilizing effect in the linearized problem for compressible viscous flows without heat conductivity.
Introduction
The motion of a three-dimensional (3D) compressible viscous fluid without heat conductivity in the presence of a uniform gravitational field in a bounded domain $\Omega \subset \mathbb{R}^3$ with smooth boundary is governed by the following Navier-Stokes equations:
$$\begin{cases}
\rho_t + \mathrm{div}(\rho v) = 0,\\
\rho v_t + \rho v\cdot\nabla v + \nabla p = \mu\Delta v + \mu_0\nabla\mathrm{div}\,v - \rho g e_3,\\
\rho e_t + \rho v\cdot\nabla e + p\,\mathrm{div}\,v = \mu|\nabla v + \nabla v^T|^2/2 + \lambda(\mathrm{div}\,v)^2.
\end{cases}\tag{1.1}$$
Here the unknowns ρ := ρ(t, x), v := v(t, x), e := e(t, x) and p = aρe denote the density, velocity, specific internal energy and pressure of the fluid respectively, µ 0 = µ + λ and a = γ − 1. The known constants λ, µ and γ are the viscosity coefficients and the ratio of specific heats satisfying the natural restrictions:
µ > 0, 3λ + 2µ ≥ 0; γ > 1.
g > 0 is the gravitational constant, e 3 = (0, 0, 1) T is the vertical unit vector, and −ge 3 is the gravitational force.
In this paper we consider the problem of the Rayleigh-Taylor (RT) instability for the system (1.1). Thus, we choose a RT (steady-state) density profile $\bar\rho := \bar\rho(x_3)$ which is independent of $(x_1, x_2)$ and satisfies
$$\bar\rho \in C^4(\Omega), \qquad \inf_{x\in\Omega}\bar\rho > 0, \qquad \bar\rho'(x_3^0) > 0 \ \text{ for some } x_3^0 \in \{x_3 \mid (x_1, x_2, x_3) \in \Omega\}, \tag{1.2}$$
where $\bar\rho' := d\bar\rho/dx_3$. We remark that the first condition in (1.2) guarantees that the steady density profile belongs to some $C^0([0, T), H^3(\Omega))$, the second one in (1.2) prevents us from treating vacuum in the construction of unstable solutions, while the third one in (1.2) assures that there is at least a region in which the RT density profile has larger density with increasing $x_3$ (height), thus leading to the classical RT instability as will be shown in Theorem 1.1 below. By the theory of first-order linear ODEs, for given $\bar\rho$ in (1.2) we can find a corresponding steady internal energy $\bar e$ that only depends on $x_3$ and is unique up to a constant divided by $\bar\rho$, i.e.,
$$\bar e = -\frac{g}{a\bar\rho}\int \bar\rho(x_3)\, dx_3,$$
such that
$$0 < \bar e \in C^4(\Omega) \quad \text{and} \quad \nabla\bar p = -\bar\rho g e_3 \ \text{ in } \Omega, \tag{1.3}$$
where $\bar p := a\bar\rho\,\bar e$. Clearly, the RT density profile $(\bar\rho, v \equiv 0, \bar e)$ gives a steady state to the system (1.1).
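For instance (this explicit profile is only an illustration and is not used in the proofs), one may take an exponential profile and read off the corresponding steady internal energy directly from the formula above:
$$\bar\rho(x_3) = \rho_0 e^{\beta x_3},\ \ \rho_0,\beta>0
\;\Longrightarrow\;
\bar e = -\frac{g}{a\bar\rho}\left(\frac{\bar\rho}{\beta}+C\right)
       = -\frac{g}{a\beta}-\frac{gC}{a\bar\rho},
\qquad
\bar p = a\bar\rho\,\bar e = -\frac{g\bar\rho}{\beta}-gC .$$
Since $\bar\rho' = \beta\bar\rho$, one checks directly that $\nabla\bar p = -(g\bar\rho'/\beta)\,e_3 = -\bar\rho g\, e_3$, i.e. (1.3) holds; choosing the integration constant $C < -\beta^{-1}\max_{\Omega}\bar\rho$ makes $\bar e$ positive, and the profile clearly satisfies (1.2).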
Now, we define the perturbation of $(\rho, v, e)$ by
$$\varrho = \rho - \bar\rho, \qquad u = v - 0, \qquad \theta = e - \bar e.$$
Then, the triple $(\varrho, u, \theta)$ satisfies the perturbed equations
$$\begin{cases}
\varrho_t + \mathrm{div}((\varrho+\bar\rho)u) = 0,\\
(\varrho+\bar\rho)u_t + (\varrho+\bar\rho)u\cdot\nabla u + a\nabla[(\varrho+\bar\rho)(\theta+\bar e)-\bar\rho\bar e] = \mu\Delta u + \mu_0\nabla\mathrm{div}\,u - g\varrho e_3,\\
\theta_t + u\cdot\nabla(\theta+\bar e) + a(\theta+\bar e)\,\mathrm{div}\,u = \{\mu|\nabla u+\nabla u^T|^2/2 + \lambda(\mathrm{div}\,u)^2\}/(\varrho+\bar\rho).
\end{cases}\tag{1.4}$$
To complete the statement of the perturbed problem, we specify the initial and boundary conditions:
$$(\varrho, u, \theta)|_{t=0} = (\varrho_0, u_0, \theta_0) \ \text{ in } \Omega \tag{1.5}$$
and
$$u(t, x)|_{\partial\Omega} = 0 \ \text{ for any } t > 0. \tag{1.6}$$
Moreover, the initial data should satisfy the compatibility condition
$$\big\{(\varrho_0 + \bar\rho)u_0\cdot\nabla u_0 + a\nabla[(\varrho_0 + \bar\rho)(\theta_0 + \bar e) - \bar\rho\bar e]\big\}\big|_{\partial\Omega} = \big(\mu\Delta u_0 + \mu_0\nabla\mathrm{div}\,u_0 - g\varrho_0 e_3\big)\big|_{\partial\Omega}.$$
If we linearize the equations (1.4) around the steady state $(\bar\rho, 0, \bar e)$, then the resulting linearized equations read as
$$\begin{cases}
\varrho_t + \mathrm{div}(\bar\rho u) = 0,\\
\bar\rho u_t + a\nabla(\bar e\,\varrho + \bar\rho\,\theta) = \mu\Delta u + \mu_0\nabla\mathrm{div}\,u - g\varrho e_3,\\
\theta_t + \bar e'\, u_3 + a\bar e\,\mathrm{div}\,u = 0.
\end{cases}\tag{1.7}$$
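A one-line check of how (1.7) follows from (1.4) (a sketch, keeping only terms linear in the perturbation):
$$a\nabla\big[(\varrho+\bar\rho)(\theta+\bar e)-\bar\rho\bar e\big]
= a\nabla\big(\bar e\,\varrho+\bar\rho\,\theta\big)+a\nabla(\varrho\,\theta),
\qquad
(\varrho+\bar\rho)\,u_t=\bar\rho\,u_t+\varrho\,u_t ,$$
so that dropping the quadratic terms $a\nabla(\varrho\theta)$, $\varrho u_t$, $(\varrho+\bar\rho)u\cdot\nabla u$, $u\cdot\nabla\theta$, $a\theta\,\mathrm{div}\,u$ and the quadratic viscous source in the third equation of (1.4), and using $u\cdot\nabla\bar e = \bar e' u_3$, yields (1.7).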
The RT instability is well known as a gravity-driven instability in fluid dynamics when a heavy fluid is on top of a light one. Instability of the linearized problem (i.e. linear instability) for an incompressible fluid was first introduced by Rayleigh in 1883 [30]. In recent years, the study of the mathematical theory of the RT instability for fluid dynamics and magnetohydrodynamics (MHD), based on the (generalized) variational method, has attracted much attention, and some progress has been made. In 2003, Hwang and Guo [15] first proved the nonlinear RT instability of $\|(\varrho, u)\|_{L^2(\Omega)}$ in the sense of Hadamard for a 2D nonhomogeneous incompressible inviscid fluid with boundary condition $u\cdot n|_{\partial\Omega} = 0$, where $\Omega = \{(x_1, x_2) \in \mathbb{R}^2 \mid -l < x_2 < m\}$ and $n$ denotes the outer normal vector to $\partial\Omega$. Later, Jiang, Jiang and Ni [18] showed the nonlinear RT instability of $\|u_3\|_{L^2(\mathbb{R}^3)}$ for the Cauchy problem of nonhomogeneous incompressible viscous flows in the sense of the Lipschitz structure, and further gave the nonlinear RT instability of $\|u_3\|_{L^2(\Omega)}$ in [19] in the sense of Hadamard in an unbounded horizontally periodic domain $\Omega$. In addition, similar results on the nonlinear RT instability were established for two-layer incompressible viscous fluids with a free interface (so-called stratified fluids), where the RT steady-state solution is a denser fluid lying above a lighter one separated by a free interface and the domain is also a flat domain (such as $\mathbb{R}^3$ or a horizontally periodic domain); please see [29,32]. We mention that an analogue of the RT instability arises when the fluids are electrically conducting and a magnetic field is present, and the growth of the instability is then influenced by the magnetic field due to the generated electromagnetic induction and the Lorentz force. The aforementioned partial results on the RT instability have been extended to the case of MHD fluids by circumventing the additional difficulties induced by the presence of the magnetic field; see [3,19,20] for examples.
All the above-mentioned results are obtained for a flat or horizontally periodic domain, because in such a case one can apply the method of the Fourier transform (or of discrete modes $e^{i\xi\cdot x}$) to analyze the spectra of the associated linearized problems. This basic technique has also been applied to the instability study of other problems, for example, of the periodic BGK equilibria [10], of the space-periodic quasi-geostrophic equation [4], of an ideal space-periodic fluid [5,8,31] and of the space-periodic and whole-space forced incompressible MHD equations [2,7]. Recently, Guo and Tice [12] used a modified variational method to investigate an ODE problem arising in the investigation of the linear RT instability for compressible stratified flows. Motivated by their work, Jiang and Jiang [17] adapted the modified variational method to avoid the Fourier transform and constructed the unstable linear solutions of a nonhomogeneous incompressible viscous flow in a general bounded domain $\Omega$, and they proved the nonlinear RT instability by developing a new energy functional to overcome the difficulty induced by the compatibility conditions on the boundary, under the restriction
$$\inf_{x\in\Omega}\{\bar\rho'(x)\} > 0. \tag{1.8}$$
In contrast to the incompressible case, there are very few results on the nonlinear RT instability for compressible flows, which are much more complicated to deal with mathematically because of the difficulties induced by compressibility; hence, new techniques have to be employed (see Remarks 1.1, 1.3 and the paragraph below Remark 1.4 for more comments). In [14] Hwang investigated the nonlinear RT instability of a compressible inviscid MHD fluid in a periodic domain. We also mention that there are some articles studying the role of compressibility effects on the linear RT instability; we refer to [6,11,13,22,23,24] for more details.
The nonlinear RT instability results mentioned above concern either incompressible flows or compressible isentropic flows in a spatially periodic domain. To the best of our knowledge, there is no result on the nonlinear RT instability of compressible non-isentropic flows in a general bounded domain. In this paper we prove the nonlinear RT instability, in the sense of Hadamard, for the initial-boundary value problem (1.4)-(1.6) of a compressible non-isentropic flow without heat diffusion in a general bounded domain. Moreover, we show that the sharp growth rate of solutions to the linearized problem (1.7) is not less than that in the corresponding incompressible fluid case [17]; this means that the compressibility does not have a stabilizing effect in the linearized problem (1.5)-(1.7) (see also Remark 1.2). Besides, the condition (1.8) is not needed in the proof of the nonlinear instability. The current work is a continuation of our previous study [17], where incompressible fluids were investigated.
Before stating the main result of this paper, we explain the notation used throughout. For simplicity, we drop the domain $\Omega$ in Sobolev spaces and the corresponding norms, as well as in integrals over $\Omega$; for example,
$$L^p := L^p(\Omega),\quad H^1_0 := W^{1,2}_0(\Omega),\quad H^k := W^{k,2}(\Omega),\quad \int := \int_\Omega.$$
In addition, a product space $(X)^n$ of vector functions is still denoted by $X$; for example, a vector function $u\in (H^2)^3$ is denoted by $u\in H^2$ with norm $\|u\|_{H^2} := (\sum_{k=1}^3\|u_k\|_{H^2}^2)^{1/2}$. We shall use the abbreviations
$$D^k := \{\partial_{x_1}^{k_1}\partial_{x_2}^{k_2}\partial_{x_3}^{k_3}\}_{k_1+k_2+k_3=k},\qquad \|D^k f\| := \sum_{k_1+k_2+k_3=k}\|\partial_{x_1}^{k_1}\partial_{x_2}^{k_2}\partial_{x_3}^{k_3} f\|\quad\text{for some norm } \|\cdot\|.$$
Now we are able to state our main result on the nonlinear RT instability of the problem (1.4)-(1.6).
Theorem 1.1. Assume that the RT density profile $\bar\rho$ and the steady internal energy $\bar e$ satisfy (1.2)-(1.3). Then the steady state $(\bar\rho, 0, \bar e)$ of the system (1.4)-(1.6) is unstable in the Hadamard sense; that is, there are positive constants $\Lambda$, $m_0$, $\varepsilon$ and $\delta_0$, and functions $(\bar\varrho_0, \bar u_0, \bar\theta_0, u_r)\in H^3$, such that for any $\delta\in(0,\delta_0)$ and the initial data
$$(\varrho_0, u_0, \theta_0) := \delta(\bar\varrho_0, \bar u_0, \bar\theta_0) + \delta^2(\bar\varrho_0, u_r, \bar\theta_0)\in H^3,$$
there is a unique solution $(\varrho, u, \theta)\in C^0([0, T^{\max}), H^3)$ of (1.4)-(1.6) satisfying the compatibility condition and
$$\|(u_1, u_2)(T^\delta)\|_{L^2},\ \|u_3(T^\delta)\|_{L^2} \ge \varepsilon \tag{1.9}$$
for some escape time $T^\delta := \frac{1}{\Lambda}\ln\frac{2\varepsilon}{m_0\delta}\in(0, T^{\max})$, where $T^{\max}$ denotes the maximal time of existence of the solution $(\varrho, u, \theta)$, and $u_i$ denotes the $i$-th component of $u = (u_1, u_2, u_3)^T$.

Remark 1.1. Under the assumptions of Theorem 1.1, if we further assume that
$$\bar\rho' \ge 0, \tag{1.10}$$
then we can also get the instability of the perturbed density, i.e., Theorem 1.1 holds with $\|\varrho(T^\delta)\|_{L^2}\ge\varepsilon$. The additional condition (1.10) is used to show $\bar\varrho_0 := \mathrm{div}(\bar\rho\tilde v_0)\not\equiv 0$ in the construction of a linear unstable solution (cf. (2.9)), where $(\bar\varrho_0, \tilde v_0)$ is a solution to the time-independent system (2.1). It is not clear to the authors whether one could get $\bar\varrho_0\not\equiv 0$ without the condition (1.10). In the incompressible fluid case, however, we can obtain $\bar\varrho_0\not\equiv 0$ without (1.10).
Remark 1.2. The constant $\Lambda > 0$ in Theorem 1.1 is called the sharp growth rate, since any solution $(\varrho, \hat u, \theta)$ to (1.5)-(1.7) satisfies $\|(\varrho, \hat u, \theta)(t)\|_{H^2}^2 \le Ce^{2\Lambda t}\|(\varrho_0, u_0, \theta_0)\|_{H^2}^2$ for some constant $C$ (see the Appendix). Moreover, $\Lambda$ is uniquely determined by the relation (2.22). Recently, we proved the nonlinear RT instability in nonhomogeneous incompressible viscous fluids in [17], where the sharp growth rate $\Lambda_{\mathrm{inc}}$ is defined by
$$\Lambda_{\mathrm{inc}}^2 := \sup_{\tilde v\in\{H_0^1\,\mid\,\mathrm{div}\,\tilde v = 0,\ \int\bar\rho|\tilde v|^2dx = 1\}}\Big\{g\int\bar\rho'\tilde v_3^2\,dx - \Lambda_{\mathrm{inc}}\,\mu\int|\nabla\tilde v|^2dx\Big\}.$$
If we consider the incompressible fluid case corresponding to (1.5)-(1.7), we easily find that Λ inc is also the sharp growth rate of the incompressible fluid case corresponding to (1.5)-(1.7). On the other hand, by the relation (2.22), one easily gets Λ ≥ Λ inc by contradiction. Hence, we can conclude that the compressibility does not have a stabilizing effect in the linearized problem for compressible non-isentropic flows without heat conductivity.
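The comparison $\Lambda \ge \Lambda_{\mathrm{inc}}$ claimed above can be sketched as follows; this is only an illustration based on the two variational characterizations, not a reproduction of the argument in [17]. For divergence-free test functions the $\mu_0$- and $\bar p$-terms in (2.22) drop out, so if, for contradiction, $\Lambda < \Lambda_{\mathrm{inc}}$, then restricting the supremum in (2.22) to divergence-free $w$ with $\int\bar\rho|w|^2dx = 1$ gives
$$\Lambda^2 \ge \sup_{\substack{w\in H_0^1,\ \mathrm{div}\,w = 0\\ \int\bar\rho|w|^2dx = 1}}\Big\{g\int\bar\rho' w_3^2\,dx - \Lambda\,\mu\int|\nabla w|^2dx\Big\} \ge \sup_{\substack{w\in H_0^1,\ \mathrm{div}\,w = 0\\ \int\bar\rho|w|^2dx = 1}}\Big\{g\int\bar\rho' w_3^2\,dx - \Lambda_{\mathrm{inc}}\,\mu\int|\nabla w|^2dx\Big\} = \Lambda_{\mathrm{inc}}^2,$$
contradicting $\Lambda < \Lambda_{\mathrm{inc}}$.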
Remark 1.3. Let $\bar e > 0$ be a constant and let $\bar\rho$ satisfy $\sup_{x\in\Omega}\bar\rho' < 0$ and $\nabla\bar p = -\bar\rho g e_3$; then the linearized system (1.5)-(1.7) around $(\bar\rho, 0, \bar e)$ is stable. More precisely, any solution to the system (1.5)-(1.7) satisfies
$$\int\Big(\frac{\varrho^2}{-\bar\rho'} + \frac{\bar\rho|u|^2}{g} + \frac{\bar\rho\theta^2}{g\bar e}\Big)dx + \int_0^t\int\Big(\frac{2\mu}{g}|\nabla u|^2 + \frac{2\mu_0}{g}|\mathrm{div}\,u|^2\Big)dx\,dt = \int\Big(\frac{\varrho_0^2}{-\bar\rho'} + \frac{\bar\rho|u_0|^2}{g} + \frac{\bar\rho\theta_0^2}{g\bar e}\Big)dx.$$
However, it is not clear to the authors whether the corresponding nonlinear system (1.4)-(1.6) around the state $(\bar\rho, 0, \bar e)$ is stable, even if $\bar\rho'$ is a positive constant. We mention that the stability of a nonhomogeneous incompressible viscous flow around a steady state $(\bar\rho, 0)$ with $\bar\rho'$ a positive constant was shown by making use of the incompressibility condition $\mathrm{div}\,u = 0$; see [17, Theorem 1.2] for details.
Remark 1.4. We remark that our results cannot be generalized to the case with heat conduction, i.e., when the term $\kappa_\nu\Delta e$ is added to the right-hand side of equation (1.4)$_3$, where $\kappa_\nu = \kappa/c_\nu$, $\kappa$ is the heat conductivity coefficient and $c_\nu$ is the specific heat at constant volume, since there does not exist a steady solution $(\bar\rho, 0, \bar e)$ satisfying (1.2), (1.3) and $\Delta\bar e = 0$. In fact, if such a steady solution existed, then $\bar e$ would have the form:
e = c 1 1ds = −g(aρ(x 3 )) −1 ρ(x 3 )dx 3 > 0 in Ω, for some constant c 1 > 0. Thus, one has −g ρ(x 3 )dx 3 = ac 1ρ (x 3 ) 1dx 3 , whence, 0 > −gρ(x 3 ) = ac 1ρ (x 3 ) + ac 1ρ ′ (x 3 ) 1dx 3 = ac 1ρ (x 3 ) + aρ ′ē (x 3 ) > 0 for x 3 = x 0 3 ,
which obviously is a contradiction.
Next, we sketch the main idea in the proof of Theorem 1.1. The proof is broken up into three steps. Firstly, as in [17] we make the following ansatz of growing mode solutions to the linearized problem:
(̺(x, t), u(x, t), θ(x, t)) = e Λt (ρ(x),ṽ(x),θ(x)) for some Λ > 0 (1.11) and deduce (1.7) thus into a time-independent PDE system on the unknown functionṽ. Then we adapt and modify the modified variational method in [12] to the time-independent system to get a non-trivial solutionṽ with a sharp growth rate Λ, which immediately implies that the linearized problem has a unstable solution in the form (1.11). This idea was used probably first by Guo and Tice to deal with an ODE problem arising in constructing unstable linear solutions, and later adapted by other researchers to treat other linear instability problems of viscous fluids, see [16,20]. Here we directly adapt this idea to the time-independent PDE system to avoid the use of the Fourier transform and to relax the restriction on domains. Secondly, we establish energy estimates of Gronwall-type in H 3 -norm. Similar (global in time) estimates were obtained for the non-isentropic compressible Navier-Stokes equations with heat conductivity under the condition of small initial data and external forces [26,27]. Here we have to modify the arguments in [26,27] to deal with the compressible Navier-Stokes equations without heat conductivity. Namely, we control the sumē̺ +ρθ as one term (see (3.16)) instead of dividing it into two terms in [27]; and we use the equations (1.4) 1 and (1.4) 2 independently to bound ̺ H 3 and θ H 3 (i.e. Lemma 3.1), rather than coupling the equations together to bound ̺ H 3 in [27]. With these slight modifications in techniques, we can get the desired estimates. Finally, we use the frame of bootstrap arguments in [9] to show Theorem 1.1 and we have to circumvent two additional difficulties due to presence of boundary which do not appear for spatially periodic problems considered in [9]: (i) The idea of Duhamel's principle on the linear solution operator in [9] can not be directly applied here to our boundary problem, since the nonlinear term in (1.4) 2 does not vanish on boundary. To overcome this trouble, we employ some specific energy estimates to replace Duhamel's principle (see Lemma 4.2 on the error estimate for (̺ d , u d , θ d ) 2
L 2 ). (ii) At the boundary the initial data of the linearized problem may not satisfy the compatibility condition imposed for the initial data of the corresponding nonlinear system (1.4)-(1.6). To circumvent this difficulty, we use the elliptic theory to construct initial data of (1.4)-(1.6) that satisfy the compatibility condition and are close to the initial data of the linearized problem. We also mention that in [17] the authors got around a similar problem of compatibility conditions for incompressible flows by imposing the condition (1.8) and introducing a new energy functional to show that the initial data of the linearized problem can be used as the initial data of the corresponding nonlinear incompressible system. The rest of this paper is organized as follows. In Section 2 we construct unstable linear solutions, while in Section 3 we deduce the nonlinear energy estimates. Section 4 is dedicated to the proof of Theorem 1.1, and finally, in Appendix we give a proof of the sharp growth rate of solutions to the linearized problem in H 2 -norm.
Linear instability
In this section, we adapt the modified variational method in [12] to construct a solution to the linearized equations (1.7) that has growing H 3 -norm in time. We first make a solution ansatz (1.11) of growing normal mode. Substituting this ansatz into (1.7), one obtains the following time-independent system:
$$\begin{cases}\Lambda\tilde\rho + \mathrm{div}(\bar\rho\tilde v) = 0,\\ \Lambda\bar\rho\tilde v + a\nabla(\bar e\tilde\rho + \bar\rho\tilde\theta) = \mu\Delta\tilde v + \mu_0\nabla\mathrm{div}\,\tilde v - g\tilde\rho e_3,\\ \Lambda\tilde\theta + \bar e'\tilde v_3 + a\bar e\,\mathrm{div}\,\tilde v = 0,\\ \tilde v|_{\partial\Omega} = 0.\end{cases} \tag{2.1}$$
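The reduction to a problem for $\tilde v$ alone, carried out next, rests on two elimination identities. The following is a brief sketch for the reader's convenience; it assumes the steady relations $\bar p = a\bar\rho\bar e$ and $a(\bar\rho\bar e)' = -g\bar\rho$, which is our reading of the hypotheses (1.2)-(1.3) stated earlier in the paper. From (2.1)$_1$ and (2.1)$_3$,
$$\tilde\rho = -\frac{1}{\Lambda}\mathrm{div}(\bar\rho\tilde v), \qquad \tilde\theta = -\frac{1}{\Lambda}\big(\bar e'\tilde v_3 + a\bar e\,\mathrm{div}\,\tilde v\big),$$
so that
$$a\Lambda\big(\bar e\tilde\rho + \bar\rho\tilde\theta\big) = -a(\bar\rho\bar e)'\tilde v_3 - a(1+a)\bar\rho\bar e\,\mathrm{div}\,\tilde v = g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v, \qquad -g\Lambda\tilde\rho\,e_3 = g\big(\bar\rho'\tilde v_3 + \bar\rho\,\mathrm{div}\,\tilde v\big)e_3.$$
Multiplying the second equation in (2.1) by $\Lambda$ and substituting these identities yields the reduced boundary problem stated next.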
Eliminating̺ andθ, one has
$$\Lambda^2\bar\rho\tilde v + \nabla\big(g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v\big) = \Lambda\mu\Delta\tilde v + \Lambda\mu_0\nabla\mathrm{div}\,\tilde v + \big(g\bar\rho'\tilde v_3 + g\bar\rho\,\mathrm{div}\,\tilde v\big)e_3, \qquad \tilde v|_{\partial\Omega} = 0, \tag{2.2}$$
where $\tilde v_3$ denotes the third component of $\tilde v$. In view of the basic idea of the modified variational method, we modify the boundary problem (2.2) as follows.
$$\Lambda^2\bar\rho\tilde v + \nabla\big(g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v\big) = s\mu\Delta\tilde v + s\mu_0\nabla\mathrm{div}\,\tilde v + \big(g\bar\rho'\tilde v_3 + g\bar\rho\,\mathrm{div}\,\tilde v\big)e_3, \qquad \tilde v|_{\partial\Omega} = 0. \tag{2.3}$$
We remark that if s = Λ (fixed point), then the problem (2.3) becomes (2.2). Now, multiplying (2.3) 1 byṽ and integrating the resulting identity, we get
$$\Lambda^2\int\bar\rho|\tilde v|^2dx = \int\{g\bar\rho'\tilde v_3^2 + [2g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v]\mathrm{div}\,\tilde v\}dx - s\int\big(\mu|\nabla\tilde v|^2 + \mu_0|\mathrm{div}\,\tilde v|^2\big)dx. \tag{2.4}$$
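For completeness, the two integrations by parts behind (2.4) are (a sketch, using $\tilde v|_{\partial\Omega} = 0$):
$$\int\nabla\big(g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v\big)\cdot\tilde v\,dx = -\int\big(g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v\big)\mathrm{div}\,\tilde v\,dx, \qquad s\int(\mu\Delta\tilde v + \mu_0\nabla\mathrm{div}\,\tilde v)\cdot\tilde v\,dx = -s\int\big(\mu|\nabla\tilde v|^2 + \mu_0|\mathrm{div}\,\tilde v|^2\big)dx,$$
and $\int g(\bar\rho'\tilde v_3 + \bar\rho\,\mathrm{div}\,\tilde v)e_3\cdot\tilde v\,dx = \int g(\bar\rho'\tilde v_3^2 + \bar\rho\tilde v_3\,\mathrm{div}\,\tilde v)dx$; combining the three gives (2.4).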
We define
$$E_1(\tilde v) = \int\{g\bar\rho'\tilde v_3^2 + [2g\bar\rho\tilde v_3 - (1+a)\bar p\,\mathrm{div}\,\tilde v]\mathrm{div}\,\tilde v\}dx \quad\text{and}\quad E_2(\tilde v) = \int\big(\mu|\nabla\tilde v|^2 + \mu_0|\mathrm{div}\,\tilde v|^2\big)dx.$$
Then the standard energy functional for the problem (2.3) is given by
$$E(\tilde v) = E_1(\tilde v) - sE_2(\tilde v) \tag{2.5}$$
with an associated admissible set
$$\mathcal A := \Big\{\tilde v\in H_0^1\ \Big|\ J(\tilde v) := \int\bar\rho|\tilde v|^2dx = 1\Big\}. \tag{2.6}$$
Recalling (2.4), we can thus find Λ by maximizing
$$\Lambda^2 := \sup_{\tilde v\in\mathcal A}E(\tilde v). \tag{2.7}$$
Obviously, supṽ ∈A E(ṽ) < ∞ for any s ≥ 0. In order to emphasize the dependence of E(ṽ) upon s > 0, we shall sometimes write
$$E(\tilde v, s) := E(\tilde v) \quad\text{and}\quad \alpha(s) := \sup_{\tilde v\in\mathcal A}E(\tilde v, s).$$
Next we show that a maximizer of (2.7) exists and that the corresponding Euler-Lagrange equations are equivalent to (2.3).
Proposition 2.1. Assume that $(\bar\rho, \bar e)$ satisfies (1.2)-(1.3). Then for any fixed $s > 0$, the following assertions hold.
(1) $E(\tilde v)$ achieves its supremum on $\mathcal A$.
(2) Let $\tilde v_0$ be a maximizer and $\Lambda := \sup_{\tilde v\in\mathcal A}E(\tilde v) > 0$. Then $\tilde v_0\in H^4$ satisfies the boundary problem (2.3) and
$$\tilde v_{01}^2 + \tilde v_{02}^2 \not\equiv 0. \tag{2.8}$$
In addition,
$$\mathrm{div}(\bar\rho\tilde v_0) \not\equiv 0, \quad\text{provided } \bar\rho' \ge 0. \tag{2.9}$$
Proof. (1) Letṽ n ∈ A be a maximizing sequence, then E(ṽ n ) is bounded from below. This fact together with (2.6) implies thatṽ n is bounded in H 1 . So, there exists aṽ 0 ∈ H 1 ∩ A and a subsequence (still denoted by v n for simplicity), such thatṽ n →ṽ 0 weakly in H 1 and strongly in L 2 . Moreover, by the lower semi-continuity, one has
sup v∈A E(ṽ) = lim sup n→∞ E(ṽ n ) = lim n→∞ (gρ ′ṽ2 n3 + 2gρṽ n3 divṽ n )dx − lim inf n→∞ [(1 + a)pdivṽ n divṽ n + s µ|∇ṽ n | 2 + µ 0 |divṽ n | 2 ]dx ≤E(ṽ 0 ) ≤ sup v∈A E(ṽ),
which shows that E(ṽ) achieves its supremum on A.
(2) To show the second assertion, we notice that since E(ṽ) and J(ṽ) are homogeneous of degree 2, (2.7) is equivalent to
$$\Lambda^2 = \sup_{\tilde v\in H_0^1}\frac{E(\tilde v)}{J(\tilde v)}. \tag{2.10}$$
For any τ ∈ R and w ∈ H 1 0 , we takew(τ ) :=ṽ 0 + τ w. Then (2.10) gives
$$E(\tilde w(\tau)) - \Lambda^2 J(\tilde w(\tau)) \le 0.$$
If we set $I(\tau) = E(\tilde w(\tau)) - \Lambda^2 J(\tilde w(\tau))$, then we see that $I(\tau)\in C^1(\mathbb R)$, $I(\tau)\le 0$ for all $\tau\in\mathbb R$ and $I(0) = 0$. This implies $I'(0) = 0$. Hence, a direct computation leads to
$$\int_\Omega\{s\mu\nabla\tilde v_0 : \nabla\tilde w + [s\mu_0 + (1+a)\bar p]\,\mathrm{div}\,\tilde v_0\,\mathrm{div}\,\tilde w\}dx = \int_\Omega\big[g\bar\rho\,\mathrm{div}\,\tilde v_0\,e_3 + g\bar\rho'\tilde v_{03}e_3 - \nabla(g\bar\rho\tilde v_{03}) - \Lambda^2\bar\rho\tilde v_0\big]\cdot\tilde w\,dx, \tag{2.11}$$
which shows that $\tilde v_0$ is a weak solution to the boundary problem (2.3). Recalling that $0 < \bar p\in C^4(\Omega)$, $\bar\rho\in C^4(\Omega)$ and $\tilde v_0\in H^1(\Omega)$, by a bootstrap argument and the classical elliptic theory, we infer from the weak form (2.11) that $\tilde v_0\in H^4(\Omega)$.
Next we turn to the proof of (2.8) and (2.9) by contradiction. Suppose thatṽ 2 01 +ṽ 2 02 ≡ 0 or div(̺ṽ 0 ) ≡ 0, then
0 < Λ 2 = {gρ ′ṽ2 03 + [2gρṽ 03 − (1 + a)p∂ x 3ṽ 03 ]∂ x 3ṽ 03 }dx − s µ|∇ṽ 03 | 2 + µ 0 |∂ x 3ṽ 03 | 2 dx = − (1 + a)p|∂ x 3ṽ 03 | 2 dx − s µ|∇ṽ 03 | 2 + µ 0 |∂ x 3ṽ 03 | 2 dx < 0, (2.12) or 0 < Λ 2 = {gρ ′ṽ2 03 + [2gρṽ 03 − (1 + a)pdivṽ 0 ]divṽ 0 }dx − s µ|∇ṽ 0 | 2 + µ 0 |divṽ 0 | 2 dx = − [gρ ′ṽ2 03 + (1 + a)p|divṽ 0 | 2 ]dx − s µ|∇ṽ 0 | 2 + µ 0 |divṽ 0 | 2 dx < 0,(2.13)
which is a contradiction in either case. Therefore, (2.8) and (2.9) hold. This completes the proof.
Next, we want to show that there is a fixed point such that Λ = s > 0. To this end, we first give some properties of α(s) as a function of s > 0.
Proposition 2.2. Assume that (ρ,ē) satisfies (1.2)-(1.3).
Then the function α(s) defined on (0, ∞) enjoys the following properties:
(1) $\alpha(s)\in C^{0,1}_{\mathrm{loc}}(0,\infty)$ and is nonincreasing.
(2) There are constants $c_1, c_2 > 0$, depending on $g$, $\bar\rho$ and $\mu$, such that
$$\alpha(s) \ge c_1 - s c_2. \tag{2.14}$$
Proof.
(1) Let {ṽ n s i } ⊂ A be a maximizing sequence of supṽ ∈A E(ṽ, s i ) = α(s i ) for i = 1 and 2. Then
α(s 1 ) ≥ lim sup n→∞ E(ṽ n s 2 , s 1 ) ≥ lim inf n→∞ E(ṽ n s 2 , s 2 ) = α(s 2 ) for any 0 < s 1 < s 2 < ∞.
Hence α(s) is nonincreasing on (0, ∞). Next we use this fact to show the continuity of α(s).
Let I := [b, c] ⊂ (0, ∞) be a bounded interval.
Noting that, by Cauchy-Schwarz's inequality,
$$E(\tilde v) \le \int\big(g\bar\rho'\tilde v_3^2 + 2g\bar\rho\tilde v_3\,\mathrm{div}\,\tilde v\big)dx - (1+a)\int\bar p|\mathrm{div}\,\tilde v|^2dx \le g\Big\|\frac{\bar\rho'}{\bar\rho}\Big\|_{L^\infty} + g^2\Big\|\frac{\bar\rho}{(1+a)\bar p}\Big\|_{L^\infty}\quad\text{for any }\tilde v\in\mathcal A,$$
by the monotonicity of $\alpha(s)$ we have
$$|\alpha(s)| \le \max\Big\{|\alpha(b)|,\ g\Big\|\frac{\bar\rho'}{\bar\rho}\Big\|_{L^\infty} + g^2\Big\|\frac{\bar\rho}{(1+a)\bar p}\Big\|_{L^\infty}\Big\} := L < \infty \quad\text{for any } s\in I. \tag{2.15}$$
On the other hand, for any $s\in I$, there exists a maximizing sequence $\{\tilde v_s^n\}\subset\mathcal A$ of $\sup_{\tilde v\in\mathcal A}E(\tilde v, s)$, such that
$$|\alpha(s) - E(\tilde v_s^n, s)| < 1. \tag{2.16}$$
Making use of (2.5), (2.15) and (2.16), we infer that
$$0 \le \int\big(\mu|\nabla\tilde v_s^n|^2 + \mu_0|\mathrm{div}\,\tilde v_s^n|^2\big)dx = \frac{1}{s}\int\{g\bar\rho'|\tilde v_{s3}^n|^2 + [2g\bar\rho\tilde v_{s3}^n - (1+a)\bar p\,\mathrm{div}\,\tilde v_s^n]\mathrm{div}\,\tilde v_s^n\}dx - \frac{E(\tilde v_s^n, s)}{s} \le \frac{1+L}{b} + \frac{g}{b}\Big\|\frac{\bar\rho'}{\bar\rho}\Big\|_{L^\infty} + \frac{g^2}{b}\Big\|\frac{\bar\rho}{(1+a)\bar p}\Big\|_{L^\infty} := K.$$
Thus, for $s_i\in I$ ($i = 1, 2$), we further find that
$$\alpha(s_1) = \limsup_{n\to\infty}E(\tilde v_{s_1}^n, s_1) \le \limsup_{n\to\infty}E(\tilde v_{s_1}^n, s_2) + |s_1 - s_2|\limsup_{n\to\infty}\int\big(\mu|\nabla\tilde v_{s_1}^n|^2 + \mu_0|\mathrm{div}\,\tilde v_{s_1}^n|^2\big)dx \le \alpha(s_2) + K|s_1 - s_2|. \tag{2.17}$$
Reversing the role of the indices 1 and 2 in the derivation of the inequality (2.17), we obtain the same boundedness with the indices switched. Therefore, we deduce that
|α(s 1 ) − α(s 2 )| ≤ K|s 1 − s 2 |,
which yields α(s) ∈ C 0,1 loc (0, ∞). (2) We turn to prove (2.14). First we construct a function v ∈ H 1 0 , such that
$$\mathrm{div}\,v = 0, \qquad \int\bar\rho' v_3^2\,dx > 0. \tag{2.18}$$
Since $\bar\rho'(x_3^0) > 0$ for some point $x_3^0\in\{x_3\mid(x_1,x_2,x_3)\in\Omega\}$, there is a ball $B^\delta_{x^0} := \{x\mid|x - x^0| < \delta\}\subset\Omega$ such that $\bar\rho' > 0$ on $B^\delta_{x^0}$. Now, choose a smooth function $f(r)\in C^1(\mathbb R)$ such that $f(r) = -f(-r)$, $|f(r)| > 0$ if $0 < |r| < \delta/4$, and $f(r) = 0$ if $|r|\ge\delta/4$, and then define
$$\bar v(x) := f(x_1)\Big(0,\ -f(x_3)\int_{-\delta/4}^{x_2}f(r)dr,\ f(x_2)\int_{-\delta/4}^{x_3}f(r)dr\Big).$$
It is easy to check that the non-zero function $\bar v(x)\in H_0^1(B_0^\delta)$; thus $v := \bar v(x - x^0)\in H_0^1(\Omega)$ satisfies (2.18). Consequently,
$$\alpha(s) = \sup_{\tilde v\in H_0^1}\frac{E(\tilde v, s)}{J(\tilde v)} \ge \frac{E(v, s)}{J(v)} = \frac{g\int\bar\rho' v_3^2\,dx}{\int\bar\rho|v|^2dx} - s\,\frac{\mu\int|\nabla v|^2dx}{\int\bar\rho|v|^2dx} := c_1 - s c_2$$
for two positive constants c 1 := c 1 (g,ρ) and c 2 := c 2 (g, µ,ρ). This completes the proof of Proposition 2.2.
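As a quick check of the construction just used (a verification sketch based on the form of $\bar v$ given in the proof): the first component of $\bar v$ vanishes, and
$$\mathrm{div}\,\bar v = \partial_{x_2}\Big(-f(x_1)f(x_3)\int_{-\delta/4}^{x_2}f(r)dr\Big) + \partial_{x_3}\Big(f(x_1)f(x_2)\int_{-\delta/4}^{x_3}f(r)dr\Big) = f(x_1)\big[-f(x_3)f(x_2) + f(x_2)f(x_3)\big] = 0,$$
while $\bar\rho' > 0$ on $B^\delta_{x^0}$ and $\bar v_3\not\equiv 0$ there give $\int\bar\rho' v_3^2\,dx > 0$, so (2.18) indeed holds.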
Next we show that there exists a function $\tilde v$ satisfying (2.2) with a growth rate $\Lambda$. Let $S := \sup\{s\mid\alpha(\tau) > 0\ \text{for any}\ \tau\in(0,s)\}$.
By virtue of Proposition 2.2, $S > 0$; moreover, $\alpha(s) > 0$ for any $s < S$. Since $\alpha(s) = \sup_{\tilde v\in\mathcal A}E(\tilde v, s) < \infty$, we make use of the monotonicity of $\alpha(s)$ to deduce that $\lim_{s\to 0}\alpha(s)$ exists and the limit is a positive constant. (2.19) On the other hand, by virtue of Poincaré's inequality, there is a constant $c_3$ depending on $g$, $\bar\rho$ and $\Omega$, such that
g (ρ ′ṽ2 3 + 2ρṽ 3 divṽ)dx ≤ c 3 |∇ṽ| 2 dx for anyṽ ∈ A.
Thus, if s > c 3 /µ, then
$$\int g\big(\bar\rho'\tilde v_3^2 + 2\bar\rho\tilde v_3\,\mathrm{div}\,\tilde v\big)dx - s\mu\int|\nabla\tilde v|^2dx < 0 \quad\text{for any }\tilde v\in\mathcal A,$$
so that $\alpha(s) \le 0$ for $s > c_3/\mu$; in particular, $S \le c_3/\mu < \infty$. (2.20) Employing a fixed-point argument, exploiting (2.19), (2.20) and the continuity of $\alpha(s)$ on $(0, S)$, we find that there exists a unique $\Lambda\in(0, S)$ such that
$$\Lambda^2 = \alpha(\Lambda) = \sup_{\tilde v\in\mathcal A}E(\tilde v, \Lambda) > 0. \tag{2.21}$$
In view of Proposition 2.1, there is a solution $\tilde v\in H^4$ to the boundary problem (2.3) with $\Lambda$ constructed in (2.21). Moreover, $\Lambda^2 = E(\tilde v, \Lambda)$, $\tilde v_1^2 + \tilde v_2^2\not\equiv 0$ and $\tilde v_3\not\equiv 0$ by (2.21) and (2.5). In addition, $\mathrm{div}(\bar\rho\tilde v)\not\equiv 0$ provided $\bar\rho'\ge 0$. Thus we have proved

Proposition 2.3. Assume that $(\bar\rho, \bar e)$ satisfies (1.2)-(1.3). Then there exists a $\tilde v\in H^4$ satisfying the boundary problem (2.2) with a growth rate $\Lambda > 0$ defined by
$$\Lambda^2 = \sup_{w\in H_0^1(\Omega)}\frac{E_1(w) - \Lambda E_2(w)}{\int\bar\rho|w|^2dx}. \tag{2.22}$$
Moreover, $\tilde v$ satisfies $\mathrm{div}(\bar\rho\tilde v)\not\equiv 0$, $\tilde v_1^2 + \tilde v_2^2\not\equiv 0$ and $\tilde v_3\not\equiv 0$. In particular, let $\tilde\rho := -\mathrm{div}(\bar\rho\tilde v)/\Lambda$ and $\tilde\theta := -(\bar e'\tilde v_3 + a\bar e\,\mathrm{div}\,\tilde v)/\Lambda$; then $(\tilde\rho, \tilde v, \tilde\theta)\in H^3$ satisfies (2.1). In addition, $\tilde\rho\not\equiv 0$ provided $\bar\rho'\ge 0$.
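The fixed point used in (2.21) can be located as follows; this is a brief sketch relying only on the properties of $\alpha$ from Proposition 2.2 together with (2.19)-(2.20), and it is not meant to replace the authors' argument. Set
$$\phi(s) := \sqrt{\alpha(s)}, \qquad s\in(0, S).$$
By Proposition 2.2, $\phi$ is continuous and nonincreasing; by (2.19), $\lim_{s\to 0^+}\phi(s) > 0$; and by (2.20), $S < \infty$ with $\alpha(s)\to 0$ as $s\to S^-$. Hence $s - \phi(s)$ is continuous, strictly increasing, negative near $0$ and positive near $S$, so it has exactly one zero $\Lambda\in(0, S)$, i.e. $\Lambda^2 = \alpha(\Lambda) = \sup_{\tilde v\in\mathcal A}E(\tilde v, \Lambda)$.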
As a result of Proposition 2.3, one immediately gets the following linear instability.
Theorem 2.1. Assume that $(\bar\rho, \bar e)$ satisfies (1.2)-(1.3). Then the steady state $(\bar\rho, 0, \bar e)$ of the linearized system (1.5)-(1.7) is linearly unstable. That is, there exists an unstable solution $(\varrho, u, \theta) := e^{\Lambda t}(\tilde\rho, \tilde v, \tilde\theta)$ to (1.5)-(1.7), such that $(\tilde\rho, \tilde v, \tilde\theta)\in H^3$ and
$$\|(u_1, u_2)(t)\|_{L^2},\ \|u_3(t)\|_{L^2} \to \infty \quad\text{as } t\to\infty,$$
where the constant growth rate $\Lambda$ is the same as in Proposition 2.3. Moreover, $\tilde\rho\not\equiv 0$ provided $\bar\rho'\ge 0$.
Nonlinear energy estimates
In this section, we derive some nonlinear energy estimates for the perturbed problem (1.4)-(1.6) and an estimate of Gronwall type in the $H^3$-norm, which will be used in the proof of Theorem 1.1 in the next section. To this end, let $(\varrho, u, \theta)$ be a solution of the perturbed problem (1.4)-(1.6) such that
$$E(t) := E(\varrho, u, \theta)(t) := \|(\varrho, u, \theta)(t)\|_{H^3} \tag{3.1}$$
is sufficiently small (the smallness depends on the physical parameters in (1.4)), and
0 < ρ ≤ ρ(t, x) ≤ρ := ̺ +ρ < ∞ for any t ≥ 0, x ∈ Ω,
where $\rho$ and $\bar\rho$ are two constants. We remark that these assumptions will be used repeatedly in what follows. Moreover, we assume that the solution $(\varrho, u, \theta)$ possesses enough regularity that the formal calculations below make sense. For simplicity, we only sketch the outline and omit the detailed calculations, for which we shall repeatedly use the Sobolev embedding theorem [28, Subsection 1.3.5.8], Young's, Hölder's and Poincaré's inequalities, and the following interpolation inequality [1, Chapter 5]:
$$\|f\|_{H^j} \lesssim \|f\|_{L^2}^{\frac{1}{j+1}}\|f\|_{H^{j+1}}^{\frac{j}{j+1}} \le C_\epsilon\|f\|_{L^2} + \epsilon\|f\|_{H^{j+1}} \quad\text{for any constant }\epsilon > 0.$$
In addition, we shall always use the following abbreviations in what follows.
E 0 = E(̺ 0 , u 0 , N 0 ), d dt := ∂ t + u · ∇ denotes the material derivative, L ̺ ≡ L ̺ (̺, u) := ̺ t +ρ ′ u 3 +ρdivu = −div(̺u) := N ̺ (̺, u) ≡ N ̺ , (3.2) L u ≡ L u (̺, u, θ) :=ρu t + a∇(ē̺ +ρθ) − µ∆u − µ 0 ∇divu + g̺e 3 = −(̺ +ρ)u · ∇u − ̺u t − a∇(̺θ) := N u (̺, θ, u) ≡ N u , (3.3) L θ ≡ L θ (̺, u, θ) := θ t +ē ′ u 3 + aēdivu = −u · ∇θ − aθdivu +[µ|∇u + ∇(u) T | 2 /2 + λ(divu) 2 ]/(̺ +ρ) := N θ (̺, u, θ) ≡ N θ , (3.4) R(t) := ̺, θ, u t , d dt (ē̺ +ρθ) 2 H 2 + E( u H 3 + u 2 H 4 + E 2 ),
and the symbol a b means that a ≤ Cb for some constant C > 0 which may depend on some physical parameters in the perturbed equations (1.4). Now, we start to establish a series of lemmas which imply a priori estimates for the perturbed density, velocity and temperature. Firstly, from the following identities
t 0 D k L ̺ D k ̺dxdτ = t 0 D k N ̺ D k ̺dxdτ, t 0 D k L θ D k θdxdτ = t 0 D k N θ D k θdxdτ for 0 ≤ k ≤ 3,
the following estimate on the perturbed density and temperature follows.
Lemma 3.1. For 0 ≤ k ≤ 3, it holds that (̺, θ) 2 H k (̺, θ)(0) 2 H k + t 0 E( u H k+1 + E 2 )dτ.
Secondly, we control the perturbed velocity. Since the viscosity term of (3.3) defines a strongly elliptic operator on u, we have for u ∈ H k ∩ H 1 0 (1 ≤ k ≤ 3) that
u 2 H k µ∆u + µ 0 ∇divu 2 H k−2 . (3.5)
Thus, applying (3.5) to the system
− µ∆u − µ 0 ∇divu = N u −ρu t − g̺e 3 − a∇(ē̺ +ρθ),(3.u t 2 H 1 + (̺, θ) 2 H 2 + E 4 .
Thirdly, we bound the time-derivative of the perturbed velocity.
Lemma 3.3. It holds that (̺, θ) t 2 H k u 2 H k+1 + E 4 E 2 for 0 ≤ k ≤ 2, (3.7) u t (t) 2 H 1 + t 0 u tt 2 L 2 dτ Du t (0) 2 L 2 + t 0 ( u 2 H 2 + E 4 )dτ, (3.8) u t (t) 2 H 2 u tt 2 L 2 + u 2 H 2 + E 4 . (3.9)
Proof. The inequality (3.7) follows directly from (3.2) and (3.4). By (3.3), we see that
u t 2 H 1 (̺, θ) 2 H 2 + u 2 H 3 + E 4 E 2 .
(3.10)
Hence, using (3.7) with k = 1, (3.10) and Poincaré's inequality, we get (3.8) from
t 0 L u t · u tt dxdτ = t 0 N u t · u tt dxdτ.
Finally, applying (3.5) to ∂ t (3.6) and making use of (3.7) and (3.10), we obtain (3.9).
Fourthly, we establish the interior estimates of higher-order mass derivatives ofē̺ +ρθ. Let χ 0 be an arbitrary but fixed function in C ∞ 0 (Ω). Then, recalling the equation
t 0 aχ 2 0ē ρ D k L ̺ D k ̺ + χ 2 0 D k L u · D k u + χ 2 0ρ e D k L θ D k θ dxdτ = t 0 χ 2 0ē ρ D k N ̺ D k ̺ + χ 2 0 D k N u · D k u + χ 2 0ρ e D k N θ D k θ dxdτ, one obtains Lemma 3.4. For 1 ≤ k ≤ 3, it holds that χ 0 D k (̺, u, θ)(t) 2 L 2 + t 0 χ 0 D k+1 u 2 L 2 + χ 0 D k d dt (ē̺ +ρθ) 2 L 2 dτ E 2 0 + t 0
Rdτ.
Fifthly, let us establish the estimates near the boundary. Similarly to that in [25,27]
, we choose a finite number of bounded open sets
{O j } N j=1 in R 3 , such that ∪ N j=1 O j ⊃ ∂Ω.
In each open set O j we choose the local coordinates (ψ, φ, r) as follows:
(1) The surface O j ∩ ∂Ω is the image of a smooth vector function y = (y 1 , y 2 , y 3 )(ψ, φ) (e.g., take the local geodesic polar coordinate), satisfying |y ψ | = 1, y ψ · y φ = 0, and |y φ | ≥ δ > 0, where δ is some positive constant independent of 1 ≤ j ≤ N.
(2) Any x ∈ O j is represented by
x i = x i (ψ, φ, r) = rn i (ψ, φ) + y i (ψ, φ), (3.11)
where n = (n 1 , n 2 , n 3 )(ψ, φ) represents the internal unit normal vector at the point of the surface coordinated (ψ, φ).
For the simplicity of presentation, we omit the subscript j in what follows. For k = 1, 2, we define the unit vectorẽ k = (ẽ 1 k ,ẽ 2 k ,ẽ 3 k ) byẽ i 1 = y i ψ ,ẽ i 2 = y i φ /|y φ |. Then Frenet-Serret's formula implies that there are smooth functions α, β, γ, α ′ , β ′ , γ ′ of (ψ, φ) satisfying
∂ ∂ψ ẽ i 1 e i 2 n i = 0 −γ −α γ 0 −β α β 0 ẽ i 1 e i 2 n i , ∂ ∂φ ẽ i 1 e i 2 n i = 0 −γ ′ −α ′ γ ′ 0 −β ′ α ′ β ′ 0 ẽ i 1 e i 2 n i .
An elementary calculation shows that the Jacobian J of the transform (3.11) is
J = |x ψ × x φ | = |y φ | + (α|y φ | + β ′ )r + (αβ ′ − βα ′ )r 2 . (3.12)
By (3.12), we find the transform (3.11) is regular by choosing r small if needed. Therefore, the functions (ψ, φ, r) x i (x) make sense and can be expressed by, using a straightforward calculation,
ψ x i = 1 J (x φ × x r ) i = 1 J (Aẽ i 1 + Be i 2 )
,
φ x i = 1 J (x r × x φ ) i = 1 J (Cẽ i 1 +Dẽ i 2 )
,
r x i = 1 J (x ψ × x φ ) i = n i ,(3.13)
where A = |y φ | + β ′ r, B = −rα ′ , C = −βr,D = 1 + αr and J = AD − BC > 0. Hence, (3.13) gives
∂ ∂x i = 1 J (Aẽ i 1 + Bẽ i 2 ) ∂ ∂ψ + 1 J (Cẽ i 1 +Dẽ i 2 ) ∂ ∂φ + n i ∂ ∂r .
Thus, in each O j , we can rewrite the equations (3.2)-(3.4) in the local coordinates (ψ, φ, r) as follows:
L ̺ := ̺ t + zero order terms of u 3 +ρ J [(Aẽ 1 + Bẽ 2 ) · u ψ + (Cẽ 1 +Dẽ 2 ) · u φ + Jn · u r ] = N ̺ L u :=ρu t − µ J 2 [(A 2 + B 2 )u ψψ + 2(AC + BD)u ψφ + (C 2 +D 2 )u φφ + J 2 u rr ] + less two order terms of u + g̺e 3 + 1 J (Aẽ 1 + Bẽ 2 ) µ 0 ρē +p d dt (ēρ +ρθ) + aē̺ + aρθ ψ + 1 J (Cẽ 1 +Dẽ 2 ) µ 0 ρē +p d dt (ēρ +ρθ) + aē̺ + aρθ φ + n µ 0 ρē +p d dt (ēρ +ρθ) + aē̺ + aρθ r = N u + µ 0 ∇{[ρ(N θ + u · ∇θ) −ē̺divu + (ē ′ ̺ +ρ ′ θ)u 3 ]/(ρē +p)} :=Ñ u , L θ := θ t + zero order terms of u 3 + aē J [(Aẽ 1 + Bẽ 2 ) · u ψ + (Cẽ 1 +Dẽ 2 ) · u φ + Jn · u r ] = N θ ,
where we note that J 2 = (AC + BD) 2 − (A 2 + B 2 )(C 2 +D 2 ). Let χ j be an arbitrary but fixed function in C ∞ 0 (O j ). Estimating the integral
t 0 aχ 2 jē ρ D k ψ,φL ̺ D k ψ,φ ̺ + χ 2 j D k ψ,φL u · D k ψ,φ u + χ 2 jρ e D k ψ,φL θ D k ψ,φ θ dxdτ = t 0 aχ 2 jē ρ D k ψ,φ N ̺ D k ψ,φ ̺ + χ 2 j D k ψ,φÑ u · D k ψ,φ u + χ 2 jρ e D k ψ,φ N θ D k ψ,φ θ dxdτ
in a way similar to that in Lemma 3.4, we obtain the following estimates on tangential derivatives:
Lemma 3.5. For 1 ≤ k ≤ 3, it holds that χ j D k ψ,φ (̺, u, θ)(t) 2 L 2 + t 0 χ j D k ψ,φ Du 2 L 2 + χ j D k ψ,φ d dt (ēρ +ρθ) 2 L 2 dτ E 2 0 + t 0 Rdτ, where χ j D k ψ,φ f 2 L 2 := k 1 +k 2 =k χ j ∂ k 1 ψ ∂ k 2 φ f 2 L 2 .
In order to bound the normal derivatives, we use the equations D r (ēL ̺ +ρL θ −ēN ̺ −ρN θ ) = 0 and n · (L u −Ñ u ) = 0, which have the form
d dt (ēρ +ρθ) r +ρē +p J [(Aẽ 1 + Bẽ 2 ) · u rψ + (Cẽ 1 +Dẽ 2 ) · u rφ + Jn · u rr ]
+ less than second order terms of u = [ρ(N θ + u · ∇θ) −ē̺divu + (ē ′ ̺ +ρ ′ θ)u 3 ] r andρ n · u t − µn J 2 · [(A 2 + B 2 )u ψψ + 2(AC + BD)u ψφ + (C 2 +D 2 )u φφ + J 2 u rr ] + less than second order terms of u + g̺e 3 · n + µ 0 ρē +p d dt (ēρ +ρθ) + aē̺ + aρθ r = n ·Ñ u (3.14)
Eliminating µn · u rr from (3.14), we get
(µ + µ 0 ) ρē +p d dt (ēρ +ρθ) + aē̺ + aρθ r = −ρn · u t + µn J 2 [(A 2 + B 2 )u ψψ + 2(AC + BD)u ψφ +(C 2 +D 2 )u φφ ] − µ J [(Aẽ 1 + Bẽ 2 ) · u rψ + (Cẽ 1 + Dẽ 2 ) · u rφ ] + less than second (3.15) order terms of u + g̺e 3 · n = n ·Ñ u + μ ρē +p [ρ(N θ + u · ∇θ) −ē̺divu + (ē ′ ̺ +ρ ′ θ)u 3 ] r .
If we apply D k ψ,φ D l r (k + l = 0, 1, 2) to (3.15), multiply then by χ 2 j D k ψ,φ D l r [d(ēρ +ρθ)/dt] r and χ 2 j D k ψ,φ D l r (ē̺ +ρθ) r respectively, and integrate them, we can bound the derivatives in the normal direction to the boundary as follows. Lemma 3.6. For 0 ≤ k + l ≤ 2, it holds that
χ j D k ψ,φ D l+1 r (ēρ +ρθ)(t) 2 L 2 + t 0 χ j D k ψ,φ D l+1 r (ēρ +ρθ) 2 L 2 + χ j D k ψ,φ D l+1 r d dt (ēρ +ρθ) 2 L 2 dτ (̺ 0 , θ 0 ) 2 H 3 + t 0 ( D k+1 ψ,φ D l r Du 2 L 2 + R)dτ.
Finally, we introduce the following lemma on the stationary Stokes equations to get the estimates on the tangential derivatives of both u andē̺ +ρθ.
Lemma 3.7. Consider the problem
−µ∆u + a∇σ = g, divu = f, u| ∂Ω = 0,
where f ∈ H k+1 and g ∈ H k (k ≥ 0). Then the above problem has a solution (σ, u) ∈ H k+1 × H k+2 ∩H 1 0 which is unique modulo a constant of integration for σ. Moreover, this solution satisfies
u 2 H k+2 + Dσ 2 H k f 2 H k+1 + g 2 H k .
Now, taking χ j D k ψ,φ (k = 1, 2) to the Stokes problem:
−µ∆u + a∇(ē̺ +ρθ) = N u −ρu t − g̺e 3 + µ 0 ∇divu, (ρē +p)divu =ρ(N θ + u · ∇θ) −ē̺divu + (ē ′ ̺ +ρ ′ θ)u 3 − d dt (ē̺ +ρθ) − (ρē) ′ u 3 , u| ∂Ω = 0, we obtain −µ∆(χ j D k ψ,φ u) + a∇[χ j D k
ψ,φ (ē̺ +ρθ)] = less than fourth order of u + less than third order of (̺, θ) + χ j D k ψ,φ (N u −ρu t − g̺e 3 + µ 0 ∇divu), div(χ j D k ψ,φ u) = less than third order of u + χ j D k
ψ,φ ρ(N θ + u · ∇θ) −ē̺ divu + (ē ′ ̺ +ρ ′ θ)u 3 − d dt (ē̺ +ρθ) − (ρē) ′ u 3 (ρē +p) −1 , χ j D ψ,φ u| ∂Ω = 0,
Applying Lemma 3.7 to the above problem, we obtain Lemma 3.8. For 0 ≤ l + k ≤ 2, we have
χ j D 2+l D k ψ,φ u 2 L 2 + χ j D 1+l D k ψ,φ (ēρ +ρθ) 2 L 2 χ j D 1+l D k ψ,φ d dt (ēρ +ρθ) 2 L 2 + R(t).
Now, we are able to establish the desired energy estimate. Putting Lemmas 3.5-3.8 together, we conclude that
3 k=0 t 0 χ j D k+1 u 2 L 2 + χ j D k d dt (ēρ +ρθ) 2 L 2 dτ E 2 0 + t 0 Rdτ,
which, together with Lemma 3.4, yields that
t 0 u 2 H 4 + d dt (ēρ +ρθ) 2 H 3 dτ E 2 0 + t 0
Rdτ.
Noting that, by Lemma 3.3, the interpolation inequality (for j = 4) and Young's inequality, one has
t 0 Rdτ E 2 0 + t 0 [ (̺, θ) 2 H 2 + E( u H 3 + E 2 )]dτ , whence t 0 u 2 H 4 + d dt (ēρ +ρθ) 2 H 3 dτ E 2 0 + t 0 (̺, θ) 2 H 2 + E( u H 3 + E 2 ) dτ. (3.16)
On the other hand, by Lemmas 3.1-3.2, and (3.8) in Lemma 3.3, we find that
E 2 (t) + (̺, θ) t 2 H 2 + u t (t) 2 H 1 + t 0 u tt 2 L 2 dτ E 2 0 + t 0 [ u 2 H 2 + E( u H 4 + E 2 )]dτ.
Consequently, in view of the above inequality and (3.16), and the interpolation inequality, we obtain
E 2 (t) + (̺, θ) t 2 H 2 + u t (t) 2 H 1 + t 0 u tt 2 L 2 dτ E 2 0 + t 0 C ǫ (̺, u, θ) 2 L 2 + E 2 (ǫ + C ǫ E) dτ,(3.17)
where the constant $C_\epsilon$ depends on $\epsilon$ and some physical parameters in (1.4). In particular, we shall take $\epsilon = \Lambda$ later on. Now, let us recall that the local existence and uniqueness of solutions to the perturbed equations (1.4) have been established in [21, Remark 6.1] for $\bar\rho$ and $\bar e$ being constants, while the global existence and uniqueness of small solutions to the perturbed equations (1.4) with heat conductivity have been shown in [26] for $(\bar\rho, \bar e)$ close to a constant state. By a slight modification of the proof of the local existence in [21,26], one can easily obtain the existence and uniqueness of a local solution $(\varrho, u, \theta)\in C^0([0,T], H^3)$ to the perturbed problem (1.4)-(1.6) for some $T > 0$. Moreover, this local solution satisfies the above a priori estimate (3.17). Therefore, we arrive at the following conclusion:

Proposition 3.1. Assume that $(\bar\rho, \bar e)$ satisfies (1.2)-(1.3). For any given initial data $(\varrho_0, u_0, \theta_0)\in H^3$ satisfying the compatibility condition and $\inf_{x\in\Omega}\{\varrho_0 + \bar\rho, \theta_0 + \bar e\} > 0$, there exist a $T > 0$ and a unique solution $(\varrho, u, \theta)\in C^0([0,T], H^3)$ to the perturbed problem (1.4)-(1.6) satisfying $\inf_{(0,T)\times\Omega}\{\varrho + \bar\rho, \theta + \bar e\} > 0$. Moreover, there is a sufficiently small constant $\delta_1^0\in(0,1]$ such that, if $E(t)\le\delta_1^0$ on $[0,T]$, then the solution $(\varrho, u, \theta)$ satisfies
$$E^2(t) + \|(\varrho, \theta)_t(t)\|_{H^2}^2 + \|u_t(t)\|_{H^1}^2 + \int_0^t\|u_{tt}(\tau)\|_{L^2}^2d\tau \le CE_0^2 + \int_0^t\big(C\|(\varrho, u, \theta)(\tau)\|_{L^2}^2 + \Lambda E^2(\tau)\big)d\tau, \tag{3.18}$$
where the constant $C$ only depends on $\delta_1^0$, $\Lambda$, $\Omega$ and the known physical parameters in (1.4).
Nonlinear instability
Now we are in a position to prove Theorem 1.1 by adopting and modifying the ideas in [9,16,17]. In view of Theorem 2.1, we can construct a (linear) solution
$$\big(\varrho^l, u^l, \theta^l\big) = e^{\Lambda t}\big(\bar\varrho_0, \bar u_0, \bar\theta_0\big) \tag{4.1}$$
to the linearized problem (1.5)-(1.7) with the initial data $(\bar\varrho_0, \bar u_0, \bar\theta_0)\in H^3$. Furthermore, this solution satisfies
$$\|(\bar u_{01}, \bar u_{02})\|_{L^2}\,\|\bar u_{03}\|_{L^2} > 0, \tag{4.2}$$
where $\bar u_{0i}$ stands for the $i$-th component of $\bar u_0$ for $i = 1, 2, 3$. In what follows, $C_1, \dots, C_7$ will denote generic constants that may depend on $(\bar\varrho_0, \bar u_0, \bar\theta_0)$, $\delta_1^0$, $\Lambda$, $\Omega$ and the known physical parameters in (1.4), but are independent of $\delta$.
Obviously, we cannot directly use the initial data of the linearized equations (1.5)-(1.7) as initial data for the associated nonlinear problem, since the linearized and nonlinear equations satisfy different compatibility conditions at the boundary. A similar problem also arises in [16], where Jang and Tice studied the instability of the spherically symmetric Navier-Stokes-Poisson equations. To get around this obstacle, Jang and Tice used the implicit function theorem to produce a curve of initial data that satisfy the compatibility conditions and are close to the linear growing modes. Since our problem is higher-dimensional, we instead use elliptic theory to construct initial data for the nonlinear problem which are close to the linear growing modes.
Lemma 4.1. Let (̺ 0 ,ū 0 ,θ 0 ) be the same as in (4.1). Then there exists a δ 0 2 ∈ (0, 1) depending on (̺ 0 ,ū 0 ,θ 0 ), such that for any δ ∈ (0, δ 0 2 ), there is a u r which may depend on δ and enjoys the following properties:
(1) The modified initial data
(̺ δ 0 , u δ 0 , θ δ 0 ) = δ(̺ 0 ,ū 0 ,θ 0 ) + δ 2 (̺ 0 , u r ,θ 0 ) (4.3)
satisfy u δ 0 | ∂Ω = 0 and the compatibility condition:
(̺ δ 0 +ρ)u δ 0 · ∇u δ 0 + a∇[(̺ δ 0 +ρ)(θ δ 0 +ē) −ρē] − µ∆u δ 0 − µ 0 ∇divu δ 0 + g̺ δ 0 e 3 | ∂Ω = 0.
(2) (̺ r , u r , θ r ) satisfies the following estimate:
u r H 3 ≤ C 1 ,
where the constant C 1 depends on (̺ 0 ,ū 0 ,θ 0 ) H 3 and other physical parameters, but is independent of δ.
Proof. Notice that (̺ 0 ,ū 0 ,θ 0 ) satisfies
u 0 | ∂Ω = 0, [a∇(ē̺ 0 +ρθ 0 ) − µ∆ū 0 − µ 0 ∇divū 0 + g̺ 0 e 3 ]| ∂Ω = 0.
Hence, if the modified initial data satisfy (4.3), then we expect u r to satisfy the following problem: µ∆u r + µ 0 ∇divu r − δ 2 ̺ * * 0 u r · ∇u r − δ̺ * * 0 (ū 0 · ∇u r + u r · ∇ū 0 ) = a∇(ē̺ 0 +ρθ 0 ) + g̺ 0 e 3 + ̺ * * 0ū 0 · ∇ū 0 − a∇(̺ * 0 θ * 0 ) := F (̺ 0 ,ū 0 ,θ 0 ), u r | ∂Ω = 0 (4.4) where ̺ * 0 := (1 + δ)̺ 0 , θ * 0 = (1 + δ)θ 0 and ̺ * * 0 := (̺ δ 0 +ρ) = (δ + δ 2 )̺ 0 +ρ. Thus the modified initial data naturally satisfy the compatibility condition.
Next we shall look for a solution u r to the boundary problem (4.4) when δ is sufficiently small. We begin with the linearization of (4.4) which reads as
µ∆u r + µ 0 ∇divu r = F (̺ 0 ,ū 0 ,θ 0 ) + δ 2 ̺ * * 0 v · ∇v + δ̺ * * 0 (ū 0 · ∇v + v · ∇ū 0 ) (4.5)
with boundary condition u r | Ω = 0. (4.6)
Let v ∈ H 3 , then it follows from the elliptic theory that there is a solution u r of (4.5)-(4.6) satisfying
u r H 3 ≤ F (̺ 0 ,ū 0 ,θ 0 ) + δ 2 ̺ * * 0 v · ∇v + δ̺ * * 0 (ū 0 · ∇v + v · ∇ū 0 ) H 1 ≤C m (1 + (̺ 0 ,ū 0 ,θ 0 ) 2 H 2 + δ 2 v 2 H 2 ). Now, we take C 1 = C m (2 + (̺ 0 ,ū 0 ,θ 0 ) 2 H 2 ) and δ ≤ min{C −1 1 , 1}. Then for any v 2 H 3 ≤ C 1 , one has u r H 3 ≤ C 1 .
Therefore we can construct an approximate function sequence u n r , such that
µ∆u n+1 r + µ 0 ∇divu n+1 r − δ 2 ̺ * * 0 u n r · ∇u n r − δ̺ * * 0 (ū 0 · ∇u n r + u n r · ∇ū 0 ) = F (̺ 0 ,ū 0 ,θ 0 ),
and for any n,
u n r H 3 ≤ C 1 , u n+1 r − u n r H 3 ≤ C 2 δ u n r − u n−1 r H 3
for some constant C 2 independent of δ and n. Finally, we choose a δ sufficiently small so that C 2 δ < 1, and then use a compactness argument to get a limit function which solves the nonlinear boundary problem (4.4). Moreover u r H 3 ≤ C 1 . Thus we have proved Lemma 4.1.
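The convergence step invoked above can be made explicit as follows; this is a schematic estimate under the bounds just stated, not the authors' verbatim argument.
$$\|u_r^{n+1} - u_r^{n}\|_{H^3} \le C_2\delta\,\|u_r^{n} - u_r^{n-1}\|_{H^3} \ \Longrightarrow\ \|u_r^{n+m} - u_r^{n}\|_{H^3} \le \sum_{j=0}^{m-1}(C_2\delta)^{\,n+j}\,\|u_r^{1} - u_r^{0}\|_{H^3} \le \frac{(C_2\delta)^{n}}{1 - C_2\delta}\,\|u_r^{1} - u_r^{0}\|_{H^3},$$
so for $C_2\delta < 1$ the sequence $(u_r^n)$ is Cauchy in $H^3$; its limit $u_r$ inherits the bound $\|u_r\|_{H^3}\le C_1$ and solves (4.4) upon passing to the limit in the linear problems defining $u_r^{n+1}$.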
Let (̺ δ 0 , u δ 0 , θ δ 0 ) be constructed as in Lemma 4.1. Then there is a constant
C 3 ≥ max{1, ̺ 0 ,ū 0 ,θ 0 L 2 }
depending on (̺ 0 ,ū 0 ,θ 0 ), such that for any δ ∈ (0, δ 0 2 ) ⊂ (0, 1),
E(̺ δ 0 , u δ 0 , θ δ 0 ) ≤ C 3 δ,
where E is defined by (3.1). Recalling inf x∈Ω {ρ,ē} > 0 and the embedding theorem H 2 ֒→ L ∞ , we can choose a sufficiently small δ, such that
inf x∈Ω {̺ δ 0 +ρ, θ δ 0 +ē} > 0. (4.7)
Hence, by virtue of Proposition 3.1, there is a δ 0 3 ∈ (0, δ 0 2 ), such that for any δ < δ 0 3 , there exists a unique local solution (̺ δ , u δ , θ δ ) ∈ C([0, T ], H 3 ) to (1.4) and (1.6), emanating from the initial data (̺ δ 0 , u δ 0 , θ δ 0 ). Moreover, (4.7) holds for any δ satisfying E(̺ δ 0 , u δ 0 , θ δ 0 ) ≤ C 3 δ 0 3 . Let C > 0 and δ 0 1 > 0 be the same constants as in Proposition 3.1 and δ 0 = min{δ 0 3 , δ 0 1 /C 3 }. Let δ ∈ (0, δ 0 ) and
T δ = 1 Λ ln 2ε 0 δ > 0, i.e., δe ΛT δ = 2ε 0 ,(4.8)
where ε 0 ≤ 1, independent of δ, is sufficiently small and will be fixed later. In what follows, we denote E δ (t) := E(̺ δ , u δ , θ δ )(t).
Define
T * = sup {t ∈ (0, T max ) | E δ (t) ≤ C 3 δ 0 } and T * * = sup t ∈ (0, T max ) ̺ δ , u δ , θ δ (t) L 2 ≤ 2δC 3 e Λt ,
where T max denotes the maximal time of existence of the solution (̺ δ , u δ , θ δ ). Obviously, T * T * * > 0, and furthermore,
E δ (T * ) = C 3 δ 0 if T * < ∞, (4.9) ̺ δ , u δ , θ δ (T * * ) L 2 = 2δC 3 e ΛT * * if T * * < T max . (4.10)
Then for all t ≤ min{T δ , T * , T * * }, we deduce from the estimate (3.18) and the definition of T * and T * * that
E 2 δ (t) + (̺ δ , θ δ ) t (t) 2 H 2 + u δ t (t) 2 H 1 + t 0 u δ tt 2 L 2 dτ ≤ C[E 2 (̺ δ 0 , u δ 0 , θ δ 0 ) + 2C 2 3 δ 2 e 2Λt /Λ] + Λ t 0 E 2 δ (τ )dτ ≤ C 4 δ 2 e 2Λt + Λ t 0 E 2 δ (τ )dτ
for some constant C 4 > 0. Thus, applying Gronwall's inequality, one concludes
E 2 δ (t) + (̺ δ , θ δ ) t (t) 2 H 2 + u δ t (t) 2 H 1 + t 0 u δ tt 2 L 2 dτ ≤ C 5 δ 2 e 2Λt (4.11)
for some constant C 5 > 0. Let (̺ d , u d , θ d ) = (̺ δ , u δ , θ δ ) − δ(̺ l , u l , θ l ). Noting that (̺ a δ , u a δ , θ a δ ) := δ(̺ l , u l , θ l ) is also a solution to the linearized problem (1.5)-(1.7) with the initial data δ(̺ 0 ,ū 0 ,θ 0 ) ∈ H 3 , we find that (̺ d , u d , θ d ) satisfies the following non-homogenous equations:
̺ d t + div(ρu d ) = −div(̺ δ u δ ) := N ̺ (ρ δ , u δ ) := N ̺ δ , ρu d t + a∇(ē̺ d +ρθ d ) − µ∇divu d − µ 0 ∆u d + g̺ d e 3 = −(̺ δ +ρ)u δ · ∇u δ − ̺ δ u δ t − a∇(̺ δ θ δ ) := N u (ρ δ , u δ , θ δ ) := N u δ , θ d t +ē ′ u d 3 + aēdivu d = [µ|∇u δ + ∇(u δ ) T | 2 /2 + λ(divu δ ) 2 ]/(̺ δ +ρ) −u δ · ∇θ δ − aθ δ divu δ := N e (ρ δ , u δ , θ δ ) := N θ δ ,(4.12)
with initial data (̺ d (0), u d (0), θ d (0)) = δ 2 (̺ 0 , u r ,θ 0 ) and boundary condition u d | ∂Ω = 0. Next, we shall establish the error estimate for (̺ d , u d , θ d ) in L 2 -norm.
Lemma 4.2.
There is a constant C 6 , such that for all t ≤ min{T δ , T * , T * * },
$$\|(\varrho^d, u^d, \theta^d)(t)\|_{L^2}^2 \le C_6\,\delta^3 e^{3\Lambda t}. \tag{4.13}$$
Proof. We differentiate the linearized momentum equations (4.12) 2 in time, multiply the resulting equations by u t in L 2 (Ω), and use the equations (4.12) 1 and (4.12) 3 to deduce
d dt ρ|u d t | 2 − gρ ′ (u d 3 ) 2 + [(1 + a)pdivu d − 2gρu d 3 ]divu d dx = −2µ |∇u d t | 2 dx − 2µ 0 |divu d t | 2 dx + 2 [∂ t N u δ − gN ̺ δ e 3 − a∇(ēN ̺ δ +ρN θ δ )] · u d t dx.
(4.14)
Thanks to (2.22), one has
{gρ ′ (u d 3 ) 2 + [2gρu d 3 − (1 + a)pdivu d ]divu d }dx ≤ Λ µ|∇u d | 2 + µ 0 |divu d | 2 dx + Λ 2 ρ|u d | 2 dx.
Thus, integrating (4.14) in time from 0 to t, we get
√ρ u d t (t) 2 L 2 + 2 t 0 (µ ∇u d τ 2 L 2 + µ 0 divu d τ 2 L 2 )dτ ≤ I 0 1 + Λ 2 √ρ u d (t) L 2 + Λµ ∇u d (t) 2 L 2 + Λµ 0 divu d (t) 2 L 2 + 2 t 0 [∂ τ N u δ − gN ̺ δ e 3 − a∇(ēN ̺ δ +ρN θ δ )] · u d τ dxdτ,(4.15)
where
I 0 1 = ρ|u d t | 2 − gρ ′ (u d 3 ) 2 + [(1 + a)pdivu d − 2gρu d 3 ]divu d dx t=0 .
Using Newton-Leibniz's formula and Cauchy-Schwarz's inequality, we find that
Λ(µ ∇u d (t) 2 L 2 + µ 0 divu d (t) 2 L 2 ) = I 0 2 + 2Λ t 0 Ω µ 1≤i,j≤3 ∂ x i u d jτ ∂ x i u d jτ dxdτ + µ 0 divu d τ divu d dxdτ ≤ I 0 2 + t 0 (µ ∇u d τ 2 L 2 + µ 0 divu d τ 2 L 2 )dτ + Λ 2 t 0 (µ ∇u d 2 L 2 + µ 0 divu d 2 L 2 )dτ,(4.16)
where I 0 2 = Λ(µ ∇u d (0) 2 L 2 + µ 0 divu d (0) 2 L 2 ) and u d jτ denotes the j-th component of u d τ . On the other hand,
Λ∂ t √ρ u d (t) 2 L 2 = 2Λ Ωρ u d (t) · u d t (t)dx ≤ √ρ u d t (t) 2 L 2 + Λ 2 √ρ u d (t) 2 L 2 . (4.17)
Hence, putting (4.15)-(4.17) together, we obtain the differential inequality
∂ t √ρ u d (t) 2 L 2 + µ ∇u d (t) 2 L 2 + µ 0 divu d (t) 2 L 2 ≤ 2Λ √ρ u d 2 L 2 + t 0 (µ ∇u d 2 L 2 + µ 0 divu d 2 L 2 )ds + I 0 1 + 2I 0 2 Λ + 2 Λ t 0 [∂ τ N u δ − gN ̺ δ e 3 − a∇(ēN ̺ δ +ρN θ δ )] · u d τ dxdτ.
(4.18)
Next, we control the last two terms on the right hand of (4.18). Noting that δe Λt ≤ 2ε 0 ≤ 2 for any t ≤ min{T δ , T * , T * * },
we utilize (4.11) and (4.1), Höldear's inequality and Sobolev's embedding theorem to infer that 2 t 0 [∂ τ N u δ − gN ̺ δ e 3 − a∇(ēN ̺ δ +ρN θ δ )] · u d τ dxdτ t 0 ( (N ̺ δ , N θ δ ) H 1 + ∂ τ N u δ L 2 )( u a τ L 2 + u δ τ L 2 )dτ t 0 (δ 3 e 3Λτ + δ 2 e 2Λτ + δe Λτ u δ τ τ L 2 )δe Λτ dτ
δ 3 e 3Λt + δ 4 e 4Λt δ 3 e 3Λt ,(4.20)
and
(I 0 1 + 2I 0 2 )/Λ ( √ρ u d t 2 L 2 + ∇u d 0 2 L 2 + u d 3 2 L 2 )| t=0 [ (̺ d , θ d ) 2 H 1 + u d 2 H 2 + E 2 δ (E 2 δ + E 4 δ + u δ t 2
L 2 )]| t=0 δ 4 ( (̺ 0 ,θ 0 ) 2 H 1 + u r 2 H 2 ) + δ 2 e 2Λt (δ 2 e 2Λt + δ 4 e 4Λt ) δ 3 e 3Λt .
(4.21)
Thus, substituting (4.21) and (4.20) into (4.18), we obtain
∂ t √ρ u d (t) 2 L 2 + µ ∇u d (t) 2 L 2 + µ 0 divu(t) 2 L 2 ≤ 2Λ √ρ u d (t) 2 L 2 + t 0 (µ ∇u d 2 L 2 + µ 0 divu d 2 L 2 )dτ + C 7 δ 3 e 3Λt .
Applying Gronwall's inequality to the above inequality, one obtains
√ρ u d (t) 2 L 2 + t 0 (µ ∇u d 2 L 2 + µ 0 divu d 2 L 2 )dτ δ 3 e 3Λt + δ 4 √ρ u r 2 L 2 δ 3 e 3Λt (4.22)
for all t ≤ min{T δ , T * , T * * }. Thus, making use of (4.15), (4.16) and (4.20)-(4.22), we deduce that Finally, using the equations (4.12) 1 and (4.12) 2 , and the estimates (4.19) and (4.24), we find that
1 Λ √ρ u d t (t) 2 L 2 + µ ∇u d (t) 2 L 2 + µ 0 divu d (t) 2 L 2 ≤ Λ √ρ u d (t) 2 L 2 + 2Λ t 0 (µ ∇u d 2 L 2 + µ 0 divu d 2 L 2 )dτ + I 0 1 + 2I 0 2 Λ + 2 Λ t 0 [∂ τ N u δ − gN ̺ δ e 3 − a∇(ēN ̺ δ +ρN θ δ )] · u d τ dxdτ δ 3 e 3Λt .(̺ d , θ d )(t) L 2 ≤δ 2 (̺ r , θ r ) L 2 + t 0 (̺ d , θ d ) τ L 2 dτ δ 2 + t 0 ( u H 1 + (N ̺ δ , N θ δ ) L 2 )dτ δ 2 + t 0 (δ 3 2 e 3Λ 2 τ + E 2 δ (τ ))dτ δ 3 2 e 3Λ 2 t .
Putting the previous estimates together, we get (4.13) immediately. This completes the proof of Lemma 4.2.
Now, we claim that T δ = min T δ , T * , T * * , (4.25) provided that small ε 0 is taken to be
ε 0 = min C 3 δ 0 4 √ C 5 , C 2 3 8C 6 , m 2 0 C 6 , 1 > 0,(4.26)
where m 0 = min{ (ū 01 ,ū 02 ) L 2 , ū 03 L 2 } > 0 due to (4.2). Indeed, if T * = min{T δ , T * , T * * }, then T * < ∞. Moreover, from (4.11) and (4.8) we get for any t ≥ 0, where Λ is constructed by (2.21), and the constant C may depend on g, µ, µ 0 ,ē, ρ, Λ and Ω.
Proof. The first estimate (A.2) can be shown by an argument similar to that in Lemma 4.2.
In fact, following the process in the derivation of (4.18) and (4.23), we obtain the following two inequalities ∂ t √ρ u(t) 2 L 2 + µ ∇u(t) 2 L 2 + µ 0 divu(t) 2
L 2 ≤ I 1 + 2Λ √ρ u 2 L 2 + t 0 (µ ∇u 2 L 2 + µ 0 divu 2 L 2 )dτ (A.4) and 1 Λ √ρ u t (t) 2 L 2 + µ ∇u(t) 2 L 2 + µ 0 divu(t) 2 L 2 ≤ I 1 + Λ √ρ u(t) 2 L 2 + 2Λ t 0 (µ ∇u 2 L 2 + µ 0 divu 2 L 2 )dτ (A.5) with I 1 = 2(µ ∇u 2 L 2 + µ 0 divu 2 L 2 ) + 1 Λ ρ|u t | 2 − gρ ′ u 2 3 + [(1 + a)pdivu − 2gρu 3 ]divu dx t=0 .
An application of Gronwall's inequality to (A.4) implies that for any t ≥ 0, Hence the estimate (A.2) follows from the above two estimates. Finally, following the arguments in the proof of (3.17), one find that (̺, u, θ)(t) 2 H 2 +
√ρ u(t) 2 L 2 + t 0 (µ ∇u 2 L 2 + µ 0 divu 2 L 2 )dτ ≤e 2Λt √ρ u 0 2 L 2 + I 1 2Λ e 2Λt − 1 ≤Ce 2Λt ( (̺ 0 , θ 0 ) 2 H 1 + u 0 2 H 2 ),
dτ δ 3 e 3Λt .(4.24)
ds ≤ Ce 2Λt ( (̺ 0 , θ 0 ) using (A.1) 1 and (A.1) 2 , we have(̺, θ)(t) L 2 ≤ (̺ 0 , θ 0 ) L 2 + t 0 (̺, θ) s (s) L 2 ds ≤ (̺ 0 , θ 0 ) L 2 + (1 + a) (ρ,θ) H 1 t 0 u(s) H 1 ds ≤Ce Λt ( (̺ 0 , θ 0 ) H 1 + u 0 H 2 ).
3 + (ēρ t +ρθ t ) 2 H 2 dτ ≤ C (̺ 0 , u 0 , θ 0 ) 2 H 2 + t 0 [C (̺, u, θ) 2 L 2 + Λ (̺, u, θ) 2 H 2 ]dτ,which, combined with (A.2), gives (A.3) due to Gronwall's inequality. This completes the proof.
which contradicts with(4.9). On the other hand, if T * * = min{T δ , T * , T * * }, then T * * < T max . Moreover, in view of (4.1),(4.8)and(4.13), we see thatwhich also contradicts with (4.10). Therefore, (4.25) holds.Finally, we again use (4.26) and (4.13) to deduce thatThis completes the proof of Theorem 1.1 by defining ε = m 0 ε 0 . In addition, ifρ ′ ≥ 0, then the functionρ 0 constructed in (4.1) satisfies ρ 0 L 2 > 0. Thus we also obtain ̺ δ (T δ ) L 2 ≥ m 0 ε 0 , if we define m 0 = min{ ̺ 0 L 2 , (ū 01 ,ū 02 ) L 2 , ū 03 L 2 } > 0. Hence, the assertion in Remark 1.1 holds.AppendixIn this section we show that Λ defined by (2.22) is the sharp growth rate for any solutions to the linearized problem (1.5)-(1.7). Since the density varies for a compressible fluid, the spectrums of the linearized solution operator are difficult to analyze in comparison with an incompressible fluid, and it is hard to obtain the largest growth rate of the solution operator in some Sobolev space in the usual way. Here we exploit energy estimates as in[17]to show that e Λt is indeed the sharp growth rate for (̺, u, θ) in H 2 -norm.Proposition Appendix .1. Assume that the assumption of Theorem 2.1 is satisfied. Let (̺, u, θ) solve the following linearized problem:with initial and boundary conditions (̺, u, θ)| t=0 = (̺ 0 , u 0 , θ 0 ) in Ω and u| ∂Ω = 0 for t > 0. Then, we have the following estimates.
. R A Adams, J John, Sobolev Space, Academic PressNew YorkR.A. Adams, J. John, Sobolev Space, Academic Press: New York, 2005.
Instability of the forced magnetohydrodynamics system at small reynolds number. I Bouya, SIAM J. Math. Anal. 45I. Bouya, Instability of the forced magnetohydrodynamics system at small reynolds number , SIAM J. Math. Anal., 45 (2013) 307-323.
On the Rayleigh-Taylor instability for incompressible, inviscid magnetohydrodynamic flows. R Duan, F Jiang, S Jiang, SIAM J. App. Math. 71R. Duan, F. Jiang, S. Jiang, On the Rayleigh-Taylor instability for incompressible, inviscid mag- netohydrodynamic flows, SIAM J. App. Math. 71 (2012) 1990-2013.
Nonlinear instability for the critically dissipative quasigeostrophic equation. S Friedlander, P Nataša, V Vicol, Commun. Math. Phys. 292S. Friedlander, P. Nataša, V. Vicol, Nonlinear instability for the critically dissipative quasi- geostrophic equation, Commun. Math. Phys. 292 (2009) 797-810.
Nonlinear instability in an ideal fluid. S Friedlander, W Strauss, M Vishik, Annales de l'Institut Henri Poincare (C) Non Linear Analysis. 14S. Friedlander, W. Strauss, M. Vishik, Nonlinear instability in an ideal fluid, Annales de l'Institut Henri Poincare (C) Non Linear Analysis 14 (1997) 187-209.
Compressibility effects in rayleigh-taylor instability-induced flows. S Gauthier, B Le Creurer, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 368S. Gauthier, B. Le Creurer, Compressibility effects in rayleigh-taylor instability-induced flows, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sci- ences 368 (2010) 1681-1704.
Oscillating solutions of incompressible magnetohydrodynamics and dynamo effect. D Gérard-Varet, SIAM J. Math. Anal. 37D. Gérard-Varet, Oscillating solutions of incompressible magnetohydrodynamics and dynamo effect, SIAM J. Math. Anal. 37 (2005) 815-840.
On the nonlinear instability of Euler and Prandtl equations. E Grenier, Comm. Pure Appli. Math. 53E. Grenier, On the nonlinear instability of Euler and Prandtl equations, Comm. Pure Appli. Math. 53 (2000) 1067-091.
Dynamics near unstable, interfacial fluids. Y Guo, C Hallstrom, D Spirn, Commun. Math. Phys. 270Y. Guo, C. Hallstrom, D. Spirn, Dynamics near unstable, interfacial fluids, Commun. Math. Phys. 270 (2007) 635-689.
Instability of periodic BGK equilibria. Y Guo, W Strauss, Comm. Pure Appl. Math. 48Y. Guo, W. Strauss, Instability of periodic BGK equilibria, Comm. Pure Appl. Math. 48 (1995) 861-894.
. Y Guo, I Tice, Rayleigh-Taylor Compressible, Instability, Indiana Univ. Math. J. 60Y. Guo, I. Tice, Compressible, inviscid Rayleigh-Taylor instability, Indiana Univ. Math. J. 60 (2011) 677-712.
Linear rayleigh-taylor instability for viscous, compressible fluids. Y Guo, I Tice, SIAM J. Math. Anal. 42Y. Guo, I. Tice, Linear rayleigh-taylor instability for viscous, compressible fluids, SIAM J. Math. Anal. 42 (2011) 1688-1720.
Compressibility effects on the Rayleigh-Taylor instability growth rates. Y He, X Hu, Z Jiang, Chin. Phys. Lett. 251015Y. He, X. Hu, Z. Jiang, Compressibility effects on the Rayleigh-Taylor instability growth rates , Chin. Phys. Lett. 25 (2008) 1015.
Variational approach to nonlinear gravity-driven instability in a MHD setting. H J Hwang, Quart. Appl. Math. 66H.J. Hwang, Variational approach to nonlinear gravity-driven instability in a MHD setting, Quart. Appl. Math. 66 (2008) 303-324.
On the dynamical Rayleigh-Taylor instability. H J Hwang, Y Guo, Arch. Rational Mech. Anal. 167H.J. Hwang, Y. Guo, On the dynamical Rayleigh-Taylor instability, Arch. Rational Mech. Anal. 167 (2003) 235-253.
Instability theory of the Navier-Stokes-Poisson equations. J Jang, I Tice, To appear in Analysis & PDEJ. Jang, I. Tice, Instability theory of the Navier-Stokes-Poisson equations, To appear in Analysis & PDE (2011).
On instability and stability of three-dimensional gravity flows in a bounded domain. F Jiang, S Jiang, SubmittedF. Jiang, S. Jiang, On instability and stability of three-dimensional gravity flows in a bounded domain (Submitted) 2014.
F Jiang, S Jiang, G X Ni, Nonlinear instability for nonhomogeneous incompressible viscous fluids. 56F. Jiang, S. Jiang, G.X. Ni, Nonlinear instability for nonhomogeneous incompressible viscous fluids, Sci. China Math. 56 (2013) 665-686.
Nonlinear Rayleigh-Taylor instability in nonhomogeneous incompressible viscous magnetohydrodynamic fluids, sumbission. F Jiang, S Jiang, W Wang, F. Jiang, S. Jiang, W. Wang, Nonlinear Rayleigh-Taylor instability in nonhomogeneous incompress- ible viscous magnetohydrodynamic fluids, sumbission (2013).
On the Rayleigh-Taylor instability for the incompressible viscous magnetohydrodynamic equations. F Jiang, S Jiang, Y Wang, Comm. P.D.E. 39F. Jiang, S. Jiang, Y. Wang, On the Rayleigh-Taylor instability for the incompressible viscous magnetohydrodynamic equations, Comm. P.D.E. 39 (2014) 399-438.
Systems of a hyperbolic-parabolic composite type, with applications to the equations of magnetohydrodynamics. S Kawashima, S. Kawashima, Systems of a hyperbolic-parabolic composite type, with applications to the equations of magnetohydrodynamics (1984).
M A Lafay, B Le Creurer, S Gauthier, Compressibility effects on the Rayleigh-Taylor instability between miscible fluids. 7964002EPLM.A. Lafay, B. Le Creurer, S. Gauthier, Compressibility effects on the Rayleigh-Taylor instability between miscible fluids, EPL (Europhysics Letters) 79 (2007) 64002.
Compressibility effects on the Rayleigh-Taylor instability growth between immiscible fluids. D Livescu, 10.1063/1.1630800Phys. Fluids. 16D. Livescu, Compressibility effects on the Rayleigh-Taylor instability growth between immiscible fluids, Phys. Fluids 16 (2004), doi:10.1063/1.1630800.
Comment on compressible Rayleigh-Taylor instabilities in supernova remnants. D Livescu, Phys. of Fluids. 1669101Phys. FluidsD. Livescu, Comment on compressible Rayleigh-Taylor instabilities in supernova remnants [Phys. Fluids 16, 4661 (2004)], Phys. of Fluids 17 (2005) 069101.
The initial value problem for the equation of compressible viscous and heat-conductive fluids. A Matsumura, T Nishida, Proc. Jpn. Acad. Ser-A. 55A. Matsumura, T. Nishida, The initial value problem for the equation of compressible viscous and heat-conductive fluids, Proc. Jpn. Acad. Ser-A. 55 (1979) 337-342.
Initial-boundary value problems for the equations of motion of general fluids, computing methods in applied sciences and engineering. A Matsumura, T Nishida, J. Math. Kyoto. Univ. V. 20A. Matsumura, T. Nishida, Initial-boundary value problems for the equations of motion of general fluids, computing methods in applied sciences and engineering, J. Math. Kyoto. Univ. V (Versailles, 1981) 20 (1982) 389-406.
Initial boundary value problems for the equations of motion of compressible viscous and heat conductive fluids. A Matsumura, T Nishida, Comm. Math. Phys. 89A. Matsumura, T. Nishida, Initial boundary value problems for the equations of motion of com- pressible viscous and heat conductive fluids, Comm. Math. Phys. 89 (1983) 445-464.
A Novotnỳ, I Straškraba, Introduction to the Mathematical Theory of Compressible Flow. USAOxford University PressA. Novotnỳ, I. Straškraba, Introduction to the Mathematical Theory of Compressible Flow, Oxford University Press, USA, 2004.
On the Rayleigh-Taylor instability for the two-phase Navier-Stokes equations. J Prüess, G Simonett, Indiana Univ. Math. J. 59J. Prüess, G. Simonett, On the Rayleigh-Taylor instability for the two-phase Navier-Stokes equa- tions, Indiana Univ. Math. J. 59 (2010) 1853-1871.
Analytic solutions of the Rayleigh equations for linear density profiles. L Rayleigh, Proc. London. Math. Soc. 14L. Rayleigh, Analytic solutions of the Rayleigh equations for linear density profiles, Proc. London. Math. Soc. 14 (1883) 170-177.
Nonlinear instability in two dimensional ideal fluids: the case of a dominant eigenvalue. M Vishik, S Friedlander, Comm. Math. Phys. 243M. Vishik, S. Friedlander, Nonlinear instability in two dimensional ideal fluids: the case of a dominant eigenvalue, Comm. Math. Phys. 243 (2003) 261-273.
The viscous surface-internal wave problem: nonlinear rayleigh-taylor instability. Y Wang, I Tice, Commun. P.D.E. 37Y. Wang, I. Tice, The viscous surface-internal wave problem: nonlinear rayleigh-taylor instability, Commun. P.D.E. 37 (2012) 1967-2028.
| []
|
[
"New Upper Bounds on the Distance Domination Numbers of Grids",
"New Upper Bounds on the Distance Domination Numbers of Grids"
]
| [
"Armando Grez [email protected] \nDepartment of Mathematics\nDepartment of Mathematics Florida Gulf\nFlorida Gulf Coast University Fort Myers\n33965Florida\n",
"Michael Farina [email protected] \nCoast University Fort Myers\n33965Florida\n"
]
| [
"Department of Mathematics\nDepartment of Mathematics Florida Gulf\nFlorida Gulf Coast University Fort Myers\n33965Florida",
"Coast University Fort Myers\n33965Florida"
]
| []
| In his 1992 Ph.D. thesis Chang identified an efficient way to dominate m×n grid graphs and conjectured that his construction gives the most efficient dominating sets for relatively large grids. In 2011 Gonçalves, Pinlou, Rao, and Thomassé proved Chang's conjecture, establishing a closed formula for the domination number of a grid. In March 2013 Fata, Smith and Sundaram established upper bounds for the kdistance domination numbers of grid graphs by generalizing Chang's construction of dominating sets to k-distance dominating sets. In this paper we improve the upper bounds established by Fata, Smith, and Sundaram for the k-distance domination numbers of grids. | null | [
"https://arxiv.org/pdf/1410.4149v1.pdf"
]
| 73,537,108 | 1410.4149 | 87e3f335fff38a8cd3e870805e95bbeccd090ed0 |
New Upper Bounds on the Distance Domination Numbers of Grids
15 Oct 2014 Published: Oct 30, 2014
Armando Grez [email protected]
Department of Mathematics
Department of Mathematics Florida Gulf
Florida Gulf Coast University Fort Myers
33965Florida
Michael Farina [email protected]
Coast University Fort Myers
33965Florida
New Upper Bounds on the Distance Domination Numbers of Grids
15 Oct 2014 Published: Oct 30, 2014Submitted: July 15, 2014; Accepted: Sept 1, 2014;Mathematics Subject Classifications: 05C6905C1205C30
In his 1992 Ph.D. thesis Chang identified an efficient way to dominate m×n grid graphs and conjectured that his construction gives the most efficient dominating sets for relatively large grids. In 2011 Gonçalves, Pinlou, Rao, and Thomassé proved Chang's conjecture, establishing a closed formula for the domination number of a grid. In March 2013 Fata, Smith and Sundaram established upper bounds for the kdistance domination numbers of grid graphs by generalizing Chang's construction of dominating sets to k-distance dominating sets. In this paper we improve the upper bounds established by Fata, Smith, and Sundaram for the k-distance domination numbers of grids.
Introduction
Let G = (V, E) denote a graph with vertex set V and edge set E. We say that a subset S of V is a dominating set of G if every vertex in G is either in S or adjacent to at least one vertex in S. The domination number of a graph G is defined to be the cardinality of the smallest dominating set in G and is denoted by γ(G).
We define the distance between two vertices v, w ∈ V to be the minimum number of edges in any path connecting v and w in G. We denote the distance between v and w by d(v, w). We say that a set S is a k-distance dominating set of G if every vertex v in G is either in S or there is a vertex w ∈ S with d(v, w) k, and we define the k-distance domination number of G to be the size of the smallest k-distance dominating set of G. For a comprehensive study of graph domination and its variants we refer the interested reader to the two excellent texts by Haynes, Hedetniemi and Slater [11,12].
This paper studies k-distance domination numbers on m × n grid graphs, which generalize domination numbers of grid graphs. For the past three decades, mathematicians and computer scientists searched for closed formulas to describe the domination numbers of m × n grids. This search was recently rewarded with a proof of a closed formula for the domination number of any m × n grid with m n 16 [8]. We recount a brief history of the investigation here, and henceforth we let G m,n denote an m × n grid graph.
In 1984, Jacobson and Kinch [14] started the hunt for domination numbers of grids by publishing closed formulas for the values of γ(G 2,n ), γ(G 3,n ), and γ(G 4,n ). In 1993, Chang, Clark, and Hare [4] extended these results by finding formulas for γ(G 5,n ) and γ(G 6,n ). In his Ph.D. thesis, Chang [3] constructed efficient dominating sets for G m,n proving that when m and n are greater than 8, the domination number γ(G m,n ) is bounded above by the formula
$$\gamma(G_{m,n}) \le \left\lfloor\frac{(n+2)(m+2)}{5}\right\rfloor - 4. \tag{1}$$
Chang also conjectured that equality holds in Equation (1) when n ≥ m ≥ 16. In an effort to confirm Chang's conjecture, a number of mathematicians and computer scientists began exhaustively computing the values of γ(G_{m,n}). In 1995, Hare, Hare, and Hedetniemi [9] developed a polynomial time algorithm to compute γ(G_{m,n}) when m is fixed. Alanko, Crevals, Isopoussu, Östergård, and Petterson [1] computed γ(G_{m,n}) for m, n ≤ 29 in addition to m ≤ 27 and n ≤ 1000. Finally in 2011, Gonçalves, Pinlou, Rao, and Thomassé [8] confirmed Chang's conjecture for all n ≥ m ≥ 16. Their proof uses a combination of analytic and computer aided techniques for the large cases (n ≥ m ≥ 24) and exhaustive calculations for all smaller cases.
While the concept of graph domination has been generalized in countless ways, including distance domination, R-domination, double domination, and (t, r)-broadcast domination to name just a few [16, 13, 10, 15, 2], relatively little is known about these other domination theories in grid graphs. However, in 2013, Fata, Smith, and Sundaram [7] generalized Chang's construction of dominating sets for grids to construct distance dominating sets, giving an upper bound on the k-distance domination numbers of grids. A sharper bound, with an additional savings of 4, was subsequently established for large m and n, but γ_k(G_{m,n}) was not considered for k ≥ 3 [2, Theorem 3.7].

The main result of this paper improves the upper bounds established by Fata, Smith, and Sundaram:

Theorem 1. Assume that m and n are greater than 2(2k² + 2k + 1). Then the k-distance domination number of an m × n grid graph G_{m,n} is bounded above by the following formula:

γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/(2k² + 2k + 1)⌋ − 4.

The rest of this paper proceeds as follows. In Section 2 we describe an embedding of G_{m,n} into the integer lattice Z² and the k-distance neighborhood Y_{m+2k,n+2k} of G_{m,n}. Then we describe a family of efficient dominating sets for Z² as the inverse images of a ring homomorphism φ_k : Z² → Z_{2k²+2k+1}. In Section 3 we prove that there exists an ℓ̄ ∈ Z_{2k²+2k+1} such that |φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}| ≤ ⌊(m + 2k)(n + 2k)/(2k² + 2k + 1)⌋ in Corollary 5. In Section 4 we prove that when m and n are sufficiently large, we can remove at least one vertex from each corner of φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} to obtain a dominating set for G_{m,n} in Lemma 6. Our main result then follows immediately from Corollary 5 and Lemma 6.
k-Distance Dominating Sets in Z 2
Let Z × Z = Z² denote the integer lattice in R². We embed an m × n grid graph G_{m,n} into Z² by identifying G_{m,n} with the following subset of Z²:

G_{m,n} = {(i, j) ∈ Z² : 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1}.

We define a neighborhood Y_{m+2k,n+2k} around G_{m,n} in Z² by adding k rows and columns to the boundary of G_{m,n}. That is,

Y_{m+2k,n+2k} = {(i, j) ∈ Z² : −k ≤ i ≤ m + k − 1 and −k ≤ j ≤ n + k − 1}.
Fata, Smith, and Sundaram noted that a k-distance neighborhood of a vertex in Z² is a diamond-shaped collection of vertices containing at most 2k² + 2k + 1 elements [7, Lemma V.3]. To condense our notation, we will denote the number of vertices in a k-distance neighborhood by p = 2k² + 2k + 1. We will now describe a family of dominating sets of the lattice Z² as the inverse images under a ring homomorphism. We define a homomorphism φ_k : Z × Z → Z_p by (i, j) ↦ (k + 1)i + kj. Let ℓ̄ denote an element of Z_p. One can easily verify that φ_k^{-1}(ℓ̄) is a k-distance dominating set of Z² [7, Lemma V.8].
The inverse image φ −1 2 (0) and the 2-distance neighborhoods of a few of its elements are depicted in Figure 1.
Since the set φ_k^{-1}(ℓ̄) is a k-distance dominating set of Z² and the set Y_{m+2k,n+2k} is a k-distance neighborhood of G_{m,n}, the intersection of these sets φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} is a k-distance dominating set of G_{m,n} for all ℓ̄ ∈ Z_p. By moving each vertex in the set φ_k^{-1}(ℓ̄) ∩ (Y_{m+2k,n+2k} − G_{m,n}) to its nearest neighbor inside G_{m,n} we obtain a dominating set S ⊂ G_{m,n}. Figure 2 illustrates this construction for 3-distance domination of G_{6,6} (the resulting dominating set S is highlighted in red).

Figure 1: The set φ_2^{-1}(0).

Figure 2: The grid G_{6,6}, its neighborhood Y_{12,12}, and a 3-distance dominating set.
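To make the construction concrete, the following short Python sketch (ours, not part of the paper; the function names and the small test case are illustrative) builds φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}, moves the vertices lying outside G_{m,n} to their nearest neighbors inside the grid, and checks k-distance domination directly, using the fact that the graph distance between lattice points of the grid is their L1 distance.

```python
def k_distance_dominating_set(m, n, k, lbar):
    """Build phi_k^{-1}(lbar) on Y_{m+2k,n+2k}, then push outside vertices into G_{m,n}."""
    p = 2 * k * k + 2 * k + 1
    # phi_k((i, j)) = (k+1)*i + k*j  (mod p)
    Y = [(i, j) for i in range(-k, m + k) for j in range(-k, n + k)
         if ((k + 1) * i + k * j) % p == lbar]
    clamp = lambda t, lo, hi: max(lo, min(hi, t))   # nearest neighbour inside the grid
    return {(clamp(i, 0, m - 1), clamp(j, 0, n - 1)) for (i, j) in Y}

def is_k_dominating(S, m, n, k):
    """Every vertex of G_{m,n} must lie within L1 distance k of some vertex of S."""
    return all(any(abs(i - a) + abs(j - b) <= k for (a, b) in S)
               for i in range(m) for j in range(n))

if __name__ == "__main__":
    m, n, k = 6, 6, 3                      # the example of Figure 2
    p = 2 * k * k + 2 * k + 1
    for lbar in range(p):
        S = k_distance_dominating_set(m, n, k, lbar)
        assert is_k_dominating(S, m, n, k)
    print("every residue lbar yields a 3-distance dominating set of G_{6,6}")
```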
In the next section we will give an upper bound on the number of vertices in the set S and show that certain vertices can be removed from each corner of the set S and still k-distance dominate G m,n .
Finding an upper bound for |φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}|
Let p = 2k 2 + 2k + 1 and φ k : Z 2 → Z p be defined by (i, j) → (k + 1)i + kj as in Section 2. The following lemma proves that the inverse image φ −1 k l contains exactly one vertex in any p consecutive vertices in any row or column of Z 2 .
Lemma 2. Let ℓ̄ ∈ Z_p. Then every p consecutive vertices in any row or column of G_{m,n} will contain exactly one element of φ_k^{-1}(ℓ̄).
Proof. Recall that (i, j) is in φ_k^{-1}(ℓ̄) for some ℓ̄ ∈ Z_p if and only if φ_k((i, j)) = (k + 1)i + kj = ℓ̄ in Z_p. Suppose now that (i, j) ∈ Z² is in φ_k^{-1}(ℓ̄). We will show that the points (i ± p, j) and (i, j ± p) are the closest points to (i, j) in φ_k^{-1}(ℓ̄) contained in the same row or column as (i, j). Let a ∈ Z be any integer. In the quotient ring Z_p we calculate

φ_k((i + ap, j)) = (k + 1)(i + ap) + kj = [(k + 1)i + kj] + (k + 1)ap = ℓ̄ + (k + 1)ap = ℓ̄

and

φ_k((i, j + ap)) = (k + 1)i + k(j + ap) = [(k + 1)i + kj] + kap = ℓ̄ + kap = ℓ̄.

Thus we see that (i + ap, j) and (i, j + ap) are also in φ_k^{-1}(ℓ̄) for any a ∈ Z.

Now suppose that 0 < q < p. We will show that (i ± q, j) ∉ φ_k^{-1}(ℓ̄). We compute

φ_k((i ± q, j)) = (k + 1)(i ± q) + kj = [(k + 1)i + kj] ± (k + 1)q = ℓ̄ ± (k + 1)q.

Note that p = 2k² + 2k + 1 = 2k(k + 1) + 1, so p ≡ 1 (mod k + 1) and p ≡ 1 (mod k); hence p is relatively prime to both k + 1 and k. Consequently p divides (k + 1)q only if p divides q, which is impossible for 0 < q < p. Therefore (k + 1)q is not a multiple of p, and ℓ̄ ± (k + 1)q ≠ ℓ̄ in Z_p. Similarly, kq is not a multiple of p for any 0 < q < p, so (i, j ± q) ∉ φ_k^{-1}(ℓ̄), since

φ_k((i, j ± q)) = (k + 1)i + k(j ± q) = [(k + 1)i + kj] ± kq = ℓ̄ ± kq ≠ ℓ̄.

This completes our proof that the points (i ± p, j) and (i, j ± p) are the closest points to (i, j) in φ_k^{-1}(ℓ̄) contained in the same row or column as (i, j), and thus we conclude that every p consecutive vertices in any row or column of G_{m,n} contain exactly one element of the set φ_k^{-1}(ℓ̄).

Our next result uses Lemma 2 to count the cardinality of the set φ_k^{-1}(ℓ̄) ∩ G_{m,n} for any ℓ̄ ∈ Z_p when either m or n is a multiple of p.
Lemma 3. If either m or n is a multiple of p, then for any ℓ̄ ∈ Z_p the cardinality of the set φ_k^{-1}(ℓ̄) ∩ G_{m,n} is

|φ_k^{-1}(ℓ̄) ∩ G_{m,n}| = mn/p.

Proof. By Lemma 2, we know for every ℓ̄ ∈ Z_p that every p consecutive vertices in any row or column of G_{m,n} will contain exactly one element of φ_k^{-1}(ℓ̄). If m = ap then every row of G_{m,n} will have exactly a vertices from φ_k^{-1}(ℓ̄) in it. Similarly, if n = bp then every column of G_{m,n} has b vertices from φ_k^{-1}(ℓ̄) in it. Hence |φ_k^{-1}(ℓ̄) ∩ G_{m,n}| = mn/p if either m or n is a multiple of p.

When neither m nor n is a multiple of p, it is considerably harder to count the elements in the set φ_k^{-1}(ℓ̄) ∩ G_{m,n} for a particular ℓ̄ ∈ Z_p. However, our next result proves that there is at least one ℓ̄ ∈ Z_p for which the cardinality of this set is bounded above by ⌊mn/p⌋.

Proposition 4. If neither m nor n is a multiple of p, then there exists an ℓ̄ ∈ Z_p such that the cardinality of the set φ_k^{-1}(ℓ̄) ∩ G_{m,n} satisfies |φ_k^{-1}(ℓ̄) ∩ G_{m,n}| ≤ ⌊mn/p⌋.

Proof. To prove our claim, we will suppose that for all ℓ̄ ∈ Z_p we have |φ_k^{-1}(ℓ̄) ∩ G_{m,n}| > mn/p and derive a contradiction. Since cardinalities are integers, this is equivalent to assuming that

|φ_k^{-1}(ℓ̄) ∩ G_{m,n}| ≥ ⌊mn/p⌋ + 1   (2)

for all ℓ̄ ∈ Z_p. Now we consider the mp by np grid G_{mp,np}. By Lemma 3 we know that for any ℓ̄ ∈ Z_p we have |φ_k^{-1}(ℓ̄) ∩ G_{mp,np}| = mnp. We can also partition G_{mp,np} into p² copies of G_{m,n}, and the restriction of φ_k^{-1}(ℓ̄) to each copy is a translate of φ_k^{-1}(ℓ̄′) ∩ G_{m,n} for some ℓ̄′ ∈ Z_p. Supposing that Equation (2) is true for all ℓ̄ ∈ Z_p, we derive the following absurdity:

|φ_k^{-1}(ℓ̄) ∩ G_{mp,np}| ≥ p²(⌊mn/p⌋ + 1) = p²⌊mn/p⌋ + p² > (mnp − p²) + p² = mnp = |φ_k^{-1}(ℓ̄) ∩ G_{mp,np}|.

This proves that Equation (2) cannot be true for every ℓ̄ ∈ Z_p. Hence we conclude that there exists an ℓ̄ ∈ Z_p such that the cardinality of the set φ_k^{-1}(ℓ̄) ∩ G_{m,n} satisfies |φ_k^{-1}(ℓ̄) ∩ G_{m,n}| ≤ ⌊mn/p⌋, as desired.

Corollary 5. For any m and n there exists an ℓ̄ ∈ Z_p such that the cardinality of the set φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} satisfies

|φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}| ≤ ⌊(m + 2k)(n + 2k)/p⌋.

Proof. Note that the neighborhood Y_{m+2k,n+2k} is isomorphic to the grid G_{m+2k,n+2k} by its definition. Hence we can apply Lemma 3 to deduce that

|φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}| = (m + 2k)(n + 2k)/p = ⌊(m + 2k)(n + 2k)/p⌋

for all ℓ̄ ∈ Z_p when either m + 2k or n + 2k is a multiple of p. When neither m + 2k nor n + 2k is a multiple of p, we can apply Proposition 4 to conclude that there exists an ℓ̄ ∈ Z_p such that |φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}| ≤ ⌊(m + 2k)(n + 2k)/p⌋.

Note that since φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} yields a k-distance dominating set for G_{m,n} (after moving each of its vertices lying outside G_{m,n} to its nearest neighbor inside G_{m,n}), Corollary 5 already proves that γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/p⌋.
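The counting statements above are easy to check by brute force for small parameters. The sketch below (ours, not the paper's) tabulates |φ_k^{-1}(ℓ̄) ∩ G_{m,n}| for every residue ℓ̄ and confirms that some residue always meets the ⌊mn/p⌋ bound.

```python
def residue_counts(m, n, k):
    """Number of vertices of G_{m,n} mapped by phi_k to each residue class of Z_p."""
    p = 2 * k * k + 2 * k + 1
    counts = [0] * p
    for i in range(m):
        for j in range(n):
            counts[((k + 1) * i + k * j) % p] += 1
    return counts

if __name__ == "__main__":
    for k in (1, 2, 3):
        p = 2 * k * k + 2 * k + 1
        for m in range(1, 30):
            for n in range(1, 30):
                assert min(residue_counts(m, n, k)) <= (m * n) // p
    print("some residue class always contains at most floor(mn/p) vertices (k = 1, 2, 3; m, n < 30)")
```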
Main Result
In the last section, we proved that γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/p⌋. This bound already improves on any previously known result! In this section, we describe three techniques which allow us to remove at least one vertex from each corner of φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} to obtain a set that still dominates G_{m,n}. As a result, we prove that γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/p⌋ − 4.
Lemma 6. Suppose that m and n are both greater than 2p. Then an element can be removed from each corner of φ −1 k (l) ∩ Y m+2k,n+2k and the resulting set still dominates G m,n .
Proof. We will now describe how to remove at least one vertex from the northwest corner of φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k}. For a fixed ℓ̄ ∈ Z_p, the other three corners of the dominating set φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} are all either rotations or mirror images of the northwest corner of φ_k^{-1}(ℓ̄′) ∩ Y_{m+2k,n+2k} for some ℓ̄′ ∈ Z_p. Hence they are all isomorphic to one of the cases considered below, and thus we can remove a vertex from each of them as well. (We assume that m and n are both greater than 2p so that we can remove one vertex from each corner, and none of the local shifts affect the other three corners.) We start by introducing the following notation: We let the westernmost element in φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} on the northern boundary of Y_{m+2k,n+2k} be denoted s. We let the northernmost element in φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} that is one column to the west of the western boundary of G_{m,n} be called z. Finally, we label the line through s and z by L₁ and the line through s with slope k/(k + 1) by L₂.
Our techniques for removing a vertex from the northwest corner of φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} depend on the slopes of L₁ and L₂, and they break down into three cases: either the slope of L₁ is negative, the slope of L₁ is greater than the slope of L₂, or the slope of L₁ is positive but less than or equal to the slope of L₂.

Case 1: If the slope of L₁ is negative, as depicted in Figures 3 and 4, then the k-distance neighborhood of s does not intersect G_{m,n}. Hence, s can be removed from φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} and the resulting set still dominates G_{m,n}. To obtain a dominating set of G_{m,n} that is contained entirely in G_{m,n}, move each element of φ_k^{-1}(ℓ̄) ∩ (Y_{m+2k,n+2k} − G_{m,n}) to its nearest neighbor in G_{m,n}.

Figure 3: Case 1. Figure 4: Case 1 after the shifts.
Case 2: If the slope of L 1 is greater than the slope of L 2 , then shift all of the elements northwest of L 1 to the east one unit so that we can remove s. As depicted in Figure 5, let the southernmost vertex in the k-distance neighborhood of s be denoted u. (It lies on the northern boundary of G m,n and is due south of s.) Let the vertex at the intersection of the northern boundary of G m,n and L 2 be denoted t. (It lies k + 1 vertices to the west of u.)
Note that after shifting all of the elements northwest of L₁ to the east one unit, the k-distance neighborhood of t will contain u. Hence s can be removed from our dominating set. The previous shift leaves the vertex b on the western boundary of G_{m,n} undominated. Note that the vertex b is k + 1 vertices north of z, so we can shift the vertex z up one unit, and the k-distance neighborhood of z will contain b and all of the vertices that z originally dominated before these two shifts. (The original domination neighborhood of z is highlighted by circles in Figure 6.) Finally, we move every vertex in this dominating set that lies outside G_{m,n} to its nearest neighbor inside G_{m,n} to obtain a dominating set that is contained inside of G_{m,n}.

Figure 5: Case 2 before shifts. Figure 6: Case 2 after the shifts.

Case 3: If the slope of L₂ is greater than or equal to the slope of L₁, then we can shift all vertices in φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} that lie on L₁ to the east one unit, as shown in Figure 7, which causes t to dominate u. This allows us to remove s from our dominating set, but it also creates a diagonal of uncovered vertices as shown in Figure 8. Now we take the vertices in φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} that are strictly northwest of L₁ and shift them down one unit. This shift dominates all of the vertices on the undominated diagonal. We then move every vertex in this dominating set that lies outside of G_{m,n} to its nearest neighbor inside G_{m,n} to obtain a dominating set completely contained in G_{m,n}.

Figure 7: Case 3 before shifts. Figure 8: Case 3 after first shift. Figure 9: Case 3 after second shift.

In Cases 1, 2, and 3, we have shown how to remove at least one vertex from the northwest corner of the dominating set φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} for any ℓ̄ ∈ Z_p, and the other three corners look the same up to isomorphism. This proves that we can remove at least four vertices from φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} provided the grid G_{m,n} is large enough so that the corners do not overlap.
Note that the example illustrated in Figures 7-9 shows that it is sometimes possible to remove two vertices from a corner of φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} when the slope of L₂ is greater than or equal to that of L₁, because the vertex in the northwest corner of Figure 9 can also be removed from the dominating set. So there are instances where we can remove five vertices from φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} and still dominate G_{m,n}, but that is not the case in general.
We are now ready to prove our main result.
Theorem 7. Assume that m and n are both greater than 2p where p = 2k 2 + 2k + 1.
Then the k-distance domination number of an m × n grid graph G_{m,n} is bounded above by

γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/p⌋ − 4.
Proof. Corollary 5 shows that for some ℓ̄ ∈ Z_p the set φ_k^{-1}(ℓ̄) ∩ Y_{m+2k,n+2k} contains at most ⌊(m + 2k)(n + 2k)/p⌋ vertices, and Lemma 6 shows that if m and n are both greater than 2p then we can remove at least 4 vertices from this set and still dominate G_{m,n}. Thus we have shown γ_k(G_{m,n}) ≤ ⌊(m + 2k)(n + 2k)/p⌋ − 4.

Figure 1 illustrates how our main theorem improves on the bounds for the 3-distance domination number γ_3(G_{m,n}) given by Fata, Smith, and Sundaram in 2013.
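For a feel of the numbers, the following snippet (ours, purely illustrative) evaluates the bound of Theorem 7 next to the weaker bound of Corollary 5 for a few grids satisfying the hypothesis m, n > 2p.

```python
def corollary5_bound(m, n, k):
    """floor((m+2k)(n+2k)/p) with p = 2k^2 + 2k + 1, as in Corollary 5."""
    p = 2 * k * k + 2 * k + 1
    return (m + 2 * k) * (n + 2 * k) // p

def theorem7_bound(m, n, k):
    """The improved bound of Theorem 7 (valid for m, n > 2p)."""
    return corollary5_bound(m, n, k) - 4

if __name__ == "__main__":
    for (m, n, k) in [(100, 100, 3), (64, 128, 3), (150, 200, 5)]:
        print(f"m={m}, n={n}, k={k}: Corollary 5 gives {corollary5_bound(m, n, k)}, "
              f"Theorem 7 gives {theorem7_bound(m, n, k)}")
```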
Acknowledgements

We thank our adviser, Erik Insko, for his hours of support on this project. We thank Dr. Katie Johnson for many helpful comments on an earlier draft of this paper. Finally, we thank USTARS 2014 for giving us the opportunity to present this paper.
References

[1] S. Alanko, S. Crevals, A. Isopoussu, P. Östergard, and V. Petterson. Computing the domination number of grid graphs. Electron. J. Combin., 18(1), 2011.
[2] D. C. Blessing, E. Insko, K. Johnson, and C. Mauretour. On (t, r) broadcast domination of grids. arXiv:1401.2499 (preprint), 2014.
[3] T. Y. Chang. Domination numbers of grid graphs. Ph.D. thesis, Dept. of Mathematics, University of South Florida, 1992.
[4] T. Y. Chang and W. E. Clark. The domination numbers of the 5 × n and 6 × n grid graphs. J. Graph Theory, 17:81-107, 1993.
[5] T. Y. Chang, W. E. Clark, and E. O. Hare. Dominations of complete grid graphs I. Ars Combin., 38:97-111, 1994.
[6] E. J. Cockayne, E. O. Hare, S. T. Hedetniemi, and T. V. Wimer. Bounds for the domination number of grid graphs. Congressus Numerantium, 47:217-228, 1985.
[7] E. Fata, S. L. Smith, and S. Sundaram. Distributed dominating sets on grids. In Proceedings of ACC 2013, the 32nd American Control Conference, 2013.
[8] D. Gonçalves, A. Pinlou, M. Rao, and S. Thomassé. The domination number of grids. SIAM J. Discrete Math., 25:1443-1453, 2011.
[9] E. O. Hare, W. R. Hare, and S. T. Hedetniemi. Algorithms for computing the domination number of K × N complete grid graphs. Congressus Numerantium, 55:81-92, 1986.
[10] F. Harary and T. W. Haynes. Double domination in graphs. Ars Combin., 55:201-213, 2000.
[11] T. W. Haynes, S. T. Hedetniemi, and P. J. Slater. Fundamentals of Domination in Graphs. Marcel Dekker, New York, 1998.
[12] T. W. Haynes, S. T. Hedetniemi, and P. J. Slater. Domination in Graphs: Advanced Topics. Marcel Dekker, New York, 1998.
[13] M. Henning. Distance domination in graphs. In Domination in Graphs: Advanced Topics, pages 321-349, 1998.
[14] M. S. Jacobson and L. F. Kinch. On the domination number of products of graphs. Ars Combin., 18:43-44, 1984.
[15] G. Jothilakshmi, A. P. Pushpalatha, S. Suganthi, and V. Swaminathan. (k, r)-domination in graphs. Int. J. Contemp. Math. Sciences, 6(29):1439-1446, 2011.
[16] P. J. Slater. R-domination in graphs. J. Assoc. Comput. Mach., 23:446-450, 1976.
[17] N. J. A. Sloane. An on-line version of the encyclopedia of integer sequences. Electron. J. Combin., 1, 1994.
[18] A. Spalding. Min-Plus Algebra and Graph Domination. Ph.D. thesis, Dept. of Applied Mathematics, University of Colorado, 1998.
| []
|
[
"Hysteresis, Phase Transitions and Dangerous Transients in Electrical Power Distribution Systems",
"Hysteresis, Phase Transitions and Dangerous Transients in Electrical Power Distribution Systems"
]
| [
"Charlie Duclut \nDépartement de Physique de l'ENS\nICFP\n24 rue Lhomond75005ParisFrance\n\nNew Mexico Consortium\n87544Los AlamosNMUSA\n",
"Scott Backhaus \nNew Mexico Consortium\n87544Los AlamosNMUSA\n\nMaterials, Physics & Applications Division\nLos Alamos National Laboratory\n87545NMUSA\n",
"Michael Chertkov \nNew Mexico Consortium\n87544Los AlamosNMUSA\n\nTheoretical Division and Center for Nonlinear Studies\nLos Alamos National Laboratory\n87545NMUSA\n"
]
| [
"Département de Physique de l'ENS\nICFP\n24 rue Lhomond75005ParisFrance",
"New Mexico Consortium\n87544Los AlamosNMUSA",
"New Mexico Consortium\n87544Los AlamosNMUSA",
"Materials, Physics & Applications Division\nLos Alamos National Laboratory\n87545NMUSA",
"New Mexico Consortium\n87544Los AlamosNMUSA",
"Theoretical Division and Center for Nonlinear Studies\nLos Alamos National Laboratory\n87545NMUSA"
]
| []
| The majority of dynamical studies in power systems focus on the high voltage transmission grids where models consider large generators interacting with crude aggregations of individual small loads. However, new phenomena have been observed indicating that the spatial distribution of collective, nonlinear contribution of these small loads in the low-voltage distribution grid is crucial to outcome of these dynamical transients. To elucidate the phenomenon, we study the dynamics of voltage and power flows in a spatially-extended distribution feeder (circuit) connecting many asynchronous induction motors and discover that this relatively simple 1+1 (space+time) dimensional system exhibits a plethora of nontrivial spatio-temporal effects, some of which may be dangerous for power system stability. Long-range motor-motor interactions mediated by circuit voltage and electrical power flows result in coexistence and segregation of spatially-extended phases defined by individual motor states-a "normal" state where the motors' mechanical (rotation) frequency is slightly smaller than the nominal frequency of the basic AC flows and a "stalled" state where the mechanical frequency is small. Transitions between the two states can be initiated by a perturbation of the voltage or base frequency at the head of the distribution feeder. Such behavior is typical of first-order phase transitions in physics, and this 1+1 dimensional model shows many other properties of a first-order phase transition with the spatial distribution of the motors' mechanical frequency playing the role of the order parameter. In particular we observe (a) propagation of the phase-transition front with the constant speed (in very long feeders); and (b) hysteresis in transitions between the normal and stalled (or partially stalled) phases.Popular Summary: Large electrical generators interacting over national-scale electrical transmission grids constitute a well-studied dynamical system. Rarely discussed and poorly understood are the dynamics of neighborhood-scale distribution grids extending from transmission substations to the multitude of individual customers. However, the changing nature of electrical loads, e.g. the increasing prevalence of induction motors in residential air conditioning units, is creating distribution-grid dynamical processes that, when excited by unremarkable transmission-grid disturbances, lead to irreversible transitions with major impact on the reliability of transmission grids. Here, we present a unique model and analysis of these dynamics that allow analogy with other physical systems enabling rapid progress by leveraging knowledge developed in physics.In contrast to usual approaches, we develop a spatiallycontinuous model of distribution-grid dynamics to investigate the collective dynamics that arise when many induction motors, which are individually nonlinear and hysteretic (bi-stable), are coupled via power flows and voltage evolution within a distribution grid. Normal-size perturbations excite these collective dynamics initiating soliton-like fronts that travel though the distribution grids where passage of the fronts results in transitions of individual motors between a normal state and an undesirable stalled state. 
Individual bi-stability of each motor promotes globally hysteric behavior that is reminiscent of well-known first-order phase transition dynamics found in other physical systems.Important extensions of this physics-based understanding include the ability to model the dynamics of the billions active "smart grid" loads predicted to revolutionize the electrical power system. | 10.1103/physreve.87.062802 | [
"https://arxiv.org/pdf/1212.0252v1.pdf"
]
| 20,671,263 | 1212.0252 | 738a5a5db7f131f2e65727b6225e0ceee454eba9 |
Hysteresis, Phase Transitions and Dangerous Transients in Electrical Power Distribution Systems
Charlie Duclut
Département de Physique de l'ENS
ICFP
24 rue Lhomond75005ParisFrance
New Mexico Consortium
87544Los AlamosNMUSA
Scott Backhaus
New Mexico Consortium
87544Los AlamosNMUSA
Materials, Physics & Applications Division
Los Alamos National Laboratory
87545NMUSA
Michael Chertkov
New Mexico Consortium
87544Los AlamosNMUSA
Theoretical Division and Center for Nonlinear Studies
Los Alamos National Laboratory
87545NMUSA
Hysteresis, Phase Transitions and Dangerous Transients in Electrical Power Distribution Systems
(Dated: May 22, 2014)
The majority of dynamical studies in power systems focus on the high voltage transmission grids where models consider large generators interacting with crude aggregations of individual small loads. However, new phenomena have been observed indicating that the spatial distribution of collective, nonlinear contribution of these small loads in the low-voltage distribution grid is crucial to outcome of these dynamical transients. To elucidate the phenomenon, we study the dynamics of voltage and power flows in a spatially-extended distribution feeder (circuit) connecting many asynchronous induction motors and discover that this relatively simple 1+1 (space+time) dimensional system exhibits a plethora of nontrivial spatio-temporal effects, some of which may be dangerous for power system stability. Long-range motor-motor interactions mediated by circuit voltage and electrical power flows result in coexistence and segregation of spatially-extended phases defined by individual motor states-a "normal" state where the motors' mechanical (rotation) frequency is slightly smaller than the nominal frequency of the basic AC flows and a "stalled" state where the mechanical frequency is small. Transitions between the two states can be initiated by a perturbation of the voltage or base frequency at the head of the distribution feeder. Such behavior is typical of first-order phase transitions in physics, and this 1+1 dimensional model shows many other properties of a first-order phase transition with the spatial distribution of the motors' mechanical frequency playing the role of the order parameter. In particular we observe (a) propagation of the phase-transition front with the constant speed (in very long feeders); and (b) hysteresis in transitions between the normal and stalled (or partially stalled) phases.
Popular Summary: Large electrical generators interacting over national-scale electrical transmission grids constitute a well-studied dynamical system. Rarely discussed and poorly understood are the dynamics of neighborhood-scale distribution grids extending from transmission substations to the multitude of individual customers. However, the changing nature of electrical loads, e.g. the increasing prevalence of induction motors in residential air conditioning units, is creating distribution-grid dynamical processes that, when excited by unremarkable transmission-grid disturbances, lead to irreversible transitions with major impact on the reliability of transmission grids. Here, we present a unique model and analysis of these dynamics that allow analogy with other physical systems enabling rapid progress by leveraging knowledge developed in physics.
In contrast to usual approaches, we develop a spatially continuous model of distribution-grid dynamics to investigate the collective dynamics that arise when many induction motors, which are individually nonlinear and hysteretic (bi-stable), are coupled via power flows and voltage evolution within a distribution grid. Normal-size perturbations excite these collective dynamics, initiating soliton-like fronts that travel through the distribution grids, where passage of the fronts results in transitions of individual motors between a normal state and an undesirable stalled state. Individual bi-stability of each motor promotes globally hysteretic behavior that is reminiscent of well-known first-order phase transition dynamics found in other physical systems.
Important extensions of this physics-based understanding include the ability to model the dynamics of the billions of active "smart grid" loads predicted to revolutionize the electrical power system.
I. INTRODUCTION
Power systems are used to generate and transfer energy to electrical loads. In today's grid, generation is primarily done at large, centralized power stations (∼100's of MW) and the transfer primarily occurs via alternating currents (AC) in national-scale, highly-meshed, high-voltage transmission grids. A subset of nodes (substations) in the transmission grid transform the high voltage to a medium voltage level and interface to distribution grids, however, another change occurs at the substation. The meshed network of national-scale transmission changes to many radial or tree-like structures in the distribution system whose spatial extent is only ∼ 1-10 km. Each radial circuit, also called a "feeder", distributes the power delivered to the substation by the transmission system to the thousands of small electrical loads (∼ 1 kW) spatially spread along its length.
Even though AC electrical generation and transmission grids are extended over large spatial scales, they are synchronized, i.e. power flows over the transmission lines create dynamical coupling between the large rotating generators forcing them to rotating in unison. Perturbations to this synchronized system results in dynamics and transients spanning a large range of temporal scales from milliseconds to many minutes. Many studies have addressed the dynamics of these large-scale transmission grids. Although many unresolved dynamic problems in the transmission grid remain, recent years have witnessed new phenomena that have refocused our attention on dynamics and transients occurring in the smaller-scale distribution circuits 1,2 . Although these phenomena occur on smaller scales, they involve collective behavior of many individual small nonlinear loads, and even coupling several dis-tribution circuits, creating a significant impact on the largerscale transmission system.
Perhaps the most drastic of these phenomena is voltage collapse [3][4][5][6][7][8] . Here, a quasi-static increase in loading pushes the distribution feeder to a bifurcation where the stationary normal/high voltage solution is lost and the feeder "collapses" to an undesirable low voltage solution that is dangerous for power system stability. However, even for moderately loaded feeders that are far from this critical point, the nonlinearity of electrical loads may result in the emergence of multiple stationary solutions. In contrast to the case just described, these solutions cannot be reached via quasi-static evolution of the electrical loads. Instead, a significant and nonlinear perturbation to the feeder creates a dynamical trajectory that terminates in one of these additional solutions that may also be a "bad solution" from the standpoint of voltage level, power system losses, stability, and equipment damage. To understand the possibility of this unwelcome outcome one needs to go beyond the traditional static description and analyze dynamics of the distribution system.
The spatio-temporal dynamics we seek to describe occur within an individual radial distribution feeder that connects many (∼ thousands) small loads to a substation. We consider electro-mechanical dynamics occurring on scales ranging from fractions of a second to tens of seconds and analyze the spatial distribution of power flows along the circuit and the spatio-temporal transients stimulated by exogenous disturbances in the voltage and base frequency at the head of the distribution feeder. Such disturbances, which primarily originate from faults and/or irregularities in the high-voltage transmission system, propagate through from load to load via power flows in the distribution feeder. The propagation is affected by the electro-mechanical response of individual loads, and here we focus on the effects of nonlinear loads such as asynchronous (i.e. induction) motors. Typical electro-dynamic transients propagate with speed comparable to the speed of light and damp out in tens of milliseconds and thus are not important in our analysis. On the other hand, composition of loads connected to a distribution feeder changes on much longer time scales (∼ minutes) and are taken as fixed on the time scale of electro-mechanical dynamics of interest in this work.
An interesting and key feature of the dynamics is the longrange coupling between the spatially distributed loads created by power flows along the feeder. These coupled dynamics are nontrivial to model and investigate, however, they are also extremely important for practical power engineering because our model solutions reveal serious problems for control and operation of power systems. One such problem that motivates our study is the phenomenon of the Fault-Induced Delayed Voltage Recovery (FIDVR) 1,2, [9][10][11] . A FIDVR event is typically initiated by a fault on the transmission grid near a substation creating large fault currents that temporarily depress the voltage at the substation, perhaps for as little at two cycles of the nominally 50/60 Hz AC frequency (∼ 30 msec). The voltage depression propagates into the substation's distribution feeders causing an almost instantaneous reduction in the electrical torque generated by the connected induction motors, however, the mechanical torque on the motors does not change instantaneously and the motors begin to decelerate. If the transmission fault and voltage depression last long enough, many of the induction motors along the feeder may stall. When the fault on the transmission grid is cleared, the voltage at the substation returns to near normal levels, however, a stalled induction motor draws large in-rush currents while at near zero rotation speed (mechanical frequency). The time synchronization of these in-rush currents cause large voltage drops in the distribution feeder and may hold the voltage at locations remote from the substation below a critical voltage for restarting. Crucially, these remote motors remain stalled (near zero mechanical frequency), and their large current draw stabilizes this spatially-extended, partially stalled state.
The tendency for a transmission fault to result in FIDVR may depend on many fault, distribution feeder, and motor parameters, e.g: voltage drop magnitude and duration; length, resistance, and reactance of the feeder; type, rotational inertia, and density of the induction motor loading; and possible postfault corrective control actions. The qualitative description of FIDVR given above provides an intuitive understanding of how some of these parameters affect the dynamics leading to the undesirable and potentially dangerous partially-stalled state. It certainly indicates that, without some sort of corrective actions, FIDVR will become more and more frequent because of the recent trends to more air conditioning driven by easy-to-stall, low inertia motors. However, the dynamics that lead to FIDVR are not generally understood and thus presumed somewhat mysterious in power engineering practice. The goal of this manuscript is to provide understanding of these interesting and practically important distribution grid dynamics.
FIDVR is an example of a broader class of problems where physics and dynamical systems modeling can provide significant insight and predictive power that are generally lacking. Another example is given by electro-mechanical waves propagating through transmission grids 12,13 , as well as transients associated with the loss of synchrony in power systems 14,15 . The key unifying feature of all these phenomena is in the nontrivial interplay of spatial coupling of individual (possibly nonlinear) dynamics via power flows over the electrical network. We believe that important insights into these complex dynamics can be gained by approaching such spatio-temporal phenomena from a homogenized prospective, i.e. studying the electrical grid not as a set of individual devices but rather as a spatially extended and continuous medium in the limit where the number of individual elements of the power system becomes infinite. This abstract continuous-medium approach, pioneered for the case of electro-mechanical waves over transmission systems in 12,13 and for the case of a radial/linear distribution system in 16 is advantageous as it enables (a) a simpler analysis and deep qualitative physical understanding of the underlying phenomena (e.g. of FIDVR and electro-mechanical waves); (b) flexibility in simulations; and (c) developing model reduction algorithms for faster state estimation and system simulation.
Motivated by the discussion above, the main goal of this manuscript is to formulate the simplest but still realistic 1+1 (space+time continuous) model that predicts and explains interesting and important spatio-temporal phenomena in a distribution feeder loaded with induction motors which can be in a normal or stalled state. The most important results reported in this manuscript are • Extending the previous works 1, 11,[17][18][19][20] , we show that if the local voltage falls sufficiently low, an individual asynchronous motor can be in either of the following two states: (1) a normal state characterized by mechanical frequency ω which is slightly lower than the base electrical frequency ω 0 of the system; (2) a stalled state characterized by low or zero mechanical frequency. Both states are locally stable but can evolve into each other under sufficiently large perturbations.
• In sufficiently long feeders, a partially stalled phase can emerge where the feeder splits into head and tail parts with the motors of the head (tail) being in the normal (stalled) state. Motors in the stalled state may occupy the entire distribution feeder or, if the feeder is very long, stalled portion can co-exist with the normally running one. In the latter case, there exist multiple, partially stalled phases characterized by different proportions of the head (normal) and tail (stalled) parts.
• The steady partially stalled phases can be interpreted as showing coexistence of the two states where the local mechanical frequency (of the motors) play the role of "order parameter". Transitions between the phases are classified as first order, using standard physics terminology.
• These transitions are hysteretic (not reversible), i.e. a perturbation leading to transition from the normal phase to the stalled phase is not an inverse of the perturbation leading from the stalled phase to the normal phase. The dynamics of the two transitions are also different, in particular fronts of the phase transitions have different shapes, and to stabilize one transition can take significantly longer than the other.
Material in the manuscript is organized as follows. Static models of single induction motor, two-bus system, and the DistFlow equations of 21,22 are discussed in Sections II A,II B and III A, respectively. A dynamic model of a distribution feeder loaded with induction motors and 1+1 space-time continuous model of this feeder are introduced and discussed in Sections III B and III C. Section IV A discusses how singlemotor bi-stability translates into the emergence of multiple phases of the feeder with supporting numerical experiments in Section IV B and special features of the phase transitions in Section IV C. Section V is devoted to in-depth discussion of the results of our numerical experiments. Section V A discusses the dynamics following a fault at the head of the feeder. Section V B analyzes the recovery from a stalled state. Section V C explores the phase space of parameters that governs whether or not a feeder will enter a stalled or normal state following a fault. Finally, we summarize and describe a path forward in Section VI. Auxiliary information explaining details of our simulations can be found in Appendix A. Appendix B provides captions for the illustrative movies of the phase transition and fault recovery processes available as a Supplementary Information (SI) to the manuscript.
II. SINGLE INDUCTION MOTOR MODELS
A. Static Motor Model
Induction motors play a significant role in FIDVR, as it is currently understood. Here, we describe the features of inductions motors that are important for the rest of our work. Although motor dynamics will be added later, we first adopt a simple static electrical model of an induction (asynchronous) motor rotating at mechanical frequency ω and connected to a distribution circuit being driven at a base frequency ω 0 , 11
P = s R_m v² / (R_m² + s² X_m²),   (1)

Q = s² X_m v² / (R_m² + s² X_m²).   (2)
Here, P and Q are real and reactive powers drawn by the motor; s = 1 − ω/ω 0 is the slip parameter of the motor, 0 ≤ s < 1; v is the voltage at the motor terminals; X m , R m are internal reactance and resistance of the motor, usually R m /X m = 0.1 ÷ 0.5. For steady rotation frequency at ω, the balance of electric and mechanical torques for the induction motor is
P/ω_0 = T(ω/ω_0),   (3)
where T (ω/ω 0 ) is the rotation speed-dependent torque applied to the motor shaft by the mechanical load, which is typically parameterized by
T(ω/ω_0) = T_0 (ω/ω_0)^α.   (4)
Here, T_0 is a reference mechanical torque and α is indicative of different types of mechanical loads, with α = 1 typical of fan loads and α < 1 typical of air-conditioning loads. If α < α_c ≲ 1 and T_0 is fixed, one observes the emergence of three solutions when v is in a range between two spinodal voltages v_c^- and v_c^+, i.e. for v_c^- < v < v_c^+ (see Fig. 1).
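The coexistence of solutions described above is easy to reproduce numerically. The sketch below is ours; the per-unit parameter values R_m, X_m, T_0 and α are illustrative assumptions, not values used in this paper. It scans the torque balance of Eqs. (1), (3), (4) over ω for a few terminal voltages, showing one equilibrium at low and high voltage and three equilibria in between.

```python
import numpy as np

# assumed, illustrative per-unit parameters (not taken from the paper)
R_m, X_m = 0.15, 0.40      # motor resistance and reactance, R_m/X_m = 0.375
T0, alpha = 1.0, 0.1       # mechanical torque parameters, alpha < 1
omega0 = 1.0               # base frequency

def torque_imbalance(omega, v):
    """Electrical torque P/omega0 from Eqs. (1), (3) minus the mechanical torque of Eq. (4)."""
    s = 1.0 - omega / omega0
    P = s * R_m * v**2 / (R_m**2 + (s * X_m)**2)
    return P / omega0 - T0 * (omega / omega0)**alpha

def equilibria(v, n_grid=20001):
    """Rotation frequencies at which the torque imbalance changes sign."""
    w = np.linspace(1e-6, omega0 * (1.0 - 1e-6), n_grid)
    f = torque_imbalance(w, v)
    return [0.5 * (w[i] + w[i + 1]) for i in range(n_grid - 1) if f[i] * f[i + 1] < 0]

if __name__ == "__main__":
    for v in (0.85, 0.90, 1.00):
        roots = [round(r, 3) for r in equilibria(v)]
        print(f"v = {v:.2f}: torque balance at omega = {roots}")
```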
Stability analysis (see Section II B for details) shows that the two extreme solutions (ω ≈ 0 and ω ≈ ω 0 ) are both stable while the solution in the middle is unstable. The consequence is hysteretic behavior of the motor frequency ω as a function of the voltage v, as displayed in Fig. 2. Starting in the highvoltage normal state (say v ∼ 1), we decrease v slowly along the dashed red curve passing through state d. If we further decrease v to state c, the normal state suddenly disappears, and the motor makes a transition to the stalled state at a. Similarly, if we start from the low-voltage stalled state (say v ∼ 0.75 on the black curve) and v is increased slowly through state a to b, the stalled state disappears and the motor makes a transition to the normal state at d. The states (a, b, c, d) are also marked in Fig. 1, and the same hysteresis loop can be traced out there.
After elimination of the auxiliary variable ω, Eqs. (1,2,3,4) describe the dependence of the power flows (P, Q) on terminal-voltage. If the reactance-to-resistance ratio of the motor, X m /R m , is sufficiently small, the hysteresis observed for mechanical frequency in Fig. 2 translates into hysteresis of real and reactive powers as seen by the multi-valued dependence of (P, Q) on v in Fig. 3. Between the spinodal voltages v − c and v + c , there exists three solutions with different values of (P, Q) for the same value of voltage. Of the three solutions, the top and bottom are stable while the middle solution is unstable. Similar to Fig. 2, we can follow an adiabatic evolution of the motor terminal voltage. Following the reactive power curve (red) and starting in the normal state with v ∼ 1, we decrease the voltage through state d and to state c. Any further reduction of v forces the motor to make a discontinuous jump to state a which is accompanied by a large increase in reactive power Q. Alternatively, we may start in the stalled state with v ∼ 0.7 and slowly increase v through state a to b where the motor is forced to jump to state d accompanied by a discontinuous decrease in Q. As discussed later, these discontinuous jumps in Q play a significant role is stabilizing the spatially extended stalled state and in the recovery from the stalled state to normal state.
Fig. 3: Real and reactive power drawn by the motor as a function of terminal voltage in the multi-valued regime, i.e. v_c^- < v < v_c^+. Solid and dashed curves (the latter partially covered by solid) show the trajectory of the system under adiabatic evolution starting from the low-voltage and high-voltage regimes, respectively. The points (a, b, c, d) label the reactive power curve (red) and correspond to the same labels in Fig. 1 and Fig. 2. Here, X_m/R_m = 0.375.
B. Dynamical Motor Model
To study the important yet generic aspects of the dynamics in distribution circuits, we generalize Eqs. (1-3) to include induction motor dynamics. Considering for the moment a single induction motor, an imbalance in electrical and mechani-cal torques will cause a change in the motor's rotational frequency given by
M dω/dt = P/ω_0 − T_0 (ω/ω_0)^α,   (5)
where M is the motor's moment of inertia. Torque imbalances in Eq. (5) can be driven in two ways: directly via changes in the base frequency ω_0 in Eq. (5), or indirectly via changes in P driven by changes in v in Eq.
(1). The coupling of Eq. (5) to ω 0 will not be strong because ω 0 is determined by the global balance of generation and load across the entire transmission system. We would not expect the local distribution dynamics under consideration here to affect ω 0 to a degree that we would have to consider its effect back on the distribution dynamics via Eq. (5). Therefore, we can generally ignore the dynamics of ω 0 and consider it an imposed exogenous parameter.
In contrast, changes in local voltage are strongly coupled to changes in the local flow of real and reactive power. As we have seen in Sec. II A, changes in voltage can lead to drastic and hysteretic changes in a motor's frequency ω resulting in a strong coupling back to the dynamics in Eq. (5). Therefore, we must consider the possibility that voltage dynamics are important for distribution feeder dynamics. However, the dynamics of v are fundamentally different than ω because the relaxation of v is entirely electrical as opposed to the mechanical dynamics of ω, i.e.
Q = [s² X_m / (R_m² + s² X_m²)] [v² + (τ/2) d(v²)/dt],   (6)
where τ is the characteristic time of this purely electrical process (See e.g. the Appendix of Pereira et al 17 ). In our numerical experiments, we observe that the important basic phenomena discussed in the manuscript are much more sensitive to variations in the moment of inertia M than to variations in τ and that setting τ = 0 still reveals the dynamical processes important for understanding FIDVR. With τ = 0, Eq. (6) is now equivalent to its static version in Eq. (2). With the dynamics now fully specified by Eq. (5), we can now justify the local (single motor) stability claims made in Section II A by simple inspection of the equilibrium states in Fig. 1. Here stability/instability is understood in terms of the temporal decay/growth of small perturbations to the dynamics of Eq. (5). The black curve in this Fig. 1 represents the mechanical torque on the motor while the colored curves are the electrical torques (each representing a different v). Consider state d in Fig. 1 which is representative of the normal states with ω/ω 0 ∼ 1. If the motor speeds up slightly (moves to the right along the light blue curve), the mechanical torque becomes larger than the electrical torque and the motor decelerates returning to d. If the motor slows slightly (moves left), the electrical torque becomes larger while the mechanical torque decreases returning the motor to state d. All of the normal states, i.e. those like d with ω/ω 0 ∼ 1, have similar behavior and are therefore stable. State a in Fig. 1 is representative of the stalled states, and following the same logic, we find that the mechanical torque is larger for a higher ω (and smaller for a lower ω) showing that all of the stalled states are stable. Following the same logic, we find the the states in between the normal and stalled states (e.g. the state given the intersection of red and black curves in Fig. 1) are unstable.
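A crude forward-Euler integration of Eq. (5) illustrates this bi-stability: at a voltage inside the bi-stable window, the final state depends only on which side of the unstable equilibrium the motor starts from. The parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

# assumed, illustrative per-unit parameters (not taken from the paper)
R_m, X_m = 0.15, 0.40      # motor resistance and reactance
T0, alpha = 1.0, 0.1       # mechanical torque parameters
omega0, M = 1.0, 0.25      # base frequency and moment of inertia

def domega_dt(omega, v):
    s = 1.0 - omega / omega0
    P = s * R_m * v**2 / (R_m**2 + (s * X_m)**2)           # Eq. (1)
    return (P / omega0 - T0 * (omega / omega0)**alpha) / M  # Eq. (5)

def relax(omega_init, v, dt=1e-3, t_end=20.0):
    """Forward-Euler integration of Eq. (5) at fixed terminal voltage v."""
    omega = omega_init
    for _ in range(int(t_end / dt)):
        omega = min(max(omega + dt * domega_dt(omega, v), 0.0), omega0)
    return omega

if __name__ == "__main__":
    v = 0.90   # a voltage inside the bi-stable window for these parameters
    for omega_init in (0.20, 0.60, 0.95):
        print(f"omega(0) = {omega_init:.2f}  ->  omega(t_end) = {relax(omega_init, v):.3f}")
```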
III. SPATIALLY-CONTINUOUS FEEDER POWER FLOW MODEL
In Section II, we described the dynamics of isolated induction motors, i.e. motors whose terminal voltage v is specified and not determined in part by interactions with other induction motors or electrical loads. In this Section, we consider power flow models responsible for creating the long-range coupling between the individual, local induction motor dynamics. We start with a well-known discrete power flow model which we then homogenize into a spatially continuous ODE representation. We then incorporate a homogenized version of the individual induction motor dynamics to create a PDE representation of electrical feeder dynamics.
A. Discrete Power Flow Model-Dist Flow Equations
The flow of electric power in the quasi-static approximation is controlled by the Kirchoff laws. The DistFlow equations 21,22 are these equations, written in terms of power flows and in a convenient form for the radial or tree-like distribution circuit with a discrete set of loads shown schematically in Fig. 4a,
ρ_{n+1} − ρ_n = −P_n − r_n (ρ_n² + φ_n²)/v_n²,   (7)

φ_{n+1} − φ_n = −Q_n − x_n (ρ_n² + φ_n²)/v_n²,   (8)

v_{n+1}² − v_n² = −2(r_n ρ_n + x_n φ_n) − (r_n² + x_n²)(ρ_n² + φ_n²)/v_n².   (9)
Here, n = 0, · · · , N − 1 enumerates the sequentiallyconnected buses of the circuit, and ρ n , φ n are the real and reactive power flowing from bus n to n + 1. v n is the bus voltage, while P n and Q n are the overall consumption of real and reactive powers by the discrete load at bus n. The values of r n and x n are the resistance and reactance of the discrete line element connecting n and n + 1 buses. The voltage v 0 at the beginning of the line is nominally fixed by control equipment, and there can be no flow of real or reactive power out of the end of the circuit. These two observations provide the following boundary conditions:
v 0 = v 0 , ρ N +1 = φ N +1 = 0.(10)
Eqs. (7,8,9,10) combined with the given real and reactive consumption pattern, P n , Q n for n = 1, · · · , N , uniquely define profile of voltage, v n , and power flows, ρ n , φ n , along the circuit. We note that the dynamical relaxation of distribution circuit power flows ρ n and φ n will also occur on electrical time scales, i.e. much faster than the mechanical dynamics in Eq. (5). Therefore, the quasi-static power flows described in the DistFlow formulation [21][22][23][24][25] in Eqs. (7,8,9) is a sufficient starting point for the phenomena discussed in the manuscript.
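As an illustration of how Eqs. (7)-(10) are used, the following sketch (ours, with made-up per-unit loads and impedances) performs a single forward sweep of the DistFlow recursion from the head of the feeder; wrapping it in a simple shooting iteration on the head flows would enforce the end condition of Eq. (10).

```python
def distflow_sweep(v0, rho0, phi0, P, Q, r, x):
    """One forward sweep of Eqs. (7)-(9); P, Q are per-bus loads, r, x per-segment impedances."""
    rho, phi, v2 = rho0, phi0, v0**2
    profile = [(rho, phi, v0)]
    for Pn, Qn, rn, xn in zip(P, Q, r, x):
        loss = (rho**2 + phi**2) / v2
        rho_next = rho - Pn - rn * loss                                  # Eq. (7)
        phi_next = phi - Qn - xn * loss                                  # Eq. (8)
        v2 = v2 - 2 * (rn * rho + xn * phi) - (rn**2 + xn**2) * loss     # Eq. (9)
        rho, phi = rho_next, phi_next
        profile.append((rho, phi, v2**0.5))
    return profile

if __name__ == "__main__":
    N = 10
    P = [0.01] * N; Q = [0.005] * N      # assumed uniform per-unit loads
    r = [0.002] * N; x = [0.002] * N     # assumed per-segment impedances
    # head flows set to the total load, a crude first guess for a shooting iteration
    prof = distflow_sweep(1.0, sum(P), sum(Q), P, Q, r, x)
    rho_end, phi_end, v_end = prof[-1]
    print(f"end voltage ~ {v_end:.4f}, residual end flows ~ ({rho_end:.3g}, {phi_end:.3g})")
```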
B. Continuous Power Flow Model
Following 16 , the continuous form of the DistFlow equations is
∂_z ρ = −p − r (ρ² + φ²)/v²,   (11)

∂_z φ = −q − x (ρ² + φ²)/v²,   (12)
where z is the coordinate along the distribution circuit, r, x are the per-unit-length resistance and reactance densities of the lines (assumed independent of z) and p(z) and q(z) are the local densities of real and reactive powers consumed by the density of the spatially continuous distribution of motors 16 at the position z ∈ [0; L]. The power flows ρ and φ are related to the voltage at the same position according to 16
∂_z v = −(rρ + xφ)/v.   (13)
C. PDE Model of Feeder Dynamics
Instead of the standard voltage-independent (p, q) model of distributed loads, discussed in 16 , we consider the more complex dynamical loads described above. The load densities p(z) and q(z) are related to ω(z) and v(z) through the density versions of Eqs. (1,5,6)
µ dω/dt = p/ω_0 − t_0 (ω/ω_0)^α,   (14)

p = s r_m v² / (r_m² + s² x_m²),   (15)

q = s² x_m v² / (r_m² + s² x_m²).   (16)
where the conversion to continuous form consists of replacing X m , R m and P, Q, T 0 , M by the respective densities x m , r m and p, q, t 0 , and µ. The new boundary conditions are
v(0) = v 0 , ρ(L) = φ(L) = 0.(17)
Equations (11)(12)(13)(14)(15)(16)(17) form our PDE model of a distribution feeder loaded with induction motors. We assume that the distributions of all the density parameters along the circuit are known, and in this initial work, we assume these densities are constant. Note that evolution in the model occurs solely due to temporal derivatives in Eq. (14) representing mechanical relaxation of the spatial distribution of motors.
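One simple way to integrate the model numerically, shown schematically below, is to alternate a quasi-static solve of the flow equations (11)-(13) (integrating from the open end z = L, where ρ = φ = 0, back to the head, and adjusting the tail voltage by bisection until v(0) matches the boundary condition of Eq. (17)) with an explicit update of the mechanical frequencies via Eq. (14). All parameter values in this sketch are illustrative assumptions and the discretization is deliberately crude; it shows the structure of the computation, not the scheme used for the simulations reported below.

```python
import numpy as np

# assumed, illustrative per-unit densities (not the values used in the paper)
r = x = 0.08                      # line resistance / reactance per unit length
r_m, x_m = 0.15, 0.40             # motor impedance densities
t0, alpha, mu = 1.0, 0.1, 0.25    # mechanical torque density, torque exponent, inertia density
omega0, v_head = 1.0, 1.0         # base frequency and head voltage (boundary condition)
L, nz, dt = 1.0, 100, 0.004
dz = L / (nz - 1)

def pq(omega, v):
    """Eqs. (15)-(16): local load densities at slip s = 1 - omega/omega0."""
    s = 1.0 - omega / omega0
    den = r_m**2 + (s * x_m)**2
    return s * r_m * v**2 / den, s**2 * x_m * v**2 / den

def voltage_profile(v_tail, omega):
    """Euler integration of Eqs. (11)-(13) from z = L (rho = phi = 0) back to z = 0."""
    v = np.empty(nz)
    v[-1] = v_tail
    rho = phi = 0.0
    for i in range(nz - 1, 0, -1):
        p, q = pq(omega[i], v[i])
        flow2 = (rho**2 + phi**2) / v[i]**2
        rho += dz * (p + r * flow2)                          # from Eq. (11)
        phi += dz * (q + x * flow2)                          # from Eq. (12)
        v[i - 1] = v[i] + dz * (r * rho + x * phi) / v[i]    # from Eq. (13)
    return v

def quasi_static_voltage(omega):
    """Shoot on the unknown tail voltage so that v(0) matches the head boundary condition."""
    lo, hi = 0.2, 1.2
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if voltage_profile(mid, omega)[0] < v_head else (lo, mid)
    return voltage_profile(0.5 * (lo + hi), omega)

def step(omega):
    v = quasi_static_voltage(omega)
    p = np.array([pq(w, vi)[0] for w, vi in zip(omega, v)])
    domega = (p / omega0 - t0 * (omega / omega0)**alpha) / mu    # Eq. (14)
    return np.clip(omega + dt * domega, 0.0, omega0)

if __name__ == "__main__":
    omega = np.full(nz, 0.9 * omega0)       # start all motors near the normal state
    for _ in range(500):
        omega = step(omega)
    print(f"omega at the head: {omega[0]:.3f},  omega at the tail: {omega[-1]:.3f}")
```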
IV. PHASE TRANSITIONS AND HYSTERESIS IN A FEEDER
In this Section, we discuss the physical picture of phase transitions and hysteresis that emerges from analysis and simulations of the 1+1 space-time PDE model of Eqs. (11)-(17). In Section IV A, we begin with a qualitative description of the hysteretic, phase transition-like behavior of the distribution feeder. Section IV B discusses numerical results in the framework of the qualitative arguments of Section IV A. Then, in Section IV C we step back to provide a general physics discussion of the special features of the phase transition observed in the simulations.

A. From Local (Motor) to Global (Feeder): Qualitative Picture

From the qualitative description of distribution circuit and induction motor dynamics and FIDVR events given in Section I, we make an analogy between FIDVR and a first-order phase transition 26 where the z-dependent motor frequency ω(z) plays the role of the order parameter, i.e. ω ≈ ω_0 in the "normal phase" and ω ≈ 0 in the "stalled phase". Recasting Eq. (14) in terms of an ω-dependent potential U, we find
∂ω/∂t = −∂U/∂ω ,    (18)

where

U(ω; v) = (1/µ) ∫_0^ω [ t_0 (ω'/ω_0)^α − p(ω'; v)/ω_0 ] dω' ,    (19)
and p(ω; v) is defined by Eq. (15). The resulting effective potential U (ω; v) is shown graphically in Fig. 5 for different values of v.
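The potential of Eq. (19) is easy to tabulate numerically. The sketch below builds U(ω; v) by cumulative trapezoidal quadrature and lists its local minima for a few voltages; the motor parameters are placeholders (not those behind Fig. 5), so the voltages at which one or two minima appear will differ from the spinodal values quoted in the text:

import numpy as np

omega0, t0, alpha, mu = 1.0, 0.32, 0.1, 0.05
rm, xm = 0.1, 1.0

def p_load(w, v):
    s = 1.0 - w / omega0                       # slip, assuming s = 1 - omega/omega0
    return s * rm * v**2 / (rm**2 + (s * xm)**2)

def potential(v, n=2001):
    w = np.linspace(1e-4, omega0, n)
    integrand = (t0 * (w / omega0)**alpha - p_load(w, v) / omega0) / mu
    U = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(w))))
    return w, U

for v in (0.70, 0.86, 1.00):
    w, U = potential(v)
    mins = list(w[1:-1][(U[1:-1] < U[:-2]) & (U[1:-1] < U[2:])])
    if U[0] < U[1]:
        mins.insert(0, w[0])                   # a stalled minimum sitting at the omega -> 0 edge
    print("v = %.2f: local minima of U near omega =" % v, np.round(mins, 3))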
We identify three different regimes in Fig. 5: (a) high voltage v > v_c^+ (purple curve) where the normal state is the only stable solution, (b) intermediate voltage v_c^- < v < v_c^+ (red curve) where the normal state and the stalled state coexist, i.e. motors can be in either of the two states, and (c) low voltage v < v_c^- (dark blue curve) where the stalled state is the only stable solution. The boundaries between these regions occur at the spinodal voltages v_c^+ (light blue curve), where the state jumps from c to a in Figs. 1-3, and v_c^- (green curve), where the state jumps from b to d in Figs. 1-3.

The high-voltage case (a) where the motors can only be in the normal state is the desired regime for electrical grid operations. The low-voltage case (c) is undesirable and is a result of the electrical torque at low voltage not being able to overcome the mechanical torque with the subsequent decline of ω to very small values. Case (b) also allows for the motors to be in the normal state, but the existence of two minima in the intermediate voltage range v_c^- < v < v_c^+ (see Fig. 5) is associated with the overlapping high and low voltage states in Fig. 2 which can lead to local hysteretic behavior. For example, a small voltage perturbation can kick a motor from the normal state over the potential barrier (see red curve in Fig. 5) where Eq. (18) shows that it subsequently relaxes to the stalled state. A simple reversal of the perturbation does not necessarily lead to the reverse transition. This entirely local hysteresis is important because it defines the possible states of motor operation, however, it is the long range interactions between the local motor behavior that creates phase transition-like behavior with a phase boundary between separate normal and stalled phases.
When a motor undergoes the local transition from a normal to a stalled state, its rotational frequency ω changes significantly. Motors distributed along the circuit do not interact via ω, however, changes in ω drive large changes in local reactive power density q (see Fig. 3) which couples to all of the other motors via the power flows (ρ, φ) and voltages in Eqs. (11)-(13). Crucially, if a perturbation causes a group of motors in the tail segment of the feeder (z ∼ L) to enter the stalled states (ω ≈ 0 or s ≈ 1), the increase in the local reactive load density q drives an increase in the power flow φ all along the feeder, and Eq. (13) shows that v will be depressed at all z along the feeder. The voltage depression will be the largest at the tail (z ∼ L), and if the depression is severe enough, the terminal voltage of the normal-state motors neighboring the stalled tail section will drop below v_c^- and the local potential U(ω) changes from the purple or light blue curves of Fig. 5 to the green or dark blue curves. Equation (18) shows that these motors relax into the stalled state, further increasing the power flows φ everywhere along the feeder. This phase transition front continues to propagate toward z = 0 as the increases in local q drive increases in the non-local φ which depress the non-local v. The voltage boundary condition at the head of the feeder (Eq. (17)) may stop the front before it reaches all the way back to z = 0, however, the establishment of a globally stalled or partially stalled phase is hysteretic because simply reversing the original spatially-local perturbation, i.e. returning the small section of motors in the tail to the running state, will not be able to overcome the globally-stalled state once it has become established.
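The non-local voltage depression described above can be estimated with a back-of-the-envelope calculation that drops the loss terms and linearizes Eqs. (11)-(13) (the so-called LinDistFlow approximation). The sketch below compares the voltage profile of a feeder whose tail draws normal reactive power with one whose tail has a strongly increased reactive load density mimicking stalled motors; all load levels are placeholders:

import numpy as np

L, v0 = 0.5, 1.0
r, x = 1.0, 1.0
z = np.linspace(0.0, L, 501)
dz = z[1] - z[0]

def voltage_profile(p_density, q_density):
    rho = np.cumsum(p_density[::-1])[::-1] * dz    # rho(z) ~ integral_z^L p dz'
    phi = np.cumsum(q_density[::-1])[::-1] * dz    # phi(z) ~ integral_z^L q dz'
    return v0 - np.cumsum(r * rho + x * phi) * dz  # v(z) ~ v0 - integral_0^z (r*rho + x*phi) dz'

p_normal = np.full_like(z, 0.4)
q_normal = np.full_like(z, 0.1)
q_stalled_tail = q_normal.copy()
q_stalled_tail[z > 0.45] = 2.0                     # stalled motors draw far more reactive power

v_norm = voltage_profile(p_normal, q_normal)
v_stall = voltage_profile(p_normal, q_stalled_tail)
print("v(L) with a normal tail :", round(v_norm[-1], 3))
print("v(L) with a stalled tail:", round(v_stall[-1], 3))

Even though the extra reactive load occupies only the last tenth of the feeder, the tail voltage is depressed over the entire line, which is the mechanism that lets a stalled tail pull its normal-state neighbors below v_c^-.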
B. From Local to Global: Numerical Experiments
Next, we perform numerical simulations of Eqs. (11)-(17) to explore the qualitative dynamical description given above. We examine how one property of the feeder, its length L, affects this picture by performing two otherwise identical simulations, with L = 0.45 in Fig. 6 and a slightly longer L = 0.5 in Fig. 7. Although the change in length is relatively minor, the final states of the two feeders are radically different. In both simulations, we start with all the induction motors (the only type of load considered here) in the normal state, i.e. ω ∼ ω_0. At t = 0, a perturbation is applied where v_0 is abruptly lowered to 0.8 and held there long enough so that all of the motors stall. Subsequently, v_0 is restored to 1.0 and the evolution of the motors and feeder variables is monitored. The forced evolution of v_0 emulates the behavior that would be driven by a nearby fault on the transmission system supplying the feeder and its substation. Figure 6a shows the state of the L = 0.45 feeder immediately after the fault is applied. The voltage (black curve) starts at v(0) = 0.8 and droops only slightly. Although v < v_c^-, the inertia of the motors (although small) keeps them temporarily at ω/ω_0 ∼ 1 (blue curve) so that their local reactive loads q (pink curve) remain low, as does the non-local reactive power flow (red curve). However, because v < v_c^- the motors no longer have an equilibrium state at ω/ω_0 ∼ 1 (see dark blue curve in Fig. 5) and Eq. (18) forces them to relax to the stalled state, which is evident in Fig. 6b where ω/ω_0 ∼ 0 (blue curve) all along the feeder. All of the induction motors have now made the transition to the upper branch of the reactive power curve in Fig. 3 near to state a, which is reflected by the increase in q (pink curve) and the reactive power flow φ (red curve). Subsequent to Fig. 6b, v_0 is restored to 1.0, and a left-to-right propagating phase front is formed (see blue curve in Fig. 6c) where the normal and stalled phases are segregated, with all the motors to the left of the front in the normal state
while those to the right are in the stalled state. The motion of the finite-thickness phase front is evident from the local real power load density (yellow curve in Fig. 6c). The peak in p (above the steady-state values observed far to the left or right of the front) is responsible for the acceleration of the motors within the front, and as the motors in front accelerate to ω/ω 0 ∼ 1, the front advances to the right. A left-propagating front would show a downward peak in p. After v 0 has been restored to 1.0, the reactive power q drawn by the stalled motors along this feeder is insufficient to create a large enough φ to lower v into the intermediate range v − c < v < v + c , and the feeder completely recovers to its initial state, i.e. the reversal of the initial perturbation restores the feeder state. Alternatively, the implicit long-range power flow interactions, expressed in a globally depressed voltage profile, are insufficient to turn the local hysteresis into global hysteresis. The dynamic version of this simulation is provided in the SI (movie 4: movie recovery.pdf).
In Fig. 7, we show the final steady state of the exact same simulation as in Fig. 6 except that we have slightly increased the feeder length, L = 0.50. After restoration of v 0 = 1.0, phase front (blue curve) still forms and propagates into the feeder, however, it becomes stationary at z ∼ 0.22. The lack of a peak in the real power load density (yellow) shows that there is no motor acceleration in the front implying that it is stationary. In this case, the reversal of the initial perturbation does not restore the original state and the feeder displays significant hysteresis. The added reactive power load density q between z = 0.45 and 0.50 interacts with the impedance over the entire length of the feeder to create conditions (i.e. v < v + c ) near z ∼ 0.25 that enable the local motor hysteresis to make the feeder globally hysteretic. The dynamic version of this simulation is provided in the SI (Movie 3: movie hysteresis.pdf). We note that a large section of motors to the right of the stationary phase front have v − c < v < v + c and therefore also have a stable normal state (see Figs. 2 and 5). Additional small perturbations could result in local transitions to the normal state and subsequent global recovery. In between L = 0.45 and 0.50 is a critical length L c where the hysteresis first appears. Feeders operating with L > L c can be called "dangerous" because they are operating normally until a voltage fault occurs. Post fault, a fraction of the feeder does not recover to normal operation and this fraction of induction motors remains stalled, i.e. the feeder has just undergone a FIDVR event. Over a period of one to several minutes, the stalled motors will get disconnected by tripping of thermal protection systems, however, this uncontrolled recovery may also lead to overvoltages that are equally troublesome.
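The simulations of this Section can be reproduced in spirit (although not in quantitative detail) with a rather short script. The sketch below follows the two-sub-step structure described in the Methods appendix: with p(z) and q(z) frozen, the static power flow of Eqs. (11)-(13) is re-solved by a few backward/forward sweeps, and ω(z) is then advanced locally by a forward-Euler step of Eq. (14). The grid, motor and line parameters, and the fault protocol are coarse placeholders chosen so the script runs quickly; it illustrates the structure of such a simulation rather than reproducing Figs. 6 and 7:

import numpy as np

L, Nz = 0.5, 100
z = np.linspace(0.0, L, Nz)
dz = z[1] - z[0]
r = x = 1.0                               # per-unit-length impedance densities
omega0, t0, alpha, mu = 1.0, 0.32, 0.1, 0.02
rm, xm = 0.1, 1.0                         # motor resistance/reactance densities

def loads(omega, v):
    """Load densities p(z), q(z) from Eqs. (15)-(16), with slip s = 1 - omega/omega0."""
    s = 1.0 - omega / omega0
    den = rm**2 + (s * xm)**2
    return s * rm * v**2 / den, s**2 * xm * v**2 / den

def static_power_flow(p, q, v_head, v_guess, sweeps=4):
    """Crude fixed-point solve of Eqs. (11)-(13) with rho(L) = phi(L) = 0."""
    v = v_guess.copy()
    rho = np.zeros(Nz)
    phi = np.zeros(Nz)
    for _ in range(sweeps):
        for i in range(Nz - 2, -1, -1):   # backward sweep for the flows
            loss = (rho[i + 1]**2 + phi[i + 1]**2) / v[i + 1]**2
            rho[i] = rho[i + 1] + (p[i] + r * loss) * dz
            phi[i] = phi[i + 1] + (q[i] + x * loss) * dz
        v_new = np.empty(Nz)
        v_new[0] = v_head                 # forward sweep for the voltage, Eq. (13)
        for i in range(Nz - 1):
            dv = (r * rho[i] + x * phi[i]) / v_new[i] * dz
            v_new[i + 1] = max(v_new[i] - dv, 0.05)   # floor avoids division blow-ups
        v = v_new
    return rho, phi, v

dt, T_fault, T_end = 5e-4, 0.3, 2.0
omega = np.full(Nz, 0.95 * omega0)        # start near the normal state
v = np.ones(Nz)
for it in range(int(T_end / dt)):
    t = it * dt
    v_head = 0.8 if t < T_fault else 1.0  # two-step perturbation of the head voltage
    p, q = loads(omega, v)                # sub-step 1: loads frozen at the previous state
    rho, phi, v = static_power_flow(p, q, v_head, v)
    p, q = loads(omega, v)                # sub-step 2: local update at the new voltage
    omega += dt * (p / omega0 - t0 * (omega / omega0)**alpha) / mu   # Eq. (14)
    omega = np.clip(omega, 0.0, omega0)

print("fraction of the feeder with omega/omega0 < 0.5:", np.mean(omega / omega0 < 0.5))

Whether the final state is fully normal or partially stalled depends on the placeholder parameters; the point of the sketch is the separation between the instantaneous, global power-flow solve and the slow, local mechanical update.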
C. Special Features of the Phase Transition
Although the dynamical behavior described above is reminiscent of many phase transitions in physics, it also displays some very unique features, e.g. its lack of explicit spatial locality and the instantaneous nature of the global voltage adjustment. In the Ginzburg-Landau (GL) theory of first-order phase transitions, we also have a potential with two minima, however, the spatio-temporal dynamics of the phase transition are different. In contrast to the spatio-temporal dynamics of the current problem (as given in Eq. (18)), GL dynamics are typically driven by a dispersion term, i.e. by adding a term such as ∂_z^2 ω to the right hand side of Eq. (18), which effectively
couples the order parameter ω at different spatial locations. The resulting GL dynamics is a phase transition front with a shape and speed defined by the local balance (i.e. within the front) of the added dispersion term against the existing nonlinearity (rhs of Eq. (18)) and dynamics (lhs of Eq. (18)). In contrast, the transition front dynamics of the present problem is driven by the globally-superimposed spatial inhomogeneity of the voltage profile v(z). During short periods of time when there are no abrupt global changes, the voltage profile remains relatively frozen and the motors respond locally (Eqs. (18,19)) to the mismatch between their current rotational state ω and the minimum of U(ω) as determined by the local voltage v(z). These adjustments occur most rapidly in the vicinity of the phase front where the state mismatch is the largest. The adjustment of v(z) to the evolving motor loads is instantaneous (Eqs. (11)-(13),(15)-(17)), but the evolution is actually temporally slow because the motor states are only adjusting in the small region of the phase transition front.
Another point of comparison is the so-called Stefan problem (see 27,28 and references therein) describing a phase transition driven by heat released at the interface between the two phases. In its one dimensional formulation, the Stefan problem considers two sub-domains with their outer boundaries (the boundaries away from the phase front) maintained at different conditions, e.g. one at a constant temperature flux and another at different temperature. Within each of the subdomains temperature plays the role of the order parameter, and it obeys simple thermal diffusion (possibly with different diffusion coefficients in the two sub-domains). The sharp phase-phase interface is subject to a boundary condition that relates its speed to the temperature at the interface. If the interface progresses, heat is released locally and it is then transported via diffusion. Different versions of the problem show many interesting behaviors. In a semi-infinite domain, selfsimilar continually slowing fronts emerge. In finite domains, the fronts may become stationary. This behavior is similar to what we observe in the present problem -the analogy with the Stefan problem is the emergence of a global solution with an inhomogeneous order parameter profile and sharp interfacial boundary. However, the physics and interplay of the mechanisms that creates the behavior in the two problems are significantly different -thermal diffusion vs heat release in the Stefan problem, and local frequency transformations vs voltage profile and rearrangement. Moreover, the latter point also emphasizes the difference -voltage profile plays the role of the heat injection but it does it globally along the feeder, also changing instantaneously, i.e. voltage rearrangement takes place with an infinite speed (electro-dynamic effects are instantaneous in our electro-mechanical model) while the heat injection condition is modified gracefully in the Stefan case, as the heat propagates along the domain with a finite speed controlled by thermal diffusion.
V. SIMULATING AND EXPLAINING FAULT INDUCED DELAYED VOLTAGE RECOVERY (FIDVR)
In the previous Sections, we built up an understanding of the different types of dynamical behavior of an induction motor-loaded distribution feeder and the different final states that may result. In this Section, we discuss in more detail the processes specific to FIDVR. Although we discussed the anatomy of a FIDVR event in the introduction, we repeat it here to motivate the following study of FIDVR dynamics. Prior to the transmission fault, the voltages v 0 at the head of all the distribution feeders extending from the substation served by the transmission line are 1.0, and all the feeders are in the normal phase, i.e. all the induction motors are in the normal state. During a transmission fault, the large fault currents in the transmission lines locally depress the transmission voltage which in turn depresses v 0 . v 0 remains depressed below 1.0 until the transmission fault has cleared, i.e. automatic protection circuit breakers have opened to de-energize the faulted line to extinguish the ionized air supporting the fault current and then reclosed to re-energize the transmission line. During the fault-clearing process, one transmission line (out of typically two or more) serving the substation is briefly removed from service so which also tends to depress v 0 . It is during this period of v 0 depression that the induction motors on the distribution feeders undergo "collapse dynamics" and may stall. If the transmission fault is cleared normally, the voltage at the the substation returns to near normal levels after the circuit breakers reclose. If there is significant motor stalling, v 0 may not fully recover to 1.0 , but we approximate the post-fault voltage as v 0 = 1.0. The induction motor-loaded feeders then undergo "recovery dynamics" which evolve to a final steady state.
In Section V A, we first discuss the collapse dynamics during a period of the fault when v 0 is depressed. This is followed by Section V B where we discuss the recovery dynamics (or partial recovery) after v 0 is restored. Finally, in Section V C, we analyze the two periods in combination asking the question: under which conditions does a fault result in a FIDVR event?
A. Dynamics During a Fault - Collapse to Stalled States
If the voltage drop during the fault is large enough, i.e. if v_0 < v_c^- (see Figs. 1,2,5), all the motors will rapidly decelerate to ω/ω_0 ≈ 0 and the entire feeder will eventually end up in the stalled state. Figure 8 (and Movie 1: movie large fault.pdf in SI) illustrates the details of this process which are consistent with our interpretation in terms of an "overdamped fall" down the v < v_c^- energy landscape of Fig. 5 (dark blue curve) governed by Eq. (18). We note that the long-range spatial coupling forces the motors farther down the feeder to collapse to ω/ω_0 ∼ 0 faster and reinforces the collapse after it starts. Specifically, the cumulative effect of the induction motor loads reduces the voltage at the remote locations of the feeder (black curve) resulting in a steeper local potential U(ω) (dark blue curve in Fig. 5). Eq. (18) shows that these remote motors stall sooner, which is borne out in Fig. 8c (blue curve). Additionally, as the remote motors decelerate, their local reactive load density q (pink) increases which increases φ (red) further lowering v(z) (black) and increasing the slope of the local potential U(ω). The result is an even faster collapse into a pure stalled phase. Moreover, we observe that the more severe the voltage drop or the lower the motor rotational inertia µ, the earlier these motors will be stalled.
Next we consider a less severe fault where v_c^- < v_0 < 1.0 (see Fig. 9 and the full movie version, Movie 2: movie small fault.pdf in the SI). Although v_0 > v_c^-, the immediate post-fault voltage drops along the feeder to a point z_0 where v(z_0) < v_c^-. If the voltage profile were subsequently frozen in time, we would expect the motors with z > z_0 to behave very much as in Fig. 8, i.e. a deceleration to ω/ω_0 ∼ 0, although somewhat slower than in the previous case because the local v(z) is slightly higher. In Fig. 9b, we do observe this initial behavior, however, the local reactive power loading q (pink) again increases as the motors stall, increasing φ (red) and lowering the local v(z). Via this spatial coupling, the feeder power flows reinforce the collapsing wave allowing it to propagate to z < z_0.
Although the reinforcement process can be significant, it cannot overcome the boundary condition at z = 0 that maintains v_0 > v_c^-. As shown in Fig. 9d, the phase front finally stops at a location 0 < z < z_0 where the local voltage finally stabilizes at v(z) = v_c^-. As this occurs, local motor accelerations cease and the phase front in ω, p, and q steepens, creating a sharp demarcation between the two pure phases of normal and stalled states. One common and possibly universal feature in the two scenarios discussed above is the emergence of a "collapse" transient which can be interpreted as a quasi-stationary phase transition front of a slowly evolving "soliton" shape propagating with the speed and shape controlled by the instantaneous voltage at the point of the front, which defines the slope of the energy landscape U(ω) in Eq. (19) and Fig. 5.
B. Post-Fault Dynamics - Recovery from Stalled (or Partially Stalled) States
The severity of the v 0 depression during the fault determines the final state that will be reached if the collapse dynamics are allowed to evolve to steady state. The transmission fault may be cleared (v 0 returns to 1.0) before this steady state is reached, and we explore the dependence of the recovery dynamics on the fault time duration in Section V C. For simplicity of the present discussion, here we assume we are starting from a post-collapse steady state. Whether the feeder starts in a partially or fully-stalled post-collapse steady state, the perturbation of v 0 returning abruptly to 1.0 has the chance of restoring the feeder to the fully normal phase.
We have performed a range of simulations starting from both partially and fully-stalled steady states resulting from simulation of a depressed v_0. We then restore v_0 = 1.0 and monitor the recovery dynamics. One such case for a fully-stalled initial condition that fully recovers is shown in Fig. 10. When restoring v_0 = 1.0 is sufficient to drive a full recovery (as in Fig. 10), we observe a rather rich dynamics which can be split, roughly, into the following three stages:
• Figures 10a→b: The abrupt change of v 0 instantaneously creates a smooth voltage profile dependent upon the instantaneous adjustment of real p and reactive q load densities to the higher voltage, but at their motor's stalled rotation speed. Motors with v(z) > v + c start to accelerate, and the phase transition front is set up in the vicinity of the feeder head.
• Figures 10c→e: The recovery front matures and sharpens while propagating into the feeder. The narrow recovery front is always located near to v(z) = v_c^+ where the fast dynamics of motor acceleration and state transition occur. These fast transitions are accompanied by jumps in p and q which act in a non-local manner to push the location of v(z) = v_c^+ to larger z thus driving the transition front farther into the feeder. The propagation, although driven by the fast dynamics of motor state transitions, comprises a slow dynamics because the state transitions occur in a phase front that is narrow compared to the feeder length. Also part of the phase front is a peak in p above the steady-state values on either side of the front. This peak is required to supply the power to accelerate the motors from a stalled to a normal state. If the feeder is long enough, the phase front appears to reach a time-invariant shape that propagates much like a soliton.

• Figures 10f→h: The slow dynamics of the recovery approaches to within about one or two phase front widths of the end of the feeder. With fewer motors to accelerate, the voltage adjusts faster and the front propagation speed increases until it has consumed the entire feeder and the feeder reaches a uniform normal phase. See Fig. 10f-h for illustration. See Movie 4: movie recovery.pdf in SI for the full movie of the recovery phenomenon.

Perhaps the most remarkable feature of the recovery process is the formation, at the intermediate stage, of a "soliton" - a quasi-stationary shape moving with roughly constant speed from the head to the tail. The emergence of the "soliton" is due to the long-range interaction of the fast but local changes in p and q in the phase front to the slower but global changes in voltage v(z).
As we will see in the following Section, the feeder does not always fully recover simply because v 0 is restored to 1.0. In these cases, we have explored the effects of temporarily (or permanently) raising v 0 above 1.0 and found this approach to be quite effective in restoring feeders that would have otherwise remained stalled. However, such intelligent control action requires reliable detection of the entry into a FIDVR event and fast switching and/or device control to increase v 0 to a sufficient level. We postpone full analysis of such a control to future work.
C. Dynamic Transition: Will a Feeder Enter a FIDVR State?
Our simulation results suggest that the normal phase, the fully stalled phase, and any of the partially stalled phases (with the feeder split in the normal head and stalled tail) can be the final stationary and stable point of a dynamical evolution. In this Section, we explore the properties of the final state as the properties of the feeder (length L and motor inertia µ) and the fault (magnitude of voltage depression ∆v and duration T_pertu) are varied. Our goal is to develop an initial understanding of the "non-equilibrium phase diagram" that controls whether or not the feeder recovers to a fully normal phase. In the test discussed below, we consider a feeder with L > L_c because those with L < L_c are known to always recover to a fully normal phase no matter the size or duration of the perturbation. In our study of the phase diagram, we dissect the four-parameter space (L, µ, ∆v, T_pertu) in six different ways:

1. fixing µ and T_pertu and exploring the (L, ∆v) subspace, see Fig. 11;
2. fixing µ and ∆v and exploring the (L, T_pertu) subspace, see Fig. 12;
3. fixing ∆v and T_pertu and exploring the (L, µ) subspace, see Fig. 13;
4. fixing L and µ and exploring the (T_pertu, ∆v) subspace, see Fig. 14;
5. fixing the values of L > L_c and (sufficiently large) ∆v and exploring the (µ, T_pertu) subspace, see Fig. 15;
6. fixing L and T_pertu and exploring the (µ, ∆v) subspace, see Fig. 16.

Starting as usual with the feeder in a normal state with v_0 = 1, we apply a v_0 depression of magnitude ∆v and duration T_pertu with subsequent recovery to v_0 = 1 and then integrate the dynamics until the feeder reaches a steady state. Unless all of the motors on the feeder recover to a normal state, the feeder final state is classified as partially stalled. In all of the subsequent Figures, the filled red circles indicate this boundary between a fully normal phase feeder and a partially-stalled feeder. From our studies of the subspaces and the following Figures, we can make the following conclusions:

• From Figs. 11-13, it is apparent that the feeder becomes less resilient to perturbations as it grows in length and that there is an upper feeder length L* ≈ 0.63 that requires an infinitesimal perturbation to force it into a partially-stalled phase. In fact, if the feeder is too long (i.e. L > L*), the normal phase is no longer stable (v(L) ≤ v_c^-) and the feeder always has a partially stalled phase. Therefore, for the remainder of this study, feeders with L_c < L < L* are of interest because feeders with L outside this range are either robust to all perturbations or always unstable to a partially-stalled phase.
• From Fig. 14, it is apparent that there is a value of ∆v ≈ 0.05 such that the feeder will never stall no matter how long the perturbation is applied. At ∆v = 0.05, v_0 = 0.95 and the static voltage drop at these reduced voltages would make v(L) ≈ 0.83 = v_c^-. From this interpretation, this lower bound on ∆v for extremely long T_pertu can be computed from the static power flow equations by looking for the v_0 that forces v(L) = v_c^-.
• The two previous conditions can be computed from static considerations. The interesting dynamics behind the transition to a partially-stalled state is then expressed in Figs. 15 and 16, which can be understood by considering the decline of the motor rotational frequency that occurs at the far end of the feeder during the time T_pertu of the fault. We can crudely approximate this decline by the product of fault duration and the rate of frequency decline immediately after the application of the fault, i.e. ∆(ω/ω_0)|_{t=T_pertu} ≈ −(2P/(v_0 ω_0^2)) (T_pertu ∆v/µ), where we have used Eq. (14) and a linear expansion of Eq. (15). It is reasonable to expect that the boundary between a feeder that fully recovers and one that is partially stalled would be expressed by ∆(ω/ω_0)|_{t=T_pertu} ≈ constant. If such a relationship is found to hold, it would imply (T_pertu ∆v/µ) ≈ constant, which, within the scope of our parametric study, is in rough agreement with the results in Figs. 15 and 16, especially if we account for the minimum required value of ∆v discussed immediately above.
The approximate analysis above provides a good qualitative understanding of the fault and feeder parameters that lead to a feeder with a partially-stalled phase. However, more rigorous analytical analysis of Eqs. 11-17 is required to put these conclusions on firm footing.
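The scaling argument above can be turned into a rough predictor of the phase boundary. The sketch below evaluates the estimated magnitude of the frequency decline, (2P/(v_0 ω_0^2))(T_pertu ∆v/µ), and prints, for a few fault depths, the fault duration beyond which the decline exceeds a hypothetical critical value; both the parameters and the threshold are placeholders, and the actual boundary must of course come from the PDE simulations:

# Crude analytic estimate of the recovery boundary in the (dv, T_pertu) plane.
P, v0, omega0, mu = 0.3, 1.0, 1.0, 0.05
critical_decline = 0.5        # hypothetical drop in omega/omega0 beyond which the feeder stalls

def predicted_decline(T_pertu, dv):
    """Magnitude of the estimated frequency decline at the end of the fault."""
    return (2.0 * P / (v0 * omega0**2)) * (T_pertu * dv / mu)

for dv in (0.05, 0.10, 0.15, 0.20):
    # fault duration at which the predicted decline reaches the critical value
    T_boundary = critical_decline * mu * v0 * omega0**2 / (2.0 * P * dv)
    print("dv = %.2f: faults longer than T_pertu ~ %.3f are predicted to stall the tail"
          % (dv, T_boundary), "(check:", round(predicted_decline(T_boundary, dv), 2), ")")

Because the estimate depends only on the product T_pertu*dv/mu, the predicted boundary is a hyperbola in the (dv, T_pertu) plane, consistent with the qualitative shape of the simulated boundaries.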
Beyond just determining the boundary between a fullynormal and partially-stalled feeder final states, the characteristics of the voltage fault also determine the number of motors that will be stalled after the fault is cleared. The more severe the fault (in duration T pertu or in amplitude ∆v), the more motors will get stalled, up to a point. If the fault is severe enough (∆v ≥ ∆v * or T pertu ≥ T * pertu ), the number of stalled motors will not increase any more. For example, the maximum number of stalled motors is reached in Fig. 17d for a given set of parameters of the feeder line, and an even more severe fault will end up in this same state after it is cleared.
On the other hand, the less severe the fault the less the number of stalled motors, and the system can reach a continuous set of stable partially-stalled phases between the maximum number of stalled motors and none of the motors stalled. Fig. 17 shows examples of the different phases the system can reach, and the file movie different states.pdf in SI illustrates the dynamics leading to these final states.
VI. DISCUSSIONS & PATH FORWARD
In this manuscript, we have modeled, simulated, analyzed and explained how the electro-mechanical dynamics of induction motor-loaded distribution feeders can lead to Fault-Induced Delayed Voltage Recovery (FIDVR), an effect observed recently in power distribution feeders. We approached these dynamics and the FIDVR events from the stand-point of physics -explaining and interpreting them as an instance of a broader class of nonlinear electro-mechanical phenomena in power distribution systems. The main ideas and questions discussed in this work can be summarized as follows:
• The 1+1 (space+time) continuous model, introduced in 16 and developed here, offers a computationally efficient framework for analysis of nonlinear and dynamical phenomena in distribution feeders, and crucially, the PDE model enables analogies with other known spatiotemporal dynamical systems and solutions to help build intuition about the dynamical behavior of the electrical grid's electro-mechanical dynamics.
• By coupling a spatially local model of an individual motor -that describes its nonlinear, bi-stable and hysteretic switching between normal and the stalled statesto a continuum version of the electrical power flow equations, we are able to explain the coexistence of a spatially-extended normal phase with a stalled phase (motors at the head portion of the feeder are in the normal state, while motors in the tail portion of the feeder are stalled).
• The emergence of the multiple spatially-extended states is interpreted in terms of a first-order phase transition where the distribution of the motor rotational frequency along the feeder is the order parameter. The voltage distribution along the feeder plays the role of an external field (degree of freedom) that modifies an effective energy potential for motor frequency. Different (normal, stalled or partially stalled) spatially-extended phases are stabilized and reach a steady state by achieving a global electro-mechanical balance of their voltage and frequency distributions along the feeder.
• Sufficiently strong perturbations, e.g. sudden drops or rises in voltage at the head of the feeder, lead to transients in the form of propagating phase fronts that separate normal-state motors at the head of the feeder from stalled-state motors in the tail of the feeder. We analyzed and classified the different types of transients and resulting steady-state phase distributions that emerge after the transients settle for different perturbation strengths and lengths of the feeder.
• We also experimented with the dynamics of mechanical frequency and voltage phase distributions under more realistic, but more complex, two-step perturbations -a voltage drop at the head of the feeder followed shortly by restoration back to its nominal value -that are expected to simulate the real world perturbations that re- sult in FIDVR events. The dynamics and emerging steady states were explored for different voltage perturbations (depth and duration of the voltage depression) and feeder characteristics (feeder length and inertia of the connected induction motors).
Major conclusions drawn from our numerical experiments and analysis are
• Hysteresis. The system is strongly hysteretic: reversing a perturbation does not lead to a simple reversal of the dynamical trajectory.
• Recovery Conditions. When the feeder is short enough or the voltage perturbation is weak enough (small enough in amplitude or short enough duration), the feeder recovers to a fully normal phase following a voltage perturbation thus avoiding a FIDVR event. Longer feeders or stronger voltage perturbations lead to incomplete recovery and, by modifying the three parameters beyond the recovery threshold, one explores a continuous family of different partially-stalled phases.
• Self-Similar Transients. When a feeder is sufficiently long, recovery transients appear to show universal soliton-like phase fronts with normal phase propagating into the stalled phase with an (approximately) constant speed and time-invariant shape.
This manuscript opens up a new line of research into physics-based analysis of transients and phase transitions in distribution feeders. We plan to continue this work focusing on the following generalizations and extensions:
• We modelled distribution feeders as consisting of identical induction motor loads distributed uniformly along the feeder. In reality, the motors may be different and distributed non-uniformly along the feeder, their distribution and parameters may fluctuate in time, and they are present with many other different types of loads. We will extend our purely deterministic analysis to a probabilistic framework to describe the effects of these forms of disorder and noise to resolve questions such as: what is the probability that the feeder with a given level of disorder will recover after a perturbation of given amplitude and duration and not enter a FIDVR state?
• As recently shown in 29 , controls associated with distributed generation, e.g. inverters coupled to distributed photovoltaic (PV) generation, can also be incorporated into the spatially continuous modelling framework (ODE framework). The static model of 29 suggests that a feeder with a sufficiently large penetration of the PV generation may show a rather complicated bifurcation diagram with the emergence of multiple low-voltage solutions that are related to the lowvoltage, stalled solutions discussed in this manuscript. Much like the effects discussed here, these inverterdriven low-voltage states are not the result of the behavior of a single inverter, but rather result from the collective action of many inverters and their and interaction with the nonlinearity and nonlocal behavior of the power flow equations. To address these dynamical problems in a more comprehensive manner, we will extend dynamical description of this manuscript to the case of distributed generation, and more generally, to the case of a feeder containing a mixed portfolio of distributed generation and different types of loads, including induction motors.
• In our first physics-based paper on the subject of electro-mechanical dynamical transients in distribution feeders, we relied mainly on numerical experiments. However, the problem formulation may allow asymptotic theoretical analysis, in particular: accurate resolution of the phase transition boundaries, determination of the bifurcation (spinodal) points, propagation of soliton-like phase fronts, and analysis of the tails of the distribution functions that account for the aforementioned effects of noise and disorder.
• We have focused primarily on describing electromechanical dynamics and the perturbations and transients that lead to FIDVR events. Armed with the comprehensive understanding gained in the process, we are ready to attack the larger question of distribution feeder voltage control. Specifically, what is the least control effort needed to avoid a FIDVR event following a given type of fault?
d. Movie 4: "movie recovery.pdf"
This movie (see 33) is identical to Movie 3, except that the length of the feeder line now is significantly shorter with L < L_c. In this case, there is no hysteretic behavior after a voltage fault is cleared - the feeder recovers completely because the shorter L has diminished the non-local effects of the motors near the tail of the feeder. The recovery process takes a long time, allowing us to distinguish different phases of the process. First, at 1 < t < 1.3 the beginning of the line accelerates quickly; then, at 1.3 < t < 9.5, one identifies emergence of the recovery front, which propagates down the feeder with a nearly stationary soliton-like shape. The recovery front starts to change shape as it approaches and interacts with the boundary condition at the feeder tail. The absence of motors to accelerate beyond z = L leads to fast recovery of the normal states in the vicinity of the tail for t > 9.5.

e. Movie 5: "movie different states.pdf"
There are in fact five movies (see 34) glued together in one file, with a different movie on each page of the .pdf. Each of the movies shows the dynamics of the feeder line during a voltage fault and after the fault is cleared. Characteristics of the feeder and faults are identical in all the movies, except we increase T_pertu from segment to segment. We observe that the number of motors stalled in steady state increases with T_pertu, showing that different partially-stalled states can be reached depending on the duration of the fault. However, we also notice the existence of an upper limit to the number of motors that can be stalled, i.e. no matter how severe the fault, half of the line will always recover. The last two movies in the file illustrate this phenomenon, i.e. even though the fourth segment has T_pertu = 0.5 and the fifth segment has T_pertu = 1, they both result in identical steady states with the same number of stalled motors.
FIG. 1: Electric and mechanical torques as functions of the mechanical frequency ω/ω_0 for a range of motor terminal voltages v, reference mechanical torque T_0 = 0.32, and α = 0.1. For the v = 0.86 electrical torque curve (red), there are three equilibrium solutions indicated by intersections with the mechanical torque curve (black). The solution with the highest ω/ω_0 is the "normal" stable solution with the induction motor rotating near the grid frequency ω_0. The "stalled" state with ω/ω_0 ≈ 0 is also stable while the intermediate solution is unstable. For v > v_c^+ = 0.9 (light blue curve), there is only one solution corresponding to the normal state. For v < v_c^- = 0.83 (green curve), there is only one solution corresponding to the stalled state. The points (a, b, c, d) correspond to the same labels in Fig. 2.
FIG. 2: Hysteretic behavior of an induction motor in Fig. 1 as the voltage v at its terminals is varied. The dashed red (solid black) curves indicate the path of equilibrium states as the voltage v is decreased (increased) starting from the high-voltage normal (low-voltage stalled) state. The vertical lines at the spinodal voltages v_c^± indicate the abrupt hysteretic transitions between states. v_c^± correspond to the same labels in the legend of Fig. 1 and in Fig. 3. The points (a, b, c, d) correspond to the states where the motor must make transitions from normal to stalled (c → a) and from stalled to normal (b → d).

FIG. 3: Typical (P, Q) versus v curves for the same motor as in Figs. 1 and 2 and described by Eqs. (1,2,3,4). Three solutions (two stable and one unstable) are observed between the spinodal voltages v_c^- and v_c^+.
FIG. 4: A feeder line modeled with a) a discrete set of electrical loads and with b) a continuous distribution of loads.
FIG. 5: Local potential U(ω) for different values of the voltage v and α = 0.1 demonstrating the continuous evolution of the normal and stalled states from only the normal state at high voltage (purple), coexisting normal and stalled states at intermediate voltages (red), and only the stalled state at low voltage (dark blue). The light blue curve is for the spinodal voltage v_c^+ - the boundary between the high and intermediate voltage cases. The green curve is for the spinodal voltage v_c^- - the boundary between the low and intermediate voltage cases. The inset is a view of the region near ω = 0 showing the emergence of a stalled ω ≈ 0 minimum of the potential for v < v_c^+. The states labelled (a, b, c, d) are the same as those in Figs. 1-3.
FIG. 6: Sequence of snapshots of a simulation for a feeder with L = 0.45 < L_c that undergoes a short voltage fault. In this case L is short enough so that the long-range interactions via power flows and voltage are insufficient to promote the local hysteresis at each motor into globally hysteretic behavior. Snapshot (a) shows the beginning of the sequence (immediately post fault) where the small inertia of the motors maintains ω/ω_0 ∼ 1 (blue) so that the reactive power load density q (pink) and reactive power flow φ (red) are low and the voltage profile (black) relatively flat, although near 0.8. Snapshot (b) shows the feeder at a time after the reduction of electrical torque has decelerated all of the motors to ω/ω_0 ∼ 0 (blue) with an associated rise in reactive load density q (pink) and reactive power flow φ (red) causing the voltage profile (black) to droop more than in (a). Snapshot (c) shows the situation after the fault is cleared with v_0 restored to 1.0. Although the reactive power flow φ is still high, it is insufficient to force v < v_c^+ and the motors relax back to the normal state as the phase front propagates from left to right. The dynamical nature of this transition is evident from the real power load density p (yellow). The peak in p at the front is a result of the acceleration of the motors as they transition from the stalled to normal state (see Eq. (14)). The motors shown accelerating in (c) eventually reach ω/ω_0 ∼ 1, and the feeder relaxes to a globally-normal state. See Appendix B and Supplementary Information for the respective movie (Movie 4: movie recovery.pdf).
FIG. 7: Voltage fault and subsequent hysteretic recovery for L = 0.50 > L_c. The sequence of events (fault, transient and initial recovery) is identical to the one shown in Fig. 6, however the recovery is not complete. The additional loading from the motors between z = 0.45 and z = 0.50 generates long-range interactions that force v < v_c^+ for z > 0.22. Although v_0 has been restored to 1.0, the motors with v < v_c^+ do not recover to a normal state, even though many of them do have such a stable state. In this case, the long-range interactions are sufficiently strong to promote the local hysteretic behavior of each motor into globally hysteretic behavior. See Appendix B and Supplementary Information for the respective movie (Movie 3: movie hysteresis.pdf).
FIG. 8: Sequence of snapshots illustrating the collapse dynamics during a large voltage fault with v_0 < v_c^- and L = 0.6 > L_c. Snapshot (a) shows the situation just after the application of the fault - although v(z) < v_c^-, the motors continue to rotate with ω/ω_0 ∼ 1 because of their inertia. Snapshot (b) corresponds to a short time after the application of the fault when the motors are just starting to decelerate. The more remote motors experience smaller v and steeper U(ω). Their faster deceleration results in smaller ω/ω_0 at these remote locations. The collapse is reinforced by an increase in q (pink) as the motors reach lower ω. Snapshot (c) is taken later in the process: the end of the line is completely stalled, the wave of deceleration starts to propagate backwards, from the tail to the head. Snapshot (d) shows the final phase: all the motors are stalled. Note a significant increase in the reactive power drawn by the stalled feeder. See Movie 1: movie large fault.pdf in SI for a dynamic version of this process.

FIG. 9: Collapse dynamics caused by a small voltage fault with v_0 > v_c^- and L = 0.6 > L_c. The collapse proceeds slower than in Fig. 8 because the higher voltages result in shallower U(ω). Snapshot (a) corresponds to a short time after the fault as the motors have just begun to decelerate; however, the rise in q (pink) is already reinforcing the collapse. Snapshot (b) corresponds to later during the fault where the end of the line is stalled and the deceleration front is starting to feel the influence of the boundary condition at z = 0. Snapshot (c): the deceleration front has become nearly stationary as the long-range interactions can no longer overcome the boundary condition at z = 0. Snapshot (d) shows the feeder in the stabilized, partially-stalled steady state. See Movie 2: movie small fault.pdf in SI for a dynamic version of this process.
FIG. 10: Eight snapshots of the evolution of a feeder from a steady-state, fully-stalled phase to the fully-normal phase following the restoration of v_0 = 1.0. (a) The final fully-stalled steady state reached after simulating the feeder for a v_0 = 0.8 fault voltage depression. (b) v_0 is restored to 1.0 and the motors at the head of the feeder start to accelerate. (c) A recovery front is built which starts to propagate into the feeder. (d) The front continues to propagate at roughly constant speed while also showing a universal "soliton"-like shape. (e) The front continues to propagate with roughly the same speed and a universal "soliton"-like shape. (f) Completion of the recovery process where interaction with the end of the feeder accelerates the recovery front. (g) The recovery front still propagates while the shape of the front starts to change because of interactions with the end of the feeder. (h) End of the recovery process: the entire feeder is back to the normal state. See Movie 4: movie recovery.pdf in SI for dynamical illustration.
FIG. 11: Phase diagram of the dynamic transition from the normal phase to a partially stalled phase. All the points above the curve lead to a partially stalled phase, all the points under the curve result in a normal state. For L ≤ L_c ≈ 0.46, there is no hysteresis and the system always ends in a fully running phase.
FIG. 12: Phase diagram of the dynamic transition from the normal phase to a partially stalled phase. All the points above the curve lead to a partially stalled phase, all the points under the curve result in a normal phase. For L ≤ L_c ≈ 0.46, there is no hysteresis and the system always ends in a fully running phase.

FIG. 13: Phase diagram of the dynamic transition from the normal phase to a partially stalled phase. All the points above the curve lead to a partially stalled phase, all the points under the curve result in a normal phase. For L ≤ L_c ≈ 0.46, there is no hysteresis and the system always ends in a fully running phase.
FIG. 14: Phase diagram of the dynamic transition from the normal phase to a partially stalled phase. All the points above the curve lead to a partially stalled phase, all the points under the curve result in a normal phase.

FIG. 15: Phase diagram of the dynamic transition from the normal state to a partially stalled state. All points under the curve lead to a partially stalled regime, all points above the curve result in the normal final state.
FIG. 16: Phase diagram of the dynamic transition from the normal state to a partially stalled phase. All the points above the curve lead to a partially stalled phase, all the points under the curve result in a normal phase.
FIG. 17: Different final states can be achieved by manipulating the duration of the perturbation (T_pertu) while its amplitude ∆v remains constant: (a) T_pertu = 0.221 and ∆v = 0.15; (b) T_pertu = 0.23 and ∆v = 0.15; (c) T_pertu = 0.3 and ∆v = 0.15; (d) T_pertu = 0.5 and ∆v = 0.15. From left to right, the fault is longer and longer and the number of stalled motors increases. See file movie different states.pdf in SI for five movies illustrating the different phases that can be reached and the dynamics that lead to the phases.
Acknowledgments

We are thankful to David Chassin and Ian Hiskens for very helpful discussions, advice and explanations on the history of the FIDVR-related research and literature. We are also thankful to Igor Kolkolov, Vladimir Lebedev and Konstantin Turitsyn for comments and suggestions. This material is based upon work supported by the National Science Foundation award # 1128501, EECS Collaborative Research "Power Grid Spectroscopy" under NMC. The work at LANL was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396.

Appendix A: Methods

Analysis of Static ODE

To solve numerically Eqs. (11,12,13,14,15,16) in the stationary (time-independent) case, we proceed as follows:
• using the mechanical torque t = t_0 (ω/ω_0)^α (i.e. t ∈ [0, t_0]) as a scanning parameter, we "map" the values of p, q, ω and v through Eqs. (14,15,16);
• for all these "maps", we choose only one part of the hysteresis curve (either the one when we start from low voltage, or the one from high voltage);
• we then solve Eqs. (11,12,13) using the "maps" of the other variables and the MATLAB bvp4c solver.
Note that some of the stationary solutions discovered in the results of dynamic exploration were missed by the static analysis detailed above.

Dynamic Simulations

To solve numerically Eqs. (11,12,13,14,15,16), we employ space-time discretization and use an explicit finite differences scheme. Our dynamical simulations are split in two steps.
• During the first sub-step we take p and q fixed (outputs of the previous time step) and solve the feeder-global static Eqs. (11,12,13) for v, φ and ρ under conditions of the fixed voltage at the head of the line and zero fluxes at the end of the line (Eq. (17)). This sub-step instantaneously imposes a spatially smooth and globally correlated voltage profile.
• Once the global v, ρ, φ variables are fully updated, we update on the second sub-step the local variables ω, p, q. For ω, we use its explicit time dependence, i.e. the forward-Euler step of Eq. (14), ω(t + ∆_t) = ω(t) + (∆_t/µ)[p/ω_0 − t_0 (ω/ω_0)^α], where ∆_t = T/N_t is the time step (N_t is the number of steps, and T the time of the simulation). Once ω is updated, we easily get the updated values of p and q using Eqs. (15,16).

Appendix B: Movies

a. Movie 1: "movie large fault.pdf"

This movie (see 30) shows the dynamical transient following a large drop in v_0 to 0.8, i.e. v_0 < v_c^-. At t < 0, the line is in a stable stationary state where all the motors are in the normal state. At t = 0+ (the first picture in the movie), v_0 suddenly drops from 1 to 0.8, and the voltage profile along the line (black curve) immediately responds. In response to the lower v(z), the motors' rotational frequency (blue) decreases everywhere along the line at a rate dependent on the local value of the voltage - motors closer to the head of the line, where the voltage is larger, decelerate at a lower rate. Motors at the far end of the line decelerate faster and become stalled first. The stalling of these remote motors results in an increase in their local reactive load q (pink) and the overall reactive power flow φ (red), which reinforces the reduction in v(z), creating a normal-to-stalled transition front that propagates from the tail to the head of the feeder. Eventually, the entire feeder becomes fully stalled.
b. Movie 2: "movie small fault.pdf"

The response of the voltage and the mechanical frequency of the motors in the tail portion of the line is similar to that in Movie 1 - the voltage near the feeder tail is too low and they decelerate, with those nearer the tail getting to the stalled ω/ω_0 ∼ 0 state first. The transition to the stalled state expands as it is again reinforced by the increase in reactive loading q (pink) and reactive power flow φ (red). However, the phase front slows down, sharpens, and eventually stops propagating as it begins to feel the boundary condition at z = 0 that does not allow the motors near the head of the feeder to stall. The result is a half-stalled, mixed phase distribution with motors in the tail part of the line stalled, while motors near the feeder's head are in the normal state. The slowing down of the front is related to the effect of critical slow down typical of first-order phase transitions as they approach a spinodal point, i.e. as v(z) approaches v_c^-.

c. Movie 3: "movie hysteresis.pdf"

This movie (see 32) shows hysteretic behavior of a feeder with L > L_c, i.e. the application of an opposite perturbation does not lead to a reversal of the dynamical trajectory back to the original state. From t = 0 to 0.9, the movie is the same as Movie 1 "movie large fault.pdf", i.e. v_0 is reduced to 0.8 at t = 0 and the feeder shows a complete collapse into the fully stalled phase. At t = 0.9, the perturbation is reversed as v_0 is restored to 1.0. The increased voltage causes the motors at the head of the feeder to accelerate and return to the normal state, and a propagating phase front is formed (more details about this recovery front can be found in the description of Movie 4 "movie recovery.pdf"). The front advances towards the tail until it reaches the point where the voltage has fallen to v_c^+ (the voltage above which a stalled motor accelerates). At this point, the phase front can proceed no further and we end up with a partially stalled feeder. The voltage is unable to increase further because of the non-local reinforcement of the voltage drop by the stalled motors beyond the point where v = v_c^+. Similarly to what was seen in Movie 2 (and what is generally the case when the system approaches a spinodal point), the dynamics gets slower as the moving front approaches the critical voltage v_c^+.
D. Kosterev, A. Meklin, J. Undrill, B. Lesieutre, W. Price, D. Chassin, R. Bravo, and S. Yang. Load modeling in power system studies: WECC progress update. In Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE, pages 1-8, July 2008.
Fault-induced delayed voltage recovery (fidvr). NERC Technical Reference Group. NERCTechnical reportNERC Technical Reference Group. Fault-induced delayed voltage recovery (fidvr). Technical report, NERC, 2009.
Voltage stability of radial power links. B M Weedy, B R Cox, Electrical Engineers, Proceedings of the Institution of. 115B.M. Weedy and B.R. Cox. Voltage stability of radial power links. Electrical Engineers, Proceedings of the Institution of, 115(4):528 -536, april 1968.
Estimation of electrical power system steady-state stability in load flow calculations. Power Apparatus and Systems. V A Venikov, V A Stroev, V I Idelchick, V I Tarasov, IEEE Transactions on. 943V.A. Venikov, V.A. Stroev, V.I. Idelchick, and V.I. Tarasov. Es- timation of electrical power system steady-state stability in load flow calculations. Power Apparatus and Systems, IEEE Transac- tions on, 94(3):1034 -1041, may 1975.
Transient Processes in Electrical Power Systems. V A Venikov, English Translation, MIR PublishersMoscowV. A. Venikov. Transient Processes in Electrical Power Systems. English Translation, MIR Publishers, Moscow, 1977.
Power System Voltage Stability. C W Taylor, McGraw-Hill IncC.W. Taylor. Power System Voltage Stability. McGraw-Hill Inc., 1994.
Voltage Stability of Electric Power Systems. T Van Cutsem, C Vournas, SpringerT. Van Cutsem and C. Vournas. Voltage Stability of Electric Power Systems. Springer, 1998.
Voltage instability: phenomena, countermeasures, and analysis methods. T Van Cutsem, Proceedings of the IEEE. 882T. Van Cutsem. Voltage instability: phenomena, countermeasures, and analysis methods. Proceedings of the IEEE, 88(2):208 -227, feb 2000.
Transmission voltage recovery delayed by stalled air conditioner compressors. Power Systems. B R Williams, W R Schmus, D C Dawson, IEEE Transactions on. 73B.R. Williams, W.R. Schmus, and D.C. Dawson. Transmission voltage recovery delayed by stalled air conditioner compressors. Power Systems, IEEE Transactions on, 7(3):1173 -1181, aug 1992.
Air conditioner response to transmission faults. Power Systems. J W Shaffer, IEEE Transactions on. 122J.W. Shaffer. Air conditioner response to transmission faults. Power Systems, IEEE Transactions on, 12(2):614 -621, may 1997.
Stability analysis of induction motors network. D H Popovic, I A Hiskens, D J Hill, Electrical Power and Energy Systems. 207D.H. Popovic, I.A. Hiskens, and Hill D.J. Stability analysis of induction motors network. Electrical Power and Energy Systems, 20(7):475-487, 1998.
Continuum modeling of electromechanical dynamics in large-scale power systems. Circuits and Systems I: Regular Papers. M Parashar, J S Thorp, C E Seyler, IEEE Transactions on. 519septM. Parashar, J.S. Thorp, and C.E. Seyler. Continuum model- ing of electromechanical dynamics in large-scale power systems. Circuits and Systems I: Regular Papers, IEEE Transactions on, 51(9):1848 -1858, sept. 2004.
Electromechanical wave propagation in large electric power systems. Circuits and Systems I: Fundamental Theory and Applications. J S Thorp, C E Seyler, A G Phadke, IEEE Transactions on. 456J.S. Thorp, C.E. Seyler, and A.G. Phadke. Electromechanical wave propagation in large electric power systems. Circuits and Systems I: Fundamental Theory and Applications, IEEE Transac- tions on, 45(6):614 -622, June 1998.
Synchronization and transient stability in power networks and non-uniform kuramoto oscillators. F Dörfler, F Bullo, American Control Conference (ACC). F. Dörfler and F. Bullo. Synchronization and transient stability in power networks and non-uniform kuramoto oscillators. In Amer- ican Control Conference (ACC), 2010, pages 930 -937, 30 2010- july 2 2010.
Synchronization in Complex Oscillator Networks and Smart Grids. F Dörfler, M Chertkov, F Bullo, ArXiv e-printsF. Dörfler, M. Chertkov, and F. Bullo. Synchronization in Com- plex Oscillator Networks and Smart Grids. ArXiv e-prints, 2012.
Voltage Collapse and ODE Approach to Power Flows: Analysis of a Feeder Line with Static Disorder in Consumption/Production. M Chertkov, S Backhaus, K Turtisyn, V Chernyak, V Lebedev, M. Chertkov, S. Backhaus, K. Turtisyn, V. Chernyak, and V. Lebe- dev. Voltage Collapse and ODE Approach to Power Flows: Analysis of a Feeder Line with Static Disorder in Consump- tion/Production. http://arxiv.org/abs/1106.5003, 2011.
An interim dynamic induction motor model for stability studies in the wscc. L Pereira, D Kosterev, P Mackin, D Davies, J Undrill, Wenchun Zhu, IEEE Transactions on. 174Power SystemsL. Pereira, D. Kosterev, P. Mackin, D. Davies, J. Undrill, and Wenchun Zhu. An interim dynamic induction motor model for stability studies in the wscc. Power Systems, IEEE Transactions on, 17(4):1108 -1115, nov 2002.
Load modeling transmission research. B Lesieutre, R Bravo, R Yinger, D Chassin, H Huang, N Lu, I Hiskens, G Venkataramanan, LBNL. Technical reportB. Lesieutre, R. Bravo, R. Yinger, D. Chassin, H. Huang, N. Lu, I. Hiskens, and G. Venkataramanan. Load modeling transmission research. Technical report, LBNL, 2010.
Phasor modeling approach for single phase a/c motors. B Lesieutre, D Kosterev, J Undrill, Power and Energy Society General Meeting -Conversion and Delivery of Electrical Energy in the 21st Century. B. Lesieutre, D. Kosterev, and J. Undrill. Phasor modeling ap- proach for single phase a/c motors. In Power and Energy Society General Meeting -Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE, pages 1 -7, july 2008.
System model validation studies in wecc. D Kosterev, D Davies, Power and Energy Society General Meeting. D. Kosterev and D. Davies. System model validation studies in wecc. In Power and Energy Society General Meeting, 2010 IEEE, pages 1 -4, july 2010.
Optimal sizing of capacitors placed on a radial distribution system. Power Delivery. M Baran, F F Wu, IEEE Transactions on. 41M. Baran and F.F. Wu. Optimal sizing of capacitors placed on a radial distribution system. Power Delivery, IEEE Transactions on, 4(1):735-743, Jan 1989.
Optimal capacitor placement on radial distribution systems. Power Delivery. M E Baran, F F Wu, IEEE Transactions on. 41M.E. Baran and F.F. Wu. Optimal capacitor placement on ra- dial distribution systems. Power Delivery, IEEE Transactions on, 4(1):725-734, Jan 1989.
Distributed control of reactive power flow in a radial distribution circuit with high photovoltaic penetration. K Turitsyn, P Sulc, S Backhaus, M Chertkov, Power and Energy Society General Meeting. K. Turitsyn, P. Sulc, S. Backhaus, and M. Chertkov. Distributed control of reactive power flow in a radial distribution circuit with high photovoltaic penetration. In Power and Energy Society Gen- eral Meeting, 2010 IEEE, pages 1 -6, july 2010.
Local control of reactive power by distributed photovoltaic generators. K Turitsyn, P Sulc, S Backhaus, M Chertkov, Smart Grid Communications (SmartGridComm), 2010 First IEEE International Conference on. K. Turitsyn, P. Sulc, S. Backhaus, and M. Chertkov. Local con- trol of reactive power by distributed photovoltaic generators. In Smart Grid Communications (SmartGridComm), 2010 First IEEE International Conference on, pages 79 -84, oct. 2010.
Options for control of reactive power by distributed photovoltaic generators. K Turitsyn, P Sulc, S Backhaus, M Chertkov, Proceedings of the IEEE. 996K. Turitsyn, P. Sulc, S. Backhaus, and M. Chertkov. Options for control of reactive power by distributed photovoltaic generators. Proceedings of the IEEE, 99(6):1063 -1073, june 2011.
Fluctuation Theory of Phase Transitions. A Z Patashinskii, V L Pokrovskii, Pergamon PressA.Z. Patashinskii and V.L. Pokrovskii. Fluctuation Theory of Phase Transitions. Pergamon Press, 1979.
The Stefan Problem. L I Rubinstein, American Mathematical Society -Translation of Mathematical MonographsL.I. Rubinstein. The Stefan Problem. American Mathematical Society -Translation of Mathematical Monographs, 1971.
DistFlow ODE: Modeling, Analyzing and Controlling Long Distribution Feeder. D Wang, K Turitsyn, M Chertkov, Proceedings of CDC 2012. CDC 2012D. Wang, K. Turitsyn, and M. Chertkov. DistFlow ODE: Model- ing, Analyzing and Controlling Long Distribution Feeder. In Pro- ceedings of CDC 2012, http://arxiv.org/abs/1209. 5776.
Wetting on a spherical wall: influence of liquid-gas interfacial properties

Andreas Nold, Alexandr Malijevský, and Serafim Kalliadasis

Department of Chemical Engineering, Imperial College London, SW7 2AZ London, United Kingdom
Center of Smart Interfaces, TU Darmstadt, Petersenstr. 32, 64287 Darmstadt, Germany
Department of Physical Chemistry, Institute of Chemical Technology, 166 28 Prague 6, Czech Republic
E. Hála Laboratory of Thermodynamics, Institute of Chemical Process Fundamentals of ASCR, 165 02 Prague 6, Czech Republic

arXiv:1103.6125 (10 Jun 2011); DOI: 10.1103/PhysRevE.84.021603
(Dated: January 18, 2013)
PACS numbers: 05.20.Jj, 71.15.Mb, 68.08.Bc, 05.70.Np

We study the equilibrium of a liquid film on an attractive spherical substrate for an intermolecular interaction model exhibiting both fluid-fluid and fluid-wall long-range forces. We first re-examine the wetting properties of the model in the zero-curvature limit, i.e. for a planar wall, using an effective interfacial Hamiltonian approach in the framework of the well known sharp-kink approximation (SKA). We obtain very good agreement with a mean-field density functional theory (DFT), fully justifying the use of SKA in this limit. We then turn our attention to substrates of finite curvature and appropriately modify the so called soft-interface approximation (SIA) originally formulated by Napiórkowski and Dietrich [Phys. Rev. B 34, 6469, (1986)] for critical wetting on a planar wall. A detailed asymptotic analysis of SIA confirms the SKA functional form for the film growth. However, in this functional form SKA approximates the surface tension with that of a sharp interface. This overestimates the liquid-gas surface tension and thus SKA is only qualitative rather than quantitative. On the other hand, relaxing the assumption of a sharp interface, e.g. with even a simple "smoothing" of the density profile there, improves the predictive capability of the theory markedly, making it quantitative and showing that the liquid-gas surface tension plays a crucial role when describing wetting on a curved substrate. In addition, we show that in contrast to SKA, SIA predicts the expected mean-field critical exponent of the liquid-gas surface tension.
I. INTRODUCTION
The behavior of fluids in confined geometries, in particular in the vicinity of solid substrates, and associated wetting phenomena are of paramount significance in numerous technological applications and natural phenomena. Wetting is also central in several fields, from engineering and materials science to chemistry and biology. As a consequence, it has received considerable attention, both experimentally and theoretically for several decades. Detailed and comprehensive reviews are given in Refs. [1][2][3][4].
Once a substrate (e.g. a solid wall) is brought into contact with a gas, the substrate-fluid attractive forces cause adsorption of some of the fluid molecules on the substrate surface, such that at least a microscopically thin liquid film forms on the surface. The interplay between the fluid-fluid interaction (cohesion) and the fluidwall interaction (adhesion) then determines a particular wetting state of the system. This state can be quantified by the contact angle at which the liquid-gas interface meets the substrate. If the contact angle is non-zero, i.e. a spherical cap of the liquid is formed on the substrate, the surface is called partially wet. In the regime of partial wetting, the cap is surrounded by a thin layer of adsorbed fluid which is of molecular dimension. Upon approaching the critical temperature, the contact angle continuously decreases and eventually vanishes. Beyond this wetting temperature one speaks of complete wetting and the film thickness becomes of macroscopic dimension. The transition between the two regimes can be qualitatively distinguished by the rate of the disappearance of the contact angle, which is discontinuous in the case of a first-order transition or continuous for critical wetting.
From a theoretical point of view, it is much more convenient to take the adsorbed film thickness, ℓ, rather than the contact angle, as an order parameter for wetting transitions and related phenomena. An interfacial Hamiltonian is then minimized with respect to ℓ as is typically the case with the (mesoscopic) Landau-type field theories and (microscopic) density functional theory (DFT) -where ℓ can be easily determined from the Gibbs adsorption, a direct output of DFT.
In this study, we examine the wetting properties of a simple fluid in contact with a spherical attractive wall by using an intermolecular interaction model with fluid-fluid and fluid-wall long-range forces. The curved geometry of the system prohibits a macroscopic growth of the adsorbed layer (and thus complete wetting), since the free energy contribution due to the liquid-gas interface increases with the film thickness ℓ, and thus for a given radius of a spherical substrate there must be a maximum finite value of ℓ [1,5,6]. For the mesoscopic approaches, the radius of the wall, R, is a new field variable that introduces one additional ℓ-dependent term to the effective interface Hamiltonian of the system, compared to the planar geometry, where the only ℓ-dependent term is the binding potential between the wall-liquid and liquid-gas interfaces. Furthermore, for a fluid model exhibiting a gas-liquid phase transition, such as ours, it has been found that two regimes of the interfacial behavior should be distinguished: R > R_C, in which case the surface tension can be expanded in integer powers of R^{-1}, and R < R_C, where the interfacial quantities exhibit a non-analytic behavior [7]. Moreover, for an intermolecular interaction model with fluid-fluid long-range interactions, there is an additional R^{-2} log R contribution to the surface tension in the R > R_C regime [8]. These striking observations actually challenge all curvature expansion approaches. In addition, a certain equivalence between a system of a saturated fluid on a spherical wall and a system of an unsaturated fluid on a planar wall above the wetting temperature has been found [5,8]. Somewhat surprisingly, DFT computations confirmed this correspondence at the level of the density profiles down to unexpectedly small radii of the wall [8].
Most of these conjectures follow from the so-called sharp-kink approximation (SKA) [1], based on a simple piece-wise constant approximation of a one-body density distribution of the fluid, i.e. a coarse-grained approach providing a link between mesoscopic Hamiltonian theories and microscopic DFT. The simple mathematical form of SKA has motivated many theoretical investigations of wetting phenomena as it makes them analytically tractable. At the same time SKA appears to capture much of the underlying fundamental physics for planar substrates (often in conjugation with exact statistical mechanical sum rules [9]).
However, as we show in this work, SKA is only qualitative for spherical substrates, even though the functional form of the film growth can still be successfully inferred from the theory [8]. We attribute this to the particular approximation of the liquid-gas interface adapted by SKA. In particular, since the ℓ-dependent contribution to the interface Hamiltonian due to the curvature is proportional to the liquid-gas surface tension, the latter plays an important role compared to the planar geometry.
More specifically, the curved geometry induces a Laplace pressure whose value depends on both film thickness and the surface tension and so the two quantities are now coupled, in contrast with the planar geometry where a parallel shift of the liquid-gas dividing surface does not influence the surface contribution to the free energy of the system. We further employ an alternative coarse-grained approach, a modification of the one originally proposed by Napiórkowski and Dietrich [10] for the planar geometry, which replaces the jump in the density profile at the liquid-gas interface of SKA by a continuous function restricted by several reasonable constraints. We show that in this "soft-interface approximation" (SIA) the leading curvature correction to the liquid-gas surface tension is O(R^{-1}), rather than O(R^{-2} log R), in line with the Tolman theory. Once a particular approximation for the liquid-gas interface is taken, the corresponding Tolman length can be easily determined. Apart from this, we find that the finite width of the liquid-gas interface significantly improves the prediction of the corresponding surface tension when compared with the microscopic DFT computations, which consequently markedly improves the estimation of the film thickness in a spherical geometry.
In Sec. II we describe our microscopic model and the corresponding DFT formalism. In Sec. III we present results of wetting phenomena on a planar wall obtained from our DFT based on a continuation scheme that allows us to trace metastable and unstable solutions. The results are compared with the analytical prediction as given by a minimization of the interface Hamiltonian based on SKA. We also make a connection between the two approaches by introducing the microscopic model into the interfacial Hamiltonian. In Sec. IV we turn our attention to the main part of our study, a thin liquid film on a spherical wall. We show that the SKA does not perform as well as might be desired, in particular, it does not account for a quantitative description of the liquid-gas surface tension which plays a significant role when the substrate geometry is curved. We then introduce SIA and present an asymptotic analysis with the new approach. Comparison with DFT computations reveals a substantial improvement of the resulting interface Hamiltonian even for very simple approximations of the density distribution at the liquid-vapour interface, indicating the significance of a non-zero width of the interface. We conclude in Sec. V with a summary of our results and discussion. Appendix A describes the continuation method we developed for the numerical solution of DFT. In Appendix B we show derivations of the surface tension and the binding potential for both a planar and a spherical geometry within SKA. Finally, Appendix C shows derivations of the above quantities, including Tolman's length, using SIA.
II. DFT
A. General formalism

DFT is based on Mermin's proof [11] that the free energy of an inhomogeneous system at equilibrium can be expressed as a functional of an ensemble averaged one-body density, ρ(r) (see e.g. Ref. [12] for more details). Thus, the free-energy functional F[ρ] contains all the equilibrium physics of the system under consideration.
Clearly, for a 3D fluid model one has to resort to an approximative functional. Here we adopt a simple but rather well established local density approximation
F[\rho] = \int f_{\mathrm{HS}}(\rho(\mathbf{r}))\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} + \frac{1}{2}\iint \rho(\mathbf{r})\,\rho(\mathbf{r}')\,\phi(|\mathbf{r}-\mathbf{r}'|)\,\mathrm{d}\mathbf{r}'\,\mathrm{d}\mathbf{r}, \qquad (1)
where f HS (ρ (r)) is the free energy per particle of the hard-sphere fluid (accurately described by the Carnahan-Starling equation of state), including the ideal gas contribution. The contribution due to the long-range van der Waals forces is included in the mean-field manner. To be specific, we consider a full Lennard-Jones 12-6 (LJ) potential to model the fluid-fluid attraction according to the Barker-Henderson perturbative scheme
\phi(r) = \begin{cases} 0, & r<\sigma \\ 4\varepsilon\left[\left(\sigma/r\right)^{12}-\left(\sigma/r\right)^{6}\right], & r\ge\sigma, \end{cases} \qquad (2)
where for the sake of simplicity the Lennard-Jones parameter σ is taken equal to the hard-sphere diameter. The free-energy functional, F [ρ], describes the intrinsic properties of a given fluid. The total free energy including also a contribution of the external field is related to the grand potential functional through the Legendre transform
\Omega[\rho] = F[\rho] + \int \rho(\mathbf{r})\left(V(\mathbf{r})-\mu\right)\mathrm{d}\mathbf{r}, \qquad (3)
where µ is the chemical potential and V (r) is the external field due to the presence of a wall W ⊂ R 3 ,
V(\mathbf{r}) = \begin{cases} \infty, & \mathbf{r}\in W \\ \rho_w \int_W \phi_w(|\mathbf{r}-\mathbf{r}'|)\,\mathrm{d}\mathbf{r}', & \text{elsewhere}, \end{cases} \qquad (4)
consisting of the atoms interacting with the fluid particles via the Lennard-Jones potential, φ w (r), with the parameters σ w and ε w , and uniformly distributed throughout the wall with a density ρ w :
\phi_w(r) = 4\varepsilon_w\left[\left(\sigma_w/r\right)^{12}-\left(\sigma_w/r\right)^{6}\right]. \qquad (5)
Applying the variational principle to the grand potential functional, Eq. (3), we attain the Euler-Lagrange equation:
\frac{\delta F_{\mathrm{HS}}[\rho]}{\delta\rho(\mathbf{r})} + \int \rho(\mathbf{r}')\,\phi(|\mathbf{r}-\mathbf{r}'|)\,\mathrm{d}\mathbf{r}' + V(\mathbf{r}) - \mu = 0, \qquad (6)
where F HS [ρ] denotes the first term in the right-handside of (1). In general, the solution of (6) comprises all extremes of the grand potential Ω[ρ] as given by (3) and not just the global minimum corresponding to the equilibrium state. Here we develop a pseudo arc-length continuation scheme for the numerical computation of (6) that enables us to capture both locally stable and unstable solutions and thus to construct the entire bifurcation diagrams for the isotherms (details of the scheme are given in Appendix A). The excess part of the grand potential functional (3) over the bulk may be expressed in the form
\Omega_{\mathrm{ex}}[\rho(\mathbf{r})] = -\int\left(p(\rho(\mathbf{r}))-p(\rho_b)\right)\mathrm{d}\mathbf{r} + \frac{1}{2}\iint \rho(\mathbf{r})\left(\rho(\mathbf{r}')-\rho(\mathbf{r})\right)\phi(|\mathbf{r}'-\mathbf{r}|)\,\mathrm{d}\mathbf{r}'\,\mathrm{d}\mathbf{r} + \int\rho(\mathbf{r})\,V(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad (7)
where ρ b is the density of the bulk phase and
−p (ρ) = ρf HS (ρ) + αρ 2 − µρ,(8)
is the negative pressure, or grand potential per unit volume, of a system with uniform density ρ, and \alpha \equiv \frac{1}{2}\int\phi(|\mathbf{r}|)\,\mathrm{d}\mathbf{r} = -\frac{16}{9}\pi\varepsilon\sigma^3. In particular, the equilibrium value of the excess grand potential (7) per unit area of a two-phase system of liquid and vapour in the absence of an external field yields the surface tension between the coexisting phases, γ_lg. The prediction of γ_lg as given by minimization of (7) agrees fairly well with both computations and experimental data, as shown in Fig. 1.

FIG. 1. Liquid-gas surface tension as a function of temperature. Solid line: DFT [13] with the Percus-Yevick solution [14] for the hard-sphere reference fluid and using the exact hard-sphere diameter [15]; circles: Monte Carlo simulations by Lee and Barker [16]; squares: experimental results for Argon by Guggenheim [17]; dashed line: fit of the experimental results to the equation γ(T) = γ_0(1 − T/T_C)^{1+r} by Guggenheim [17]. The resulting coefficients are γ_0 = 36.31 dyn/cm and r = 2/9.
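The bulk limit of Eqs. (7) and (8) already fixes the phase behaviour of the uniform fluid: at a given temperature the coexisting densities follow from equating the pressures and chemical potentials of the two phases. The Python sketch below illustrates this construction for the present model, combining the Carnahan-Starling expression for f_HS with the mean-field attraction parameter α = −(16/9)πεσ³; the routine names, the finite-difference derivative and the initial guesses are ours and purely illustrative, not part of the authors' numerical scheme.

import numpy as np
from scipy.optimize import fsolve

eps, sigma, kT = 1.0, 1.0, 0.7                      # LJ units, k_B T = 0.7 eps
alpha = -(16.0/9.0)*np.pi*eps*sigma**3              # mean-field parameter of Eq. (8)

def f_bulk(rho):
    """Free-energy density of the uniform fluid: Carnahan-Starling + mean-field attraction.
    The thermal wavelength is set to sigma; the resulting constant shift in mu cancels at coexistence."""
    eta = np.pi*rho*sigma**3/6.0
    f_hs = kT*(np.log(rho*sigma**3) - 1.0 + eta*(4.0 - 3.0*eta)/(1.0 - eta)**2)
    return rho*f_hs + alpha*rho**2

def mu_bulk(rho, h=1e-7):
    """Chemical potential mu = d f_bulk / d rho (central difference)."""
    return (f_bulk(rho + h) - f_bulk(rho - h))/(2.0*h)

def p_bulk(rho):
    """Pressure from Eq. (8): p = mu*rho - f_bulk."""
    return rho*mu_bulk(rho) - f_bulk(rho)

def coexistence(guess=(0.01, 0.70)):
    """Solve mu(rho_g) = mu(rho_l) and p(rho_g) = p(rho_l)."""
    eqs = lambda x: [mu_bulk(x[0]) - mu_bulk(x[1]), p_bulk(x[0]) - p_bulk(x[1])]
    return fsolve(eqs, guess)

rho_g, rho_l = coexistence()
print(f"rho_g = {rho_g:.4f}, rho_l = {rho_l:.4f}, Delta rho = {rho_l - rho_g:.4f} (1/sigma^3)")

The difference ∆ρ = ρ_l − ρ_g obtained in this way is the quantity entering Eqs. (19), (26) and (50) below.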
B. Translational symmetry: planar wall
If the general formalism outlined above is applied on a particular external field attaining a certain symmetry, it will adopt a significantly simpler form. In the next subsection we will formulate the basic equations resulting from the equilibrium conditions obtained from the minimization of (7), for a spherical model of the external field, i.e. a system with rotational symmetry. But prior to that, it is instructive to discuss the zero-curvature limit of the above model, corresponding to an adsorbed LJ fluid on a planar wall, a system with translational symmetry.
For a planar substrate W = R 2 × R − in Cartesian coordinates, the density profile is only a function of z, so that the Euler-Lagrange equation reads
\mu_{\mathrm{HS}}(\rho(z)) + \int_0^\infty \rho(z')\,\Phi_{\mathrm{Pla}}(|z-z'|)\,\mathrm{d}z' + V_\infty(z) - \mu = 0, \quad \forall z\in\mathbb{R}^+, \qquad (9)
where \mu_{\mathrm{HS}}(\rho) = \partial(f_{\mathrm{HS}}(\rho)\rho)/\partial\rho is the chemical potential of the hard-sphere system. A fluid particle at a distance z from the wall experiences the wall potential
V_\infty(z) = 4\pi\varepsilon_w\rho_w\sigma_w^3\left[\frac{1}{45}\left(\frac{\sigma_w}{z}\right)^{9} - \frac{1}{6}\left(\frac{\sigma_w}{z}\right)^{3}\right], \qquad (10)
the planar (R → ∞) limit of the spherical wall potential given in Eq. (15) below, while \Phi_{\mathrm{Pla}} denotes the fluid-fluid pair potential integrated over a plane of particles at normal distance z,
\Phi_{\mathrm{Pla}}(z) = 2\pi\int_0^\infty \phi\!\left(\sqrt{z^2+u^2}\right)u\,\mathrm{d}u. \qquad (11)
In the framework of DFT, the natural order parameter for the wetting transitions is the Gibbs adsorption per unit area:
\Gamma_\infty[\rho(z)] = \int_0^\infty\left(\rho(z)-\rho_b\right)\mathrm{d}z. \qquad (12)
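Equation (12), and its spherical counterpart Eq. (16) below, reduce to simple quadratures once the density profile is tabulated on a grid. A minimal Python illustration follows; the tanh-shaped test profile and the bulk densities are placeholders, not DFT output.

import numpy as np

def trapz(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def gibbs_adsorption(z, rho, rho_b):
    """Gamma_inf of Eq. (12): excess number of particles per unit area."""
    return trapz(rho - rho_b, z)

def film_thickness(z, rho, rho_b, delta_rho):
    """l = Gamma/Delta rho, cf. Eq. (19) below."""
    return gibbs_adsorption(z, rho, rho_b)/delta_rho

# Illustrative profile: a liquid film of nominal thickness 5 sigma over a gas background.
z = np.linspace(0.0, 40.0, 4001)
rho_g, rho_l = 0.01, 0.70
rho = rho_g + 0.5*(rho_l - rho_g)*(1.0 - np.tanh(z - 5.0))
print(f"l = {film_thickness(z, rho, rho_g, rho_l - rho_g):.2f} sigma")   # close to the nominal 5 sigma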
C. Rotational symmetry: spherical wall
If the external field is induced by a spherical wall, W = {r ∈ R 3 : r ≡ |r| < R}, the variational principle yields
\mu_{\mathrm{HS}}(\rho(r)) + \int_R^\infty \rho(r')\,\Phi_{\mathrm{Sph}}(r,r')\,\mathrm{d}r' + V_R(r) - \mu = 0, \quad \forall r>R, \qquad (13)
where Φ Sph (r, r ′ ) is the surface interaction potential per unit density generated by fluid particles uniformly distributed on the surface of the sphere B r ′ centered at the origin at distance r,
\Phi_{\mathrm{Sph}}(r,r') = \int_{\partial B_{r'}}\phi(|\mathbf{r}-\tilde{\mathbf{r}}|)\,\mathrm{d}\tilde{\mathbf{r}} = \frac{r'}{r}\left(\Phi_{\mathrm{Pla}}(|r-r'|)-\Phi_{\mathrm{Pla}}(r+r')\right) \qquad (14)
(see also Appendix B 1). The wall potential in Eq. (4) for the spherical wall W = {r ∈ R 3 : |r| ≤ R} is:
V_R(r) = \frac{\pi\rho_w\varepsilon_w\sigma_w^4}{3r}\left\{\frac{\sigma_w^8}{30}\left[\frac{r+9R}{(r+R)^9}-\frac{r-9R}{(r-R)^9}\right] + \sigma_w^2\left[\frac{r-3R}{(r-R)^3}-\frac{r+3R}{(r+R)^3}\right]\right\}. \qquad (15)
Replacing the distance from the origin r by the radial distance from the wallr = r − R, one can easily see that the external potential (15) reduces to the planar wall potential (10), for R → ∞. Analogously to the planar case, we define the adsorption Γ R as the excess number of particles of the system with respect to the surface of the wall:
\Gamma_R[\rho(r)] = \int_R^\infty\left(\frac{r}{R}\right)^2\left(\rho(r)-\rho_b\right)\mathrm{d}r. \qquad (16)
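Equation (15) is straightforward to evaluate numerically, and the planar limit mentioned in the text provides a useful consistency check. The sketch below (our own illustrative code, not the authors' implementation) probes V_R a fixed distance outside the wall and compares it with the planar 9-3 form obtained for R → ∞.

import numpy as np

eps_w, sig_w, rho_w = 0.8, 1.25, 1.0       # so that rho_w*eps_w = 0.8 eps/sigma^3 and sigma_w = 1.25 sigma

def V_sphere(r, R):
    """Spherical wall potential, Eq. (15); r is the distance from the centre of the wall."""
    pref = np.pi*rho_w*eps_w*sig_w**4/(3.0*r)
    t1 = (sig_w**8/30.0)*((r + 9*R)/(r + R)**9 - (r - 9*R)/(r - R)**9)
    t2 = sig_w**2*((r - 3*R)/(r - R)**3 - (r + 3*R)/(r + R)**3)
    return pref*(t1 + t2)

def V_planar(z):
    """Planar 9-3 limit (z is the distance from the wall surface)."""
    return 4*np.pi*eps_w*rho_w*sig_w**3*((sig_w/z)**9/45.0 - (sig_w/z)**3/6.0)

z = 2.0                                     # probe two sigma away from the wall surface
for R in (10.0, 100.0, 1000.0):
    print(f"R = {R:7.1f} sigma: V_R = {V_sphere(R + z, R):+.6f}, planar limit = {V_planar(z):+.6f}")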
III. WETTING ON A PLANAR SUBSTRATE
In this section we make a comparison between the numerical solution of DFT and the prediction given by the effective interfacial Hamiltonian according to SKA for the first-order wetting transition on the planar substrate. We consider a planar semi-infinite wall interacting with the fluid according to (10) with the typical parameters ρ w ε w = 0.8ε/σ 3 and σ w = 1.25σ that correspond to the class of intermediate-substrate systems [18] for which prewetting phase transitions can be observed. We note that wetting on planar and spherical walls is a multiparametric problem and hence a full parametric study of the global phase diagram is a difficult task, beyond the scope of this paper.
A. Numerical DFT results of wetting on a planar wall

Figure 2 depicts the surface-phase diagram of the considered model in the (∆µ, T) plane, where ∆µ = µ − µ_sat is the departure of the chemical potential from its saturation value. The first-order wetting transition takes place at the wetting temperature k_BT_w = 0.621ε, well below the critical temperature of the bulk fluid, k_BT_c = 1.006ε for our model. The prewetting line connects the saturation line at the wetting temperature T_w and terminates at the prewetting critical point, k_BT_pwc = 0.724ε. The slope of the prewetting line is governed by a Clapeyron-type equation [19], which, in particular, states that the prewetting line approaches the saturation line tangentially at T_w with
\left.\frac{\mathrm{d}(\Delta\mu_{\mathrm{pw}})}{\mathrm{d}T}\right|_{T=T_w} = 0, \qquad (17)
in line with our numerical computations. Schick and Taborek [20] later showed that the prewetting line scales as −∆µ ∼ (T − T w ) 3/2 . In Ref. [21], this power law was confirmed experimentally, such that
-\frac{\Delta\mu_{\mathrm{pw}}(T)}{k_B T_w} = C\left(\frac{T-T_w}{T_w}\right)^{3/2}, \qquad (18)
with C ≈ 1/2 [21]. A fit of our DFT results with (18) leads to a coefficient C = 0.77, in reasonable agreement with the experimental data -- see Fig. 2. Figure 3 depicts the adsorption isotherm in terms of the thickness of the adsorbed liquid film ℓ as a function of ∆µ for the temperature k_BT = 0.7ε and in the interval between the wetting temperature T_w and the prewetting critical temperature T_pwc. ℓ can be associated with the Gibbs adsorption through
\ell = \frac{\Gamma_R[\rho]}{\Delta\rho}, \qquad (19)
for both finite and infinite R, where ∆ρ = ρ sat l − ρ sat g is the difference between the liquid and gas densities at saturation. The isotherm exhibits a van der Waals loop with two turning points depicted as B and C demarcating the unstable branch. Points A and D indicate the equilibrium between thin and thick layers, corresponding to a point on the prewetting line in Fig. 2. The location of the equilibrium points can be obtained from a Maxwell construction. Details of the numerical scheme we developed for tracing the adsorption isotherms are given in Appendix A.
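The coefficient C in Eq. (18) quoted above (C = 0.77 for our model) follows from a least-squares fit of the computed prewetting line. The sketch below shows one way to perform such a fit; the data arrays are placeholders standing in for the (T, ∆µ_pw) pairs produced by the DFT computations behind Fig. 2, not the actual values.

import numpy as np
from scipy.optimize import curve_fit

kT_w = 0.621                                    # wetting temperature, in units of eps/k_B

def prewetting_law(T, C, expo):
    """-Delta mu_pw/(k_B T_w) = C*((T - T_w)/T_w)**expo, cf. Eq. (18)."""
    return C*((T - kT_w)/kT_w)**expo

# Placeholder prewetting-line data: replace with the DFT values of (T, -Delta mu_pw/(k_B T_w)).
T_data = np.array([0.64, 0.66, 0.68, 0.70, 0.72])
y_data = np.array([0.0041, 0.0121, 0.0226, 0.0349, 0.0490])

(C_fit, expo_fit), _ = curve_fit(prewetting_law, T_data, y_data, p0=(0.5, 1.5))
print(f"C = {C_fit:.2f}, exponent = {expo_fit:.2f}")   # expected: C ~ 0.77, exponent ~ 3/2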
B. SKA for a planar wall
For the sake of clarity and completeness we briefly review the main features of SKA for a planar geometry (details are given in Ref. [1]).
Let us consider a liquid film of a thickness ℓ adsorbed on a planar wall. According to SKA the density distribution is approximated by a piecewise constant function
\rho_\ell^{\mathrm{SKA}}(z) = \begin{cases} 0, & z<\delta, \\ \rho_l^{+}, & \delta<z<\ell, \\ \rho_g, & z>\ell, \end{cases} \qquad (20)
where ρ_g is the density of the gas reservoir and ρ_l^+ is the density of the metastable liquid at the same thermodynamic conditions stabilized by the presence of the planar wall, Eq. (10), and δ ≈ (σ + σ_w)/2. The off-coexistence of the two phases induces the pressure difference
p^{+}(\mu) - p(\mu) \approx \Delta\rho\,\Delta\mu, \qquad (21)
where p⁺ is the pressure of the metastable liquid and p is the pressure of the gas reservoir, and where we assume that ∆µ = µ − µ_sat < 0 is small.

FIG. 3. The upper graph depicts an ℓ-∆µ bifurcation diagram for k_BT = 0.7ε for a wall with ρ_wε_w = 0.8ε/σ³ and σ_w = 1.25σ. ∆µ is the deviation of the chemical potential from its saturation value, µ_sat. The prewetting transition, marked by the dashed line, occurs at chemical potential ∆µ_pw = −0.022ε. The inset subplots show the density ρσ³ as a function of the distance z/σ from the wall. The lower graph shows the excess grand potential Ω_ex/ε as a function of ∆µ/ε in the vicinity of the prewetting transition.
The excess grand potential per unit area A of the system then can be expressed in terms of macroscopic quantities as a function of ℓ
\frac{\Omega_{\mathrm{ex}}(\ell;\mu)}{A} = -\Delta\mu\,\Delta\rho\,(\ell-\delta) + \gamma_{wl}^{\mathrm{SKA}}(\mu) + \gamma_{lg}^{\mathrm{SKA}} + w^{\mathrm{SKA}}(\ell;\mu), \qquad (22)
where γ SKA wl and γ SKA lg are the SKA to the wall-liquid and the liquid-gas surface tensions, respectively, and w SKA (ℓ) is the effective potential between the two interfaces (binding potential). In the following, we will suppress the explicit µ-dependence.
The link with the microscopic theory can be made, if the contributions in the right-hand-side of Eq. (22) are expressed in terms of our molecular model, which, when summed up, give the excess grand potential (7) where we have substituted the ansatz (20):
\gamma_{wl}^{\mathrm{SKA}} = -\frac{\rho_l^{+2}}{2}\int_{-\infty}^{0}\!\int_{0}^{\infty}\Phi_{\mathrm{Pla}}(|z-z'|)\,\mathrm{d}z'\,\mathrm{d}z + \rho_l^{+}\int_{\delta}^{\infty}V_\infty(z)\,\mathrm{d}z = \frac{3}{4}\pi\varepsilon\sigma^4\rho_l^{+2} + \frac{\pi}{90\delta^8}\left(\sigma_w^6-30\delta^6\right)\sigma_w^6\,\rho_w\varepsilon_w\rho_l^{+}, \qquad (23)
\gamma_{lg}^{\mathrm{SKA}} = -\frac{\Delta\rho^2}{2}\int_{-\infty}^{0}\!\int_{0}^{\infty}\Phi_{\mathrm{Pla}}(|z-z'|)\,\mathrm{d}z'\,\mathrm{d}z = \frac{3}{4}\pi\varepsilon\sigma^4\Delta\rho^2, \qquad (24)
w^{\mathrm{SKA}}(\ell) = \Delta\rho\left[\rho_l^{+}\int_{\ell-\delta}^{\infty}\!\int_{z}^{\infty}\Phi_{\mathrm{Pla}}(z')\,\mathrm{d}z'\,\mathrm{d}z - \int_{\ell}^{\infty}V_\infty(z)\,\mathrm{d}z\right] = -\frac{A}{12\pi\ell^2}\left[1 + \left(2 + 3\frac{\delta}{\ell}\left(1-\frac{\rho_w\varepsilon_w\sigma_w^6}{\rho_l^{+}\varepsilon\sigma^6}\right)\right)\frac{\delta}{\ell} + O\!\left((\delta/\ell)^3\right)\right], \qquad (25)
where we considered the distinguished limit δ ≪ ℓ. A is the Hamaker constant corresponding to the net interaction of an atom and the semi-infinite wall:
A = 4\pi^2\Delta\rho\left(\rho_l^{+}\varepsilon\sigma^6 - \rho_w\varepsilon_w\sigma_w^6\right). \qquad (26)
We note that the Hamaker constant is implicitly temperature dependent and that the attractive contribution of the potential of the wall enables the Hamaker constant to change its sign. Hence, in contrast with the adsorption on a hard wall, where the Hamaker constant is always negative, there may be a temperature below which its sign is positive (large ρ l ) and negative above. Clearly, complete wetting is only possible for A < 0.
Making use of only the leading-order term in (25) the minimization of (22) with respect to ℓ gives:
\Delta\rho\,\Delta\mu - \frac{A}{6\pi\ell^3} \approx 0. \qquad (27)
Hence, at this level of approximation the equilibrium thickness of the liquid film is:
\ell_{\mathrm{eq}} \approx \left(\frac{A}{6\pi\,\Delta\rho\,\Delta\mu}\right)^{1/3}. \qquad (28)
When substituted into (22), the wall-gas surface tension to leading order reads:
\gamma_{wg}^{\mathrm{SKA}} = \gamma_{wl}^{\mathrm{SKA}} + \gamma_{lg}^{\mathrm{SKA}} + \left(-\frac{9A}{16\pi}\right)^{1/3}|\Delta\rho\,\Delta\mu|^{2/3}. \qquad (29)
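Equations (26)-(29) are purely algebraic once the coexistence densities are known, so the asymptotic SKA prediction can be tabulated directly. In the short sketch below the densities are placeholder values of the order suggested by the phase diagram of Sec. II (the exact DFT numbers are not reproduced here), and ρ_l stands in for ρ_l^+:

import numpy as np

eps, sigma = 1.0, 1.0
rho_w_eps_w, sig_w = 0.8, 1.25            # wall parameters used throughout the paper
rho_l, rho_g = 0.70, 0.01                 # placeholder coexistence densities at k_B T = 0.7 eps
drho = rho_l - rho_g

# Hamaker constant, Eq. (26); complete wetting requires A < 0.
A = 4*np.pi**2*drho*(rho_l*eps*sigma**6 - rho_w_eps_w*sig_w**6)

def ell_eq(dmu):
    """Leading-order SKA film thickness, Eq. (28); dmu < 0 off coexistence."""
    return (A/(6*np.pi*drho*dmu))**(1.0/3.0)

print(f"A = {A:.1f} eps*sigma^6")
for dmu in (-1e-2, -1e-3, -1e-4):
    print(f"dmu = {dmu:8.0e} eps:  l_eq = {ell_eq(dmu):6.2f} sigma")

For these inputs A < 0, so the film grows without bound as ∆µ → 0⁻, which is the complete-wetting behaviour discussed above.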
Equation (28) can be confirmed by a comparison against the numerical DFT, see Fig. 4. We observe that the prediction of SKA becomes reliable for |∆µ| < 0.01ε corresponding to a somewhat surprisingly small value of the liquid film, ℓ ≈ 5σ. Beyond this value, the coarsegrained approach looses its validity and also the prewetting transition is approached, both of which cause the curve in Fig. 4 to bend (see also Fig. 3). It is worth noting that the only term in (22) having an ℓ-dependence and thus governing the wetting behavior, is the term related to the undersaturation pressure and the binding potential, w SKA (ℓ). Clearly, γ lg does not come into play in the planar case since the translation of the liquid-gas interface along the z axis does not change the free energy of the system. The situation becomes qualitatively different if the substrate is curved. Nevertheless, at this stage we conclude in line with the earlier studies, that SKA provides a fully satisfactory approach to the first-order wetting transition on a planar wall.
IV. WETTING ON A CURVED SUBSTRATE
A. SKA for the spherical wall

For the spherical geometry, SKA adopts the following form:
\rho_{R,\ell}^{\mathrm{SKA}}(r) = \begin{cases} 0, & r<R+\delta, \\ \rho_l^{+}, & R+\delta<r<R+\ell, \\ \rho_g, & R+\ell<r<\infty. \end{cases} \qquad (30)
The corresponding excess grand potential now reads
\frac{\Omega_{\mathrm{ex}}(\mu,R,\ell)}{4\pi R^2} = -\Delta\mu\,\Delta\rho\,\frac{(R+\ell)^3-\bar{R}^3}{3R^2} + \gamma_{wl}^{\mathrm{SKA}}(\bar{R}) + \gamma_{lg}^{\mathrm{SKA}}(R+\ell)\left(1+\frac{\ell}{R}\right)^2 + w^{\mathrm{SKA}}(\ell;R), \qquad (31)
where \bar{R} = R + δ. Within this approximation, the liquid-vapour surface tension becomes (see also Appendix B)
\gamma_{lg}^{\mathrm{SKA}}(R) = \gamma_{lg}^{\mathrm{SKA}}(\infty)\left[1 - \frac{2}{9}\frac{\ln(R/\sigma)}{(R/\sigma)^2} + O\!\left((\sigma/R)^2\right)\right] \qquad (32)
and an analogous expansion holds for γ_wl^SKA(R). The ln(R/σ)/(R/σ)² correction to γ_lg^SKA(∞) is due to the r^{-6} decay of our model. We note that short-range potentials lead to different curvature dependence of the surface tension, a point that has been discussed in detail in Refs. [7,8,26]. Interestingly, the O(σ/R) correction to the surface tension, as one would expect from the Tolman theory [27], is missing. It corresponds to a vanishing Tolman length within SKA, as we will explicitly show in the following section. Although the value of the Tolman length is still a subject of some controversy, it is most likely that its value is non-zero, unless the system is symmetric under interchange between the two coexisting phases [28]. This observation has been confirmed numerically in Ref. [8] from a fit of DFT results for the wall-gas surface tension in a non-drying regime for the hard-wall substrate. Thus, the linear term was included by hand into the expansion (32) [8].
Finally, the binding potential within the SKA for the spherical wall yields
w^{\mathrm{SKA}}(\ell;R) = w^{\mathrm{SKA}}(\ell;\infty)\left(1+\frac{\ell}{R}\right) \qquad (33)
where terms O\!\left((\delta/\ell)^3\right), O(\delta/R) and O\!\left(\ln(\ell/R)/(R/\ell)^2\right) have been neglected.
B. SIA for the spherical wall
FIG. 5. Sketch of the density profile according to SIA for a certain film thickness ℓ. A piecewise function approximation is employed so that except for the interval (R + ℓ − χ/2, R + ℓ + χ/2) the density is assumed to be piecewise constant.

As an alternative to SKA, Napiórkowski and Dietrich [10] proposed a modified version of the effective Hamiltonian, in which the liquid-gas interface was approximated in a less crude way by a continuous monotonic function, the SIA. Applied for the second-order wetting transition on a planar wall, SIA merely confirmed that SKA provides a reliable prediction for such a system, in complete agreement with SIA. Formulated now for the spherical case, the density profile of the fluid takes the form:
\rho_{R,\ell}^{\mathrm{SIA}}(r) = \begin{cases} 0, & r<R+\delta, \\ \rho_l^{+}, & R+\delta<r<R+\ell-\frac{\chi}{2}, \\ \rho_{lg}(r-R-\ell), & R+\ell-\frac{\chi}{2}<r<R+\ell+\frac{\chi}{2}, \\ \rho_g, & R+\ell+\frac{\chi}{2}<r<\infty. \end{cases} \qquad (34)
Thus, a non-zero width of the liquid-vapour interface, χ, is introduced as an additional parameter. The density profile ρ lg (·) in this region is not specified, but the following constraints are imposed:
\rho_{lg}\!\left(-\frac{\chi}{2}\right) = \rho_l^{+} \quad\text{and}\quad \rho_{lg}\!\left(\frac{\chi}{2}\right) = \rho_g, \qquad (35)
with an additional assumption of a monotonic behaviour of the function ρ lg (r). An illustrative example of ρ SIA R,ℓ (r) is given in Fig. 5. The corresponding excess grand potential takes the form
\frac{\Omega_{\mathrm{ex}}}{4\pi R^2} = -\Delta\mu\,\Delta\rho\,\frac{(R+\ell)^3-\bar{R}^3}{3R^2} + \gamma_{wl}^{\mathrm{SIA}}(\bar{R}) + \left(1+\frac{\ell}{R}\right)^2\gamma_{lg}^{\mathrm{SIA}}(R+\ell) + w^{\mathrm{SIA}}(R,\ell), \qquad (36)
taking R + ℓ as the Gibbs dividing surface (so that ℓ is a measure of the number of particles adsorbed at the wall). The binding potential (see also Appendix C 3) is obtained from
w SIA (R, ℓ) = =ρ + l ∞ R+ℓ−χ/2 ρ l − ρ SIA R,ℓ (r) Ψ R+δ (r) r R 2 dr− − ∞ R+ℓ−χ/2 ρ + l − ρ SIA R,ℓ (r) V R (r) r R 2 dr,(37)
where Ψ R (r) = R 0 Φ Sph (r, r ′ )dr ′ -see Appendix B 1 for the explicit form of the last expression.
The wall-liquid surface tension remains unchanged compared to that obtained from SKA, Eq. (24). However, the liquid-gas surface tension now reads (see Appendix C 1)
\gamma_{lg}^{\mathrm{SIA}}(R) = -\int_{R-\chi/2}^{R+\chi/2}\left(p(\rho_{lg}(r))-p(\rho_{\mathrm{ref}}(z))\right)\left(\frac{r}{R}\right)^2\mathrm{d}r + \frac{1}{2}\int_0^\infty\!\!\int_0^\infty \rho_{lg}(r)\left(\rho_{lg}(r')-\rho_{lg}(r)\right)\Phi_{\mathrm{Sph}}(r,r')\left(\frac{r}{R}\right)^2\mathrm{d}r'\,\mathrm{d}r. \qquad (38)
From now on, we neglect the curvature dependence of χ and ρ lg (·), as they would introduce higher-order corrections not affecting the asymptotic results at our level of approximation. This is also in line with previous studies which show that the Tolman length only depends on the density profile in the planar limit [28]. Then (38) can be written as
\gamma_{lg}^{\mathrm{SIA}}(R) = \gamma_{lg}^{\mathrm{SIA}}(\infty)\left[1 - \frac{2\delta_\infty}{R} + O\!\left(\frac{\ln(R/\sigma)}{(R/\sigma)^2}\right)\right], \qquad (39)
where ρ ref (z) = ρ + l Θ(−z) + ρ g Θ(z) and δ ∞ is the Tolman length of the liquid-gas surface tension, as given by (Appendix C 2):
\delta_\infty = \frac{1}{\gamma_{lg}^{\mathrm{SIA}}(\infty)}\int_{-\chi/2}^{\chi/2}\left(p(\rho_{lg}(z))-p_{\mathrm{ref}}\right)z\,\mathrm{d}z. \qquad (40)
Here, p ref is the pressure at saturation, so that the Tolman length is independent of the choice of the dividing surface. An immediate consequence of Eq. (40) is that within SKA the Tolman length vanishes, since ρ ref (z) corresponds to the SKA of a liquid-gas interface. The equilibrium film thickness then follows from setting the derivative of (36) w.r.t. ℓ equal to zero:
1 4πR 2 dΩ ex dℓ = −∆µ∆ρ 1 + ℓ R 2 +(41)+ 2 R 1 + ℓ R γ SIA lg (R + ℓ) + 1 + ℓ R 2 dγ SIA lg dℓ R+ℓ + + ρ + l R+ℓ+χ/2 R+ℓ−χ/2 ρ ′ lg (r − R − ℓ)Ψ R+δ (r) r R 2 dr− − R+ℓ+χ/2 R+ℓ−χ/2 ρ ′ lg (r − R − ℓ)V R (r) r R 2 dr.
The last two terms of (41) are of the form
χ/2 −χ/2 ρ ′ lg (r)f I,II (R + ℓ + r)dr,(42)
with f I (r) = ρ + l Ψ R+δ (r) r R 2 and f II (r) = V R (r) r R 2 . Since ρ lg (r) is monotonic, i.e. ρ ′ lg does not change sign, the mean value theorem can be employed such that
χ/2 −χ/2 ρ ′ lg (r)f I,II (R + ℓ + r)dr =(43)
− ∆ρf I,II (R + ℓ + ξ I,II ), for some ξ I,II ∈ (−χ/2, χ/2), where we made use of ρ ′ lg (r)dr = −∆ρ. Substituting (43) into (41) and setting the resulting expression equal to zero, we obtain:
∆µ = 1 ∆ρ 2γ SIA lg (R + ℓ) R + ℓ + dγ SIA lg dℓ R+ℓ − − ρ + l Ψ R+δ (R + ℓ + ξ I ) 1 + ξ I R + ℓ 2 + + V R (R + ℓ + ξ II ) 1 + ξ II R + ℓ 2 .(44)
So far, there is no approximation within SIA. Equation (44) can be simplified by appropriately estimating the values of the auxiliary parameters ξ I and ξ II . To this end, we employ a simple linear approximation to the density profile at the liquid-gas interface, taking −ρ ′ lg (r)/∆ρ ≈ 1/χ in (43). Furthermore, we expand f I,II in powers of ℓ/R, σ/ℓ
f I (R + ℓ + r) = − 2πρ + l εσ 6 3 (ℓ + r − δ) 3 1 + ℓ + r + 3δ 2R + +O σ ℓ 6 , ℓ R 2 ,(45)
f II (R + ℓ + r) = − 2πρ w ε w σ 6 w 3 (ℓ + r)
3 1 + ℓ + r 2R + (46) +O σ ℓ 6 , ℓ R 2 ,
where we assumed the distinguished limits r, δ, σ ≪ ℓ ≪ R. Inserting (45) and (46) into (43) yields for ξ i :
ξ i = − χ 2 6ℓ 1 + O δ ℓ , ℓ R , χ ℓ 2 .(47)
From (44), we obtain to leading order,
ρ + l Ψ R+δ (R + ℓ + ξ I ) 1 + ξ I R + ℓ 2 = (48) − 2π 3ℓ 3 ρ + l εσ 6 1 + O δ ℓ , ℓ R , χ ℓ 2 , V R (R + ℓ + ξ II ) 1 + ξ II R + ℓ 2 = (49) − 2π 3ℓ 3 ρ w ε w σ 6 w 1 + O ℓ R , χ ℓ 2 .
Finally, substituting (48) and (49) into (44) we have to leading order:
\Delta\rho\,\Delta\mu - \frac{2}{R}\gamma_{lg}^{\mathrm{SIA}}(\infty) \approx \frac{A}{6\pi\ell^3}, \qquad (50)
and hence, to leading order the equilibrium wetting film thickness is:
\ell_{\mathrm{eq}}^{\mathrm{SIA}} \approx \left(\frac{A}{6\pi\left[\Delta\rho\,\Delta\mu - 2\gamma_{lg,\infty}^{\mathrm{SIA}}/R\right]}\right)^{1/3}. \qquad (51)
We note that this asymptotic analysis can be extended beyond (51), by including terms O(δ/ℓ), O(ℓ/R) and O((χ/ℓ) 2 ). The latter occurs due to the "soft" treatment of the liquid-vapor interface and is thus not present in SKA.
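At saturation (∆µ = 0), Eq. (51) reduces to ℓ ≈ (|A|R/(12π γ_lg(∞)))^{1/3}, so the SKA and SIA predictions differ only through the value used for the planar liquid-gas surface tension. The sketch below evaluates this for the two values quoted later in Sec. IV C; the Hamaker constant is a placeholder of the same order as in the earlier snippets.

import numpy as np

A = -64.0            # placeholder Hamaker constant (eps*sigma^6), A < 0 for complete wetting
gam_SKA = 1.060      # planar liquid-gas surface tension within SKA (eps/sigma^2)
gam_SIA = 0.524      # planar liquid-gas surface tension within SIA (eps/sigma^2)

def ell_sat(R, gamma):
    """Film thickness at saturation from Eq. (51) with Delta mu = 0."""
    return (-A*R/(12.0*np.pi*gamma))**(1.0/3.0)

for R in (50.0, 100.0, 500.0, 1000.0):
    print(f"R = {R:6.0f} sigma:  l_SKA = {ell_sat(R, gam_SKA):5.2f} sigma,"
          f"  l_SIA = {ell_sat(R, gam_SIA):5.2f} sigma")

Since ℓ ∝ γ_lg(∞)^{-1/3}, the roughly twofold overestimate of the surface tension by SKA translates into a film that is about 20% too thin at any given radius, which is the trend visible in Fig. 9.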
In Fig. 6 we compare two adsorption isotherms (k B T = 0.7ε) corresponding to wetting on a planar and a spherical wall (R = 100σ). The two curves are mutually horizontally shifted by a practically constant value, in accordance with Eq. (50). This implies that the curve for the spherical wall crosses the saturation line ∆µ = 0 at a finite value of ℓ, and eventually converges to the saturation line as ∆µ −1 from the right, thus the finite curvature prevents complete wetting. The horizontal shift corresponds to the Laplace pressure contribution, ∆µ = 2γ SIA lg (∞)/ (∆ρR), as verified by comparison with the numerical DFT, Fig. 7. All these conclusions are in line with SKA. However, the difference between SKA and SIA consists in a different treatment of γ lg (∞), compare (B4) and (C2). This is quite obvious, since the softness of the interface influences the free energy required to increase the film thickness. We will discuss this point in more detail in the following section.
FIG. 6. Isotherms and density profiles for a planar wall (dashed lines) and a sphere with R = 100σ (solid lines) at k_BT = 0.7ε and with wall parameters ρ_wε_w = 0.8ε/σ³ and σ_w = 1.25σ. To directly compare the planar to the spherical case, the film thickness instead of the adsorption is used as a measure. The subplots in the inset depict the density ρσ³ as a function of the distance from the wall, z/σ and (r − R)/σ for the planar and the spherical cases, respectively. The points A and A′ are at the prewetting transitions. Points B, B′ and C, C′ correspond to the same film thickness. B is at saturation whereas C is chosen such that the film thickness ℓ is 20σ.

FIG. 7. … Table I. The symbols denote the numerical DFT results.
C. Comparison of SKA and SIA
We now examine the repercussions of the way the liquid-gas interface is treated on the prediction of wetting behaviour on a spherical surface. As already mentioned in Sec. IV B, the linear correction in the curvature to the planar liquid-gas surface tension, ignored within SKA, is properly captured by SIA. Furthermore, the presence of the Laplace pressure suggests that the liquid-gas surface tension plays a strong part in the determination of the equilibrium film thickness. This contrasts to the case of a planar geometry, where the term associated with the liquid-gas surface tension has no impact on the equilibrium configuration.
To investigate this point in detail, we will first compare the approximations of γ lg as obtained by the two approaches. For this purpose, we start with SIA for a given parameterization of the liquid-gas interface. As shown in Table I, we employ linear, cubic and hyperbolic tangent auxiliary functions, where the latter violates condition (35) negligibly. The particular parameters are determined by minimization of a given function with respect to the corresponding parameters. In Table I we display the planar liquid-gas surface tension associated with a particular parameterization and the Tolman length resulting from Eq. (40) for the temperature k B T = 0.7ε. In all three cases the surface tension is close to the one obtained from the numerical solution of DFT and also the predictions of the Tolman length are in a reasonable agreement with the most recent simulation results [29][30][31], with thermodynamic results [32] as well as with results from the van der Waals square gradient theory [33].
It is reasonable to assume that from the set of the considered auxiliary functions, the tanh-approximation is the most realistic one, although the numerical results as given in Table I suggest that it is mainly the finite width of the liquid-gas interface, rather than the approximation of the density profile at this region, that matters. To illustrate this, we show in Fig. 8 the dependence of the surface tension on the steepness parameter α, determining the shape of the tanh function. Note that the limit α → ∞ corresponds to the surface tension as predicted by SKA, γ_lg,∞^SKA = 1.060ε/σ², for k_BT = 0.7ε. Such a value contrasts with the result of SIA, which corresponds to the minimum of the function, and yields γ_lg,∞^SIA = 0.524ε/σ², in much better agreement with the numerical solution of DFT, γ_lg,∞^DFT = 0.517ε/σ².

Asymptotic analysis of the film thickness in Eq. (50) reveals that the film thickness for large but finite R remains finite even at saturation, with ℓ ∼ R^{1/3}, in line with earlier studies, e.g. Refs. [5,8]. From Eq. (50) one also recognizes a strong dependence of ℓ on the planar liquid-gas surface tension. In Fig. 9 we present the SIA and SKA predictions of the dependence of ℓ on the wall radius. The comparison with the numerical DFT results reveals that for large R SIA is clearly superior, reflecting a more realistic estimation of the liquid-gas surface tension. For small values of R (and ℓ) we observe a deviation between DFT and the SIA results. This indicates a limit of validity of our first-order analysis and the assumption of large film thicknesses.

FIG. 9. … Table I). The dash-dotted line corresponds to Eq. (51) where γ_lg^SKA(∞) = 1.060ε/σ² is used instead of γ_lg^SIA(∞). The wall parameters are ρ_wε_w = 0.8ε/σ³ and σ_w = 1.25σ at k_BT = 0.7ε.

TABLE I. The planar liquid-gas surface tension γ_lg^SIA(∞), the Tolman length δ_∞ from Eq. (40) and the corresponding parameters for temperature k_BT = 0.7ε according to a given auxiliary function approximating the density distribution of the vapour-liquid interface. The parameters are from auxiliary function minimization. The surface tension given by numerical DFT computations is γ_lg = 0.517ε/σ² and ρ̄ = (ρ_l + ρ_g)/2. Note that in the tanh-case, the interface width is implicitly determined by the steepness parameter α.

Auxiliary function ρ_lg(z)                        γ_lg^SIA(∞)     argument      δ_∞
ρ̄ − ∆ρ (z/χ)                                     0.544 ε/σ²      χ = 4.0σ      −0.07σ
ρ̄ − (3/2) ∆ρ (z/χ) + 2 ∆ρ (z/χ)³                 0.532 ε/σ²      χ = 5.4σ      −0.09σ
ρ̄ − (∆ρ/2) tanh(αz/σ)                            0.524 ε/σ²      α = 0.66      −0.11σ
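Equation (40) is a one-dimensional quadrature once p(ρ) and a parameterization of ρ_lg are fixed, so the δ_∞ entries of Table I can be checked directly. Below is a minimal sketch for the linear profile of the first row, using the Carnahan-Starling-plus-mean-field pressure implied by Eq. (8); the coexistence densities are placeholder inputs, so the number produced is only indicative of the sign and order of magnitude rather than a reproduction of the tabulated value.

import numpy as np

eps, sigma, kT = 1.0, 1.0, 0.7
alpha = -(16.0/9.0)*np.pi*eps*sigma**3

def f_bulk(rho):
    """Free-energy density of the uniform fluid (Carnahan-Starling + mean-field attraction)."""
    eta = np.pi*rho*sigma**3/6.0
    f_hs = kT*(np.log(rho*sigma**3) - 1.0 + eta*(4.0 - 3.0*eta)/(1.0 - eta)**2)
    return rho*f_hs + alpha*rho**2

def pressure(rho, mu):
    """p(rho) = mu*rho - f_bulk(rho), cf. Eq. (8)."""
    return mu*rho - f_bulk(rho)

rho_g, rho_l = 0.01, 0.70                                    # placeholder coexistence densities
mu_sat = (f_bulk(rho_l) - f_bulk(rho_g))/(rho_l - rho_g)     # common-tangent slope
p_ref = pressure(rho_g, mu_sat)                              # pressure at saturation

gamma_inf, chi = 0.544, 4.0                                  # linear-profile row of Table I
z = np.linspace(-chi/2, chi/2, 2001)
rho_lg = 0.5*(rho_l + rho_g) - (rho_l - rho_g)*z/chi         # linear interfacial profile

integrand = (pressure(rho_lg, mu_sat) - p_ref)*z
delta_inf = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(z))/gamma_inf
print(f"delta_inf ~ {delta_inf:+.3f} sigma")                 # Table I quotes -0.07 sigma for this row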
The occurrence of the undersaturation pressure and the Laplace pressure on the left-hand-side of Eq. (50) suggests a certain equivalence between the two systems of a planar and a spherical symmetry once the sum of the two pressures is fixed. In Fig. 10 we test this equivalence on the level of a density profile, where DFT results corresponding to the planar and the spherical case are compared, such that ∆ρ|∆µ| = 2γ j lg (∞)/R, with j = {SIA, SKA}. A high value of γ lg (∞) as given by SKA must now be compensated by a fairly large R. As we have seen in Fig. 6, the high value of R means that the saturation line ∆µ = 0 is crossed by the adsorption isotherm at large ℓ, in agreement with the result depicted in Fig. 9. However, for a given R, ℓ as obtained by SKA is underestimated, which follows from (51) with γ lg (∞) = γ SKA lg (∞) which is also consistent with the physical observation that high surface tension inhibits growth of the liquid film.
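The equivalence exploited in Fig. 10 amounts to a simple mapping between the Laplace term and a planar undersaturation, cf. Eq. (50): a sphere of radius R held at saturation should host approximately the same film as a planar wall at ∆µ = −2γ_lg(∞)/(∆ρR). A small bookkeeping sketch, with γ_lg(∞) and ∆ρ as placeholder inputs:

def equivalent_dmu(R, gamma_inf, drho):
    """Planar undersaturation equivalent to a sphere of radius R at saturation, Eq. (50)."""
    return -2.0*gamma_inf/(drho*R)

def equivalent_radius(dmu, gamma_inf, drho):
    """Sphere radius whose Laplace term matches a planar undersaturation dmu < 0."""
    return 2.0*gamma_inf/(drho*abs(dmu))

gamma_SIA, gamma_SKA, drho = 0.524, 1.060, 0.69      # eps/sigma^2 and 1/sigma^3 (placeholders)
R = 100.0
dmu = equivalent_dmu(R, gamma_SIA, drho)
print(f"sphere with R = {R:.0f} sigma  <->  planar wall at dmu = {dmu:.5f} eps (SIA value)")
print(f"the same shift with the SKA surface tension requires R = "
      f"{equivalent_radius(dmu, gamma_SKA, drho):.0f} sigma")

The roughly twofold larger radius required when the SKA surface tension is used reflects the same overestimate discussed above.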
Note that these results are not in conflict with previous studies [8], where the SKA has been applied for drying on a spherical hard wall and very good agreement was obtained with DFT computations. This is because in Ref. [8] the "exact" (i.e. obtained from DFT computations) liquid-vapor surface tension was implemented into SKA with a view to verify the correctness of its functional form. Here, we show that the coarse-grained effective Hamiltonian approach is capable of a quantitatively reliable prediction of the adsorption phenomena on a spherical wall (for a sufficiently large R), if the restriction of the sharp liquid-gas interface is dropped. However, the price we have to pay for, is one more parameter (compared to SKA) that steps into the theory.
V. SUMMARY AND CONCLUSIONS
We have re-examined the properties of a well known coarse-grained interfacial Hamiltonian approach, originally proposed by Dietrich [1] for the study of wetting phenomena on a planar substrate and based on a sharpkink approximation (SKA). SKA relies on approximating the density profile by a piecewise constant function and has proved to provide significant insight into interfacial phenomena as it is mathematically tractable and gives reliable results for a wide spectrum of problems. This theory is phenomenological in its origin, but a link with a microscopic density functional theory (DFT) can be made, which allows to express all the necessary quantities in terms of fluid-fluid and fluid-substrate interaction parameters. Comparison with numerical DFT reveals that SKA provides a fully satisfactory approach to the theory of complete wetting on a planar surface.
One of the aims of this study was to demonstrate that for a spherical geometry the prediction quality of SKA regarding interfacial properties and wetting characteristics is limited. More specifically, we demonstrated that SKA satisfactorily determines the functional form of the asymptotic behaviour of the film thickness for large radii of the substrate but leads to a significant quantitative disagreement in the prediction of the adsorbed film thickness when compared against numerical DFT. The source of the deviation is the presence of the Laplace pressure that is not quantitatively captured within the framework of SKA. This contribution originates in the dependence of the free energy of the the liquid-gas interface on a position of a dividing surface, a property that is absent in the planar case.
We then showed that the properties of the effective interfacial Hamiltonian approach can be substantially improved if SKA is replaced by a soft-interface approximation (SIA), where the assumption of the sharp liquid-gas interface is replaced by a less restrictive approximation in which the interface is treated as a continuous function of the density distribution. We demonstrated that SIA allows for mathematical scrutiny as it is still analytically tractable, e.g. it provides the curvature expansion of the surface tensions (non-analytic in the wall curvature) with the leading-order term proportional to σ/R. Moreover, it allows to express the corresponding coefficient, the Tolman length, in a fairly simple manner and the values it predicts for the Tolman length are in a reasonable agreement with the latest simulation results. This is in contrast with SKA, where the linear term in the surface tension expansion is missing, i.e. the Tolman length vanishes. This observation is in a full agreement with the conclusion of Fisher and Wortis [28], since SKA treats the fluid in a symmetric way, and thus the Tolman length must disappear as for the Ising-like models. In other words, according to SKA, the surface tension of a large drop is equivalent to the one of a bubble, provided the density profiles of the two systems are perfectly antisymmetric in the planar limit. This is no more true for SIA, due to the asymmetry of the "local" contributions to the surface tension, i.e. the first term on the right-hand-side of Eq. (38).
Furthermore, comparison with our numerical DFT revealed that the SIA results for the film thickness as a function of the wall radius offer a drastic improvement over the ones obtained from SKA. This follows from the fact that the surface tension of the planar liquid-gas interface according to SKA is overestimated, which in turn underestimates the interface growth.
It should be emphasized that all the theoretical approaches we have considered in this work are of a mean-field character, i.e. do not properly take into account the interfacial fluctuations (capillary waves) at the liquid-gas interface. However, for our fluid model of a power-law interaction these fluctuations are not expected to play any significant role, since the upper critical dimension associated with the considered system is d_c* = 2 [34]. Nevertheless, what one has to take into account in order to obtain the correct critical behavior is the broadening of the interface at the critical region. Evidently, this feature is not provided by SKA. Consequently, within SKA the liquid-gas surface tension vanishes as t = 1 − T/T_c [5]. In contrast, the SIA provides the expected mean-field behavior γ_lg(∞) ∼ t^{3/2}, as it is able to capture the interface broadening near the critical point (see Fig. 11).
The SIA developed here, can be naturally extended by "softening" the wall-liquid interface in an analogous way as done for the liquid-vapor interface. However, such a modification would have presumably only negligible impact on the prediction of the thickness of the adsorbed liquid film, since the contribution to the excess free energy from the wall-liquid surface tension has no ℓ-dependence and the change of the binding potential is expected to be small. On the other hand, it may be interesting to find the influence of this refinement on quantities such as the density profile at contact with the wall. For this purpose a non-local DFT (e.g. Rosenfeld's fundamental measure theory) would be needed though [8,35,36].
We also note that despite our restriction to a model of spherical symmetry, our conclusions should be relevant for general curved geometries and should capture some of the qualitative aspects of wetting on non-planar substrates. Of particular interest would be the extension of this study to spatially heterogeneous, chemical or topographical substrates. Such substrates have a significant effect on the wetting characteristics of the solid-liquid pair (e.g. Refs. [37][38][39][40][41][42]).
Solving Eq. (A1) will only give one density profile ρ for each chemical potential µ. However, in the case of a prewetting transition, there can be multiple solutions for the same chemical potential. From these solutions, only one is stable, whereas the other solutions are metaor unstable (see also Sec. III A). In order to compute the full bifurcation diagram of the set of density profiles over the chemical potential, a pseudo arc-length continuation scheme is developed similar to the one employed by Salinger and Frink [43].
More specifically, we introduce an arc-length parametrization such that (µ(s), ρ(s)) with s ∈ R is a connected set of solutions of condition (A1) and where we have included the chemical potential µ as an additional variable:
g (µ, ρ) ! =0. (A4)
The main idea of the continuation scheme is to trace the set of solutions along the curve parametrized by s. Assume that a point (µ n , ρ n ) at position s n on the curve is given, where n is the step of the continuation scheme being solved for. First, the tangent vector dµ ds , dρ ds at position s n is computed. This is done by differentiating g(s):=g (µ(s), ρ(s)) with respect to s. From (A4), it is known that g is a constant equal to zero on the curve of solutions (µ(s), ρ(s)). Hence, the differential dg ds vanishes:
dg ds = ∂g ∂µ J · dµ ds dρ ds = 0,(A5)
where J is the Jacobian as defined in (A3) and where, x = (µ, ρ). xT is the tangent vector in x n . By following the curve of solutions in the direction of the tangent vector, the pseudo arc-length continuation scheme is able to trace the curve of solutions through turning points with respect to the parameter µ.
\frac{\partial g_i}{\partial\mu} = -1 + \frac{\mathrm{d}\rho_g}{\mathrm{d}\mu}\,\Psi_{\mathrm{Pla}}(z_N - z_i). \qquad (A6)
The second term takes into account that ρ g for the density at z > z N depends on the chemical potential. In our computations, we have approximated ∂gi ∂µ by −1. (A5) is the defining equation for the tangent vector (µ n T , ρ n T ) = dµ ds , dρ ds . We remark that this homogeneous system of linear equations leaves one degree of freedom, as we only have N + 1 equations, but N + 2 variables, (µ T , ρ T ). An additional equation is then used to maintain the direction of the tangent vector on the curve of solutions: In a second step, an additional equation for a point at the stepsize θ away from (µ n , ρ n ) and in the direction of the tangent vector µ n−1 T ρ n−1 T T is set up. For this purpose we introduce a scalar product, which takes into account the discretization of the density profile into N intervals of length ∆z:
(µ 1 , ρ 1 ) | (µ 2 , ρ 2 ) :=µ 1 µ 2 + · · · (A7) · · · + ∆z 2 N j=0 (2 − δ j0 − δ jN ) · ρ 1j ρ 2j .
The norm with respect to this scalar product is defined as:
(µ, ρ) := (µ, ρ) | (µ, ρ) 1/2 .
The curve of solutions (µ(s), ρ(s)) is now parameterized by the arc-length with respect to the norm given above, Linearizing the norm around s n and making use of the approximate tangent vector (µ T , ρ T ) at s n , one obtains
(µ n T , ρ n T ) | (µ(s n + θ) − µ(s n ), ρ(s n + θ) − ρ(s n )) = θ,(A10)
where we have made use of the normalized tangent vector such that
(µ T , ρ T ) = 1. (A11)
Inserting µ n+1 , ρ n+1 for (µ(s n + θ), ρ(s n + θ)) into (A10) leads to the additional equation for the next point on the curve of solutions:
K n µ n+1 , ρ n+1 := (A12) (µ n T , ρ n T ) | µ n+1 − µ n , ρ n+1 − ρ n − θ ! = 0,
For a geometric interpretation of Eq. (A12) see Fig. 12.
To obtain µ n+1 , ρ n+1 , (A12) is solved together with (A4). This is done using a Newton scheme. In each Newton step, the following system of linear equations is solved:
µ n T (ρ n T ) T dg dµ J · ∆µ m ∆ρ m = K n (µ n,m , ρ n,m ) g(µ n,m , ρ n,m ) ,(A13)
where we are considering the n-th step of the continuation scheme and the m-th step of the Newton method, such that ∆µ m :=µ n,m+1 − µ n,m and ∆ρ m :=ρ n,m+1 − ρ n,m . Furthermore, we have made use of ρ n T,j :=
∆z 2 (2 − δ j0 − δ jN ) ρ n T,j .
Finally, (A13) is solved using a conjugate gradient method, where the Jacobian (A3) of the system is approximated by introducing a cutoff of 5 molecular diameters for the intermolecular potential Φ Pla .
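The predictor-corrector logic of Eqs. (A4)-(A13) can be illustrated on a scalar toy problem with a fold, which is exactly the situation where naive continuation in µ alone fails. The sketch below is our own minimal illustration, not the authors' DFT code: g plays the role of the residual of Eq. (A4), the tangent is obtained from the linearized system as in Eq. (A5), and the corrector solves g = 0 together with the arc-length constraint K of Eq. (A12) by Newton iteration.

import numpy as np

def g(mu, rho):
    """Toy residual with two turning points (plays the role of Eq. (A4))."""
    return rho**3 - rho - mu

def jac(mu, rho):
    """Partial derivatives (dg/dmu, dg/drho)."""
    return -1.0, 3.0*rho**2 - 1.0

def tangent(mu, rho, prev=None):
    """Unit tangent of the solution curve, oriented along the previous tangent (cf. Eq. (A5))."""
    g_mu, g_rho = jac(mu, rho)
    t = np.array([g_rho, -g_mu])            # satisfies g_mu*t[0] + g_rho*t[1] = 0
    t /= np.linalg.norm(t)
    if prev is not None and np.dot(t, prev) < 0.0:
        t = -t                              # keep marching in the same direction through folds
    return t

def continuation(mu0, rho0, theta=0.05, steps=200):
    """Pseudo arc-length continuation: predictor along the tangent, Newton corrector (cf. Eq. (A13))."""
    x, t, path = np.array([mu0, rho0]), None, []
    for _ in range(steps):
        t = tangent(*x, prev=t)
        y = x + theta*t                     # predictor step of length theta
        for _ in range(20):                 # corrector: solve g = 0 and t.(y - x) = theta
            g_mu, g_rho = jac(*y)
            F = np.array([g(*y), np.dot(t, y - x) - theta])
            y = y - np.linalg.solve(np.array([[g_mu, g_rho], [t[0], t[1]]]), F)
            if abs(F[0]) < 1e-12:
                break
        x = y
        path.append(x.copy())
    return np.array(path)

curve = continuation(mu0=-2.0, rho0=-1.521)
print("rho range traced:", curve[:, 1].min(), "to", curve[:, 1].max())
print("passed both folds:", curve[:, 1].max() > 1/np.sqrt(3))

In the DFT setting, µ and the discretized density profile take the place of (µ, ρ), the scalar products are weighted by the grid spacing as in Eq. (A7), and the linear systems are solved iteratively, but the structure of the algorithm is the same.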
where V A ∩ V B = 0 and V A ∪ V B = R 3 . The convenience of the expression for the excess grand potential as given by (7) becomes evident now, as for ρ A = ρ l , ρ B = ρ g and no external field, only the second term in (7) matters. One then gets an immediate result for the liquid-gas surface tension,
γ SKA lg = Ω ex A = − (ρ l − ρ g ) 2 A I(V A , V B ) ,(B2)
where
I(V A , V B ) ≡ 1 2 VA VB φ(|r 1 − r 2 |)dr 1 dr 2 .(B3)
For the surface tension of a planar interface we have
V A = V z<0 and V B = V z≥0 such that I(V z<0 , V z≥0 ) A = 1 2 0 −∞ ∞ 0 Φ Pla (|z − z ′ |)dz ′ dz ,
with Φ Pla defined by (11). Thus, for the liquid-gas surface tension we obtain:
γ lg (∞) = − ∆ρ 2 2 0 −∞ ∞ 0 Φ Pla (|z − z ′ |) dz ′ dz = 3 4 π∆ρ 2 εσ 4 .(B4)
In the case of a spherical symmetry, i.e. a drop of liquid of radius R, V A = {r ∈ R 3 : |r| < R} and V B = {r ∈ R 3 : |r| ≥ R} the surface tension becomes
\gamma_{lg}(R) = -\frac{\Delta\rho^2\, I(V_{r<R},V_{r\geq R})}{4\pi R^2} = -\frac{\Delta\rho^2}{2}\int_{R}^{\infty}\int_{0}^{R}\left(\frac{r}{R}\right)^2\Phi_{\rm Sph}(r,r')\,dr'\,dr = -\frac{\Delta\rho^2}{2}\int_{R}^{\infty}\left(\frac{r}{R}\right)^2\Psi_R(r)\,dr = \gamma_{lg}(\infty)\left[1 - \frac{2}{9}\,\frac{\ln(R/\sigma)}{(R/\sigma)^2} + O\!\left((\sigma/R)^2\right)\right], \qquad (B5)
where ∆ρ = ρ_l − ρ_g and \Phi_{\rm Sph}(r,r') \equiv \int_{\partial B_{r'}}\phi(|\mathbf r - \mathbf r'|)\,d\mathbf r' can be advantageously expressed in terms of Φ_Pla:

\Phi_{\rm Sph}(r,r') = \int_0^{2\pi}\!\!\int_0^{\pi}\phi(|\mathbf r - \mathbf r'|)\,r'^2\sin\vartheta'\,d\vartheta'\,d\varphi'
= 2\pi r'^2\int_0^{\pi}\phi\!\left(\sqrt{r^2 - 2rr'\cos\vartheta' + r'^2}\right)\sin\vartheta'\,d\vartheta'
= \frac{\pi r'}{r}\int_{(r-r')^2}^{(r+r')^2}\phi\!\left(\sqrt{t}\right)dt
= \frac{\pi r'}{r}\left[\int_{(r-r')^2}^{\infty}\phi\!\left(\sqrt{t}\right)dt - \int_{(r+r')^2}^{\infty}\phi\!\left(\sqrt{t}\right)dt\right]
= \frac{2\pi r'}{r}\left[\int_0^{\infty}\phi\!\left(\sqrt{(r-r')^2+u^2}\right)u\,du - \int_0^{\infty}\phi\!\left(\sqrt{(r+r')^2+u^2}\right)u\,du\right]
= \frac{r'}{r}\big(\Phi_{\rm Pla}(|r-r'|) - \Phi_{\rm Pla}(|r+r'|)\big), \qquad (B6)
and for r > R,

\Psi_R(r) \equiv \int_0^R\Phi_{\rm Sph}(r,r')\,dr' = \frac{\pi\varepsilon\sigma^4}{3r}\left[\frac{\sigma^8}{30}\left(\frac{r+9R}{(r+R)^9} - \frac{r-9R}{(r-R)^9}\right) + \sigma^2\left(\frac{r-3R}{(r-R)^3} - \frac{r+3R}{(r+R)^3}\right)\right], \quad R+\sigma < r. \qquad (B7)

Note that expression (B5) gives a vanishing Tolman length.
Binding potential
The binding potential of a system possessing two interfaces is the surface free energy per unit area of the system minus the contribution due to the surface tensions of the two interfaces. It expresses an effective interaction between the interfaces induced by the attractive forces. If, analogously to the analysis above, we define three disjoint subspaces V W , V A , and V B , such that V W ∪ V A ∪ V B = R 3 , the density distribution of the wall-liquid-gas system within SKA is
\rho(\mathbf r) = \begin{cases} 0, & \mathbf r \in V_W \\ \rho_l, & \mathbf r \in V_A \\ \rho_g, & \mathbf r \in V_B, \end{cases} \qquad (B8)
which when substituted into (7) gives for the excess grand potential:

\Omega_{\rm ex} = -\Delta\mu\,\Delta\rho\, V_A - \rho_l^2\, I(V_W, V_A) - \rho_g^2\, I(V_W, V_B) - (\Delta\rho)^2 I(V_A, V_B) + \int_{V_A\cup V_B} V(\mathbf r)\,\rho(\mathbf r)\, d\mathbf r. \qquad (B9)

We now rearrange the terms in (B9), such that

\frac{\Omega_{\rm ex}(\ell)}{A} = -\Delta\mu\,\Delta\rho\,\frac{V_A}{A} + \gamma^{\rm SKA}_{wl} + \frac{A'}{A}\,\gamma^{\rm SKA}_{lg} + w^{\rm SKA}(\ell), \qquad (B10)

where A = \int_{\partial V_W} dS is the surface of the wall and A' = \int_{\partial(V_W\cup V_A)} dS is the surface of the liquid-gas interface. We obtain
\gamma^{\rm SKA}_{wl} = \frac{1}{A}\left[-\rho_l^2\, I(V_W, V_A\cup V_B) + \rho_l \int_{V_A\cup V_B} V(\mathbf r)\, d\mathbf r\right], \qquad (B11)

\gamma^{\rm SKA}_{lg} = -\frac{1}{A'}\,(\Delta\rho)^2\, I(V_W\cup V_A, V_B), \qquad (B12)
and the binding potential w SKA involving the remaining contribution
w^{\rm SKA}(\ell) = \frac{1}{A}\left[2\rho_l\,\Delta\rho\, I(V_W, V_B) - \Delta\rho \int_{V_B} V(\mathbf r)\, d\mathbf r\right]. \qquad (B13)
Having obtained the expressions of I(X, Y) for systems possessing translational or spherical symmetry, we can evaluate the binding potential in the planar case by making use of V_W = R² × (−∞, δ], V_A = R² × (δ, ℓ) and V_B = R² × [ℓ, ∞):

w^{\rm SKA}(\ell)\big|_{\rm plane} = \Delta\rho\left[\rho_l\int_{\ell-\delta}^{\infty}\!\!\int_{z}^{\infty}\Phi_{\rm Pla}(z')\,dz'\,dz - \int_{\ell}^{\infty}V_{\infty}(z)\,dz\right] = -\frac{A}{12\pi\ell^2}\left[1 + \left(2 + 3\Big(1 - \frac{\rho_w\varepsilon_w\sigma_w^6}{\rho_l\varepsilon\sigma^6}\Big)\right)\frac{\delta}{\ell} + O\big((\delta/\ell)^3\big)\right]. \qquad (B14)
In the spherical case we make use of V_W = {r ∈ R³ : |r| ≤ R + δ}, V_A = {r ∈ R³ : R + δ < |r| < R + ℓ} and V_B = {r ∈ R³ : |r| ≥ R + ℓ} to obtain

w^{\rm SKA}(\ell; R)\big|_{\rm sphere} = w^{\rm SKA}(\ell;\infty)\left(1 + \frac{\ell}{R}\right), \qquad (B15)
where we have neglected terms O((δ/ℓ)³, δ/R, ln(ℓ/R)(R/ℓ)²).

Making use of (B6), we can express the double integral in (C4) as

\frac{\sigma^2}{\varepsilon}\int_0^{\infty}\!\!\int_0^{\infty} h_R(r,r')\,\Phi_{\rm Sph}(r,r')\left(\frac{r}{R}\right)^2 dr'\,dr
= \frac{\sigma^2}{\varepsilon}\int_{-R}^{\infty}\!\!\int_{-R}^{\infty} h(r,r')\big(\Phi_{\rm Pla}(|r-r'|) - \Phi_{\rm Pla}(|2R+r-r'|)\big)\left(1+\frac{r'}{R}\right)\!\left(1+\frac{r}{R}\right)dr'\,dr
= \frac{\sigma^2}{\varepsilon}\int_{-R}^{\infty}\!\!\int_{-R}^{\infty} h(r,r')\,\Phi_{\rm Pla}(|r-r'|)\left(1+\frac{r'}{R}\right)\!\left(1+\frac{r}{R}\right)dr'\,dr + O\big((\sigma/R)^2\big)
= \frac{\sigma^2}{\varepsilon}\int_{-R}^{\infty}\!\!\int_{-R}^{\infty} h(r,r')\,\Phi_{\rm Pla}(|r-r'|)\left(1+\frac{r+r'}{R}\right)dr'\,dr + O\!\left(\frac{\ln(R/\sigma)}{(R/\sigma)^2}\right)
= \frac{\sigma^2}{\varepsilon}\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} h(r,r')\,\Phi_{\rm Pla}(|r-r'|)\left(1+\frac{r+r'}{R}\right)dr'\,dr + O\!\left(\frac{\ln(R/\sigma)}{(R/\sigma)^2}\right). \qquad (C6)
Comparison with the double integral in (C2) then yields σ 2 εR where A = 4πR 2 . When this is subtracted from the surface grand potential (7), which can be written as
\frac{\Omega_{\rm ex}}{A} = \frac{1}{A}\Big([V_W V_A] + [V_W(V_{AB}\cup V_B)] + [V_A(V_{AB}\cup V_B)] + \tfrac{1}{2}[V_{AB}V_{AB}] + [V_{AB}V_B] - \int_{V_{AB}}\big(p(\rho_{lg}(\mathbf r)) - p(\rho_{\rm ref}(\mathbf r))\big)\, d\mathbf r + \int \rho(\mathbf r)\,V(\mathbf r)\, d\mathbf r\Big), \qquad (C10)
one obtains for the binding potential:
w^{\rm SIA} = \frac{1}{A}\Big([V_W(V_{AB}\cup V_B)] - [V_W(V_{AB}\cup V_B)]_{wl} - [V_W(V_{AB}\cup V_B)]_{lg} + \int V(\mathbf r)\big(\rho(\mathbf r) - \rho_{wl}(\mathbf r)\big)\, d\mathbf r\Big).
In spherical coordinates, the binding potential reads:
w^{\rm SIA} = \int_0^{R+\delta}\!\!\int_{R+\ell-\chi/2}^{\infty}\left(\frac{r}{R}\right)^2 \rho_l\big(\rho_l - \rho(r')\big)\,\Phi_{\rm Sph}(r,r')\, dr'\, dr + \int_{R+\ell-\chi/2}^{\infty}\big(\rho(r) - \rho_l\big)\, V_R(r)\left(\frac{r}{R}\right)^2 dr. \qquad (C11)
FIG. 1. Plots of surface tension as a function of dimensionless temperature, T/Tc. Solid line: numerical DFT results of our model scaled with ε/kB = 119.8 K and σ = 3.4 Å; triangles: computational results by Toxvaerd for a 12-6 LJ fluid using the Barker-Henderson perturbation theory.

FIG. 2. The upper plot shows the deviation of the chemical potential from its saturation value at prewetting (crosses), and at the left (open squares) and right (filled squares) saddle nodes of bifurcation as a function of temperature. The dashed line marks the locus of the chemical potential at saturation for the given temperature, ∆µ = 0. The solid line is a fit to −∆µpw(T)/(kBTw) = C((T − Tw)/Tw)^{3/2}, where the wetting temperature is kBTw = 0.621ε and the prewetting critical temperature is kBTpwc = 0.724ε. The resulting coefficient is C = 0.77. The lower plot shows scaled prewetting phase diagrams for different systems. The circles are DFT calculations for an attractive wall with σw = 1.25σ and ρwεw = 0.8ε/σ³ (open circles) and ρwεw = 0.75ε/σ³ (filled circles).

FIG. 3. The upper graph depicts an ℓ-∆µ bifurcation diagram for kBT = 0.7ε for a wall with ρwεw = 0.8ε/σ³ and σw = 1.25σ. ∆µ is the deviation of the chemical potential from its saturation value, µsat. The prewetting transition, marked by the dashed line, occurs at chemical potential ∆µpw = −0.022ε. The inset subplots show the density ρσ³ as a function of the distance z/σ from the wall. The lower graph shows the excess grand potential Ωex/ε as a function of ∆µ/ε in the vicinity of the prewetting transition.

FIG. 4. Log-log plot of the film thickness as a function of deviation of the chemical potential from saturation, ∆µ, for kBT = 0.7ε and wall parameters ρwεw = 0.8ε/σ³, σw = 1.25σ. The crosses are results from DFT computations. The solid line is the analytical prediction in Eq. (27) obtained from SKA.

FIG. 7. Numerical verification of Eq. (50). The film thickness ℓ is fixed and corresponds to the adsorption ΓR = 3.905/σ². The solid line corresponds to the analytical result, ∆µ − 2γ^SIA_lg(∞)/(∆ρR) = Cε, where γ^SIA_lg(∞) = 0.524ε/σ², see

FIG. 8. Plot of a dimensionless planar liquid-gas surface tension for the liquid-gas interface approximation ρ(z) = (ρl + ρg)/2 − (∆ρ/2) tanh(αz/σ) for kBT = 0.7ε as a function of the steepness parameter α. The upper dashed line is the surface tension obtained from SKA, whereas the lower dashed line displays the surface tension obtained from numerical DFT.

FIG. 9. Film thickness at saturation (∆µ = 0) as a function of the wall radius. The symbols correspond to the numerical DFT results. The dashed line shows the prediction according to Eq. (51), where γ^SIA_lg(∞) = 0.524ε/σ² (see

FIG. 10. Density profiles of the fluid adsorbed at the spherical walls of radii R = 104.1σ (dashed) and R = 210.6σ (dashed-dotted) in a saturated state and at the planar wall (solid line) in an undersaturated state, ∆µ = −0.015ε. The wall radii correspond to the equality 2γ^j_{lg,∞}/R = ∆ρ|∆µ| for j = SIA (dashed) and j = SKA (dashed-dotted). For kBT = 0.7ε. The wall parameters are ρwεw = 0.8ε/σ³ and σw = 1.2σ.

FIG. 11. Plot of liquid-gas surface tension vs. t = 1 − T/Tc. The squares are the result of SIA, where a simple linear interpolant has been used to model the interface density profile. The surface tension has been obtained by minimizing the grand potential with respect to the interface width χ. The solid line is a fit to γlg(∞)σ²/ε = Ct^{3/2}, where the resulting coefficient is C = 3.4. The inset shows a plot of the interface width χ/σ over t. The solid line is a fit to χ = Cχ t^{−α}, where Cχ = 2.0 and α = 0.57.

FIG. 12. Sketch of one iteration step of the continuation scheme. x^n and x^{n+1} are consecutive points of the iteration, x_T is the tangent vector of the previous iteration.
TABLE I. Planar surface tensions (C2), Tolman lengths, adsorption isotherms.
ACKNOWLEDGMENTS

We are grateful to Bob Evans for valuable comments and suggestions on an early version of the manuscript and for bringing to our attention Ref. [35]. We thank Antonio Pereira for helpful discussions regarding numerical aspects of our work. This work is supported by

Appendix A: Numerical methods

For our computations we employ dimensionless values. We use σ and ε as the characteristic length and energy scales, respectively.

Density profile

To obtain the equilibrium density profiles, the extremal conditions, Eqs. (9) and (13) for the planar and the spherical case, respectively, must be solved numerically. As both cases are of dimension one, the same numerical method can be applied and we restrict ourselves to presenting the numerical method for the planar wall. The domain R is restricted to an interval of interest [z_0, z_N] with boundary conditions ρ(z) = 0 for z < z_0 and ρ(z) = ρ_g for z > z_N. z_0 ∈ (0, 1) is typically chosen to be 0.6. This can be done due to the repulsive character of the wall. The interval [z_0, z_N] is then divided in a uniform mesh, z_i = z_0 + i·∆z with i = 0, ..., N, where ∆z = (z_N − z_0)/N is the grid size. Subsequently, the integral in Eq. (9) is discretized using a trapezoidal rule inside the domain [z_0, z_N], whereas the analytical expression is used for the integral outside that interval. Hence, we obtain a system of N + 1 nonlinear equations with {ρ_i, i = 0, ..., N} as unknowns. Here δ_ij denotes the Kronecker delta, which we have used in order to consider the grid size at the boundaries. This system of equations is solved using a modified Newton method, where each step ∆ρ is rescaled with a parameter λ such that ρ^{n+1} = ρ^n + λ∆ρ is bounded in (0, 6/π) in order to avoid the singularity of Eq. (8). Note that we have made use of the vector notation ρ := (ρ_0, ..., ρ_N)^T. In each Newton step n, a linear system of equations has to be solved, where the elements of the Jacobian matrix J are given by (A3).

Appendix B: Surface tension and binding potential in the sharp-kink approximation (SKA)

Surface tension

According to Gibbsian thermodynamics, the surface tension is the free energy cost to increase an interface by unit area, i.e. the excess free energy (excess grand potential for an open system) per unit area with respect to the corresponding uniform phases. Within SKA, the liquid-vapour surface tension can be obtained from (7), with

Appendix C: Surface tension, binding potential, and the Tolman length in the soft-interface approximation (SIA)

Surface tension

The surface tension of a planar liquid-gas interface in the soft-interface approximation is obtained by substituting (C1) into (7) with V(r) = 0, where ρ_ref(z) denotes the density of a given bulk phase, i.e. ρ_ref(z) = ρ_l Θ(−z) + ρ_g Θ(z), such that at saturation p(ρ_ref(z)) = p_ref = const. We note that in the above approximation the contribution due to the excess local pressure is generally non-zero (in contrast to the SKA). In the spherical case, the density profile is (C3) and the surface tension of a liquid drop of radius R is given by (C4).

Tolman length

Here we calculate the Tolman length as given by SIA by a direct comparison of (C2) and (C4). We first compare the second terms of (C2) and (C4). For this purpose we define h_R(r, r') ≡ ρ_{lg,R}(r)(ρ_{lg,R}(r') − ρ_{lg,R}(r)) and

h(r, r') ≡ ρ_{lg,∞}(r)(ρ_{lg,∞}(r') − ρ_{lg,∞}(r)). \qquad (C5)

In the following, we focus on the asymmetry of the model due to the contribution of the pressure but for simplicity we assume that the density profile is symmetric.
In this case, the integrand in the above expression is antisymmetric with respect to the reflection transformation r → −r and r' → −r', and the term O(σ/R) vanishes. For the difference of the first terms of (C4) and (C2) we obtain an expression yielding a Tolman length. Note that, in line with [28], the Tolman length does not depend on the choice of the dividing surface.

Binding potential

The extension of the expression for the binding potential, Eq. (B15), as given by SKA is rather straightforward. We consider the density distribution as follows, for V_W a sphere of radius R + δ. Such a model is relevant for the study of wetting on a spherical (R finite) and on a planar (R → ∞) wall. It should be noted that, in contrast to SKA, this density distribution is not piecewise constant, due to the position-dependent part of ρ(r) in the region V_AB. Furthermore, we define the following operators, with ρ_wl(r) ≡ ρ_l χ_{R³\V_W}(r) and ρ_lg(r) ≡ ρ_l χ_{V_W∪V_A}(r) + ρ_lg(r) χ_{V_AB}(r) + ρ_g χ_{V_B}(r), where χ_X(r) is the characteristic function of a subset X. Using this convention, the wall-liquid and liquid-gas surface tensions can be respectively expressed as
S Dietrich, Phase Transitions and Critical Phenomena. C. Domb and J. L. LebowitzAcademic Press2S. Dietrich, in Phase Transitions and Critical Phenom- ena, edited by C. Domb and J. L. Lebowitz (Academic Press, 1988) Chap. 1, p. 2.
. D Bonn, J Eggers, J Indekeu, J Meunier, E Rolley, 10.1103/RevModPhys.81.739Rev. Mod. Phys. 81739D. Bonn, J. Eggers, J. Indekeu, J. Meunier, and E. Rolley, Rev. Mod. Phys. 81, 739 (2009).
M Schick, Liquids at Interfaces, Les Houches Session XLVIII. J.-F. J. J. Charvolin and J. Zinn-JustinAmsterdamElsevierM. Schick, in Liquids at Interfaces, Les Houches Session XLVIII, edited by J.-F. J. J. Charvolin and J. Zinn-Justin (Elsevier, Amsterdam, 1990).
. D E Sullivan, M M T Da Gama, ; C A Croxton, Wiley45New YorkD. E. Sullivan and M. M. T. da Gama (C A Croxton (New York: Wiley), 1986) p. 45.
. T Bieker, S Dietrich, Physica A. 25285T. Bieker and S. Dietrich, Physica A 252, 85 (1998).
. R Ho, A Poniewierski, Phys. Rev. B. 365628R. Ho lyst and A. Poniewierski, Phys. Rev. B 36, 5628 (1987).
. R Evans, J R Henderson, R Roth, J. Chem. Phys. 12112074R. Evans, J. R. Henderson, and R. Roth, J. Chem. Phys. 121, 12074 (2004).
. M C Stewart, R Evans, Phys. Rev. E. 7111602M. C. Stewart and R. Evans, Phys. Rev. E 71, 011602 (2005).
J R Henderson, Fundamentals of Inhomogeneous Fluids. New YorkDekkerJ. R. Henderson, Fundamentals of Inhomogeneous Fluids (Dekker, New York, 1992).
. M Napiórkowski, S Dietrich, Phys. Rev. B. 346469M. Napiórkowski and S. Dietrich, Phys. Rev. B 34, 6469 (1986).
. N D Mermin, Phys. Rev. 1371441N. D. Mermin, Phys. Rev. 137, A1441 (1965).
. R Evans, Adv. Phys. 28143R. Evans, Adv. Phys. 28, 143 (1979).
. J A Barker, D Henderson, J. Chem. Phys. 474714J. A. Barker and D. Henderson, J. Chem. Phys. 47, 4714 (1967).
. G J Throop, R J Bearman, J. Chem. Phys. 422408G. J. Throop and R. J. Bearman, J. Chem. Phys. 42, 2408 (1965).
. S Toxvaerd, J. Chem. Phys. 553116S. Toxvaerd, J. Chem. Phys. 55, 3116 (1971).
. J K Lee, L A Barker, J. Chem. Phys. 601976J. K. Lee and L. A. Barker, J. Chem. Phys. 60, 1976 (1974).
. E A Guggenheim, J. Chem. Phys. 13253E. A. Guggenheim, J. Chem. Phys. 13, 253 (1945).
. R Pandit, M Schick, M Wortis, 10.1103/PhysRevB.26.5112Phys. Rev. B. 265112R. Pandit, M. Schick, and M. Wortis, Phys. Rev. B 26, 5112 (1982).
. E H Hauge, M Schick, Phys. Rev. B. 274288E. H. Hauge and M. Schick, Phys. Rev. B 27, 4288 (1983).
. M Schick, P Taborek, Phys. Rev. B. 467312M. Schick and P. Taborek, Phys. Rev. B 46, 7312 (1992).
. D Bonn, D Ross, Rep. Prog. Phys. 641085D. Bonn and D. Ross, Rep. Prog. Phys. 64, 1085 (2001).
. H Kellay, J Meunier, B Binks, Phys. Rev. Lett. 691220H.Kellay, J. Meunier, and B. Binks, Phys. Rev. Lett. 69, 1220 (1992).
. G Mistura, H Lee, M Chan, J. Low. Temp. Phys. 96221G. Mistura, H. Lee, and M. Chan, J. Low. Temp. Phys. 96, 221 (1994).
. J Rutledge, P Taborek, Phys. Rev. Lett. 69937J. Rutledge and P. Taborek, Phys. Rev. Lett. 69, 937 (1992).
. D Ross, J Phillips, J Rutledge, P Taborek, J. Low. Temp. Phys. 10681D. Ross, J. Phillips, J. Rutledge, and P. Taborek, J. Low. Temp. Phys. 106, 81 (1997).
. A O Parry, C Rascon, L Morgan, J. Chem. Phys. 124151101A. O. Parry, C. Rascon, and L. Morgan, J. Chem. Phys. 124, 151101 (2006).
. R C Tolman, J. Chem. Phys. 17333R. C. Tolman, J. Chem. Phys. 17, 333 (1948).
. M P A Fisher, M Wortis, Phys. Rev. B. 296252M. P. A. Fisher and M. Wortis, Phys. Rev. B 29, 6252 (1984).
. J G Sampayo, A Malijevský, E A Muller, E De Miguel, G Jackson, J. Chem. Phys. 132141101J. G. Sampayo, A. Malijevský, E. A. Muller, E. de Miguel, and G. Jackson, J. Chem. Phys. 132, 141101 (2010).
. B J Block, S K Das, M Oettel, P Virnau, K Binder, J. Chem. Phys. 133154702B. J. Block, S. K. Das, M. Oettel, P. Virnau, and K. Binder, J. Chem. Phys. 133, 154702 (2010).
. A E Van Giessen, E M Blokhuis, J. Chem. Phys. 131164705A. E. van Giessen and E. M. Blokhuis, J. Chem. Phys. 131, 164705 (2009).
. L S Bartell, J. Chem. Phys. B. 10511615L. S. Bartell, J. Chem. Phys. B 105, 11615 (2001).
. E M Blokhuis, J Kuipers, J. Chem. Phys. 12474701E. M. Blokhuis and J. Kuipers, J. Chem. Phys. 124, 074701 (2006).
. R Lipowsky, Phys. Rev. Lett. 521429R. Lipowsky, Phys. Rev. Lett. 52, 1429 (1984).
. M C Stewart, R Evans, J. Phys.: Condens. Matter. 173499M. C. Stewart and R. Evans, J. Phys.: Condens. Matter 17, S3499 (2005).
. E M Blokhuis, J Kuipers, J. Chem. Phys. 12654702E. M. Blokhuis and J. Kuipers, J. Chem. Phys. 126, 054702 (2007).
. L W Schwartz, R R Elley, J. Coll. Int. Sci. 202173L. W. Schwartz and R. R. Elley, J. Coll. Int. Sci. 202, 173 (1998).
. C M Gramlich, A Mazouchi, G M Homsy, Phys. Fluids. 161660C. M. Gramlich, A. Mazouchi, and G. M. Homsy, Phys. Fluids 16, 1660 (2004).
D Quéré, Thin Films of Soft Matter. S. Kalliadasis and U. ThieleWien, NYSpringer115D. Quéré, in Thin Films of Soft Matter, edited by S. Kalliadasis and U. Thiele (Springer-Wien, NY, 2007) p. 115.
. N Savva, S Kalliadasis, Phys. Fluids. 2192192N. Savva and S. Kalliadasis, Phys. Fluids 21, 092192 (2009).
. N Savva, S Kalliadasis, G A Pavliotis, Phys. Rev. Lett. 10484501N. Savva, S. Kalliadasis, and G. A. Pavliotis, Phys. Rev. Lett. 104, 084501 (2010).
. H Bohlen, A Parry, E Díaz-Herrera, M Schoen, Eur. Phys. J. E. 25103H. Bohlen, A. Parry, E. Díaz-Herrera, and M. Schoen, Eur. Phys. J. E 25, 103 (2008).
. A Salinger, L D Frink, J. Chem. Phys. 11710385A. Salinger and L. D. Frink, J. Chem. Phys. 117, 10385 (2003).
| []
|
[
"Proton scalar dipole polarizabilities from real Compton scattering data, using fixed-t subtracted dispersion relations and the bootstrap method",
"Proton scalar dipole polarizabilities from real Compton scattering data, using fixed-t subtracted dispersion relations and the bootstrap method"
]
| [
"B Pasquini \nDipartimento di Fisica\nUniversità degli Studi di Pavia\n27100PaviaItaly\n\nIstituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly\n",
"P Pedroni \nIstituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly\n",
"S Sconfietti \nDipartimento di Fisica\nUniversità degli Studi di Pavia\n27100PaviaItaly\n\nIstituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly\n"
]
| [
"Dipartimento di Fisica\nUniversità degli Studi di Pavia\n27100PaviaItaly",
"Istituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly",
"Istituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly",
"Dipartimento di Fisica\nUniversità degli Studi di Pavia\n27100PaviaItaly",
"Istituto Nazionale di Fisica Nucleare\nSezione di Pavia\n27100PaviaItaly"
]
| []
| We perform a fit of the real Compton scattering (RCS) data below pion-production threshold to extract the electric (αE1) and magnetic (βM1) static scalar dipole polarizabilities of the proton, using fixed-t subtracted dispersion relations and a bootstrap-based fitting technique. The bootstrap method provides a convenient tool to include the effects of the systematic errors on the best values of αE1 and βM1 and to propagate the statistical errors of the model parameters fixed by other measurements. We also implement various statistical tests to investigate the consistency of the available RCS data sets below pion-production threshold and we conclude that there are not strong motivations to exclude any data point from the global set. Our analysis yields αE1 = (12.03 +0.48 −0.54 ) × 10 −4 fm 3 and βM1 = (1.77 +0.52 −0.54 ) × 10 −4 fm 3 , with p-value = 12%. | 10.1088/1361-6471/ab323a | [
"https://arxiv.org/pdf/1903.07952v1.pdf"
]
| 119,342,903 | 1903.07952 | be48f3143aa5270ed5b23bf5cbccec665f922a9b |
Proton scalar dipole polarizabilities from real Compton scattering data, using fixed-t subtracted dispersion relations and the bootstrap method
19 Mar 2019
B Pasquini
Dipartimento di Fisica
Università degli Studi di Pavia
27100PaviaItaly
Istituto Nazionale di Fisica Nucleare
Sezione di Pavia
27100PaviaItaly
P Pedroni
Istituto Nazionale di Fisica Nucleare
Sezione di Pavia
27100PaviaItaly
S Sconfietti
Dipartimento di Fisica
Università degli Studi di Pavia
27100PaviaItaly
Istituto Nazionale di Fisica Nucleare
Sezione di Pavia
27100PaviaItaly
Proton scalar dipole polarizabilities from real Compton scattering data, using fixed-t subtracted dispersion relations and the bootstrap method
19 Mar 2019
We perform a fit of the real Compton scattering (RCS) data below pion-production threshold to extract the electric (αE1) and magnetic (βM1) static scalar dipole polarizabilities of the proton, using fixed-t subtracted dispersion relations and a bootstrap-based fitting technique. The bootstrap method provides a convenient tool to include the effects of the systematic errors on the best values of αE1 and βM1 and to propagate the statistical errors of the model parameters fixed by other measurements. We also implement various statistical tests to investigate the consistency of the available RCS data sets below pion-production threshold and we conclude that there are not strong motivations to exclude any data point from the global set. Our analysis yields αE1 = (12.03 +0.48 −0.54 ) × 10 −4 fm 3 and βM1 = (1.77 +0.52 −0.54 ) × 10 −4 fm 3 , with p-value = 12%.
I. INTRODUCTION
The electric and magnetic static scalar dipole polarizabilities, α E1 and β M1 , respectively, are fundamental structure constants of the proton that can be accessed via real Compton scattering (RCS). In the low-energy expansion of the Compton amplitude, they correspond to the leading-order contributions beyond the structure independent terms that describe the scattering process as if the proton were a pointlike particle with anomalous magnetic moment. When approaching the pion-production threshold, also higher-order terms start competing with the scalar dipole polarizabilities. Therefore, one has to resort to reliable theoretical frameworks for extracting the scalar dipole polarizabilities from experimental data. The most accredited theories, which have been used sofar, are fixed-t dispersion relations (DRs), in the unsubtracted [1][2][3] and subtracted [4][5][6][7][8] formalism, and chiral perturbation theory (χPT) with explicit nucleons and Delta's, in the variant of heavy-baryon χPT (HBχPT) [9][10][11] and manifestly covariant [12,13] χPT (BχPT). Based on these theoretical frameworks, extractions of the scalar dipole polarizabilities have been obtained by fitting different data sets for the unpolarized RCS cross section, and adopting a statistical approach based on the conventional χ 2 -minimization procedure. Recently, a new statistical method has successfully been applied in Ref. [14] to analyze RCS data at low energies and extract values for the energy-dependent scalar dipole dynamical polarizabilities [15,16]. The method is based on the parametric-bootstrap technique, and it is adopted in this work to extract the scalar dipole static polarizabilities, using the updated version of fixed-t subtracted DRs formalism [8] as theoretical framework. Although the bootstrap method is rarely used in nuclear physics [14,[17][18][19][20], it has high potential and advantages [21]. In particular, we will show that it allows us to include the systematic errors in the data analysis in a straightforward way and to efficiently reconstruct the probability distributions of the fitted parameters. We will also pay a special attention to discuss the available sets of RCS data below pion-production threshold. Following recent discussions about the possible presence of outliers in the available data sets [22,23], we perform several tests to judge the data-set consistency.
The manuscript is organized as follows. In Sec. II, we briefly summarize the theoretical framework of fixed-t subtracted DRs. In Sec. III, we describe the main features of the parametric-bootstrap technique, which is applied in Sec. IV to our specific case to fit α E1 and β M1 . We perform the fit in different conditions, i.e., switching on/off the effects of the systematic errors, using the constraint of the Baldin's sum rule for the polarizability sum and including the backward spin polarizability γ π as additional fit parameter. The consistency of the data set is discussed in Sec. V, where we perform different statistical tests to identify the possible presence of outliers. The results of our analysis are summarized in Sec. VI, in comparison with available extractions of the scalar dipole polarizabilities. Our conclusions are drawn in Sec. VII. In App. A, we give the complete list of the existing data sets of RCS below pion-production threshold, and in App. B we discuss the values of the correlations among the fit parameters in all the different conditions discussed in this work.
II. THEORETICAL FRAMEWORK
We consider RCS off the proton, i.e. γ(q) + P(p) → γ(q') + P(p'), where the variables in brackets denote the four-momenta of the participating particles. The familiar Mandelstam variables are s = (p + q)², u = (q − p')² and t = (q − q')², and are constrained by s + u + t = 2M², with M the proton mass. The RCS amplitude can be described in terms of 6 Lorentz invariant functions A_i(ν, t), which depend on the crossing-symmetric variable ν = (s − u)/4M and t. They are free of kinematical singularities and constraints, and because of the crossing symmetry they obey the relation A_i(ν, t) = A_i(−ν, t). Assuming analyticity, they satisfy the following fixed-t subtracted DRs (with the subtraction point at ν = 0) [4,6]

{\rm Re}[A_i(\nu,t)] = A^B_i(\nu,t) + \big[A_i(0,t) - A^B_i(0,t)\big] + \frac{2}{\pi}\,\nu^2\, {\cal P}\!\int_{\nu_0}^{+\infty} d\nu'\, \frac{{\rm Im}_s[A_i(\nu',t)]}{\nu'(\nu'^2-\nu^2)}, \qquad (1)
where ν 0 is the pion-production threshold, and A B i (ν, t) is the Born term, corresponding to the pole diagrams involving a single nucleon exchanged in s-or u-channels and γN N vertices taken in the on-shell regime. In Eq. (1), the subtraction functions A i (0, t) − A B i (0, t) can be determined by once-subtracted DRs in the t channel:
A_i(0,t) - A^B_i(0,t) = A_i(0,0) - A^B_i(0,0) + A^{t{\rm -pole}}_i(0,t) - A^{t{\rm -pole}}_i(0,0) + \frac{t}{\pi}\int_{4m_\pi^2}^{+\infty} dt'\, \frac{{\rm Im}_t[A_i(0,t')]}{t'(t'-t)} + \frac{t}{\pi}\int_{-\infty}^{-2m_\pi^2-4Mm_\pi} dt'\, \frac{{\rm Im}_t[A_i(0,t')]}{t'(t'-t)}, \qquad (2)
where A t−pole i (0, t) represents the contribution of the poles in the t channel, that amounts to the π 0 -pole contribution to the A 2 amplitude. The subtraction constants a i ≡ A i (0, 0) − A B i (0, 0) are directly related to the scalar dipole and leading-order spin polarizabilities, i.e.
\alpha_{E1} = \frac{-a_1 - a_3 - a_6}{4\pi}, \qquad \gamma_{E1E1} = \frac{a_2 - a_4 + 2a_5 + a_6}{8\pi M_N}, \qquad \gamma_{M1E2} = \frac{-a_2 - a_4 - a_6}{8\pi M_N},
\beta_{M1} = \frac{a_1 - a_3 - a_6}{4\pi}, \qquad \gamma_{M1M1} = \frac{-a_2 - a_4 - 2a_5 + a_6}{8\pi M_N}, \qquad \gamma_{E1M2} = \frac{a_2 - a_4 - a_6}{8\pi M_N}, \qquad (3)

with the combination

\gamma_0 \equiv -\gamma_{E1E1} - \gamma_{M1M1} - \gamma_{E1M2} - \gamma_{M1E2}, \qquad \gamma_\pi \equiv -\gamma_{E1E1} + \gamma_{M1M1} - \gamma_{E1M2} + \gamma_{M1E2} \qquad (4)
defining the forward (γ 0 ) and backward (γ π ) spin polarizabilities. We will consider {γ E1E1 , γ M1M1 , γ 0 , γ π } as independent set of spin polarizabilities.
In the actual calculation, the s-channel imaginary parts in Eq. (1) are evaluated using the unitarity relation, taking into account the contribution of the πN intermediate states from the latest version of the MAID pion-photoproduction amplitudes [24] and approximating the contribution from multipion intermediate channels by the inelastic decay channels of the πN resonances, as detailed in Ref. [4]. Furthermore, the t-channel imaginary parts in Eq. (2) are calculated using the γγ → ππ → NN channel as input for the positive-t cut, while the negative-t cut is strongly suppressed for low values of t. The last one can be approximated by the contributions of ∆-resonance and nonresonant πN intermediate states in the s-channel, which are then extrapolated into the unphysical region at ν = 0 by analytical continuation. For more detail in the implementation of the unitarity relations, we refer to the original work [4]. Having determined the contributions of the s-and t-channel integrals, the only remaining unknown are the subtraction constants, i.e. the leading-order static polarizabilities. In principle, all the six leading static polarizabilities can be used as free fit parameters to the Compton observables. However, a simultaneous fit of all them is not feasible at the moment, because of the limited statistics of the available RCS data. In the following, we will limit ourselves to the data sets for unpolarized RCS below pion-production threshold, and consider different variants of fits for two sets of parameters, i.e. {α E1 , β M1 } or {α E1 , β M1 , γ π }. The remaining constants which do not enter the fit are fixed as described in Sec. IV.
III. THE FITTING METHOD
We consider a generic problem, where we have a model prediction T (p) for an observable, which depends on a set p of parameters, and we want to find the optimal setp that better reproduces the available experimental data. We adopt an algorithm based on the parametric bootstrap technique [25], i.e., N Monte Carlo replicas of experimental data are produced and a fit of the set p is performed to every bootstrapped data sample. After every cycle j, the best valuesp j are stored, to obtain N outcomes of the (unknown) probability distribution of p.
In our case we assume that:
1. every data point is Gaussian distributed with a mean equal to the measured value and a standard deviation given by the experimental (statistical) error;
2. data points are affected by systematic errors given by different rescaling factors of the data in each subset;
3. when not explicitly stated otherwise by the experimental groups, every source of systematic error follows an uniform distribution and the published value gives the full estimated interval. If there are more sources, we take the product of such random uniform variables;
4. the sample in every data subset is independent from the other subsets.
This sampling method can then be written in general as
S_{ij} = (1 + \delta_{ij})(E_i + \gamma_{ij}\sigma_i), \qquad (5)
where S ij is a generic bootstrapped point with the index i running over the number of data point (n data ) and j running over the number of replicas (N ). E i is the generic experimental point having an uncertainty σ i , γ ij is the Gaussian normal variable needed for the statistical sampling and δ ij is a box distributed variable that quantifies the effect of the systematic uncertainties for each data subset independently. Considering a generic subset, labeled with k (k runs from 1 to the number of the different data subsets n set ) and composed of n k data points, we take δ ij ∈ U[−∆ k , ∆ k ] for i = 1, . . . , n k , where ±∆ k is the published systematic error and nset k=1 n k = n data . If there are n s different and independent sources of systematic uncertainties, δ ij is the product of all the n s box distributed variables, i.e.,
\delta_{ij} = \prod_{f=1}^{n_s} U[-\Delta_f, \Delta_f].
The systematic sources can be easily excluded from this procedure by just imposing δ ij ≡ 0 in Eq. (5).
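As an illustration of Eq. (5), the short Python/NumPy sketch below generates one bootstrap replica of the data, drawing the Gaussian statistical fluctuations point by point and one multiplicative systematic factor per data subset (a product of uniform variables when several independent sources are quoted). The function and variable names are purely illustrative and do not come from the original analysis code.

import numpy as np

rng = np.random.default_rng()

def make_replica(E, sigma, subsets, sys_sources):
    """E, sigma: measured values and statistical errors (length n_data).
    subsets: list of index arrays, one per data subset.
    sys_sources: for each subset, the list of systematic half-widths Delta_f."""
    gamma = rng.standard_normal(E.size)             # Gaussian variables of Eq. (5)
    delta = np.zeros(E.size)
    for idx, deltas in zip(subsets, sys_sources):
        # one common factor per subset: product of box-distributed variables
        d = np.prod([rng.uniform(-D, D) for D in deltas])
        delta[idx] = d
    S = (1.0 + delta) * (E + gamma * sigma)         # bootstrapped points S_ij
    sigma_rep = (1.0 + delta) * sigma               # rescaled errors, Eq. (7)
    return S, sigma_rep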
The minimization function at the j th iteration is given by
\chi^2_{b,j} = \sum_{i=1}^{n_{\rm data}}\left(\frac{S_{ij} - T_i(\mathbf p)}{\sigma_{ij}}\right)^2, \qquad (6)

where

\sigma_{ij} = (1 + \delta_{ij})\,\sigma_i. \qquad (7)
The minimum in the parameter space can be defined as

\hat\chi^2_{b,j} = \sum_{i=1}^{n_{\rm data}}\left(\frac{S_{ij} - T_i(\hat{\mathbf p}_j)}{\sigma_{ij}}\right)^2, \qquad (8)
wherep j are the best values of the fit parameters p at the j th bootstrap cycle. Repeating this minimization for N cycles, the empirical distribution P(p j ) of thep j random variables gives an estimate of the true probability distribution P(p) that includes the propagation of both statistical and systematic errors of the experimental data. The best value and the standard deviation of p can be then simply obtained as:
\bar{\mathbf p} \equiv \frac{1}{N}\sum_{j=1}^{N}\hat{\mathbf p}_j, \qquad \sigma_{\mathbf p} \equiv \left[\frac{1}{N-1}\sum_{j=1}^{N}(\hat{\mathbf p}_j - \bar{\mathbf p})^2\right]^{1/2}. \qquad (9)
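Schematically, the procedure of Eqs. (6)-(9) amounts to the loop sketched below (Python, with scipy.optimize.minimize used as a generic stand-in for the actual minimizer). The model T(p) and the replica generator make_replica are placeholders for the DR calculation and for the sampling of Eq. (5); none of these names come from the original analysis code, the sketch only illustrates how the empirical distribution of the best-fit parameters is accumulated.

import numpy as np
from scipy.optimize import minimize

def bootstrap_fit(T, p_start, make_replica, N=10000):
    """Collect N best-fit parameter vectors, one per bootstrap replica."""
    p_hat = []
    for _ in range(N):
        S, sigma_rep = make_replica()                   # replica S_ij and errors, Eqs. (5), (7)
        chi2_b = lambda p: np.sum(((S - T(p)) / sigma_rep) ** 2)   # Eq. (6)
        res = minimize(chi2_b, p_start, method="Nelder-Mead")
        p_hat.append(res.x)                             # best values of Eq. (8)
    p_hat = np.array(p_hat)
    p_mean = p_hat.mean(axis=0)                         # Eq. (9)
    p_std = p_hat.std(axis=0, ddof=1)
    return p_hat, p_mean, p_std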
The goodness of this fit procedure can be estimated in the same way as in the standard case, using the value \bar\chi^2 of the so-called χ²-variable, defined as:

\bar\chi^2 = \sum_{i=1}^{n_{\rm data}}\left(\frac{E_i - T_i(\bar{\mathbf p})}{\sigma_i}\right)^2. \qquad (10)
It is worthwhile to notice here thatχ 2 is distributed according to the χ 2 distribution only when δ ij = 0, i.e. when all the E i are independent random gaussian variables.
Within the bootstrap framework, it is also possible to evaluate the expected theoretical probability distribution associated to \bar\chi^2 by replacing S_{ij} in Eq. (6) with

M_{ij} = (1 + \delta_{ij})\big(T_i(\bar{\mathbf p}) + \gamma_{ij}\sigma_i\big), \qquad (11)

and by finding, at each bootstrap cycle, the minimum value \hat\chi^2_{th,j} of the following function

\chi^2_{th,j} = \sum_{i=1}^{n_{\rm data}}\left(\frac{M_{ij} - T_i(\hat{\mathbf p}_j)}{\sigma_{ij}}\right)^2. \qquad (12)
After N bootstrap iterations, we are able to empirically reconstruct the probability distribution P(χ 2 th ) and then to evaluate the final p-value associated to the fit.
It can be easily demonstrated (see [26]) that, when δ ij = 0 in Eq. (11), P(χ 2 th ) coincides with the χ 2 distribution, as expected. In any case, we stress that the bootstrap method allows us to obtain a p-value forχ 2 directly from the evaluated P(χ 2 th ) distribution, also when systematic errors are taken into account in the fit procedure.
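The p-value can then be read off directly from the empirical distribution of χ²_th, e.g. as in the minimal sketch below (Python/NumPy). Here model_replica is assumed to generate the pseudo-data M_ij of Eq. (11) around the best-fit model and chi2_min to return the minimum of Eq. (12) for a given replica; both are illustrative placeholders.

import numpy as np

def bootstrap_pvalue(chi2_obs, model_replica, chi2_min, N=10000):
    """Fraction of theoretical chi2_th values exceeding the observed chi2-bar."""
    chi2_th = np.array([chi2_min(model_replica()) for _ in range(N)])   # Eq. (12)
    return np.mean(chi2_th >= chi2_obs)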
A. Uncertainties on additional model parameters
In the most generic situation, the model T may depend on an additional set of parameters f besides the fit parameters p, i.e., T ≡ T (p, f ). The χ b,j 2 variable of Eq. (6) is consequently modified as
\chi^2_{b,j} = \sum_{i=1}^{n_{\rm data}}\left(\frac{S_{ij} - T_i(\mathbf p, \mathbf f)}{\sigma_{ij}}\right)^2. \qquad (13)
Suppose the values of the parameters f are derived from experimental data and are known within an experimental uncertainty σ f . Within the bootstrap framework, we can easily evaluate how the uncertainties σ f affect the values of the fit parametersp, without using the error-propagation procedure that would require performing numerical derivatives ∂T /∂f . At each j th bootstrap cycle, we can sample the value f j of the model parameters from their known probability distribution, which in the following will be considered to be a Gaussian defined as G[f, σ 2 f ]. Then, we can repeat the procedure described above by replacing T i (p j ) with T i (p j , f j ) in Eq. (8), and evaluate all the relevant fit parameters.
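In practice this amounts to drawing, at each bootstrap cycle, one value of the fixed parameters from their quoted Gaussian distributions before minimizing, as in the short illustrative sketch below (the central values and errors are those adopted later in Sec. IV; the dictionary keys are our own labels).

import numpy as np

rng = np.random.default_rng()

def sample_fixed_parameters():
    """Draw the polarizabilities kept fixed in the fit from G[f, sigma_f^2]."""
    return {
        "gamma_0":          rng.normal(-1.01, 0.13),
        "gamma_E1E1":       rng.normal(-3.5, 1.2),
        "gamma_M1M1":       rng.normal(3.16, 0.85),
        "gamma_pi":         rng.normal(8.0, 1.8),    # only when gamma_pi is not fitted
        "alpha_plus_beta":  rng.normal(13.8, 0.4),   # Baldin's sum rule constraint
    }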
IV. FIT TO RCS DATA
In this section, we apply the fitting method introduced in Sec. III to analyze available RCS data below pionproduction threshold. We use fixed-t subtracted DRs for the model predictions, which contain the leading-order static polarizabilities as free parameters, as explained in Sec. II. We discuss two data sets, corresponding to the FULL and TAPS data sets, as described in App. A. Furthermore, we consider different fit conditions, switching on/off the systematic errors and using two sets of free parameters: i) the scalar dipole polarizabilities, with and without the constraint of the Baldin's sum rule for the polarizability sum α E1 + β M1 , and ii) the scalar dipole polarizabilities constrained by the Baldin's sum rule along with the backward spin polarizability γ π . For the Baldin's sum rule, we use the weighted average over the available evaluations reported in Ref. [27], which coincides also with the value used in the fit of Refs. [11,28,29], i.e., α E1 + β M1 = 13.8 ± 0.4. The remaining parameters of fixed-t subtracted DRs are fixed to the experimental values extracted from double polarization RCS [29], i.e. γ E1E1 = −3.5±1.2 and γ M1M1 = 3.16±0.85, and from the GDH experiments [30,31], i.e. γ 0 = −1.01 ± 0.08 ± 0.10 2 When the backward spin polarizability is not used as fit parameter, we fixed it to the weighted average of the values extracted at MAMI [3], i.e. γ π = −8.0 ± 1.8. Here and in the following, we used the standard convention to exclude the t-channel π 0 -pole contribution from the spin polarizabilities. These contributions amount to γ π π 0 −pole = −46.7 [6],
\gamma^{\pi^0{\rm -pole}}_{M1M1} = -\gamma^{\pi^0{\rm -pole}}_{E1E1} = \frac{1}{4}\,\gamma^{\pi^0{\rm -pole}}_{\pi},

while they vanish in the case of the forward spin polarizability. Finally, for each fitting configuration, we discuss the probability distributions of the fitted parameters and the p-values of the \bar\chi^2 variable. Here and in the following, we use the units of 10^{-4} fm³ for the scalar dipole polarizabilities and 10^{-4} fm⁴ for the spin polarizabilities.
A. Handling the experimental and model errors

We apply the method described above using N = 10000 bootstrap replicas. Within this framework and following the method outlined in Sec. III A, we take into account the effect of the uncertainties of the model parameters, i.e. of the polarizabilities not treated as free parameters in the fit procedure. In particular, we take γ_0 ∈ G[−1.01, 0.13²], γ_{E1E1} ∈ G[−3.5, 1.2²] and γ_{M1M1} ∈ G[3.16, 0.85²]. When keeping the backward spin polarizability fixed, we propagate the error of γ_π using γ_π ∈ G[8.0, 1.8²]. Furthermore, the Baldin's sum rule constraint is implemented using α_{E1} + β_{M1} ∈ G[13.8, 0.4²]. The uncertainties on the fitted α_{E1} and β_{M1} thus automatically include the propagation of the errors of the spin polarizabilities and the Baldin's sum rule. The statistical and systematic uncertainties of the experimental data are taken into account as described in Sec. III, except for the TAPS data points [28]. As discussed in Ref. [23], they are affected by a 5% point-to-point systematic error, and, accordingly, the statistical error of each point is modified as follows
\sigma_{i,{\rm TAPS}} \rightarrow \left[\sigma^2_{i,{\rm TAPS}} + \left(\frac{5}{100}\,E_{i,{\rm TAPS}}\right)^2\right]^{1/2}. \qquad (14)
B. Results
We discuss in this section the results of the fit, performed under several configurations:
• Fit 1: with Baldin's sum rule, and systematic errors excluded: α E1 − β M1 as free parameter;
• Fit 1 ′ : with Baldin's sum rule, and systematic errors included: α E1 − β M1 as free parameter;
• Fit 2: without Baldin's sum rule, and systematic errors excluded: α E1 and β M1 as free parameters;
• Fit 2 ′ : without Baldin's sum rule, and systematic errors included: α E1 and β M1 as free parameters;
• Fit 3: with Baldin's sum rule, and systematic errors excluded: α E1 − β M1 and γ π as free parameters;
• Fit 3 ′ : with Baldin's sum rule, and systematic errors included: α E1 − β M1 and γ π as free parameters.
All these different fits are performed using both the FULL and TAPS data sets. The corresponding results are summarized in Table I and shown in Figs. 1-3. In all the cases, the probability distributions of the fit parameters are very similar to Gaussian functions. A few comments are in order:
• the values of the fitted α E1 and β M1 depend on the choice of the data set, but are all consistent within the uncertainties;
• the sum of the values of α E1 and β M1 from the Baldin-unconstrained fit is well compatible, within the fit errors, with the Baldin's sum rule value;
• the inclusion of systematic errors does not change the central values of the fitted parameters, but increases their uncertainties. This effect is mostly visible for the TAPS data set fitted in the Fit 2 and Fit 2 ′ conditions, while it is reduced for the FULL data set, where the effects of the systematic errors in the different subsets are, at least partially, compensated (see Figs. 1-3).
• when the systematic errors are taken into account, the central values of theχ 2 /dof do not change. However, the corresponding p-values significantly change for the FULL data set since higher values ofχ 2 /dof are more likely to occur. This effect is clearly visible from the cumulative distribution functions (CDFs) ofχ 2 shown in Figs. 4 and 5. When we fit a single data set, as in the case of the TAPS data set, the systematic error becomes a common scale factor for all the data points and it does not change the p-value. Therefore, the main effect of the systematic-error propagation is the increase of the statistical errors on the fitted parameters (see Fig. 3); • the fitted values of γ π in the Fit 3 ′ conditions and with the additional contribution from the π 0 -pole, i.e. γ tot π = γ π + γ π 0 −pole π , are in very good agreement with the values extracted within the fixed-t unsubtracted DR analysis [3,28,34,35]:
LARA [34]: \gamma^{\rm tot}_\pi = -40.9 \pm 0.4 \pm 2.2,
SENECA [35]: \gamma^{\rm tot}_\pi = -39.1 \pm 1.2 \pm 0.8 \pm 1.5,
TAPS [28]: \gamma^{\rm tot}_\pi = -35.9 \pm 2.3,
Fit 3' (FULL): \gamma^{\rm tot}_\pi = -38.11^{+2.85}_{-2.94}. \qquad (15)
The results of Refs. [34,35] are obtained using data above the pion-production threshold, while the result of Ref. [28] is extracted from the complete TAPS data set, ranging up to photon energies of 165 MeV.
From all the results we conclude that the inclusion of the systematic errors in the fitting procedure is very important since, in this case, the p-value associated to theχ 2 changes significantly (see Fig. 4) while the uncertainty on the fitted parameters changes in a much less pronounced way. This behavior can be observed only thanks to the bootstrap method, since it is not possible to compute the correct p-values without resorting to the Monte Carlo replicas, once included the systematic errors.
As mentioned before, theχ 2 parameter obtained in the different fitting conditions is never distributed like a χ 2 probability function. This effect is due to the correlation present among all the points in each subset, and is clearly visible from the CDFs shown in Fig. 4.
V. CONSISTENCY CHECKS ON THE AVAILABLE DATA SET
The scientific community has so far not reached a common agreement on the definition of the data set of proton RCS below pion-production threshold [14,22,23]. As pointed out in Ref. [22] and in Sec. IV B of this work, the values obtained from a fit of α_{E1} and β_{M1} strongly depend on the choice of the data set. In this section, we apply a few basic statistical tests to investigate the consistency of the data set and the possible occurrence of outliers. We will discuss the FULL data set, the TAPS data set and the SELECTED data set, as an example of selection from the complete data set, all below the pion-production threshold as detailed in App. A. In this section, we will use the standard minimization technique and we will not take systematic errors into account, in order to work in a well-established fitting condition and investigate the pure statistical features of the experimental data. We set the Fit 1 condition (i.e., we use α_{E1} − β_{M1} as free parameter and we neglect the systematic errors) and we use the conventional χ² of Eq. (10) as minimization function, without implementing the bootstrap procedure. Therefore, the errors on the fixed spin polarizabilities are not included in the results of the tests, while the uncertainties of α_{E1} + β_{M1} affect the errors of the electric and magnetic polarizabilities as \epsilon_{\alpha_{E1},\beta_{M1}} = \sqrt{\epsilon^2_{\alpha_{E1}+\beta_{M1}} + \epsilon^2_{\alpha_{E1}-\beta_{M1}}}/2. We will refer to this fitting configuration as test-fit. The result of the test-fit applied to the FULL data set leads to the best values α_{E1} = 11.99 ± 0.31 and β_{M1} = 1.81 ± 0.31, which are almost identical to the values given in Table I for the Fit 1 condition applied to the FULL data set. The tiny difference in the central values and the different statistical errors are due to the propagation of the uncertainties of the polarizabilities that are not treated as free parameters in the bootstrap fit.
A. The Jackknife resampling
A possible strategy to discuss the consistency of the data set is the Jackknife, a resampling technique that can be considered as a particular case of the non-parametric bootstrap technique. Given a data set D = {d_i}, i = 1, ..., n, composed of n points, we can define n data subsets by removing one datum at a time, i.e., D_k = D \ {d_k}, where k = 1, ..., n. We then fit the model T(p) to every D_k data set, obtaining a best value of the parameters \hat p_k for each set. From the n-tuple of \hat p_k, we can compute the average \bar p_{\rm Jack} and its sample standard deviation σ_{\rm Jack}. An outlier k is expected to give a result far from the average value, i.e., |\hat p_k − \bar p_{\rm Jack}|/σ_{\rm Jack} ≫ 1. Instead, if there are no evident outliers, we expect that all the variables \hat p_k follow, at least approximately, Gaussian confidence levels [36]. In this way, we can identify possible deviations of a data subset from the other ones.
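A minimal sketch of this leave-one-out procedure is given below (Python/NumPy); fit_once is an illustrative placeholder for the test-fit described above, applied to an arbitrary selection of points.

import numpy as np

def jackknife(E, sigma, fit_once):
    """Return the leave-one-out best values, their mean, sample std and pulls."""
    n = E.size
    p_k = np.array([fit_once(np.delete(E, k), np.delete(sigma, k))
                    for k in range(n)])
    p_mean = p_k.mean(axis=0)
    p_std = p_k.std(axis=0, ddof=1)
    pulls = (p_k - p_mean) / p_std      # points with |pull| >> 1 flag possible outliers
    return p_k, p_mean, p_std, pulls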
We apply the Jackknife to the FULL, TAPS and SELECTED data sets: the best values of α E1 and β M1 versus the index k of the excluded point in each subset are plotted in Fig. 6. In the case of the FULL data set, we note that the statistical fluctuations are well in agreement with the expected Gaussian confidence levels (∼ 95% of the occurrences within the 2σ range). We can then conclude that there is no clear evidence of outliers.
In the case of the SELECTED data set, we obtain very similar results, with less pronounced fluctuations (∼ 98% of the occurrences within the 2σ range). This does not necessarily imply that there is an improvement in the data set. Instead, this behavior may simply reflect the fact that the data points excluded from the set are not "close enough to" the model predictions.
The same test applied to the TAPS data set shows a clear dependence of the values of α E1 and β M1 on the scattering angle. This feature is due to the fact that the data are ordered by increasing scattering angles: when a single datum is removed in the backward region, the value of β M1 decreases (and α E1 increases), since the sensitivity of the unpolarized RCS cross section to α E1 − β M1 is higher in that angular region.
B. Residual analysis
In order to cross-check the stability of the FULL data set, we performed the analysis of the residuals, defined as
\xi_i \equiv \frac{E_i - \hat T_i}{\sigma_i}, \qquad (16)
where E i is the i th experimental datum with the uncertainty σ i , andT i is the model prediction obtained with the best values of the fitted parameters. If the model is able to correctly describe the experimental datum, the value E i can be considered a possible outcome of the probability distribution of T i . In this case, the variable ξ i of Eq. (16) is Gaussian distributed as G[0, 1]. The residual analyses for the FULL and SELECTED data sets are shown in Fig. 7, together with the q-q plots, representing the CDF(ξ i ) vs CDF(z), with z a Gaussian distributed variable according to N [0, 1]. The variable ξ i has mean value and standard deviation in good agreement with the expectations. In the case of the SELECTED data set, we observe again less pronounced statistical fluctuations mostly due to the exclusions of the subsets 1 [37] and 7 [38]. This is also shown by the fact that the CDF(ξ i ) for the SELECTED data set approaches the maximum value of unity faster than in the case of the FULL data set. Anyhow, in both cases we do not observe any significant deviation with respect to the results expected in the case of a normal distribution.
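The residual analysis can be reproduced with a few lines of code, as in the schematic Python/SciPy sketch below: the residuals of Eq. (16) are computed from the best-fit predictions and compared with a standard normal distribution, both through the data for a q-q plot and through a Kolmogorov-Smirnov test (the latter is our own choice of normality check, not necessarily the one used in the original analysis).

import numpy as np
from scipy import stats

def residual_analysis(E, sigma, T_best):
    """xi_i of Eq. (16) and a simple normality check against N[0,1]."""
    xi = (E - T_best) / sigma
    ks_stat, ks_pvalue = stats.kstest(xi, "norm")        # compare with N[0,1]
    # data for a q-q plot: empirical CDF of xi vs Gaussian CDF
    xi_sorted = np.sort(xi)
    ecdf = np.arange(1, xi.size + 1) / xi.size
    gauss_cdf = stats.norm.cdf(xi_sorted)
    return xi, ks_pvalue, (gauss_cdf, ecdf)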
Since there is not a clearly identified source of possible experimental problems that could affect the excluded data, we prefer not to exclude any point and to deal with these cases using the approach outlined below. Given a data set composed by subsets with n set points, we can define for each subset the following variable
\chi^2_{\rm set} \equiv \frac{1}{n_{\rm set}}\sum_{i=1}^{n_{\rm set}}\left(\frac{E_i - \hat T_i}{\sigma_i}\right)^2. \qquad (17)
If the model \hat T_i is able to describe the data well, all the χ²_set values should be fairly close to one. Vice versa, if χ²_set ≫ 1, we cannot automatically deduce that a data subset should be excluded. This parameter is evaluated using the particular model adopted in the fit procedure, and a bias may be introduced by using large values of χ²_set as a criterion to exclude data sets.
We applied this kind of analysis to the FULL data set and the results are shown in Fig. 8. We can notice that most of the subsets have χ 2 set ≈ 1, while the subsets 1 [37] and 7 [38] give higher χ 2 set values. As mentioned before, these subsets are indeed excluded in the definition of the SELECTED data set. However, both data sets have only 4 points each and with this small number of points we can not exclude the occurrence of pure statistical fluctuations.
An alternative method, first suggested in [39], is to rescale the statistical errors of the points of each data subset by a factor χ 2 set and to repeat again the fit procedure (see also [40]). This relies on the assumption that a large χ 2 set value indicates underestimated measurement uncertainties that should be equally attributed to all the points of a given subset. We then obtain new values for the fitted parametersp ′ with the minimum of the χ 2 function equal to 1, by construction. This strategy is again model dependent, but it can be used as an indication for the identification of outliers. If there are no data subsets that behave as outliers and then could determine very different values for the fitted parameters, we would expect thatp ′ ≃p.
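A sketch of this rescaling strategy is given below (Python/NumPy; fit_once is again a generic placeholder for a χ²-minimizer returning the best parameters and the corresponding model predictions). In the sketch the statistical errors of each subset are inflated by the square root of χ²_set, which makes the reduced χ² of that subset equal to one at the previous minimum; this is one common implementation of the prescription and is stated here as an assumption.

import numpy as np

def rescaled_fit(E, sigma, subsets, fit_once):
    """First pass to estimate chi2_set per subset, then refit with inflated errors."""
    p_best, T_best = fit_once(E, sigma)                  # first pass
    sigma_new = sigma.copy()
    for idx in subsets:
        chi2_set = np.mean(((E[idx] - T_best[idx]) / sigma[idx]) ** 2)   # Eq. (17)
        if chi2_set > 1.0:
            sigma_new[idx] *= np.sqrt(chi2_set)          # inflate underestimated errors
    return fit_once(E, sigma_new)                        # second pass with rescaled errors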
In our case, the values of the fitted parameters obtained from the FULL data set with and without rescaling of the statistical errors are consistent within the (large) fit errors, i.e.

no rescaling: \alpha_{E1} - \beta_{M1} = 10.17 \pm 0.47, \qquad (18)
\chi^2_{\rm set} rescaling: \alpha_{E1} - \beta_{M1} = 9.36 \pm 0.50. \qquad (19)
Moreover, if we exclude from the fit the subsets 1 and 7, without rescaling the errors, we obtain the value α E1 − β M1 = 9.01 ± 0.50, which is very similar to the result in Eq. (19) obtained with the rescaling method. Given all these findings, we conclude that there is no clear evidence that these sets are outliers that should be excluded from the fit.
D. Behavior of the minimization function
In order to investigate the effect on the fit results of the exclusion of some data points, we examined the behavior of the minimization function versus the values of the fit parameters α E1 and β M1 . The results for the FULL data set (with and without rescaling the statistical error by a factor χ 2 set ), for the TAPS data set and for the SELECTED data set [23] are shown in Fig. 9.
When outliers are discarded from the fit, we would expect for the χ 2 function a significant reduction of the minimum as well as a more pronounced convexity, corresponding to smaller errors for the fitted parameters. In the case of the SELECTED data set, we indeed observe that the minimum value of the reduced χ 2 function is closer to 1, but the shape of the minimization function is the same as in the case of the FULL data set, i.e., the errors on the fitted parameters remain ultimately the same.
This simple analysis gives another additional hint that there is no clear evidence of the presence of outliers and all the data points of the FULL data set should be included in the fit.
E. Summary of the tests
All the previous consistency tests led us to the conclusion that there are no strong motivations for the exclusion of any data point from the global RCS data set below pion-production threshold. We observed deviations for a few data points at backward scattering angles. We suggest handling these points by rescaling the statistical errors by a factor χ²_set rather than excluding them from the data set. As a matter of fact, the RCS unpolarized cross section has a large sensitivity to α_{E1} − β_{M1} in the backward scattering region, and excluding points in this region can lead to biased results.
We conclude that the main reason of the sizeable uncertainties that are present at the moment in the extraction of the scalar polarizabilities and especially of β M1 are mainly due to the intrinsic limitations (poor accuracy and scarcity) of the data set at our disposal.
VI. AVAILABLE EXTRACTIONS OF RCS SCALAR DIPOLE STATIC POLARIZABILITIES
In Fig. 10, we collect the available results for the extraction of the scalar dipole static polarizabilities from RCS at low energies. The red solid curve show the results from this work, obtained from the bootstrap-based fit using the FULL data set with the constraint of the Baldin's sum rule and taking into account the effects of the systematic errors of the experimental data and the propagation of the statistical errors of the fixed polarizabilities α E1 + β M1 , γ 0 , γ π , γ E1E1 and γ M1M1 (Fit 1 ′ conditions). Within our fitting technique, we are able to evaluate the correlation coefficient ρ αE1−βM1 among α E1 and β M1 : this determines the ellipse-shape in Fig. 10. All the other correlation terms are given in App. B. Numerically, we obtain the following best values
α E1 = 12.03 +0.48 −0.53 , β M1 = 1.77 +0.52 −0.54 ,χ 2 = 1.25 (p-value = 12%), ρ αE1−βM1 = −0.72,(20)
that are in very good agreement with the result obtained using a traditional χ 2 fitting procedure in a fixed-t subtracted DRs framework [7]. The experimental fits shown by black curves have been obtained within unsubtracted DRs [28,41,42]. The light-green band shows the experimental constraint on the difference α E1 − β M1 from Zieger et al. [43]. The green solid curve shows the BχPT predictions of Ref. [12]. The blue solid curve corresponds to the 68% ellipse of the Baldin constrained fit of Ref. [11,44], using the SELECTED data set and the HBχPT framework. These results are in excellent agreement also with the fit within BχPT of Ref. [45]. We also show the latest value from PDG [46] (solid black disk):
α E1 = 11.2 ± 0.4, β M1 = 2.5 ± 0.4.(21)
They differ from the 2012 and earlier editions by inclusion of the data fit analysis within HBχPT [11]. We note that there is a discrepancy between the values obtained in the framework of effective field theories [11,12,23] and the results obtained using DRs, even if they are compatible within the 2σ-range. In order to shed some light on the origin of the difference between the results from the extraction within HBχPT and fixed-t subtracted DRs, we performed some test-fits, in the condition described in Sec. V, using fixed-t subtracted DRs with input from the central values of HBχPT predictions for the spin polarizabilities. The results for the leading-order spin polarizabilities in HBχPT read [11,44] γ E1E1 = −1.1 ± 1.9, γ M1M1 = 2.2 ± 0.5(stat) ± 0.6, γ 0 = −2.6 ± 0.5(stat) ± 1.8, and γ π = 5.6 ± 0.5(stat) ± 1.8, and are quite different from the experimental values used in our DR analysis. On top of that, we noticed a different evaluation for the π 0 -pole contribution calculated in Ref. [11], which is −45.9 for γ π . In Table II, we compare the test-fit values for α E1 and β M1 in the case we use the results of the spin polarizabilities and the π 0 pole from the experimental extraction [29] or the corresponding values from HBχPT [11,44], with the π 0 -pole contribution reported in [11] (results in brackets). This analysis has been performed for both the FULL and SELECTED data sets, in order to investigate the dependence of the results not only on the values of the spin polarizabilities, but also on the choice of the data set (see Ref. [22] for a more comprehensive discussion). If we focus on the central values of β M1 , we notice that the different input for the spin polarizabilities affects the results by 20-30%, while the choice of the data set leads to a 40-50% increasing. It is certainly too simplistic to estimate the model dependence of the two extractions with the different values of the spin polarizabilities. However, in the energy range below pion production threshold, this gives a rather good indication of the main effects due to the model dependence.
The results for the RCS differential cross section obtained with the values of Eq. (20) for the scalar dipole polarizabilities and the experimental values of Ref. [29] for the leading-order spin polarizabilities are shown in Fig. 11 as a function of the lab photon energy E γ and the lab scattering angle θ lab , in comparison with the experimental data of the FULL data set. The grey bands correspond to the 1-σ error range, computed in the bootstrap framework. For each values of E γ and θ lab , we calculate the differential cross section dσ/dΩ as function of the best values of α E1 and [28] (solid black curve). The green solid curve is the BχPT prediction from Ref. [12], while the blue solid curve shows the fit within HBχPT from Refs. [11,44]. The solid black circle shows the PDG results [46]. The solid red curve is the extraction from this work, using fixed-t subtracted DRs. [29] and the values predicted in HBχPT [23] (results in brackets).
β M1 obtained at every bootstrap cycle. We then have N = 10000 values for dσ/dΩ, from which we can reconstruct its probability distribution and the 68% confidence level range.
VII. CONCLUSIONS
We performed a fit of the electric α E1 and magnetic β M1 polarizabilities to the proton RCS unpolarized cross section data below pion-production threshold, using subtracted fixed-t DRs and a bootstrap-based statistical analysis. Within the subtracted DR formalism, all the leading-order static polarizabilities enter as subtraction constants to be fitted to the data. However, due to the limited statistic of the RCS data, a simultaneous fit of all of them is not achievable at the moment. We then have restricted ourselves to fit the sets {α E1 , β M1 } or {α E1 , β M1 , γ π }, which mainly affect the unpolarized RCS cross section below pion-production threshold. The remaining spin polarizabilities have been fixed to [29] for the leading-order spin polarizabilities, as function of the lab photon energy (E γ ) and lab scattering angle (θ lab ). The gray bands correspond to the 1-σ error band obtained in the bootstrap framework (see text for more detail). The experimental data are from the FULL data set, with the labels reported in Table III of App. A. In the last figure for θ lab = 155 • ,we also show the two data points at θ lab = 180 • of Ref. [43] .
the available experimental information [3,[29][30][31]. Furthermore, we consider different fit conditions, switching on/off the systematic errors and with/without the constraint of the Baldin's sum rule for the polarizability sum α E1 + β M1 .
We summarized the main features of the parametric-bootstrap method, in particular the advantages of taking into account both the effect of the systematic errors of the experimental data and the propagation of the statistical errors of the polarizability values not not treated as free parameters in the fit procedure.
We showed that the inclusion of the sources of systematic errors in the data analysis changes significantly the expected theoretical probability distribution of the finalχ 2 /d.o.f. variable and we were able to give realistic p-values for every fitting condition. We also presented a critical discussion of the data set consistency. We showed some simple but meaningful tests, which led us to conclude that there are no strong motivations for the exclusion of any data point from the global RCS data set below pion-production threshold. We observed sizeable deviations between our fit model and two data subsets. However, there is not a clearly identified source of possible experimental problems for these data. Therefore, instead of excluding them from the fit, we propose to handle them with a suitable rescaling factor of the statistical error bar.
The bootstrap fit using fixed-t subtracted DRs and the global RCS data set below pion-production threshold yields α E1 = (12.03 +0. 48 −0.54 ) × 10 −4 fm 3 and β M1 = (1.77 +0.52 −0.54 ) × 10 −4 fm 3 , with p-value = 12%. The results are in agreement with previous analysis obtained with different variants of DRs and the traditional χ 2 fitting procedure. They differ from the extractions using the χPT frameworks, even if they are compatible within the 2σ range. This discrepancy can be traced back to the different data sets used in the analyses and, partially, also to the different theoretical estimates of the higher-order contributions beyond the scalar dipole polarizabilities to the RCS cross section.
Future measurements planned by the A2 collaboration at MAMI below pion-production threshold [47,48] hold the promise to improve the accuracy and the statistic of the available data set and will help to extract with better precision the values of the proton scalar dipole polarizabilities. IV: Correlation coefficients ρ among the fit parameters in the different fitting conditions described in Sect. IV B. The columns 2-4 correspond, from the left to the right, to the correlation coefficients between α E1 and β M1 , α E1 and γ π , β M1 and γ π .
behavior was already observed in the extraction of the scalar dipole dynamical polarizabilities in Ref. [14]. We also note a large and negative (positive) correlation between γ π and β M1 (α E1 ). This behavior is mainly a consequence of low sensitivity of the existing data to the γ π polarizability.
FIG. 1 :FIG. 2 :
12Probability distributions of the fitted scalar dipole static polarizabilities α E1 (left panels) and β M1 (right panels) in the Fit 1 (black curve) and Fit 1 ′ (red curve) conditions. The results are obtained using the FULL data set (upper panels) and the TAPS data set (lower panels). Probability distributions of the fitted scalar dipole static polarizabilities α E1 (left panels) and β M1 (right panels) in the Fit 2 (black curve) and Fit 2 ′ (red curve) conditions. The results are obtained using the FULL data set (upper panels) and the TAPS data set (lower panels).
FIG. 3 :FIG. 4 :
34Probability distributions of the fitted static polarizabilities α E1 (left panels) and β M1 (central panels) and the backward spin polarizability γ π (right panels) in the Fit 3 (black curve) and Fit 3 ′ (red curve) conditions. The results are obtained using the FULL data set (upper panels) and the TAPS data set (lower panels). Cumulative distribution functions for the variableχ 2 /dof in the case of Fit 1 (left panel, black curve), Fit 1 ′ (left panel, red curve), Fit 3 (right panel, black curve), Fit 3 ′ (right panel, red curve), using the FULL data set. The dashed-blue curves are the cumulative distribution functions of a pure reduced χ 2 .
FIG. 5 :
5The same as inFig. 4but neglecting the errors on the polarizability values not treated as free parameters in the fit procedure.
FIG. 6 :
6Results from the Jackknife (blue line) for α E1 (left panels) and β M1 (right panels). The red (yellow) lines correspond to the 1 σ p (2 σ p ) sample standard deviations. From top to bottom: results for the FULL data set, TAPS data set and the SELECTED data set.C. The χ 2 per set
FIG. 7 :
7Residual analysis applied to the FULL (top panels) and SELECTED (lower panels) data set . The left panels show the values of ξ i (blue curves), with their mean value (black curves) and their sample standard deviation band (red curves). The right panels are the q-q plots of ξ i compared with the results expected in the case of a normal distribution (diagonal blue line). The dark (light) blue band shows the 1σ (2σ) uncertainty region due to the data set dimension. The labels of the data sets are described in App. A.
FIG. 8 : χ 2 FIG. 9 :
829set term for each data sub set of the FULL set. The labels of the data sets are described in App. A. The χ 2 profile as function of α E1 (a) and β M1 (b). The black curves are the results for the original FULL data set, while the yellow curves correspond to the results for the FULL data set with the χ 2 set rescaling of the statistical errors. The purple and red curves show, respectively, the results for the TAPS and the SELECTED data set.
FIG. 11 :
11The RCS differential cross section (blue line), evaluated with the scalar dipole polarizabilities of Eq. (20) and the experimental values of Ref.
ρ α E1 −β M 1 ρα E1 −γπ ρ β M 1 ρ α E1 −β M 1 ρα E1 −γπ ρ β M 1
TABLE I :
IResults of the fits for the static polarizabilities α E1 , β M1 and γ π using the FULL and TAPS data sets and different fit conditions, together with the correspondingχ 2 /dof and p-values.
FIG. 10: Results for α E1 vs β M1 obtained in different frameworks. The light-green band shows the experimental constraint on the difference α E1 − β M1 from Zieger et al.[43], while the orange band is the average over the available Baldin's sum rule evaluations[27]. The experimental extractions are from Federspiel et al.[42] (straight black line), obtained from the fit of α E1 − β M1 constrained by α E1 + β M1 = 14.0, MacGibbon et al.[41] (short-dashed black curve), TAPSZiegler et al.
Baldin SR
PDG
Olmos de Leon et al.
Federspiel et al.
Mc Govern et al.
Mc Gibbon et al.
Lensky et al.
our work
TABLE II :
IIResults for α E1 and β M1 from the test-fit of the FULL and the SELECTED data set, and taking different values for the for the leading-order spin polarizabilities: the experimental results from Ref.
set label Ref.first author points number θ lab ( • ) Eγ (MeV) symbol1
[37]
Oxley
4
70 − 150
≃ 60
2
[49]
Hyman
12
50, 90
55 − 95
3
[50]
Goldansky
5
75 − 150 55 − 80
4
[54]
Bernardini
2
≃ 135
≃ 140
5
[51]
Pugh
16
50 − 135 40 − 120
6
[38, 55]
Baranov
3
90, 150 80 − 110
7
[38, 55]
Baranov
4
90, 150 80 − 110
8
[42]
Federspiel
16
60, 135
30 − 90
9
[43]
Zieger
2
180
100, 130
10
[53]
Hallin
13
45 − 135 130 − 150
11
[41]
MacGibbon
8
90, 135 95 − 145
12
[41]
MacGibbon
10
90, 135 95 − 145
13
[28] Olmos de Leon
55
60 − 155 60 − 150
TABLE III :
IIIAngular and energy coverage of the available experimental data on unpolarized cross section for proton RCS.
TABLE
This value is consistent with the fitting conditions adopted for the extraction of the spin polarizability in Ref.[29]. We note that recent reevaluations[32,33] of γ 0 give a slightly smaller central values, with uncertainties consistent with the value used in Ref.[29].
The uncertainty value 0.13 2 is the sum of the squares of the statistical and systematic errors.
VIII. ACKNOWLEDGMENTSWe are grateful to V. Bertone and A. Rotondi for a careful reading of the manuscript and useful comments. We thank D. Phillips for stimulating discussions and useful suggestions on the fitting procedure, and H. Griesshammer, J. Mc Govern and V. Lensky for the help on the correct representation of the results of χPT.Appendix A: Data setsInTable III, we list all the available data sets for RCS in the energy range below pion production threshold (∼ 150 MeV in lab frame). For the sets[37,49,50]and[51], we use the Baranov data-selection[52]. Furthermore, as done also in Ref.[11,23], we discard the data fromTable Iin the Hallin paper[53], because it is not clear if they are really independent from the data given inTable IIof the same work. The data sets used in our analysis are:• FULL, which includes all the available data sets below pion-production threshold listed inTable III, for a total of 150 data points.• SELECTED, which is based on the data selection proposed in Ref.[11,23], corresponding to the FULL data set except for the data from Ref.[37,38,54], a single point (θ lab = 133 • , E γ = 108 MeV) from Ref.[28]and a single point (θ lab = 135 • , E γ = 44 MeV) from Ref.[42], for a total of 137 data points.• TAPS, which is the most comprehensive available subset with 55 data points below pion-production threshold[28].The sets 6 and 7 from Ref.[38,55]are from the same experimental measurements, but they differ for the values of the systematic errors. The same for the sets 11 and 12 from Ref.[41].Appendix B: Correlation coefficients among fit parametersIn the bootstrap framework, the correlation coefficients ρ among the fit parameters are obtained from the reconstructed probability distribution in the parameters space. InTable IV, we list these coefficients for all the different fitting conditions used in this work.In the Baldin-constrained fits, we do not obtain ρ αE1−βM1 = −1, due to the fact that α E1 + β M1 is not fixed to its central value, but is sampled within its uncertainty with a Gaussian distribution, as explained in Sec. IV A. This
. A I L'vov, V A Petrun'kin, M Schumacher, 10.1103/PhysRevC.55.359Phys. Rev. C. 55359A. I. L'vov, V. A. Petrun'kin, and M. Schumacher, Phys. Rev. C 55, 359 (1997).
. D Babusci, G Giordano, A L'vov, G Matone, A Nathan, 10.1103/PhysRevC.58.1013arXiv:hep-ph/9803347Phys. Rev. C. 581013hep-phD. Babusci, G. Giordano, A. L'vov, G. Matone, and A. Nathan, Phys. Rev. C 58, 1013 (1998), arXiv:hep-ph/9803347 [hep-ph].
M Schumacher, 10.1016/j.ppnp.2005.01.033Progress in Particle and Nuclear Physics. 55567M. Schumacher, Progress in Particle and Nuclear Physics 55, 567 (2005).
. D Drechsel, M Gorchtein, B Pasquini, M Vanderhaeghen, 10.1103/PhysRevC.61.015204arXiv:hep-ph/9904290Phys. Rev. C. 6115204hep-phD. Drechsel, M. Gorchtein, B. Pasquini, and M. Vanderhaeghen, Phys. Rev. C 61, 015204 (1999), arXiv:hep-ph/9904290 [hep-ph].
. B R Holstein, D Drechsel, B Pasquini, M Vanderhaeghen, 10.1103/PhysRevC.61.034316arXiv:hep-ph/9910427Phys. Rev. C. 6134316hep-phB. R. Holstein, D. Drechsel, B. Pasquini, and M. Vanderhaeghen, Phys. Rev. C 61, 034316 (2000), arXiv:hep-ph/9910427 [hep-ph].
. B Pasquini, D Drechsel, M Vanderhaeghen, 10.1103/PhysRevC.76.015203arXiv:0705.0282Phys. Rev. C. 7615203hep-phB. Pasquini, D. Drechsel, and M. Vanderhaeghen, Phys. Rev. C 76, 015203 (2007), arXiv:0705.0282 [hep-ph].
. D Drechsel, B Pasquini, M Vanderhaeghen, 10.1016/S0370-1573(02)00636-1arXiv:hep-ph/0212124Phys. Rept. 37899hep-phD. Drechsel, B. Pasquini, and M. Vanderhaeghen, Phys. Rept. 378, 99 (2003), arXiv:hep-ph/0212124 [hep-ph].
. B Pasquini, M Vanderhaeghen, 10.1146/annurev-nucl-101917-020843arXiv:1805.10482Ann. Rev. Nucl. Part. Sci. 68hep-phB. Pasquini and M. Vanderhaeghen, Ann. Rev. Nucl. Part. Sci. 68, 75 (2018), arXiv:1805.10482 [hep-ph].
. V Bernard, N Kaiser, U.-G Meissner, 10.1142/S0218301395000092arXiv:hep-ph/9501384Int. J. Mod. Phys. 4hep-phV. Bernard, N. Kaiser, and U.-G. Meissner, Int. J. Mod. Phys. E4, 193 (1995), arXiv:hep-ph/9501384 [hep-ph].
. S R Beane, M Malheiro, J A Mcgovern, D R Phillips, U Van Kolck, 10.1016/j.physletb.2003.06.040,10.1016/j.physletb.2004.12.069arXiv:nucl-th/0209002Erratum: Phys. Lett. 567200Phys. Lett.. nucl-thS. R. Beane, M. Malheiro, J. A. McGovern, D. R. Phillips, and U. van Kolck, Phys. Lett. B567, 200 (2003), [Erratum: Phys. Lett.B607,320(2005)], arXiv:nucl-th/0209002 [nucl-th].
. J A Mcgovern, D R Phillips, H W Griesshammer, 10.1140/epja/i2013-13012-1arXiv:1210.4104Eur. Phys. J. A. 4912nucl-thJ. A. McGovern, D. R. Phillips, and H. W. Griesshammer, Eur. Phys. J. A 49, 12 (2013), arXiv:1210.4104 [nucl-th].
. V Lensky, J Mcgovern, V Pascalutsa, 10.1140/epjc/s10052-015-3791-0arXiv:1510.02794Eur. Phys. J. C. 75604hep-phV. Lensky, J. McGovern, and V. Pascalutsa, Eur. Phys. J. C 75, 604 (2015), arXiv:1510.02794 [hep-ph].
. V Lensky, V Pascalutsa, 10.1140/epjc/s10052-009-1183-zarXiv:0907.0451Eur. Phys. J. 65hep-phV. Lensky and V. Pascalutsa, Eur. Phys. J. C65, 195 (2010), arXiv:0907.0451 [hep-ph].
. B Pasquini, P Pedroni, S Sconfietti, 10.1103/PhysRevC.98.015204Phys. Rev. C. 9815204B. Pasquini, P. Pedroni, and S. Sconfietti, Phys. Rev. C 98, 015204 (2018).
. H W Griesshammer, T R Hemmert, 10.1103/PhysRevC.65.045207arXiv:nucl-th/0110006Phys. Rev. C. 6545207nucl-thH. W. Griesshammer and T. R. Hemmert, Phys. Rev. C 65, 045207 (2002), arXiv:nucl-th/0110006 [nucl-th].
. R P Hildebrandt, H W Griesshammer, T R Hemmert, B Pasquini, 10.1140/epja/i2003-10144-9arXiv:nucl-th/0307070Eur. Phys. J. A. 20293nucl-thR. P. Hildebrandt, H. W. Griesshammer, T. R. Hemmert, and B. Pasquini, Eur. Phys. J. A 20, 293 (2004), arXiv:nucl-th/0307070 [nucl-th].
. R Navarro Prez, J Lei, arXiv:1812.05641nucl-thR. Navarro Prez and J. Lei, (2018), arXiv:1812.05641 [nucl-th].
. R Navarro Prez, J E Amaro, E. Ruiz Arriola, 10.1016/j.physletb.2014.09.035arXiv:1407.3937Phys. Lett. 738155nucl-thR. Navarro Prez, J. E. Amaro, and E. Ruiz Arriola, Phys. Lett. B738, 155 (2014), arXiv:1407.3937 [nucl-th].
. J Nieves, E. Ruiz Arriola, 10.1007/s10050-000-4511-0arXiv:hep-ph/9906437Eur. Phys. J. 8hep-phJ. Nieves and E. Ruiz Arriola, Eur. Phys. J. A8, 377 (2000), arXiv:hep-ph/9906437 [hep-ph].
. G F Bertsch, D Bingham, 10.1103/PhysRevLett.119.252501arXiv:1703.08844Phys. Rev. Lett. 119252501nucl-thG. F. Bertsch and D. Bingham, Phys. Rev. Lett. 119, 252501 (2017), arXiv:1703.08844 [nucl-th].
. A Pastore, arXiv:1810.05585nucl-thA. Pastore, (2018), arXiv:1810.05585 [nucl-th].
. N Krupina, V Lensky, V Pascalutsa, 10.1016/j.physletb.2018.04.066arXiv:1712.05349Phys. Lett. B. 78234nucl-thN. Krupina, V. Lensky, and V. Pascalutsa, Phys. Lett. B 782, 34 (2018), arXiv:1712.05349 [nucl-th].
. H W Griesshammer, J A Mcgovern, D R Phillips, G Feldman, 10.1016/j.ppnp.2012.04.003arXiv:1203.6834Prog. Part. Nucl. Phys. 67841nucl-thH. W. Griesshammer, J. A. McGovern, D. R. Phillips, and G. Feldman, Prog. Part. Nucl. Phys. 67, 841 (2012), arXiv:1203.6834 [nucl-th].
. D Drechsel, S S Kamalov, L Tiator, 10.1140/epja/i2007-10490-6arXiv:0710.0306Eur. Phys. J. A. 3469nucl-thD. Drechsel, S. S. Kamalov, and L. Tiator, Eur. Phys. J. A 34, 69 (2007), arXiv:0710.0306 [nucl-th].
A C Davidson, D V Hinkley, Bootstrap Methods and their Application. Cambridge University PressA. C. Davidson and D. V. Hinkley, Bootstrap Methods and their Application (Cambridge University Press, 1997).
. P Pedroni, S Sconfietti, in preparationP. Pedroni, S. Sconfietti, et al., in preparation.
. F Hagelstein, R Miskimen, V Pascalutsa, 10.1016/j.ppnp.2015.12.001arXiv:1512.03765Prog. Part. Nucl. Phys. 88nucl-thF. Hagelstein, R. Miskimen, and V. Pascalutsa, Prog. Part. Nucl. Phys. 88, 29 (2016), arXiv:1512.03765 [nucl-th].
. V Olmos De Leon, 10.1007/s100500170132Eur. Phys. J. A. 10207V. Olmos de Leon et al., Eur. Phys. J. A 10, 207 (2001).
. P P Martel, A210.1103/PhysRevLett.114.112501arXiv:1408.1576Phys. Rev. Lett. 114112501nucl-exP. P. Martel et al. (A2), Phys. Rev. Lett. 114, 112501 (2015), arXiv:1408.1576 [nucl-ex].
. J Ahrens, GDH, A210.1103/PhysRevLett.87.022003arXiv:hep-ex/0105089Phys. Rev. Lett. 8722003hep-exJ. Ahrens et al. (GDH, A2), Phys. Rev. Lett. 87, 022003 (2001), arXiv:hep-ex/0105089 [hep-ex].
. H Dutz, GDH10.1103/PhysRevLett.91.192001Phys. Rev. Lett. 91192001H. Dutz et al. (GDH), Phys. Rev. Lett. 91, 192001 (2003).
. B Pasquini, P Pedroni, D Drechsel, 10.1016/j.physletb.2010.03.007arXiv:1001.4230Phys. Lett. B. 687160hep-phB. Pasquini, P. Pedroni, and D. Drechsel, Phys. Lett. B 687, 160 (2010), arXiv:1001.4230 [hep-ph].
. O Gryniuk, F Hagelstein, V Pascalutsa, 10.1103/PhysRevD.94.034043arXiv:1604.00789Phys. Rev. 9434043nucl-thO. Gryniuk, F. Hagelstein, and V. Pascalutsa, Phys. Rev. D94, 034043 (2016), arXiv:1604.00789 [nucl-th].
. S Wolf, 10.1007/s100500170031arXiv:nucl-ex/0109013Eur. Phys. J. 12nucl-exS. Wolf et al., Eur. Phys. J. A12, 231 (2001), arXiv:nucl-ex/0109013 [nucl-ex].
. M Camen, 10.1103/PhysRevC.65.032202arXiv:nucl-ex/0112015Phys. Rev. 6532202nucl-exM. Camen et al., Phys. Rev. C65, 032202 (2002), arXiv:nucl-ex/0112015 [nucl-ex].
. D V Hinkley, 10.2307/2335765Biometrika. 6421D. V. Hinkley, Biometrika 64, 21 (1977).
. C L Oxley, 10.1103/PhysRev.110.733Phys. Rev. 110733C. L. Oxley, Phys. Rev. 110, 733 (1958).
. P S Baranov, G M Buinov, V G Godin, V A Kuznetsova, V A Petrunkin, L S Tatarinskaya, V S Shirchenko, L N Shtarkov, V V Yurchenko, Yu P Yanulis, Yad. Fiz. 21689P. S. Baranov, G. M. Buinov, V. G. Godin, V. A. Kuznetsova, V. A. Petrunkin, L. S. Tatarinskaya, V. S. Shirchenko, L. N. Shtarkov, V. V. Yurchenko, and Yu. P. Yanulis, Yad. Fiz. 21, 689 (1975).
. R T Birge, 10.1103/PhysRev.40.207Phys. Rev. 40207R. T. Birge, Phys. Rev. 40, 207 (1932).
O Behnke, K Kroeninger, G , Data Analysis in High Energy Physics: A Practical Guide to Statistical Methods. Schott, and T. Schoerner-SadeniusWiley-VCHO. Behnke, K. Kroeninger, G. Schott, and T. Schoerner-Sadenius (eds.), Data Analysis in High Energy Physics: A Practical Guide to Statistical Methods (Wiley-VCH, 2013).
. B E Macgibbon, G Garino, M A Lucas, A M Nathan, G Feldman, B Dolbilkin, 10.1103/PhysRevC.52.2097arXiv:nucl-ex/9507001Phys. Rev. C. 522097nucl-exB. E. MacGibbon, G. Garino, M. A. Lucas, A. M. Nathan, G. Feldman, and B. Dolbilkin, Phys. Rev. C 52, 2097 (1995), arXiv:nucl-ex/9507001 [nucl-ex].
. F J Federspiel, R A Eisenstein, M A Lucas, B E Macgibbon, K Mellendorf, A M Nathan, A O'neill, D P Wells, 10.1103/PhysRevLett.67.1511Phys. Rev. Lett. 671511F. J. Federspiel, R. A. Eisenstein, M. A. Lucas, B. E. MacGibbon, K. Mellendorf, A. M. Nathan, A. O'Neill, and D. P. Wells, Phys. Rev. Lett. 67, 1511 (1991).
. A Zieger, R Van De Vyver, D Christmann, A De Graeve, C Van Den Abeele, B Ziegler, Phys. Lett. B. 27834A. Zieger, R. Van de Vyver, D. Christmann, A. De Graeve, C. Van den Abeele, and B. Ziegler, Phys. Lett. B 278, 34 (1992).
. H W Griesshammer, J A Mcgovern, D R Phillips, 10.1140/epja/i2016-16139-5arXiv:1511.01952Eur. Phys. J. A52. 139nucl-thH. W. Griesshammer, J. A. McGovern, and D. R. Phillips, Eur. Phys. J. A52, 139 (2016), arXiv:1511.01952 [nucl-th].
. V Lensky, J A Mcgovern, 10.1103/PhysRevC.89.032202arXiv:1401.3320Phys. Rev. 8932202nucl-thV. Lensky and J. A. McGovern, Phys. Rev. C89, 032202 (2014), arXiv:1401.3320 [nucl-th].
. C Patrignani, Particle Data Group10.1088/1674-1137/40/10/100001Chin. Phys. C. 40100001C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016).
. V Sokhoyan, 10.1140/epja/i2017-12203-0arXiv:1611.03769Eur. Phys. J. A53. 14nucl-exV. Sokhoyan et al., Eur. Phys. J. A53, 14 (2017), arXiv:1611.03769 [nucl-ex].
. E J Downie, MAMI-A2/04-16E. J. Downie and et al., Proposal MAMI-A2/04-16 (2016).
. L G Hyman, R Ely, D H Frisch, M A Wahlig, 10.1103/PhysRevLett.3.93Phys. Rev. Lett. 393L. G. Hyman, R. Ely, D. H. Frisch, and M. A. Wahlig, Phys. Rev. Lett. 3, 93 (1959).
. V Goldansky, O Karpukhin, A Kutsenko, V Pavlovskaya, 10.1016/0029-5582(60)90418-1Nuclear Physics. 18473V. Goldansky, O. Karpukhin, A. Kutsenko, and V. Pavlovskaya, Nuclear Physics 18, 473 (1960).
. G E Pugh, R Gomez, D H Frisch, G S Janes, 10.1103/PhysRev.105.982Phys. Rev. 105982G. E. Pugh, R. Gomez, D. H. Frisch, and G. S. Janes, Phys. Rev. 105, 982 (1957).
. P S Baranov, A I L'vov, V A Petrunkin, L N Shtarkov, Fiz. Elem. Chast. Atom. 32Phys. Part. Nucl.P. S. Baranov, A. I. L'vov, V. A. Petrunkin, and L. N. Shtarkov, Phys. Part. Nucl. 32, 376 (2001), [Fiz. Elem. Chast. Atom. Yadra32,699(2001)].
. E L Hallin, 10.1103/PhysRevC.48.1497Phys. Rev. C. 481497E. L. Hallin et al., Phys. Rev. C 48, 1497 (1993).
G Bernardini, A O Hanson, A C Odian, T Yamagata, L B Auerbach, I Filosofo, Il Nuovo Cimento. 181203G. Bernardini, A. O. Hanson, A. C. Odian, T. Yamagata, L. B. Auerbach, and I. Filosofo, Il Nuovo Cimento (1955-1965) 18, 1203 (1960).
. P Baranov, G Buinov, V Godin, V Kuznetzova, V Petrunkin, V Tatarinskaya, L Shirthenko, V Shtarkov, Yu Yurtchenko, Yanulis, 10.1016/0370-2693(74)90736-9Phys. Lett. 52122P. Baranov, G. Buinov, V. Godin, V. Kuznetzova, V. Petrunkin, Tatarinskaya, V. Shirthenko, L. Shtarkov, V. Yurtchenko, and Yu. Yanulis, Phys. Lett. 52B, 122 (1974).
| []
|
[
"Extraordinary Disordered Hyperuniform Multifunctional Composites Journal Title XX(X):1-13",
"Extraordinary Disordered Hyperuniform Multifunctional Composites Journal Title XX(X):1-13"
]
| [
"Salvatore Torquato "
]
| []
| []
| A variety of performance demands are being placed on material systems, including desirable mechanical, thermal, electrical, optical, acoustic and flow properties. The purpose of the present article is to review the emerging field of disordered hyperuniform composites and their novel multifunctional characteristics. Disordered hyperuniform media are exotic amorphous states of matter that are characterized by an anomalous suppression of large-scale volumefraction fluctuations compared to those in "garden-variety" disordered materials. Such unusual composites can have advantages over their periodic counterparts, such as unique or nearly optimal, direction-independent physical properties and robustness against defects. It will be shown that disordered hyperuniform composites and porous media can be endowed with a broad spectrum of extraordinary physical properties, including photonic, phononic, transport, chemical and mechanical characteristics that are only beginning to be discovered. | 10.1177/00219983221116432 | [
"https://arxiv.org/pdf/2204.11345v1.pdf"
]
| 248,377,369 | 2204.11345 | 666e417b124baa319c4698debcd4bb01b7d9c362 |
Extraordinary Disordered Hyperuniform Multifunctional Composites Journal Title XX(X):1-13
Salvatore Torquato
Extraordinary Disordered Hyperuniform Multifunctional Composites Journal Title XX(X):1-13
10.1177/ToBeAssignedSAGEDisordered compositeshyperuniformitymultifunctionality
A variety of performance demands are being placed on material systems, including desirable mechanical, thermal, electrical, optical, acoustic and flow properties. The purpose of the present article is to review the emerging field of disordered hyperuniform composites and their novel multifunctional characteristics. Disordered hyperuniform media are exotic amorphous states of matter that are characterized by an anomalous suppression of large-scale volumefraction fluctuations compared to those in "garden-variety" disordered materials. Such unusual composites can have advantages over their periodic counterparts, such as unique or nearly optimal, direction-independent physical properties and robustness against defects. It will be shown that disordered hyperuniform composites and porous media can be endowed with a broad spectrum of extraordinary physical properties, including photonic, phononic, transport, chemical and mechanical characteristics that are only beginning to be discovered.
Introduction
Increasingly, a variety of performance demands are being placed on material systems. In aerospace and space applications these requirements include lightweight component structures that have desirable mechanical, thermal, electrical, optical, acoustic and flow properties. Structural components should be able to carry mechanical loads while having other beneficial performance characteristics. Desirable thermal properties include high thermal conductivity to dissipate heat and thermal expansion characteristics that match the attached components. In the case of porous cellular solids, heat dissipation can be improved by forced convection through the material, but in these instances the fluid permeability of the porous material must be large enough to minimize power requirements for convection. Desirable optical and acoustic properties include materials that can control the propagation of light and sound waves through them. It is difficult to find single homogeneous materials that possess these multifunctional characteristics.
By contrast, composite materials are ideally suited to achieve multifunctionality, since the best features of different materials can be combined to form a new material that has a broad spectrum of desired properties. [1][2][3][4][5] These materials may simultaneously perform as ultralight load-bearing structures, enable thermal and/or electrical management, ameliorate crash or blast damage, and have desirable optical and acoustic characteristics. A general goal is the design of composite materials with N different effective properties or responses, which we denote by K (1) e , K (2) e , . . . , K (N ) e , given the individual properties of the phases. In principle, one desires to know the region (set) in the multidimensional space of effective properties in which all composites must lie (see Fig. 1 for a two-dimensional (2D) illustration). The size and shape of this region depends on the prescribed phase properties as well as how much microstructural information is specified, For example, the set of composites with unspecified volume fractions is clearly larger than the set in which the the volume fractions are specified.
The determination of the allowable region is generally a highly complex problem. Cross-property bounds 3,6-13 can aid to identify the boundary of the allowable region and numerical topology optimization methods [14][15][16][17][18] can then be used to find specific microstructures that lie on the boundary, which are extremal solutions. These methods often bias the solutions to be periodic structures with high crystallographic symmetries. As we will see below, it can be very beneficial to constrain the optimal solution set to microstructures possessing "correlated disorder", 19 which can have advantages over periodic media, especially an exotic type of disorder within the so-called hyperuniformity class. 20,21 The purpose of the present article is to review the emerging field of disordered hyperuniform composites and their novel multifunctional characteristics. (2) e , as adapted from Ref. 16. Importantly, this allowable region depends on the type of microstructural information that is specified.
The hyperuniformity concept was introduced and studied nearly two decades ago in the context of many-particle systems. 20 Hyperuniform systems are characterized by an anomalous suppression of large-scale density fluctuations compared to "garden-variety" disordered systems. The hyperuniformity concept generalizes the traditional notion of long-range order in many-particle systems to not only include all perfect crystals and perfect quasicrystals, but also exotic amorphous states of matter. Disordered hyperuniform materials can have advantages over crystalline ones, such as unique or nearly optimal, direction-independent physical properties and robustness against defects. 19,[22][23][24][25][26][27][28][29][30][31][32][33][34] The hyperuniformity concept was generalized to twophase heterogeneous media in d-dimensional Euclidean space R d , 21,35 which include composites, cellular solids and porous media. A two-phase medium in R d is hyperuniform if its local volume-fraction variance σ 2 V (R) associated with a spherical observation window of radius R decays in the large-R limit faster than the inverse of the window volume, i.e., 1/R d ; (see Sec. 2 for mathematical details). This behavior is to be contrasted with those of "typical" disordered two-phase media for which the variance decays like the inverse of the window volume (see Fig. 2).
In this article, we review progress that has been made to generate and characterize multifunctional disordered twophase composites and porous media. In Sec. 2, we collect basic definitions and background on hyperuniform and nonhyperuniform two-phase media. In Secs. 3 and 4, we review developments in generating disordered hyperuniform two-phase media using forward and inverse approaches, respectively. In Sec. 5, we describe order metrics that enable a rank ordering of disordered hyperuniform two-phase media. In Sec. 6, we review the current knowledge about the extraordinary multifunctional characteristics of disordered hyperuniform composites and porous media. Finally, In Sec. 7, we make concluding remarks and discuss the outlook for the field.
Structural Characterization of Disordered Hyperuniform Composites
For two-phase heterogeneous media in d-dimensional Euclidean space R d , hyperuniformity can be defined by the following infinite-wavelength condition on the spectral densityχ V (k), 21,35 i.e.,
lim |k|→0χ V (k) = 0,(1)
where k is the wavevector. The spectral densityχ V (k) is the Fourier transform of the autocovariance function
χ V (r) ≡ S (i) 2 (r) − φ i 2 ,
where φ i is the volume fraction of phase i, and S (i) 2 (r) gives the probability of finding two points separated by r in phase i at the same time. 3 . This twopoint descriptor in Fourier space can be easily obtained for general microstructures either theoretically, computationally, or via scattering experiments. 36 The distinctions between the spectral densities for examples of 2D hyperuniform and nonhyperuniform media can be vividly seen in the top panel of Fig. 3.
Hyperuniformity of two-phase media can be also defined in terms of the local volume-fraction variance σ 2 V (R) associated with a spherical window of radius R. Specifically, a hyperuniform two-phase system is one in which σ 2 V (R) decays faster than R −d in the large-R regime, 21,35 i.e.,
lim R→∞ R d σ 2 V (R) = 0.(2)
In addition to having a direct-space representation, 37 the local variance σ 2 V (R) has the following Fourier representation in terms of the spectral densityχ V (k) 21,35 :
σ 2 V (R) = 1 v 1 (R)(2π) d R dχ V (k)α 2 (k; R)dk,(3)
where v 1 (R) = π d/2 R d /Γ(d/2 + 1) is the volume of a ddimensional sphere of radius R, Γ(x) is the gamma function,
α 2 (k; R) ≡ 2 d π d/2 Γ(d/2 + 1) J d/2 (kR) 2 k d ,(4)
is the Fourier transform of the scaled intersection volume of two spheres of radius R whose centers are separated by a distance r, 20 and k ≡ |k| is the wavenumber. The bottom panel of Fig. 3 depicts the local variances corresponding to the spectral densities shown in the top panel.
As in the case of hyperuniform point configurations, 20,35 there are three different scaling regimes (classes) that describe the associated large-R behaviors of the volumefraction variance when the spectral density goes to zero with the following power-law scaling: 21,35,38
χ V (k) ∼ |k| α (k → 0),(5)
namely,
σ 2 V (R) ∼ R −(d+1) , α > 1 (Class I) R −(d+1) ln R, α = 1 (Class II), R −(d+α) , 0 < α < 1 (Class III)(6)
where the exponent α is a positive constant. Classes I and III are the strongest and weakest forms of hyperuniformity, respectively. Stealthy hyperuniform media are also of class I and are defined to be those that possess zero-scattering intensity for a set of wavevectors around the origin, 38 i.e., By contrast, for any nonhyperuniform two-phase system, it is straightforward to show, using a similar analysis as for point configurations, 39 that the local variance has the following large-R scaling behaviors:
χ V (k) = 0 for 0 ≤ |k| ≤ K.(7)σ 2 V (R) ∼ R −d , α = 0 (typical nonhyperuniform) R −(d+α) , −d < α < 0 (antihyperuniform).(8)
For a "typical" nonhyperuniform system,χ V (0) is bounded. 21 In antihyperuniform systems,
χ V (0) is unbounded, i.e., lim |k|→0χ V (k) = +∞,(9)
and hence are diametrically opposite to hyperuniform systems. Antihyperuniform systems include systems at thermal critical points (e.g., liquid-vapor and magnetic critical points), 40,41 fractals, 42 disordered non-fractals, 43 and certain substitution tilings. 44
Forward Approaches to Generating Disordered Hyperuniform Two-Phase Media
Here we describe "forward" (direct) approaches that have yielded disordered hyperuniform particulate media. Jammed as well as unjammed states are briefly reviewed. Torquato and Stillinger 20 suggested that certain defectfree strictly jammed (i.e., mechanically stable) packings of identical spheres are hyperuniform. Specifically, they conjectured that any strictly jammed saturated infinite packing of identical spheres is hyperuniform. A saturated packing of hard spheres is one in which there is no space available to add another sphere. This conjecture was confirmed by Donev et al. 46 via a numerically generated maximally random jammed (MRJ) packing 47,48 of 10 6 hard spheres in three dimensions. Subsequently, the hyperuniformity of other MRJ hardparticle packings, including nonspherical particle shapes, was established across dimensions. [49][50][51][52][53][54][55][56][57][58][59][60] Jammed athermal soft-sphere models of granular media, 61,62 jammed thermal colloidal packings, 63,64 and jammed bidisperse emulsions 65 were also shown to be effectively hyperuniform. The singular transport and electromagnetic properties of MRJ packings of spheres 66 and superballs 60 have also been investigated.
Periodically driven colloidal suspensions were observed to have a phase transition in terms of the reversibility of the dynamics one decade ago 67 . Random organization models capture the salient physics of how driven systems can selforganize. 68 A subsequent study of random organization models of monodisperse (i.e., identical) spherical particles have shown that a hyperuniform state is achievable when a granular system goes through an absorbing phase transition to a critical state. 69 Many variants of such models and systems have been studied numerically. [70][71][72][73][74] To what extent is hyperuniformity is preserved when the model is generalized to particles with a size distribution and/or nonspherical shapes? This question was probed in a recent study 45 by examining disks with a size distribution, needlelike shapes and squares in two dimensions and it was demonstrated that their critical states are hyperuniform as two-phase media (see Fig. 4). These results suggest that general particle systems subject to random organization can be a robust way to fabricate a wide class of hyperuniform states of matter by tuning the structures via different particlesize and -shape distributions. This tunability capacity in turn potentially enables the creation of multifunctional hyperuniform materials with desirable optical, transport, and mechanical properties.
While there has been growing interest in disordered hyperuniform materials states, an obstacle has been an inability to produce large samples that are perfectly hyperuniform due to practical limitations of conventional numerical and experimental methods. To overcome these limitations, a general theoretical methodology has been developed to construct perfectly hyperuniform packings in d-dimensional Euclidean space R d . 75,76 Specifically, beginning with an initial general tessellation of space by disjoint cells that meets a "bounded-cell" condition, hard particles are placed inside each cell such that the local-cell particle packing fractions are identical to the global packing fraction; see Fig. 5. It was proved that the constructed packings with a polydispersity in size in R d are perfectly hyperuniform of class I in the infinitesample-size limit, even though the initial point configuration that underlies the Voronoi tessellation is nonhyperuniform. Implementing this methodology in sphere tessellations of space (requiring spheres down to the infinitessimally small), establishes the hyperuniformity of the classical Hashin-Shtrikman multiscale coated-spheres structures, which are known to be two-phase media microstructures that possess optimal effective transport and elastic properties. 77,78 Figure 6 shows portions of 2D and 3D hyperuniform polydisperse packings that were converted from the corresponding Voronoi tessellations of nonhyperuniform random sequential addition (RSA) packings 79 . These computationally-designed microstructures can be fabricated via either photolithographic and 3D-printing techniques. [80][81][82][83]
Inverse Approaches to Generating Disordered Hyperuniform Two-Phase Media
Here, we describe inverse optimization techniques that enable one to design microstructures with targeted spectral densities. These procedures include the capacity to tune the value of the power-law exponent α > 0, defined by relation (5), for nonstealthy hyperuniform media as well as to design stealthy hyperuniform media, defined by relation (7).
The Yeong-Torquato stochastic optimization procedure 84,85 is a popular algorithm that has been employed to construct or reconstruct digitized multi-phase media from a prescribed set of different correlation functions in physical (direct) space. [86][87][88][89][90][91][92][93] A fictitious "energy" is defined to be a sum of squared differences between the target and simulated correlation function. The Yeong-Torquato procedure treats the construction or reconstruction task as an energyminimization problem that it solves via simulated annealing. The Yeong-Torquato procedure was generalized to construct disordered hyperuniform materials with desirable effective macroscopic properties but from targeted structural information in Fourier (reciprocal) space, namely, the spectral densityχ V (k). 30 Specifically, the fictitious "energy" E of the system in d-dimensional Euclidean space R d is defined as the following sum over wavevectors:
E = k [ χ V (k)/l d − χ V,0 (k)/l d ] 2 ,(10)
where the sum is over discrete wave vectors k, χ V,0 (k) and χ V (k) are the spectral densities of the target and (re)constructed microstructures, respectively, d is the space dimension, and l is the relevant characteristic length of the system used to scale the spectral densities such that they are dimensionless. As in the standard Yeong-Torquato procedure, 84,85 the simulated-annealing method is used to minimize the energy (10). It was demonstrated that one can design nonstealthy hyperuniform media and stealthy hyperuniform media with this Fourier-based inverse technique 30 . Such in-silico designed microstructures can be readily realized by 3D printing and lithographic technologies. 81 Figure 7 shows designed realizations of digitized nonstealthy hyperuniform media with prescribed values of the power-law exponent α > 0, defined by relation (5), at different values of the phase volume fraction φ. 30 It is seen that these designed materials possess a variety of morphologies: as φ increases for fixed α, the microstructures transition from particulate media consisting of isolated "particles" to labyrinth-like microstructures. Moreover, as α increases for fixed φ, short-range order increases in these hyperuniform materials.
The "collective-coordinate" optimization procedure represents a powerful reciprocal-space-based approach to generate disordered stealthy hyperuniform point configurations 94-96 as well as nonstealthy hyperuniform point configurations 97,98 in d-dimensional Euclidean space R d . In the case of the former, their degree of the stealthiness can be tuned by varying a dimensionless parameter χ, which measures the relative number of independently constrained degrees of freedom for wavenumbers up to the cut-off value K. For the range 0 < χ < 1/2, the stealthy states are disordered, and degree of short-range order increases as χ approaches 1/2. 99 Such stealthy point patterns can be decorated by nonoverlapping spheres, enabling the generation of nondigitized stealthy sphere packings, which have been used to model disordered two-phase composites that are both stealthy and hyperuniform, 27,34 as defined by relation (7). As we will see in Sec. 6, stealthy hyperuniform composites are endowed with novel multifunctional characteristics.
Order Metrics for Disordered Hyperuniform Two-Phase Media
An outstanding open problem is the determination of appropriate "order metrics" to characterize the degree of large-scale order of both hyperuniform and nonhyperuniform media. This task is a highly challenging due to the infinite variety of possible two-phase microstructures (geometries and topologies). To begin such a program, the local variance σ 2 V (R) was recently studied for a certain subset of class I hyperuniform media, including 2D periodic cellular networks as well as 2D periodic and disordered/irregular packings, some of which maximize their effective transport and elastic properties. 100 In particular, Kim and Torquato 100 evaluated the local variance σ 2 V (R) as a function of the window radius R. They also computed the hyperuniformity order metric B V , i.e., the implied coefficient multiplying R −(d+1) in (6), for all of these class I 2D models to rank them according to their degree of order at a fixed volume fraction. The smaller is the value of B V , the more ordered is the microstructure with respect to large-scale volume-fraction fluctuations. Among the cellular networks considered, the honeycomb networks have the minimal values of the hyperuniformity order metrics B V across all volume fractions. Among all structures studied there, triangular-lattice packings of circular disks have the minimal values of the order metric for almost all volume fractions.
The extension of the aforementioned work to other 2D hyperuniform microstructures as well as to 3D hyperuniform media are important avenues for future research. Such investigations of a wider array of hyperuniform media would be expected to lead to improved order metrics and facilitate materials discovery.
Novel Multifunctional Disordered Hyperuniform Composites and Porous Media
By mapping relatively large 2D disordered stealthy hyperuniform point configurations, obtained via the collectivecoordinate optimization procedure, 94,95 to certain 2D trivalent dielectric networks via a Delaunay centroidal tessellation, 22 what was thought to be impossible at the time became possible. Specifically, the first disordered network solids to have large complete (both polarizations and blocking all directions) photonic band gaps comparable in size to those in photonic crystals were identified, but with the additional advantage of perfect isotropy 22 . The computational designs consist of trivalent networks of cell walls with circular cylinders at the nodes. The band structure was computed as a function of the degree of stealthiness χ (left panel of Fig. 8) and the case of χ nearly equal to 0.5 (with an accompanying substantial degree of shortrange order) leads to the maximal complete band-gap size in disordered hyperuniform dielectric networks. This numerical investigation enabled the design and fabrication of disordered cellular solids with the predicted photonic bandgap characteristics for the microwave regime (right panel of Fig. 8), enabling unprecedented free-form waveguide geometries unhindered by crystallinity and anisotropy, and robust to defects. 101,102 Subsequently, stealthy hyperuniform materials were shown to have novel electromagnetic and elastic wave propagation characteristics, including transparency to long-wavelength radiation, 25,29,34,103,104 tunable diffusive and localization regimes, 29 enhanced absorption of waves, 105 and singular phononic band gaps. 28,106,107 The effective thermal or electrical conductivities and elastic moduli of various 2D ordered and disordered hyperuniform cellular networks were studied. 108 The multifunctionality of a class of such low-density networks was established by demonstrating that they maximize or virtually maximize the effective conductivities and elastic moduli. This was accomplished by using the machinery of homogenization theory, including optimal bounds and cross-property bounds, and statistical mechanics. It was rigorously proved that anisotropic networks consisting of sets of intersecting parallel channels in the low-density limit, ordered or disordered, possess optimal effective conductivity tensors. For a variety of different disordered networks, it was shown that when short-range and long-range order increases, there is an increase in both the effective conductivity and elastic moduli of the network. Moreover, it was demonstrated that the effective conductivity and elastic moduli of various disordered networks (derived from disordered "stealthy" hyperuniform point patterns), such as the one shown in the left panel of Fig. 9), possess virtually optimal values. Interestingly, the optimal networks for conductivity are also optimal for the fluid permeability associated with slow viscous flow through the channels as well as the mean survival time associated with diffusion-controlled reactions in the channels. 3D disordered hyperuniform networks, such as the one shown in right panel of Fig. 9, have been shown to have sizable photonic band gaps. 109 In summary, 2D and 3D disordered hyperuniform low-weight cellular networks are multifunctional with respect to transport (e.g., heat dissipation and fluid transport), mechanical and electromagnetic properties, which can be readily fabricated using 2D lithographic and 3D printing technologies [80][81][82][83] . 
The theoretical problem of estimating the effective properties of multiphase composite media is an outstanding one and dates back to work by some of the luminaries of science, including Maxwell, 110 Einstein. 112 The preponderance of previous theoretical studies have focused on the determination of static effective properties (e.g., dielectric constant, elastic moduli and fluid permeability) using a variety of methods, including approximation schemes, 110,113-115 bounding techniques, 3,4,77,[116][117][118] and exact series-expansion procedures. [119][120][121][122] Much less is known about the theoretical prediction of the effective dynamic dielectric constant tensor ε e (k I ), where k I is wavevector of the incident radiation. The strong-contrast formalism has recently been used to derive exact nonlocal expansions for ε e (k I ) that exactly account for complete microstructural information and hence multiple scattering to all orders for the range of wavenumbers for which our extended homogenization theory applies, i.e., 0 ≤ |k I | 1 (where is a characteristic heterogeneity length scale). 104 Due to the fast-convergence properties of such expansions, their lower-order truncations yield accurate closed-form approximate formulas for ε e (k I ) that depend on the spectral densityχ V (k). It was shown that disordered stealthy hyperuniform particulate composites exhibit novel wave characteristics, including the capacity to act as low-pass filters that transmit waves "isotropically" up to a selected wavenumber or refractive indices that abruptly change over a narrow range of wavenumbers. The aforementioned nonlocal formulas can now be used to accelerate the discovery of novel electromagnetic composites by appropriate tailoring of the spectral densities.
Cross-property relations for two-phase composite media were recently obtained that link effective elastic and electromagnetic wave characteristics to one another, including effective wave speeds and attenuation coefficients. 34 This was achieved by deriving accurate formulas for the effective elastodynamic properties 103 as well as effective electromagnetic properties, 104 each of which depend on the microstructure via the spectral density. Such formulas enable one to explore the wave characteristics of a broad class of disordered microstructures, including exotic disordered hyperuniform varieties. It was specifically demonstrated that disordered stealthy hyperuniform/nonhyperuniform microstructures exhibit novel elastic wave characteristics that have the potential for future applications, e.g., narrow-band or narrow-band-pass filters that absorb or transmit elastic waves isotropically for a narrow spectrum of frequencies, respectively. These cross-property relations for effective electromagnetic and elastic wave characteristics can be applied to design multifunctional composites (Fig. 10), such as exterior components of spacecrafts or building materials that require both excellent stiffness and electromagnetic absorption, and heat-sinks for CPUs that have to efficiently emit thermal radiation and suppress mechanical vibrations, and nondestructive evaluation of the mechanical strength of materials from the effective dielectric response.
The effective transport characteristics of fluid-saturated porous media have been studied using certain rigorous microstructure-property relations. 123 Of particular interest were the predictions of the formation factor F, mean survival time τ , principal NMR (diffusion) relaxation time T 1 , principal viscous relaxation time Θ 1 , and fluid permeability k for hyperuniform and nonhyperuniform models of porous media. Among other results, a Fourier representation of a classic rigorous upper bound on the fluid permeability was derived that depends on the spectral densityχ V (k) to infer how the permeabilities of hyperuniform porous media perform relative to those of nonhyperuniform ones; see Fig. 11. It was found that the velocity fields in nonhyperuniform porous media are generally much more localized over the pore space compared to those in their hyperuniform counterparts, which has certain implications for their permeabilities. Rigorous bounds on the transport properties F, τ , T 1 and Θ 1 suggest a new approximate formula for the fluid permeability that provides reasonably accurate permeability predictions of a certain class of hyperuniform and nonhyperuniform porous media. These comparative studies shed new light on the microstructural characteristics, such as pore-size statistics, in determining the transport properties of general porous media. In a more recent study, the second moment of the pore-size probability density function was shown to be correlated with the critical pore radius, which contains crucial connectivity information about the pore space. 124 All of these findings have important implications for the design of porous materials with desirable transport properties. A new dynamic probe of the microstructure of twophase media has been introduced called the spreadability S(t), which is a measure of the spreadability of diffusion information as a function of time t in any Euclidean space dimension d. 125 It is assumed that a solute at t = 0 is uniformly distributed throughout phase 2 with volume fraction φ 2 , and completely absent from phase 1 with volume fraction φ 1 , and each phase has same diffusion coefficient D. The spreadability is the fraction of the total amount of solute present that has diffused into phase 1 at time t. In particular, a three-dimensional formula due to Prager 126 was generalized to any dimension in direct space and its Fourier representation was derived. The latter is an exact integral relation for the spreadability S(t) that depends only on the spectral densityχ V (k): For hyperuniform media, it was shown that the "excess" spreadability, S(∞) − S(t), decays to its long-time behavior exponentially faster than that of any nonhyperuniform medium, the "slowest" being antihyperuniform media, as illustrated in Fig. 12. It was also shown that there is a remarkable link between the spreadability and nuclear magnetic resonance (NMR) pulsed field gradient spinecho amplitude and diffusion MRI. 125 Elsewhere, this new theoretical/experimental tool was applied to characterize many different models and a porous-medium sample. 60,127
S(∞) − S(t) = 1 (2π) d φ 2 R dχ V (k) exp[−k 2 Dt]dk ≥ 0,(11)
Conclusions and Outlook
We have seen that the exotic hybrid crystal-liquid structural attributes of disordered hyperuniform composites can be endowed with an array of extraordinary physical properties, including photonic, phononic, transport and mechanical characteristics that are only beginning to be discovered. Disordered hyperuniform media can have advantages over their periodic counterparts, such as unique or nearly optimal, direction-independent physical properties and robustness against defects. The field of hyperuniformity is still in its infancy, though, and a deeper fundamental understanding of these unusual states of matter is required in order to realize their full potential for next-generation materials. Future challenges include the further development of forward and inverse computational approaches to generate disordered hyperuniform structures, formulation of improved order metrics to rank order them, and identifying their desirable multifunctional characteristics. These computational designs can subsequently be combined with the 2D lithographic fabrication techniques 83 and 3D additive manufacturing techniques [80][81][82] to accelerate the discovery of novel multifunctional hyperuniform two-phase materials.
Cross-property "maps" have recently been introduced to connect combinations of pairs of effective static transport and elastic properties of general particulate media via analytical structure-property formulas. 128 Cross-property maps and their extensions will facilitate the rational design of composites with different desirable multifunctional characteristics. In future work, it would be valuable to formulate cross-property maps for the various physical properties described in Sec. 6 using the corresponding analytical estimates of these properties in order to aid in the multifunctional design of disordered hyperuniform composites.
To complement rigorous approaches to estimate the macroscopic properties of heterogeneous media from the microstructure, data-driven methodologies to establish structure-property relationships are increasingly being employed. 92,[129][130][131] The rapid increase in computational resources facilitates the calculation of effective properties for very large data sets (thousands or more) of different microstructures, including those obtained experimentally via 2D and 3D high-resolution imaging techniques. 3,[132][133][134][135] As a result, it has become manageable to generate large numbers of realistic virtual microstructures, and using those to perform exploratory computational screening of structure-property relationships. The application of machine-learning and other data-driven approaches for the discovery of multifunctional disordered hyperuniform composites has yet to be undertaken and hence is a promising avenue for future research.
Figure 1 .
1Schematic illustrating the allowable region in which all composites with specified phase properties must lie for the case of two different effective properties, K
Figure 2 .Figure 3 .
23Schematics indicating a circular observation window of radius R in two dimensions and its centroid x0 for a "typical" disordered nonhyperuniform (a), periodic (b), and disordered hyperuniform (c) media. In each of these examples, the phase volume fraction within the window will fluctuate as the window position varies. Whereas the local variance σ 2 V (R) for the nonhyperuniform medium decays like 1/R 2 for large R, it decays like 1/R 3 in both the periodic and disordered hyperuniform examples. In space dimension d and for large R, σ 2 V (R) scales like 1/R d and 1/R d+1 for nonhyperuniform and hyperuniform media, respectively. Examples of such media are periodic packings of spheres as well as unusual disordered sphere packings derived from stealthy point patterns. (a) Spectral densities versus wavenumber k for 2D nonhyperuniform and hyperuniform media (b) Corresponding local variances (multiplied by R 2 ) versus window radius R.
Figure 4 .
4Representative images of 2D hyperuniform absorbing-state particle configurations, as adapted from Ref. 45. (a) Disks with continuous size distribution. (b) Identical needles. (c) Identical squares.
Figure 5 .
5(a) A Voronoi tesselation of a nonhyperuniform disordered point configuration. (b) The disordered hyperuniform packing of spheres with a size distribution that results by adding particles in the Voronoi tesselation while ensuring that the local cell packing fraction is equal to global packing fraction. (c) A tessellation of space by spheres. (d) The Hashin-Shtrikman composite sphere assemblage that results by adding particles in the sphere tessellation while ensuring that the local cell packing fraction is equal to global packing fraction. These images are adapted from those in Ref. 75.
Figure 6. (a) A portion of a hyperuniform disk packing that was converted from a 2D RSA packing with the packing fraction φinit = 0.41025. (b) A portion of a hyperuniform sphere packing that was converted from a 3D saturated RSA packing with the packing fraction φinit = 0.288. This figure is adapted from Ref. 75.
Figure 7. Realizations of disordered hyperuniform two-phase materials for different values of the volume fraction φ and the positive exponent α, defined by the spectral-density scaling law (5). This figure is adapted from Ref. 30.
Figure 8. (a) Band structure for stealthy hyperuniform networks as a function of χ, as predicted from the computational study in Ref. 22. From left to right, χ = 0.1, 0.2, 0.3, 0.4 and 0.5. The relative band-gap size, measured by Δω/ω_C, takes on the largest value of 10.26% for the rightmost case of χ = 0.5. (b) 3D fabrication of the computationally designed maximal band-gap structure, looking down from the top, as adapted from Ref. 102. The solid phase is aluminum oxide.
Figure 9. (a) 2D disordered hyperuniform trivalent network, as adapted from Ref. 108. (b) 3D disordered hyperuniform tetrahedrally coordinated network, as adapted from Ref. 109.
Figure 10. Schematics illustrating elastic and electromagnetic waves at two different wavenumbers (a) k_I and (b) k_II incident to, inside of, and transmitted from a two-phase heterogeneous material (a large ellipse) consisting of a matrix phase (shown in yellow) and a dispersed phase (shown in cyan). Parallel lines and sinusoidal curves represent elastic and electromagnetic waves, respectively. (a) For an elastic wave with wavenumber k_I, while the wavefronts inside this material experience microscopic disturbances, they effectively behave like a plane wave inside a homogeneous material with an effective wavenumber (k_e)_I and effective elastic moduli K_e and G_e. Analogously, for an electromagnetic wave, this material behaves like a homogeneous material with an effective dielectric constant ε_e. For instance, both elastic and electromagnetic waves are attenuated due to scattering if this composite has a non-zero scattering intensity at k_I. (b) For waves (red) of wavenumber k_II, this composite can be effectively transparent if it has a zero scattering intensity at k_II. This figure is adapted from Ref. 34.
Figure 11. Images of the void space after applying a "dilation" operation to three different sphere packings, as adapted from Ref. 123. (a) A nonhyperuniform equilibrium packing. (b) A hyperuniform MRJ packing. (c) A disordered stealthy packing.
where S(∞) = φ1. Importantly, the short-, intermediate- and long-time behaviors of S(t) contain crucial small-, intermediate- and large-scale structural characteristics.
Figure 12. Excess spreadabilities versus dimensionless time Dt/a^2 for antihyperuniform media (top curve), Debye random media (middle curve), and disordered hyperuniform media (bottom curve) for d = 3 and φ2 = 0.5, as adapted from Ref. 125. The long-time inverse power-law scalings of S(∞) − S(t) for each of these models are indicated. Here a is a characteristic length scale for each model, as defined in Ref. 125.
Acknowledgements. The author thanks Jaeuk Kim
Mechanics of Composite Materials. R M Christensen, WileyNew YorkChristensen RM. Mechanics of Composite Materials. New York: Wiley, 1979.
Hori M Nemat-Nasser S, Micromechanics, Overall Properties of Heterogeneous Materials. Amsterdam; North-HollandNemat-Nasser S and Hori M. Micromechanics: Overall Properties of Heterogeneous Materials. Amsterdam: North- Holland, 1993.
Random Heterogeneous Materials: Microstructure and Macroscopic Properties. S Torquato, Springer-VerlagNew YorkTorquato S. Random Heterogeneous Materials: Microstruc- ture and Macroscopic Properties. New York: Springer- Verlag, 2002.
The Theory of Composites. G W Milton, Cambridge University PressCambridge, EnglandMilton GW. The Theory of Composites. Cambridge, England: Cambridge University Press, 2002.
Heterogeneous Materials I: Linear Transport and Optical Properties. M Sahimi, Springer-VerlagNew YorkSahimi M. Heterogeneous Materials I: Linear Transport and Optical Properties. New York: Springer-Verlag, 2003.
The dielectric constant of a composite material-A problem in classical physics. D J Bergman, Phys Rep C. 43Bergman DJ. The dielectric constant of a composite material- A problem in classical physics. Phys Rep C 1978; 43: 377- 407.
On the effective conductivity of polycrystals and a three-dimensional phaseinterchange inequality. M Avellaneda, A V Cherkaev, K A Lurie, J Appl Phys. 63Avellaneda M, Cherkaev AV, Lurie KA et al. On the effective conductivity of polycrystals and a three-dimensional phase- interchange inequality. J Appl Phys 1988; 63: 4989-5003.
Relationship between permeability and diffusioncontrolled trapping constant of porous media. S Torquato, Phys Rev Lett. 64Torquato S. Relationship between permeability and diffusion- controlled trapping constant of porous media. Phys Rev Lett 1990; 64: 2644-2646.
Diffusion and reaction in heterogeneous media: Pore size distribution, relaxation times, and mean survival time. S Torquato, M Avellaneda, J Chem Phys. 95Torquato S and Avellaneda M. Diffusion and reaction in heterogeneous media: Pore size distribution, relaxation times, and mean survival time. J Chem Phys 1991; 95: 6477-6489.
Rigorous link between fluid permeability, electrical conductivity, and relaxation times for transport in porous media. M Avellaneda, S Torquato, Phys Fluids A. 3Avellaneda M and Torquato S. Rigorous link between fluid permeability, electrical conductivity, and relaxation times for transport in porous media. Phys Fluids A 1991; 3: 2529-2540.
Rigorous link between the conductivity and elastic moduli of fibre-reinforced composite materials. L V Gibiansky, S Torquato, Phil Trans Royal Soc Lond A. 353Gibiansky LV and Torquato S. Rigorous link between the conductivity and elastic moduli of fibre-reinforced composite materials. Phil Trans Royal Soc Lond A 1995; 353: 243-278.
Connection between the conductivity and bulk modulus of isotropic composite materials. Gibiansky Lv, S Torquato, Proc R Soc Lond A. 452Gibiansky LV and Torquato S. Connection between the conductivity and bulk modulus of isotropic composite materials. Proc R Soc Lond A 1996; 452: 253-283.
Thermal expansion of isotropic multiphase composites and polycrystals. L V Gibiansky, S Torquato, J Mech Phys Solids. 45Gibiansky LV and Torquato S. Thermal expansion of isotropic multiphase composites and polycrystals. J Mech Phys Solids 1997; 45: 1223-1252.
Generating optimal topologies in structural design using a homogenization method. M P Bendsøe, N Kikuchi, Comput Methods Appl Mech Eng. 71Bendsøe MP and Kikuchi N. Generating optimal topologies in structural design using a homogenization method. Comput Methods Appl Mech Eng 1988; 71: 197-224.
Design of materials with extreme thermal expansion using a three-phase topology optimization method. Sigmund O Torquato, S , J Mech Phys Solids. 45Sigmund O and Torquato S. Design of materials with extreme thermal expansion using a three-phase topology optimization method. J Mech Phys Solids 1997; 45: 1037-1067.
Multifunctional composites: Optimizing microstructures for simultaneous transport of heat and electricity. S Torquato, Hyun S Donev, A , Phys Rev Lett. 89266601Torquato S, Hyun S and Donev A. Multifunctional composites: Optimizing microstructures for simultaneous transport of heat and electricity. Phys Rev Lett 2002; 89: 266601.
Systematic design of phononic band-gap materials and structures by topology optimization. Sigmund O Sondergaard, J J , Phil Trans R Soc Lond A. 100Sigmund O and Sondergaard JJ. Systematic design of phononic band-gap materials and structures by topology optimization. Phil Trans R Soc Lond A 2003; 100: 1001- 1019.
Optimal design of heterogeneous materials. S Torquato, Ann Rev Mater Res. 40Torquato S. Optimal design of heterogeneous materials. Ann Rev Mater Res 2010; 40: 101-129.
Engineered disorder in photonics. S Yu, C W Qiu, Y Chong, Nature Rev Mater. 6Yu S, Qiu CW, Chong Y et al. Engineered disorder in photonics. Nature Rev Mater 2021; 6: 226-243.
Local density fluctuations, hyperuniform systems, and order metrics. S Torquato, F H Stillinger, Phys Rev E. 6841113Torquato S and Stillinger FH. Local density fluctuations, hyperuniform systems, and order metrics. Phys Rev E 2003; 68: 041113.
Hyperuniform states of matter. S Torquato, Phys Rep. 745Torquato S. Hyperuniform states of matter. Phys Rep 2018; 745: 1-95.
Designer disordered materials with large complete photonic band gaps. M Florescu, S Torquato, P J Steinhardt, Proc Nat Acad Sci. 106Florescu M, Torquato S and Steinhardt PJ. Designer disordered materials with large complete photonic band gaps. Proc Nat Acad Sci 2009; 106: 20658-20663.
Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem. Y Jiao, T Lau, H Hatzikirou, Phys Rev E. 8922721Jiao Y, Lau T, Hatzikirou H et al. Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem. Phys Rev E 2014; 89: 022721.
Toward hyperuniform disordered plasmonic nanostructures for reproducible surface-enhanced Raman spectroscopy. De Rosa, C Auriemma, F Diletto, C , Phys Chem Chem Phys. 17De Rosa C, Auriemma F, Diletto C et al. Toward hyperuni- form disordered plasmonic nanostructures for reproducible surface-enhanced Raman spectroscopy. Phys Chem Chem Phys 2015; 17: 8061-8069.
High-density hyperuniform materials can be transparent. O Leseur, R Pierrat, R Carminati, Optica. 3Leseur O, Pierrat R and Carminati R. High-density hyperuniform materials can be transparent. Optica 2016; 3: 763-767.
3D printed hollow-core terahertz optical waveguides with hyperuniform disordered dielectric reflectors. T Ma, H Guerboukha, M Girard, Adv Optical Mater. 4Ma T, Guerboukha H, Girard M et al. 3D printed hollow-core terahertz optical waveguides with hyperuniform disordered dielectric reflectors. Adv Optical Mater 2016; 4: 2085-2094.
geometrical and topological properties of stealthy disordered hyperuniform two-phase systems. G Zhang, F H Stillinger, Torquato S Transport, J Chem Phys. 145244109Zhang G, Stillinger FH and Torquato S. Transport, geometrical and topological properties of stealthy disordered hyperuniform two-phase systems. J Chem Phys 2016; 145: 244109.
Hyperuniform disordered phononic structures. G Gkantzounis, T Amoah, M Florescu, Phys Rev B. 9594120Gkantzounis G, Amoah T and Florescu M. Hyperuniform disordered phononic structures. Phys Rev B 2017; 95: 094120.
Transport Phase Diagram and Anderson Localization in Hyperuniform Disordered Photonic Materials. L S Froufe-Pérez, M Engel, José Sáenz, J , Proc Nat Acad Sci. 114Froufe-Pérez LS, Engel M, José Sáenz J et al. Transport Phase Diagram and Anderson Localization in Hyperuniform Disordered Photonic Materials. Proc Nat Acad Sci 2017; 114: 9570--9574.
Designing disordered hyperuniform two-phase materials with novel physical properties. Chen D Torquato, S , Acta Mater. 142Chen D and Torquato S. Designing disordered hyperuniform two-phase materials with novel physical properties. Acta Mater 2018; 142: 152-161.
Experimental demonstration of luneburg lens based on hyperuniform disordered media. H Zhang, H Chu, H Giddens, Appl Phys Lett. 11453507Zhang H, Chu H, Giddens H et al. Experimental demonstration of luneburg lens based on hyperuniform disordered media. Appl Phys Lett 2019; 114: 053507.
Engineered hyperuniformity for directional light extraction. S Gorsky, W A Britton, Y Chen, APL Photonics. 4110801Gorsky S, Britton WA, Chen Y et al. Engineered hyperuniformity for directional light extraction. APL Photonics 2019; 4: 110801.
Absorption of scalar waves in correlated disordered media and its maximization using stealth hyperuniformity. A Sheremet, R Pierrat, R Carminati, Phys Rev A. 10153829Sheremet A, Pierrat R and Carminati R. Absorption of scalar waves in correlated disordered media and its maximization using stealth hyperuniformity. Phys Rev A 2020; 101: 053829.
Multifunctional composites for elastic and electromagnetic wave propagation. J Kim, S Torquato, Proc Nat Acad Sci. 117Kim J and Torquato S. Multifunctional composites for elastic and electromagnetic wave propagation. Proc Nat Acad Sci 2020; 117: 8764-8774.
Hyperuniformity in point patterns and two-phase heterogeneous media. Zachary Ce, S Torquato, J Stat Mech: Theory & Exp. 12015Zachary CE and Torquato S. Hyperuniformity in point patterns and two-phase heterogeneous media. J Stat Mech: Theory & Exp 2009; 2009: P12015.
Scattering by an inhomogeneous solid. II. The correlation function and its applications. P Debye, Anderson Hr, H Brumberger, J Appl Phys. 28Debye P, Anderson HR and Brumberger H. Scattering by an inhomogeneous solid. II. The correlation function and its applications. J Appl Phys 1957; 28: 679-683.
Local volume fraction fluctuations in heterogeneous media. Lu Bl, S Torquato, J Chem Phys. 93Lu BL and Torquato S. Local volume fraction fluctuations in heterogeneous media. J Chem Phys 1990; 93: 3452-3459.
Disordered hyperuniform heterogeneous materials. S Torquato, J Phys: Cond Mat. 28414012Torquato S. Disordered hyperuniform heterogeneous materials. J Phys: Cond Mat 2016; 28: 414012.
Structural characterization of many-particle systems on approach to hyperuniform states. S Torquato, Phys Rev E. 10352126Torquato S. Structural characterization of many-particle systems on approach to hyperuniform states. Phys Rev E 2021; 103: 052126.
Introduction to Phase Transitions and Critical Phenomena. H E Stanley, Oxford University PressNew YorkStanley HE. Introduction to Phase Transitions and Critical Phenomena. New York: Oxford University Press, 1987.
The Theory of Critical Phenomena: An Introduction to the Renormalization Group. J J Binney, N J Dowrick, A J Fisher, Oxford University PressOxford, EnglandBinney JJ, Dowrick NJ, Fisher AJ et al. The Theory of Critical Phenomena: An Introduction to the Renormalization Group. Oxford, England: Oxford University Press, 1992.
The fractal geometry of nature. B B Mandelbrot, W. H. FreemanNew YorkMandelbrot BB. The fractal geometry of nature. New York: W. H. Freeman, 1982.
Local number fluctuations in hyperuniform and nonhyperuniform systems: Higher-order moments and distribution functions. S Torquato, Kim J Klatt, M A , Phys Rev X. 1121028Torquato S, Kim J and Klatt MA. Local number fluctuations in hyperuniform and nonhyperuniform systems: Higher-order moments and distribution functions. Phys Rev X 2021; 11: 021028.
Hyperuniformity and anti-hyperuniformity in one-dimensional substitution tilings. E C Oguz, Jes Socolar, P J Steinhardt, Acta Cryst Section A: Foundations & Advances. 75Oguz EC, Socolar JES, Steinhardt PJ et al. Hyperuniformity and anti-hyperuniformity in one-dimensional substitution tilings. Acta Cryst Section A: Foundations & Advances 2019; A75: 3-13.
Hyperuniformity of generalized random organization models. Z Ma, S Torquato, Phys Rev E. 9922115Ma Z and Torquato S. Hyperuniformity of generalized random organization models. Phys Rev E 2019; 99: 022115.
Unexpected density fluctuations in disordered jammed hard-sphere packings. A Donev, F H Stillinger, S Torquato, Phys Rev Lett. 9590604Donev A, Stillinger FH and Torquato S. Unexpected density fluctuations in disordered jammed hard-sphere packings. Phys Rev Lett 2005; 95: 090604.
Is random close packing of spheres well defined. S Torquato, T M Truskett, P G Debenedetti, Phys Rev Lett. 84Torquato S, Truskett TM and Debenedetti PG. Is random close packing of spheres well defined? Phys Rev Lett 2000; 84: 2064-2067.
Basic understanding of condensed phases of matter via packing models. S Torquato, Perspective, J Chem Phys. 14920901Torquato S. Perspective: Basic understanding of condensed phases of matter via packing models. J Chem Phys 2018; 149: 020901.
Packing hyperspheres in high-dimensional Euclidean spaces. M Skoge, A Donev, F H Stillinger, Phys Rev E. 7441127Skoge M, Donev A, Stillinger FH et al. Packing hyperspheres in high-dimensional Euclidean spaces. Phys Rev E 2006; 74: 041127.
Distinctive features arising in maximally random jammed packings of superballs. Y Jiao, F H Stillinger, S Torquato, Phys Rev E. 8141304Jiao Y, Stillinger FH and Torquato S. Distinctive features arising in maximally random jammed packings of superballs. Phys Rev E 2010; 81: 041304.
Hyperuniform longrange correlations are a signature of disordered jammed hardparticle packings. C E Zachary, Y Jiao, S Torquato, Phys Rev Lett. 106178001Zachary CE, Jiao Y and Torquato S. Hyperuniform long- range correlations are a signature of disordered jammed hard- particle packings. Phys Rev Lett 2011; 106: 178001.
Maximally random jammed packings of Platonic solids: Hyperuniform long-range correlations and isostaticity. Y Jiao, S Torquato, Phys Rev E. 8441309Jiao Y and Torquato S. Maximally random jammed packings of Platonic solids: Hyperuniform long-range correlations and isostaticity. Phys Rev E 2011; 84: 041309.
Nonequilibrium static diverging length scales on approaching a prototypical model glassy state. A B Hopkins, F H Stillinger, S Torquato, Phys Rev E. 8621505Hopkins AB, Stillinger FH and Torquato S. Nonequilibrium static diverging length scales on approaching a prototypical model glassy state. Phys Rev E 2012; 86: 021505.
Equilibrium phase behavior and maximally random jammed state of truncated tetrahedra. D Chen, Y Jiao, S Torquato, J Phys Chem B. 118Chen D, Jiao Y and Torquato S. Equilibrium phase behavior and maximally random jammed state of truncated tetrahedra. J Phys Chem B 2014; 118: 7981-7992.
A geometric-structure theory for maximally random jammed packings. J Tian, Y Xu, Y Jiao, Sci Rep. 516722Tian J, Xu Y, Jiao Y et al. A geometric-structure theory for maximally random jammed packings. Sci Rep 2015; 5: 16722.
Characterization of maximally random jammed sphere packings. II. Correlation functions and density fluctuations. M A Klatt, S Torquato, Phys Rev E. 9422152Klatt MA and Torquato S. Characterization of maximally random jammed sphere packings. II. Correlation functions and density fluctuations. Phys Rev E 2016; 94: 022152.
Critical slowing down and hyperuniformity on approach to jamming. S Atkinson, G Zhang, A B Hopkins, Phys Rev E. 9412902Atkinson S, Zhang G, Hopkins AB et al. Critical slowing down and hyperuniformity on approach to jamming. Phys Rev E 2016; 94: 012902.
Static structural signatures of nearly jammed disordered and ordered hardsphere packings: Direct correlation function. S Atkinson, F H Stillinger, S Torquato, Phys Rev E. 9432902Atkinson S, Stillinger FH and Torquato S. Static structural signatures of nearly jammed disordered and ordered hard- sphere packings: Direct correlation function. Phys Rev E 2016; 94: 032902.
Hard convex lens-shaped particles: metastable, glassy and jammed states. G Cinacchi, S Torquato, Soft matter. 14Cinacchi G and Torquato S. Hard convex lens-shaped particles: metastable, glassy and jammed states. Soft matter 2018; 14: 8205-8218.
Characterization of void space, large-scale structure, and transport properties of maximally random jammed packings of superballs. C E Maher, F H Stillinger, S Torquato, Phys Rev Mater. 625603Maher CE, Stillinger FH and Torquato S. Characterization of void space, large-scale structure, and transport properties of maximally random jammed packings of superballs. Phys Rev Mater 2022; 6: 025603.
Long-wavelength structural anomalies in jammed systems. L E Silbert, M Silbert, Phys Rev E. 8041304Silbert LE and Silbert M. Long-wavelength structural anomalies in jammed systems. Phys Rev E 2009; 80: 041304.
Suppressed compressibility at large scale in jammed packings of sizedisperse spheres. L Berthier, P Chaudhuri, C Coulais, Phys Rev Lett. 106120601Berthier L, Chaudhuri P, Coulais C et al. Suppressed compressibility at large scale in jammed packings of size- disperse spheres. Phys Rev Lett 2011; 106: 120601.
Incompressibility of polydisperse random-close-packed colloidal particles. R Kurita, E R Weeks, Phys Rev E. 8430401Kurita R and Weeks ER. Incompressibility of polydisperse random-close-packed colloidal particles. Phys Rev E 2011; 84: 030401.
Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres. R Dreyfus, Y Xu, T Still, Phys Rev E. 9112302Dreyfus R, Xu Y, Still T et al. Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres. Phys Rev E 2015; 91: 012302.
Optimizing hyperuniformity in self-assembled bidisperse emulsions. J Ricouvier, R Pierrat, R Carminati, Phys Rev Lett. 119208001Ricouvier J, Pierrat R, Carminati R et al. Optimizing hyperuniformity in self-assembled bidisperse emulsions. Phys Rev Lett 2017; 119: 208001.
Characterization of maximally random jammed sphere packings. III. Transport and electromagnetic properties via correlation functions. M A Klatt, S Torquato, Phys Rev E. 9712118Klatt MA and Torquato S. Characterization of maximally random jammed sphere packings. III. Transport and electromagnetic properties via correlation functions. Phys Rev E 2018; 97: 012118.
Chaos and threshold for irreversibility in sheared suspensions. D J Pine, J P Gollub, J F Brady, Nature. 438Pine DJ, Gollub JP, Brady JF et al. Chaos and threshold for irreversibility in sheared suspensions. Nature 2005; 438: 997- 1000.
Random organization in periodically driven systems. C Laurent, P M Chaikin, J P Gollub, Nature Phys. 4Laurent C, Chaikin PM, Gollub JP et al. Random organization in periodically driven systems. Nature Phys 2008; 4: 420- 424.
Hyperuniformity of critical absorbing states. D Hexner, D Levine, Phys Rev Lett. 114110602Hexner D and Levine D. Hyperuniformity of critical absorbing states. Phys Rev Lett 2015; 114: 110602.
. D Hexner, D Levine, Diffusion Noise, Hyperuniformity , Phys Rev Lett. 11820601Hexner D and Levine D. Noise, Diffusion, and Hyperuniformity. Phys Rev Lett 2017; 118: 020601.
Hyperuniform density fluctuations and diverging dynamic correlations in periodically driven colloidal suspensions. E Tjhung, L Berthier, Phys Rev Lett. 114148301Tjhung E and Berthier L. Hyperuniform density fluctuations and diverging dynamic correlations in periodically driven colloidal suspensions. Phys Rev Lett 2015; 114: 148301.
Particle-density fluctuations and universality in the conserved stochastic sandpile. R Dickman, S D Da Cunha, Phys Rev E. 9220104Dickman R and da Cunha SD. Particle-density fluctuations and universality in the conserved stochastic sandpile. Phys Rev E 2015; 92: 020104.
Screening, hyperuniformity, and instability in the sedimentation of irregular objects. T Goldfriend, H Diamant, T A Witten, Phys Rev Lett. 118158005Goldfriend T, Diamant H and Witten TA. Screening, hyperuniformity, and instability in the sedimentation of irregular objects. Phys Rev Lett 2017; 118: 158005.
Hyperuniformity with no fine tuning in sheared sedimenting suspensions. J Wang, J M Schwarz, J D Paulsen, Nat Comm. 9Wang J, Schwarz JM and Paulsen JD. Hyperuniformity with no fine tuning in sheared sedimenting suspensions. Nat Comm 2018; 9: 1-7.
Methodology to construct large realizations of perfectly hyperuniform disordered packings. J Kim, S Torquato, Phys Rev E. 9952141Kim J and Torquato S. Methodology to construct large realizations of perfectly hyperuniform disordered packings. Phys Rev E 2019; 99: 052141.
New tessellation-based procedure to design perfectly hyperuniform disordered dispersions for materials discovery. J Kim, S Torquato, Acta Mater. 168Kim J and Torquato S. New tessellation-based procedure to design perfectly hyperuniform disordered dispersions for materials discovery. Acta Mater 2019; 168: 143-151.
A variational approach to the theory of the effective magnetic permeability of multiphase materials. Z Hashin, S Shtrikman, J Appl Phys. 33Hashin Z and Shtrikman S. A variational approach to the theory of the effective magnetic permeability of multiphase materials. J Appl Phys 1962; 33: 3125-3131.
A variational approach to the elastic behavior of multiphase materials. Z Hashin, S Shtrikman, J Mech Phys Solids. 4Hashin Z and Shtrikman S. A variational approach to the elastic behavior of multiphase materials. J Mech Phys Solids 1963; 4: 286-295.
Random sequential addition of hard spheres in high Euclidean dimensions. S Torquato, O U Uche, F H Stillinger, Phys Rev E. 7461308Torquato S, Uche OU and Stillinger FH. Random sequential addition of hard spheres in high Euclidean dimensions. Phys Rev E 2006; 74: 061308.
A review of additive manufacturing. K V Wong, A Hernandez, Int Scholarly Res Notices. Wong KV and Hernandez A. A review of additive manufacturing. Int Scholarly Res Notices 2012; 2012.
A review on 3D micro-additive manufacturing technologies. M Vaezi, H Seitz, Yang S , Int J Adv Manuf Technol. 67Vaezi M, Seitz H and Yang S. A review on 3D micro-additive manufacturing technologies. Int J Adv Manuf Technol 2013; 67: 1721-1754.
A review on powder-based additive manufacturing for tissue engineering: Selective laser sintering and inkjet 3d printing. Sfs Shirazi, S Gharehkhani, M Mehrali, Sci Tech Adv Mater. 1633502Shirazi SFS, Gharehkhani S, Mehrali M et al. A review on powder-based additive manufacturing for tissue engineering: Selective laser sintering and inkjet 3d printing. Sci Tech Adv Mater 2015; 16: 033502.
Assembly of colloidal particles in solution. K Zhao, T G Mason, Rep Prog Phys. 81126601Zhao K and Mason TG. Assembly of colloidal particles in solution. Rep Prog Phys 2018; 81: 126601.
Reconstructing random media. Cly Yeong, S Torquato, Phys Rev E. 57Yeong CLY and Torquato S. Reconstructing random media. Phys Rev E 1998; 57: 495-506.
Reconstructing random media: II. Three-dimensional media from two-dimensional cuts. Cly Yeong, S Torquato, Phys Rev E. 58Yeong CLY and Torquato S. Reconstructing random media: II. Three-dimensional media from two-dimensional cuts. Phys Rev E 1998; 58: 224-233.
Modeling heterogeneous materials via two-point correlation functions: Basic principles. Y Jiao, F H Stillinger, S Torquato, Phys Rev E. 7631110Jiao Y, Stillinger FH and Torquato S. Modeling heterogeneous materials via two-point correlation functions: Basic principles. Phys Rev E 2007; 76: 031110.
A superior descriptor of random textures and its predictive capacity. Y Jiao, F H Stillinger, S Torquato, Proc Nat Acad Sci. 106Jiao Y, Stillinger FH and Torquato S. A superior descriptor of random textures and its predictive capacity. Proc Nat Acad Sci 2009; 106: 17634-17639.
Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media. L M Pant, Mitra Sk, M Secanell, Phys Rev E. 9263303Pant LM, Mitra SK and Secanell M. Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media. Phys Rev E 2015; 92: 063303.
Microstructure and mechanical properties of hyperuniform heterogeneous materials. Y Xu, S Chen, P E Chen, Phys Rev E. 9643301Xu Y, Chen S, Chen PE et al. Microstructure and mechanical properties of hyperuniform heterogeneous materials. Phys Rev E 2017; 96: 043301.
Hierarchical optimization: Fast and robust multiscale stochastic reconstructions with rescaled correlation functions. M V Karsanina, K M Gerke, Phys Rev Lett. 121265501Karsanina MV and Gerke KM. Hierarchical optimization: Fast and robust multiscale stochastic reconstructions with rescaled correlation functions. Phys Rev Lett 2018; 121: 265501.
On the importance of simulated annealing algorithms for stochastic reconstruction constrained by loworder microstructural descriptors. P Čapek, Trans Porous Media. 121Čapek P. On the importance of simulated annealing algorithms for stochastic reconstruction constrained by low- order microstructural descriptors. Trans Porous Media 2018; 121: 59-80.
A transfer learning approach for microstructure reconstruction and structureproperty predictions. X Li, Y Zhang, H Zhao, Sci Rep. 813461Li X, Zhang Y, Zhao H et al. A transfer learning approach for microstructure reconstruction and structure- property predictions. Sci Rep 2018; 8: 13461.
Understanding degeneracy of two-point correlation functions via debye random media. M Skolnick, S Torquato, Phys Rev E. 10445306Skolnick M and Torquato S. Understanding degeneracy of two-point correlation functions via debye random media. Phys Rev E 2021; 104: 045306.
Constraints on collective density variables: Two dimensions. O U Uche, F H Stillinger, S Torquato, Phys Rev E. 7046122Uche OU, Stillinger FH and Torquato S. Constraints on collective density variables: Two dimensions. Phys Rev E 2004; 70: 046122.
Classical disordered ground states: Super-ideal gases, and stealth and equiluminous materials. R D Batten, F H Stillinger, S Torquato, J Appl Phys. 10433504Batten RD, Stillinger FH and Torquato S. Classical disordered ground states: Super-ideal gases, and stealth and equi- luminous materials. J Appl Phys 2008; 104: 033504.
Ground states of stealthy hyperuniform potentials: I. Entropically favored configurations. G Zhang, F Stillinger, S Torquato, Phys Rev E. 9222119Zhang G, Stillinger F and Torquato S. Ground states of stealthy hyperuniform potentials: I. Entropically favored configurations. Phys Rev E 2015; 92: 022119.
Collective coordinates control of density distributions. O U Uche, S Torquato, F H Stillinger, Phys Rev E. 7431104Uche OU, Torquato S and Stillinger FH. Collective coordinates control of density distributions. Phys Rev E 2006; 74: 031104.
The perfect glass paradigm: Disordered hyperuniform glasses down to absolute zero. G Zhang, F H Stillinger, S Torquato, Sci Rep. 636963Zhang G, Stillinger FH and Torquato S. The perfect glass paradigm: Disordered hyperuniform glasses down to absolute zero. Sci Rep 2016; 6: 36963.
Ensemble theory for stealthy hyperuniform disordered ground states. S Torquato, G Zhang, F H Stillinger, Phys Rev X. 521020Torquato S, Zhang G and Stillinger FH. Ensemble theory for stealthy hyperuniform disordered ground states. Phys Rev X 2015; 5: 021020.
Characterizing the hyperuniformity of ordered and disordered two-phase media. J Kim, S Torquato, Phys Rev E. 10312123Kim J and Torquato S. Characterizing the hyperuniformity of ordered and disordered two-phase media. Phys Rev E 2021; 103: 012123.
Optical cavities and waveguides in hyperuniform disordered photonic solids. M Florescu, P J Steinhardt, S Torquato, Florescu M, Steinhardt PJ and Torquato S. Optical cavities and waveguides in hyperuniform disordered photonic solids.
. Phys Rev B. 87165116Phys Rev B 2013; 87: 165116.
Isotropic band gaps and freeform waveguides observed in hyperuniform disordered photonic solids. W Man, M Florescu, E P Williamson, Proc Nat Acad Sci. 110Man W, Florescu M, Williamson EP et al. Isotropic band gaps and freeform waveguides observed in hyperuniform disordered photonic solids. Proc Nat Acad Sci 2013; 110: 15886-15891.
Effective elastic wave characteristics of composite media. J Kim, S Torquato, New J Phys. 22123050Kim J and Torquato S. Effective elastic wave characteristics of composite media. New J Phys 2020; 22: 123050.
Nonlocal effective electromagnetic wave characteristics of composite media: Beyond the quasistatic regime. S Torquato, J Kim, Phys Rev X. 1121002Torquato S and Kim J. Nonlocal effective electromagnetic wave characteristics of composite media: Beyond the quasistatic regime. Phys Rev X 2021; 11: 021002.
Enhanced absorption of waves in stealth hyperuniform disordered media. F Bigourdan, R Pierrat, R Carminati, Optics Express. 27Bigourdan F, Pierrat R and Carminati R. Enhanced absorption of waves in stealth hyperuniform disordered media. Optics Express 2019; 27: 8666-8682.
Stealth acoustic materials. V Romero-García, N Lamothe, G Theocharis, Phys Rev Appl. 1154076Romero-García V, Lamothe N, Theocharis G et al. Stealth acoustic materials. Phys Rev Appl 2019; 11: 054076.
Impact of particle size and multiple scattering on the propagation of waves in stealthy-hyperuniform media. A Rohfritsch, J M Conoir, T Valier-Brasier, Phys Rev E. 10253001Rohfritsch A, Conoir JM, Valier-Brasier T et al. Impact of particle size and multiple scattering on the propagation of waves in stealthy-hyperuniform media. Phys Rev E 2020; 102: 053001.
Multifunctional hyperuniform cellular networks: optimality, anisotropy and disorder. S Torquato, D Chen, Multifunc Mater. 115001Torquato S and Chen D. Multifunctional hyperuniform cellular networks: optimality, anisotropy and disorder. Multifunc Mater 2018; 1: 015001.
Gap sensitivity reveals universal behaviors in optimized photonic crystal and disordered networks. M A Klatt, P J Steinhardt, S Torquato, Phys Rev Lett. 12737401Klatt MA, Steinhardt PJ and Torquato S. Gap sensitivity reveals universal behaviors in optimized photonic crystal and disordered networks. Phys Rev Lett 2021; 127: 037401.
J C Maxwell, Treatise on Electricity and Magnetism. OxfordClarendon PressMaxwell JC. Treatise on Electricity and Magnetism. Oxford: Clarendon Press, 1873.
On the influence of obstacles arranged in a rectangular order upon the properties of medium. J W Strutt, Phil Mag. 34Strutt JW. On the influence of obstacles arranged in a rectangular order upon the properties of medium. Phil Mag 1892; 34: 481-502.
Eine neue Bestimmung der Moleküldimensionen. A Einstein, Ann Phys. 19Einstein A. Eine neue Bestimmung der Moleküldimensionen. Ann Phys 1906; 19: 289-306.
Berechnung verschiedener Physikalischer Konstanten von heterogenen Substanzen. D Bruggeman, Ann Physik (Liepzig). 24Bruggeman D. Berechnung verschiedener Physikalischer Konstanten von heterogenen Substanzen. Ann Physik (Liepzig) 1935; 24: 636-679.
A calculation of the viscous force exerted by a flowing fluid on a dense swarm of particles. H C Brinkman, Appl Sci Res. 1Brinkman HC. A calculation of the viscous force exerted by a flowing fluid on a dense swarm of particles. Appl Sci Res 1947; A1: 27-34.
On the elastic moduli of some heterogeneous materials. B Budiansky, J Mech Phys Solids. 13Budiansky B. On the elastic moduli of some heterogeneous materials. J Mech Phys Solids 1965; 13: 223-227.
Viscous flow through porous media. S Prager, Phys Fluids. 4Prager S. Viscous flow through porous media. Phys Fluids 1961; 4: 1477-1482.
Use of the variational approach to determine bounds for the effective permittivity in random media. M Beran, Nuovo Cimento. 38Beran M. Use of the variational approach to determine bounds for the effective permittivity in random media. Nuovo Cimento 1965; 38: 771-782.
Optimal bounds for the effective energy of a mixture of isotropic, incompressible elastic materials. R V Kohn, R Lipton, Arch Rational Mech Analysis. 102Kohn RV and Lipton R. Optimal bounds for the effective energy of a mixture of isotropic, incompressible elastic materials. Arch Rational Mech Analysis 1988; 102: 331-350.
Solid mixture permittivities. W F Brown, J Chem Phys. 23Brown WF. Solid mixture permittivities. J Chem Phys 1955; 23: 1514-1517.
Cluster expansion for the dielectric constant of a polarizable suspension. B U Felderhof, G W Ford, Egd Cohen, J Stat Phys. 28Felderhof BU, Ford GW and Cohen EGD. Cluster expansion for the dielectric constant of a polarizable suspension. J Stat Phys 1982; 28: 135-164.
Effective conductivity of anisotropic two-phase composite media. A K Sen, S Torquato, Phys Rev B. 39Sen AK and Torquato S. Effective conductivity of anisotropic two-phase composite media. Phys Rev B 1989; 39: 4504- 4515.
Exact expression for the effective elastic tensor of disordered composites. S Torquato, Phys Rev Lett. 79Torquato S. Exact expression for the effective elastic tensor of disordered composites. Phys Rev Lett 1997; 79: 681-684.
Predicting transport characteristics of hyperuniform porous media via rigorous microstructureproperty relations. S Torquato, Adv Water Resour. 140103565Torquato S. Predicting transport characteristics of hyperuniform porous media via rigorous microstructure- property relations. Adv Water Resour 2020; 140: 103565.
Critical pore radius and transport properties of disordered hard-and overlappingsphere models. M A Klatt, R M Ziff, S Torquato, Phys Rev E. 10414127Klatt MA, Ziff RM and Torquato S. Critical pore radius and transport properties of disordered hard-and overlapping- sphere models. Phys Rev E 2021; 104: 014127.
Diffusion spreadability as a probe of the microstructure of complex media across length scales. S Torquato, Phys Rev E. 10454102Torquato S. Diffusion spreadability as a probe of the microstructure of complex media across length scales. Phys Rev E 2021; 104: 054102.
Diffusion and viscous flow in concentrated suspensions. S Prager, Physica. 29Prager S. Diffusion and viscous flow in concentrated suspensions. Physica 1963; 29: 129-139.
Dynamic measure of hyperuniformity and nonhyperuniformity in heterogeneous media via the diffusion spreadability. H Wang, S Torquato, Phys Rev Appl. 1734022Wang H and Torquato S. Dynamic measure of hyperunifor- mity and nonhyperuniformity in heterogeneous media via the diffusion spreadability. Phys Rev Appl 2022; 17: 034022.
Multifunctionality of particulate composites via cross-property maps. S Torquato, D Chen, Phys Rev Mater. 2995603Torquato S and Chen D. Multifunctionality of particulate composites via cross-property maps. Phys Rev Mater 2018; 2(9): 095603.
Machine learning framework for analysis of transport through complex networks in porous, granular media: A focus on permeability. J H Van Der Linden, Narsilio Ga, A Tordesillas, Phys Rev E. 9422904van der Linden JH, Narsilio GA and Tordesillas A. Machine learning framework for analysis of transport through complex networks in porous, granular media: A focus on permeability. Phys Rev E 2016; 94: 022904.
Quantifying the influence of microstructure on effective conductivity and permeability: virtual materials testing. M Neumann, O Stenzel, F Willot, Int J Solids Struct. 184Neumann M, Stenzel O, Willot F et al. Quantifying the influence of microstructure on effective conductivity and permeability: virtual materials testing. Int J Solids Struct 2020; 184: 211-220.
Predicting permeability via statistical learning on higher-order microstructural information. M Röding, Ma Z Torquato, S , Scientific Rep. 10Röding M, Ma Z and Torquato S. Predicting permeability via statistical learning on higher-order microstructural information. Scientific Rep 2020; 10: 1-17.
Morphology and physical properties of Fontainebleau sandstone via a tomographic analysis. D A Coker, S Torquato, J H Dunsmuir, J Geophys Res. 101Coker DA, Torquato S and Dunsmuir JH. Morphology and physical properties of Fontainebleau sandstone via a tomographic analysis. J Geophys Res 1996; 101: 17497- 17506.
Quantitative analysis of three-dimensional-resolved fiber architecture in heterogeneous skeletal muscle tissue using nmr and optical imaging methods. V J Napadow, Q Chen, V Mai, Biophys J. 80Napadow VJ, Chen Q, Mai V et al. Quantitative analysis of three-dimensional-resolved fiber architecture in heterogeneous skeletal muscle tissue using nmr and optical imaging methods. Biophys J 2001; 80: 2968-2975.
Pore-scale imaging and modelling. M J Blunt, B Bijeljic, H Dong, Adv Water Resources. 51Blunt MJ, Bijeljic B, Dong H et al. Pore-scale imaging and modelling. Adv Water Resources 2013; 51: 197-216.
Multi-resolution data fusion for super resolution imaging. E Reid, G T Buzzard, L F Drummy, IEEE Trans Comput Imaging. 8Reid E, Buzzard GT, Drummy LF et al. Multi-resolution data fusion for super resolution imaging. IEEE Trans Comput Imaging 2022; 8: 81-95.
Strichartz estimates for Schrödinger equations with variable coefficients and unbounded potentials

Haruya Mizutani

10 Mar 2012 (arXiv:1202.5201v2 [math.AP])
Abstract. The present paper is concerned with Schrödinger equations with variable coefficients and unbounded electromagnetic potentials, where the kinetic energy part is a long-range perturbation of the flat Laplacian and the electric (resp. magnetic) potential can grow subquadratically (resp. sublinearly) at spatial infinity. We prove sharp (local-in-time) Strichartz estimates, outside a large compact ball centered at the origin, for any admissible pair including the endpoint. Under the nontrapping condition on the Hamilton flow generated by the kinetic energy, global-in-space estimates are also studied. Finally, under the nontrapping condition, we prove Strichartz estimates with an arbitrarily small derivative loss without asymptotic flatness on the coefficients.

2010 Mathematics Subject Classification. Primary 35Q41, 35B45; Secondary 35S30, 81Q20.
Introduction
In this paper, we study sharp (local-in-time) Strichartz estimates for Schrödinger equations with variable coefficients and unbounded electromagnetic potentials. More precisely, we consider the following Schrödinger operator:
$$H = \frac{1}{2}\sum_{j,k=1}^{d}\bigl(-i\partial_j - A_j(x)\bigr)\,g^{jk}(x)\,\bigl(-i\partial_k - A_k(x)\bigr) + V(x), \qquad x \in \mathbb{R}^d,$$
where $d \ge 1$ is the spatial dimension. Throughout the paper we assume that $g^{jk}$, $V$ and $A_j$ are smooth and real-valued functions on $\mathbb{R}^d$ and that $(g^{jk}(x))_{j,k}$ is symmetric and positive definite:
$$\sum_{j,k=1}^{d} g^{jk}(x)\xi_j\xi_k \ge c|\xi|^2, \qquad x,\xi \in \mathbb{R}^d,$$
with some $c > 0$. Moreover, we suppose the following condition.

Assumption 1.1. There exists $\mu \ge 0$ such that for any $\alpha \in \mathbb{Z}_+^d$,
$$|\partial_x^\alpha\bigl(g^{jk}(x)-\delta_{jk}\bigr)| \le C_\alpha\langle x\rangle^{-\mu-|\alpha|}, \quad |\partial_x^\alpha A_j(x)| \le C_\alpha\langle x\rangle^{1-\mu-|\alpha|}, \quad |\partial_x^\alpha V(x)| \le C_\alpha\langle x\rangle^{2-\mu-|\alpha|}, \qquad x \in \mathbb{R}^d.$$
Then, it is well known that H admits a unique self-adjoint realization on L 2 (R d ), which we denote by the same symbol H. By the Stone theorem, H generates a unique unitary propagator e −itH on L 2 (R d ) such that the solution to the Schrödinger equation:
$$i\partial_t u(t) = Hu(t), \quad t \in \mathbb{R}; \qquad u|_{t=0} = \varphi \in L^2(\mathbb{R}^d),$$
is given by $u(t) = e^{-itH}\varphi$.
In order to explain the purpose of the paper more precisely, we recall some known results. Let us first recall well-known properties of the free propagator $e^{-itH_0}$, where $H_0 = -\Delta/2$. The distribution kernel of $e^{-itH_0}$ is given explicitly by $(2\pi i t)^{-d/2}e^{i|x-y|^2/(2t)}$, and $e^{-itH_0}\varphi$ thus satisfies the dispersive estimate
$$\|e^{-itH_0}\varphi\|_{L^\infty(\mathbb{R}^d)} \le C|t|^{-d/2}\|\varphi\|_{L^1(\mathbb{R}^d)}, \qquad t \neq 0.$$
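As a standard illustration (not taken from this paper), the Gaussian datum $\varphi(x) = e^{-|x|^2/2}$ shows that this decay rate is sharp: a direct computation gives
$$e^{-itH_0}\varphi(x) = (1+it)^{-d/2}\exp\Bigl(-\frac{|x|^2}{2(1+it)}\Bigr), \qquad \|e^{-itH_0}\varphi\|_{L^\infty(\mathbb{R}^d)} = (1+t^2)^{-d/4} \sim |t|^{-d/2} \quad (|t|\to\infty),$$
while $\|\varphi\|_{L^1(\mathbb{R}^d)} = (2\pi)^{d/2}$.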
Moreover, e −itH 0 enjoys the following (global-in-time) Strichartz estimates:
$$\|e^{-itH_0}\varphi\|_{L^p(\mathbb{R};L^q(\mathbb{R}^d))} \le C\|\varphi\|_{L^2(\mathbb{R}^d)},$$
where (p, q) satisfies the following admissible condition:
$$p \ge 2, \qquad \frac{2}{p} = d\Bigl(\frac{1}{2} - \frac{1}{q}\Bigr), \qquad (d,p,q) \neq (2,2,\infty). \tag{1.1}$$
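For orientation, we recall the standard scaling argument behind (1.1) (not part of the original exposition here): if $u$ solves the free equation then so does $u_\lambda(t,x) := u(\lambda^2 t, \lambda x)$ for every $\lambda > 0$, and
$$\|u_\lambda\|_{L^p(\mathbb{R};L^q(\mathbb{R}^d))} = \lambda^{-2/p - d/q}\|u\|_{L^p(\mathbb{R};L^q(\mathbb{R}^d))}, \qquad \|u_\lambda(0)\|_{L^2(\mathbb{R}^d)} = \lambda^{-d/2}\|u(0)\|_{L^2(\mathbb{R}^d)},$$
so a bound of the form $\|u\|_{L^pL^q} \le C\|u(0)\|_{L^2}$ can hold uniformly in the data only if $2/p + d/q = d/2$, which is exactly the scaling relation in (1.1).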
Strichartz estimates imply that, for any $\varphi \in L^2$, $e^{-itH_0}\varphi \in \bigcap_{q\in Q_d}L^q$ for a.e. $t \in \mathbb{R}$, where $Q_1 = [2,\infty]$, $Q_2 = [2,\infty)$ and $Q_d = [2, 2d/(d-2)]$ for $d \ge 3$. These estimates hence can be regarded as $L^p$-type smoothing properties of Schrödinger equations, and have been widely used in the study of nonlinear Schrödinger equations (see, e.g., [8]). Strichartz estimates for $e^{-itH_0}$ were first proved by Strichartz [32] for the particular pair $(p,q)$ with $p = q = 2(d+2)/d$, and have been generalized to $(p,q)$ satisfying (1.1) with $p \neq 2$ by [15]. The endpoint estimate $(p,q) = (2, 2d/(d-2))$ for $d \ge 3$ was obtained by [20]. For Schrödinger operators with electromagnetic potentials, i.e., $H = \frac{1}{2}(-i\partial_x - A)^2 + V$, (short-time) dispersive and (local-in-time) Strichartz estimates have been extended to potentials decaying at infinity [34] or growing at infinity [13,35]. In particular, it was shown by [13,35] that if $g^{jk} = \delta_{jk}$, $V$ and $A$ satisfy Assumption 1.1 with $\mu \ge 0$ and all derivatives of the magnetic field $B = dA$ are of short-range type, then $e^{-itH}\varphi$ satisfies the (short-time) dispersive estimate
$$\|e^{-itH}\varphi\|_{L^\infty(\mathbb{R}^d)} \le C|t|^{-d/2}\|\varphi\|_{L^1(\mathbb{R}^d)},$$
for sufficiently small $t \neq 0$. Local-in-time Strichartz estimates, which have the form $\|e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_T\|\varphi\|_{L^2(\mathbb{R}^d)}$, $T > 0$, are immediate consequences of this estimate and the $TT^*$-argument due to Ginibre-Velo [12] (see [20] for the endpoint estimate). For the case with singular electric potentials or with supercritical electromagnetic potentials, we refer to [34,36,38,9]. We mention that global-in-time dispersive and Strichartz estimates for scattering states have also been studied under suitable decay conditions on the potentials and assumptions at zero energy; see [19,37,30,12,10] and references therein. We also mention that there is no result on sharp global-in-time dispersive estimates for magnetic Schrödinger equations.
On the other hand, the influence of the geometry on the behavior of solutions to linear and nonlinear partial differential equations has been extensively studied. From this geometric viewpoint, sharp local-in-time Strichartz estimates for Schrödinger equations with variable coefficients (or, more generally, on manifolds) have recently been investigated by many authors under several conditions on the geometry; see, e.g., [31,6,26,16,4,3,7,24] and references therein. In [31], [26] and [4], the authors studied the case of the Euclidean space with nontrapping asymptotically flat metrics. The case of nontrapping asymptotically conic manifolds was studied by [16] and [24]. In [3] the author considered the case of nontrapping asymptotically hyperbolic manifolds. For the trapping case, it was shown in [6] that Strichartz estimates with a loss of derivative $1/p$ hold on any compact manifold without boundary. They also proved that the loss $1/p$ is optimal in the case of $S^d$. In [4], [3] and [24], the authors proved sharp Strichartz estimates, outside a large compact set, without the nontrapping condition. More recently, it was shown in [7] that sharp Strichartz estimates still hold in the case of hyperbolic trapped trajectories of sufficiently small fractal dimension. We mention that there are also several works on global-in-time Strichartz estimates in the case of long-range perturbations of the flat Laplacian on $\mathbb{R}^d$ ([5,33,23]).
While (local-in-time) Strichartz estimates are well-studied subjects in both of these two settings (at least under the nontrapping condition), the literature is more sparse for the mixed case. In this paper we give a unified approach to a combination of these two kinds of results. More precisely, under Assumption 1.1 with $\mu > 0$, we prove (1) sharp local-in-time Strichartz estimates, outside a large compact set centered at the origin, without the nontrapping condition;
(2) global-in-space estimates under the nontrapping condition. Under the nontrapping condition and Assumption 1.1 with $\mu \ge 0$, we also show local-in-time Strichartz estimates with an arbitrarily small derivative loss. We mention that all results include the endpoint estimates $(p,q) = (2, 2d/(d-2))$ for $d \ge 3$. This is a natural continuation of the author's previous work [25], which was concerned with the non-endpoint estimates for the case with at most linearly growing potentials.
$F(*)$ denotes the characteristic function of the region designated by $(*)$. We now state the main result.
Theorem 1.2 (Strichartz estimates near infinity). Suppose that H satisfies Assumption 1.1 with µ > 0. Then, for any T > 0, p ≥ 2, q < ∞ and 2/p = d(1/2 − 1/q) and for sufficiently large R > 0, we have
$$\|F(|x|>R)e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_T\|\varphi\|_{L^2(\mathbb{R}^d)}, \tag{1.2}$$
where C T > 0 may be taken uniformly with respect to R.
To state the result on global-in-space estimates, we recall the nontrapping condition. Let us denote by $k(x,\xi)$ the classical kinetic energy:
$$k(x,\xi) = \frac{1}{2}\sum_{j,k=1}^{d} g^{jk}(x)\xi_j\xi_k,$$
and by $(y_0(t,x,\xi), \eta_0(t,x,\xi))$ the Hamilton flow generated by $k(x,\xi)$:
$$\dot y_0(t) = \partial_\xi k(y_0(t), \eta_0(t)), \quad \dot\eta_0(t) = -\partial_x k(y_0(t), \eta_0(t)); \qquad (y_0(0), \eta_0(0)) = (x,\xi).$$
Note that the Hamiltonian vector field $H_k$ generated by $k$ is complete on $\mathbb{R}^{2d}$, since $(g^{jk})$ satisfies the uniform ellipticity condition. Hence, $(y_0(t,x,\xi), \eta_0(t,x,\xi))$ exists for all $t \in \mathbb{R}$.

Definition 1.3. We say that $k(x,\xi)$ satisfies the nontrapping condition if, for any $(x,\xi) \in \mathbb{R}^{2d}$ with $\xi \neq 0$,
$$|y_0(t,x,\xi)| \to +\infty \quad \text{as } t \to \pm\infty. \tag{1.3}$$
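As a simple illustration (not part of the statement above), in the flat case $g^{jk} = \delta_{jk}$ one has $k(x,\xi) = |\xi|^2/2$, the flow is free, $y_0(t,x,\xi) = x + t\xi$ and $\eta_0(t,x,\xi) = \xi$, so $|y_0(t,x,\xi)| \to +\infty$ as $t \to \pm\infty$ whenever $\xi \neq 0$ and (1.3) holds trivially. For genuinely variable asymptotically flat metrics as in Assumption 1.1, trapped trajectories may occur, which is why (1.3) is imposed as a separate hypothesis.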
The second result is the following.
Theorem 1.4 (Global-in-space Strichartz estimates). Suppose that H satisfies Assumption 1.1 with µ ≥ 0. Let T > 0, p ≥ 2, q < ∞ and 2/p = d(1/2 − 1/q). Then, for any r > 0, there exists C T,r > 0 such that
$$\|F(|x|<r)e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_{T,r}\,\|\langle H\rangle^{1/p}\varphi\|_{L^2(\mathbb{R}^d)}. \tag{1.4}$$
Moreover if we assume in addition that k(x, ξ) satisfies the nontrapping condition (1.3), then
$$\|F(|x|<r)e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_{T,r}\,\|\varphi\|_{L^2(\mathbb{R}^d)}. \tag{1.5}$$
In particular, combining with Theorem 1.2, we have (global-in-space) Strichartz estimates
$$\|e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_T\|\varphi\|_{L^2(\mathbb{R}^d)},$$
under the nontrapping condition (1.3), provided that µ > 0.
When µ ≥ 0 we have the following partial result.
Theorem 1.5 (Near sharp estimates without asymptotic flatness). Suppose that H satisfies Assumption 1.1 with µ ≥ 0 and k(x, ξ) satisfies the nontrapping condition (1.3). Let T > 0, p ≥ 2, q < ∞ and 2/p = d(1/2 − 1/q). Then, for any ε > 0, there exists C T,ε > 0 such that
$$\|e^{-itH}\varphi\|_{L^p([-T,T];L^q(\mathbb{R}^d))} \le C_{T,\varepsilon}\,\|\langle H\rangle^{\varepsilon}\varphi\|_{L^2(\mathbb{R}^d)}.$$
There are some remarks. (1) Corresponding results were obtained in [31,4] when $A \equiv 0$ and $V$ is of long-range type. Theorems 1.2 and 1.4 hence can be regarded as generalizations of their results to the case with growing electromagnetic potential perturbations.
(2) The only restriction for admissible pairs, in comparison to the flat case, is to exclude (p, q) = (4, ∞) for d = 1, which is due to the use of the Littlewood-Paley decomposition.
(3) The derivative loss $\langle H\rangle^{\varepsilon}$ in Theorem 1.5 is due to the use of the following local smoothing effect (due to Doi [11]):
$$\|\langle x\rangle^{-1/2-\varepsilon}\langle D\rangle^{1/2}e^{-itH}\varphi\|_{L^2([-T,T];L^2(\mathbb{R}^d))} \le C_{T,\varepsilon}\|\varphi\|_{L^2(\mathbb{R}^d)}.$$
It is well known that this estimate does not hold when $\varepsilon = 0$, even for $H = H_0$. We would expect that Theorem 1.2 still holds true for the case with critical electromagnetic potentials in the following sense:
$$\langle x\rangle^{-1}|\partial_x^\alpha A_j(x)| + \langle x\rangle^{-2}|\partial_x^\alpha V(x)| \le C_\alpha\langle x\rangle^{-|\alpha|}$$
(at least if $g^{jk}$ satisfies the bounds in Assumption 1.1 with $\mu > 0$). However, this is beyond our techniques (see also Remark 4.2).
The rest of the paper is devoted to the proofs of Theorems 1.2, 1.4 and 1.5. Throughout the paper we use the following notation: $\langle x\rangle$ stands for $(1+|x|^2)^{1/2}$. We write $L^q = L^q(\mathbb{R}^d)$ if there is no confusion. For Banach spaces $X$ and $Y$, we denote by $\|\cdot\|_{X\to Y}$ the operator norm from $X$ to $Y$. We write $\mathbb{Z}_+ = \mathbb{N}\cup\{0\}$ and denote the set of multi-indices by $\mathbb{Z}_+^d$. We denote by $K$ the kinetic energy part of $H$ and by $H_0$ the free Schrödinger operator:
$$K = -\frac{1}{2}\sum_{j,k}\partial_j\, g^{jk}(x)\,\partial_k, \qquad H_0 = -\frac{1}{2}\Delta = -\frac{1}{2}\sum_{j=1}^{d}\partial_j^2.$$
p(x, ξ) denotes the classical total energy (modulo lower order terms):
$$p(x,\xi) = \frac{1}{2}\sum_{j,k=1}^{d} g^{jk}(x)\bigl(\xi_j - A_j(x)\bigr)\bigl(\xi_k - A_k(x)\bigr) + V(x).$$
For h ∈ (0, 1] we consider H h := h 2 H as a semiclassical Schrödinger operator with h-dependent electromagnetic potentials h 2 V and hA j . We denote the corresponding total energy by p h (x, ξ):
$$p_h(x,\xi) = \frac{1}{2}\sum_{j,k=1}^{d} g^{jk}(x)\bigl(\xi_j - hA_j(x)\bigr)\bigl(\xi_k - hA_k(x)\bigr) + h^2V(x).$$
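For later orientation we record the elementary rescaling identity (a direct computation from the two displays above, stated here only as a remark):
$$p_h(x,\xi) = h^2\,p(x,\xi/h),$$
which is consistent with viewing $H^h = h^2H$ as a semiclassical operator with the $h$-dependent potentials $h^2V$ and $hA_j$.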
Before starting the details of the proofs, we here describe the main ideas. At first we remark that, since our Hamiltonian H is not bounded below in general, the Littlewood-Paley decomposition associated with H does not hold for any p > 2. To overcome this difficulty, we consider the following partition of unity on the phase space R 2d :
$$\psi_\varepsilon(x,\xi) + \chi_\varepsilon(x,\xi) = 1,$$
where $\psi_\varepsilon$ is supported in $\{(x,\xi);\ \langle x\rangle < \varepsilon|\xi|\}$ for some sufficiently small constant $\varepsilon > 0$. It is easy to see that the total energy $p(x,\xi)$ is elliptic on $\operatorname{supp}\psi_\varepsilon$ (indeed, there Assumption 1.1 gives $|A(x)| \lesssim \langle x\rangle \le \varepsilon|\xi|$ and $|V(x)| \lesssim \langle x\rangle^2 \le \varepsilon^2|\xi|^2$, so the kinetic part dominates once $\varepsilon$ is small):
$$C^{-1}|\xi|^2 \le p(x,\xi) \le C|\xi|^2, \qquad (x,\xi) \in \operatorname{supp}\psi_\varepsilon,$$
and we hence can prove a Littlewood-Paley type decomposition of the following form:
$$\|\operatorname{Op}(\psi_\varepsilon)u\|_{L^q} \le C_q\|u\|_{L^2} + C_q\Bigl(\sum_{h=2^{-j},\,j\ge0}\|\operatorname{Op}_h(a_h)f(h^2H)u\|_{L^q}^2\Bigr)^{1/2},$$
where $2 \le q < \infty$, $\{f(h^2\cdot);\ h = 2^{-j},\ j \ge 0\}$ is a 4-adic partition of unity on $[1,\infty)$, $a_h$ is an appropriate $h$-dependent symbol supported in $\{|x| < 1/h,\ |\xi| \in I\}$ for some open interval $I \Subset (0,\infty)$, and $\operatorname{Op}(\psi_\varepsilon)$ and $\operatorname{Op}_h(a_h)$ denote the corresponding pseudodifferential and semiclassical pseudodifferential operators, respectively.
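For concreteness, one admissible choice of the dyadic cutoff (a standard construction, not spelled out above) is to fix $f \in C_c^\infty((1/4,4))$, $f \ge 0$, with $\sum_{j\ge0} f(4^{-j}\lambda) = 1$ for all $\lambda \ge 1$; then, for $h = 2^{-j}$, the factor $f(h^2H)$ localizes the spectral parameter $\lambda$ of $H$ to the window $\{1/4 < h^2\lambda < 4\}$.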
Then, the idea of the proof of Theorem 1.2 is as follows. In view of the above Littlewood-Paley estimate, the proof is reduced to that of Strichartz estimates for $F(|x|>R)\operatorname{Op}_h(a_h)e^{-itH}$ and $\operatorname{Op}(\chi_\varepsilon)e^{-itH}$. In order to prove Strichartz estimates for $F(|x|>R)\operatorname{Op}_h(a_h)e^{-itH}$, we use semiclassical approximations of Isozaki-Kitada type. We note, however, that because of the unboundedness of the potentials with respect to $x$, it is difficult to construct such approximations directly. To overcome this difficulty, we introduce a modified Hamiltonian $\widetilde H$ due to [38] so that $\widetilde H = H$ for $|x| \le L/h$ and $\widetilde H = K$ for $|x| \ge 2L/h$, for some constant $L \ge 1$. Then, $\widetilde H^h = h^2\widetilde H$ can be regarded as a "long-range perturbation" of the semiclassical free Schrödinger operator $H_0^h = h^2H_0$. We also introduce the corresponding classical total energy $\widetilde p_h(x,\xi)$ so that
$$\widetilde p_h(x,\xi) = p_h(x,\xi) \ \text{ for } |x| \le L/h, \qquad \widetilde p_h(x,\xi) = k(x,\xi) \ \text{ for } |x| \ge 2L/h.$$
Let $a_h^\pm$ be supported in the outgoing and incoming regions $\{R < |x| < 1/h,\ |\xi| \in I,\ \pm\hat x\cdot\xi > 1/2\}$, respectively, so that $F(|x|>R)a_h = a_h^+ + a_h^-$, where $\hat x = x/|x|$. Rescaling $t \to th$, we first construct semiclassical approximations for $e^{-it\widetilde H^h/h}\operatorname{Op}_h(a_h^\pm)^*$ of the forms
$$e^{-it\widetilde H^h/h}\operatorname{Op}_h(a_h^\pm)^* = J_h(S_h^\pm, b_h^\pm)\,e^{-itH_0^h/h}\,J_h(S_h^\pm, c_h^\pm)^* + O(h^N), \qquad 0 \le \pm t \le 1/h,$$
respectively, where $S_h^\pm$ solve the Eikonal equation associated to $\widetilde p_h$, and $J_h(S_h^\pm, b_h^\pm)$ and $J_h(S_h^\pm, c_h^\pm)$ are the associated semiclassical Fourier integral operators. The method of the construction is similar to that of Robert [28]. On the other hand, we will see that if $L \ge 1$ is large enough, then the Hamilton flow generated by $\widetilde p_h$ with initial conditions in $\operatorname{supp} a_h^\pm$ cannot escape from $\{|x| \le L/h\}$ for $0 < \pm t \le 1/h$, respectively, i.e.,
$$\pi_x\exp(tH_{\widetilde p_h})(\operatorname{supp} a_h^\pm) \subset \{|x| \le L/h\}, \qquad 0 < \pm t \le 1/h.$$
Since $\widetilde p_h = p_h$ for $|x| \le L/h$, we have
$$\exp(tH_{\widetilde p_h})(\operatorname{supp} a_h^\pm) = \exp(tH_{p_h})(\operatorname{supp} a_h^\pm), \qquad 0 < \pm t \le 1/h.$$
We thus can expect (at least formally) that the corresponding two quantum evolutions are approximately equivalent modulo some smoothing operator. We will prove the following rigorous justification of this formal consideration:
$$\|(e^{-itH^h/h} - e^{-it\widetilde H^h/h})\operatorname{Op}_h(a_h^\pm)^*\|_{L^2\to L^2} \le C_Mh^M, \qquad 0 \le \pm t \le 1/h,\ M \ge 0,$$
where $H^h = h^2H$. By using such approximations for $e^{-itH^h/h}\operatorname{Op}_h(a_h^\pm)^*$, we prove local-in-time dispersive estimates for $\operatorname{Op}_h(a_h^\pm)e^{-itH}\operatorname{Op}_h(a_h^\pm)^*$:
$$\|\operatorname{Op}_h(a_h^\pm)e^{-itH}\operatorname{Op}_h(a_h^\pm)^*\|_{L^1\to L^\infty} \le C|t|^{-d/2}, \qquad 0 < h \ll 1,\ 0 < |t| < 1.$$
Strichartz estimates follow from these estimates and the abstract Theorem due to Keel-Tao [20]. Strichartz estimates for Op(χ ε )e −itH follow from the following short-time dispersive estimate:
$$\|\operatorname{Op}(\chi_\varepsilon)e^{-itH}\operatorname{Op}(\chi_\varepsilon)^*\|_{L^1\to L^\infty} \le C_\varepsilon|t|^{-d/2}, \qquad 0 < |t| < t_\varepsilon \ll 1.$$
To prove this, we construct an approximation for Op(χ ε )e −itH Op(χ ε ) * of the following form:
$$\operatorname{Op}(\chi_\varepsilon)e^{-itH}\operatorname{Op}(\chi_\varepsilon)^* = J(\Psi, a) + O_{H^{-\gamma}\to H^{\gamma}}(1), \qquad |t| < t_\varepsilon,$$
where the phase function $\Psi = \Psi(t,x,\xi)$ is a solution to a time-dependent Hamilton-Jacobi equation associated to $p(x,\xi)$ and $J(\Psi, a)$ is the corresponding Fourier integral operator. In the construction, the following fact plays an important role:
$$|\partial_x^\alpha\partial_\xi^\beta p(x,\xi)| \le C_{\alpha\beta}, \qquad (x,\xi) \in \operatorname{supp}\chi_\varepsilon,\ |\alpha+\beta| \ge 2.$$
We note that if $(g^{jk})_{j,k} - \operatorname{Id}_d \not\equiv 0$ depends on $x$, then these bounds do not hold without such a restriction on the initial condition. Using these bounds, we can follow a classical argument due to [21] and construct an approximation for $e^{-itH}\operatorname{Op}(\chi_\varepsilon)^*$ of the form $J(\Psi, b)$ modulo some smoothing term. Next, using an Egorov-type lemma, we will prove that $\operatorname{Op}(\chi_\varepsilon)(e^{-itH}\operatorname{Op}(\chi_\varepsilon)^* - J(\Psi, b))$ can still be considered as an "error" term. The proof of Theorem 1.4 is based on a standard idea of [31,6,4]. Strichartz estimates with loss follow from semiclassical Strichartz estimates up to time scales of order $h$, which can be verified by a standard argument. Moreover, under the nontrapping condition, we will prove that the missing $1/p$ derivative loss can be recovered by using local smoothing effects due to Doi [11].
The proof of Theorem 1.5 is based on a slight modification of that of Theorem 1.4. By virtue of the Strichartz estimates for Op(χ ε )e −itH and the Littlewood-Paley decomposition, it suffices to show
|| Op h (a h )e −itH ϕ|| L p ([−T,T ];L q ) ≤ h −ε ||ϕ|| L 2 , 0 < h ≪ 1.
To prove this estimate, we first prove semiclassical Strichartz estimates for Op h (a h )e −itH up to time scales of order h inf |x|. The proof is based on a refinement of the standard WKB approximation for the semiclassical propagator Op h (a h )e −itH h /h . Combining semiclassical Strichartz estimates with a partition of unity argument with respect to x, we will obtain the following Strichartz estimate with an inhomogeneous error term:
|| Op h (a h )e −itH ϕ|| L p ([−T,T ];L q ) ≤ C T ||ϕ|| L 2 + C|| x −1/2−ε h −1/2−ε Op h (a h )e −itH ϕ|| L 2 ([−T,T ];L 2 ) ,
for any ε > 0, which, combined with local smoothing effects, implies Theorem 1.5. The paper is organized as follows. We first record some known results on the semiclassical pseudodifferential calculus and prove the above Littlewood-Paley decomposition in Section 2. Using dispersive estimates, which will be studied in Sections 4 and 5, we shall prove Theorem 1.2 in Section 3. We construct approximations of Isozaki-Kitada type and prove dispersive estimates for Op h (a ± h )e −itH Op h (a ± h ) * in Section 4. Section 5 discusses the dispersive estimates for Op(χ ε )e −itH Op(χ ε ) * . The proofs of Theorem 1.4 and Theorem 1.5 are given in Section 6 and Section 7, respectively.
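For the reader's convenience, we record the form of the abstract Keel-Tao theorem [20] that will be invoked below; this is the standard statement and involves no assumption specific to H, and the notation U (t) is introduced only for this remark. If a family of operators U (t), defined for t in an interval J of length at most 1, satisfies
\[
\|U(t)\|_{L^2\to L^2}\le C,\qquad \|U(t)U(s)^*\|_{L^1\to L^\infty}\le C\,|t-s|^{-d/2}\quad (t\ne s),
\]
then
\[
\|U(t)\varphi\|_{L^p(J;L^q)}\le C'\,\|\varphi\|_{L^2}
\]
for every pair (p, q) with p ≥ 2, q < ∞ and 2/p = d(1/2 − 1/q). It will be applied with U (t) = Op h (a ± h )e −itH and with U (t) = Op(χ ε )e −itH .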
Semiclassical functional calculus
Throughout this section we assume Assumption 1.1 with µ ≥ 0, i.e.,
|∂ α x g jk (x)| + x −1 |∂ α x A j (x)| + x −2 |∂ α x V (x)| ≤ C αβ x −|α| . (2.1)
The goal of this section is to prove a Littlewood-Paley type decomposition under suitable restriction on the initial data. At first we record (without proof) some known results on the pseudodifferential calculus which will be used throughout the paper. We refer to [27,22] for the details of the proof.
Pseudodifferential calculus
For the metric g = dx 2 / x 2 + dξ 2 / ξ 2 and a weight function m(x, ξ) on the phase space R 2d , we use Hörmander's symbol class notation S(m, g), i.e., a ∈ S(m, g) if and only if a ∈ C ∞ (R 2d ) and
|∂ α x ∂ β ξ a(x, ξ)| ≤ C αβ m(x, ξ) x −|α| ξ −|β| , α, β ∈ Z d + .
To a symbol a ∈ C ∞ (R 2d ) and h ∈ (0, 1], we associate the semiclassical pseudodifferential operator (h-PDO for short) defined by Op h (a):
Op h (a)f (x) = 1 (2πh) d e i(x−y)·ξ/h a(x, ξ)f (y)dydξ, f ∈ S(R d ).
When h = 1 we write Op(a) = Op h (a) for simplicity. The Calderón-Vaillancourt theorem shows that for any symbol a ∈ C ∞ (R 2d ) satisfying |∂ α x ∂ β ξ a(x, ξ)| ≤ C αβ , Op h (a) is extended to a bounded operator on L 2 (R d ) uniformly with respect to h ∈ (0, 1]. Moreover, for any symbol a satisfying |∂ α x ∂ β ξ a(x, ξ)| ≤ C αβ ξ −γ , γ > d, Op h (a) is extended to a bounded operator from L q (R d ) to L r (R d ) with the following bounds:
|| Op h (a)|| L q →L r ≤ C qr h −d(1/q−1/r) , 1 ≤ q ≤ r ≤ ∞,(2.2)
where C qr > 0 is independent of h ∈ (0, 1]. These bounds follow from the Schur lemma and an interpolation (see, e.g., [4,Proposition 2.4]). For two symbols a ∈ S(m 1 , g) and b ∈ S(m 2 , g), the composition Op h (a) Op h (b) is also a h-PDO and written in the form Op h (c) = Op h (a) Op h (b) with a symbol c ∈ S(m 1 m 2 , g) given by c(x, ξ) = e ihDηDz a(x, η)(z, ξ)| z=x,η=ξ . Moreover, c(x, ξ) has the following expansion
c = N −1 |α|=0 h |α| i |α| α! ∂ α ξ a · ∂ α x b + h N r N with r N ∈ S( x −N ξ −N m 1 m 2 , g). (2.
3)
The symbol of the adjoint Op h (a) * is given by a * (x, ξ) = e ihDηDz a(z, η)| z=x,η=ξ ∈ S(m 1 , g) which has the expansion
a * = N −1 |α|=0 h |α| i |α| α! ∂ α ξ ∂ α x a + h N r * N with r * N ∈ S( x −N ξ −N m 1 , g). (2.4)
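As an elementary illustration of the composition formula (2.3) (a sanity check only, not used in what follows), take a(x, ξ) = ξ j and b(x, ξ) = f (x) with f bounded together with all of its derivatives. Then only the terms with |α| ≤ 1 survive, the remainder vanishes, and (2.3) reduces to the Leibniz rule:
\[
\mathrm{Op}_h(\xi_j)\,\mathrm{Op}_h(f)
= hD_{x_j}\circ f
= f\,hD_{x_j} + \frac{h}{i}\,(\partial_{x_j}f)
= \mathrm{Op}_h\Big(\xi_j f(x) + \frac{h}{i}\,\partial_{x_j}f(x)\Big).
\]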
Littlewood-Paley decomposition
As we mentioned in the outline of the paper, H is not bounded below in general and we hence cannot expect that the Littlewood-Paley decomposition associated with H, which is of the form
||u|| L q ≤ C q ||u|| L 2 + C q ∞ j=0 ||f (2 −2j H)u|| 2 L q 1/2 , holds unless q = 2.
The standard Littlewood-Paley decomposition associated with H 0 also does not work well in our case, since the commutator of H with the Littlewood-Paley projection f (2 −2j H 0 ) can grow at spatial infinity. To overcome this difficulty, let us introduce an additional localization as follows. Given a parameter ε > 0 and a cut-off function ϕ ∈ C ∞ 0 (R) such that ϕ ≡ 1 on [0, 1/2] and supp ϕ ⊂ [0, 1], we define ψ ε (x, ξ) by
ψ ε (x, ξ) = ϕ x ε|ξ| .
It is easy to see that {ψ ε } 0<ε≤1 is bounded in S(1, g) and supported in {(x, ξ) ∈ R 2d ; x < ε|ξ|}. Moreover, for sufficiently small ε > 0, the total energy p(x, ξ) is uniformly elliptic on the support of ψ ε and Op(ψ ε )H thus is essentially bounded below. In this subsection we prove a Littlewood-Paley type decomposition on the range of Op(ψ ε ). We begin with the following proposition which tells us that, for any f ∈ C ∞ 0 (R) and h ∈ (0, 1], Op(ψ ε )f (h 2 H) is approximated in terms of the h-PDO.
Proposition 2.1. There exists ε > 0 such that, for any f ∈ C ∞ 0 (R), we can construct bounded families {a h,j } h∈(0,1] ⊂ S( x −j ξ −j , g), j ≥ 0, such that (1) a h,0 is given explicitly by a h,0 (x, ξ) = ψ ε (x, ξ/h)f (p h (x, ξ)). Moreover,
supp a h,j ⊂ supp ψ ε (·, ·/h) ∩ supp f (p h ) ⊂ {(x, ξ) ∈ R 2d ; x < 1/h, |ξ| ∈ I},
for some open interval I ⋐ (0, ∞). In particular, we have
|| Op h (a h,j )|| L q ′ →L q ≤ C jqq ′ h −d(1/q ′ −1/q) , 1 ≤ q ′ ≤ q ≤ ∞, uniformly in h ∈ (0, 1]. (2) For any integer N > 2d, we set a h = N −1 j=0 h j a h,j . Then, || Op(ψ ε )f (h 2 H) − Op h (a h )|| L 2 →L q ≤ C qN h N/2 , 2 ≤ q ≤ ∞, uniformly in h ∈ (0, 1].
The following is an immediate consequence of this proposition.
Corollary 2.2. For any 2 ≤ q ≤ ∞ and h ∈ (0, 1], Op(ψ ε )f (h 2 H) is bounded from L 2 (R d ) to L q (R d ) and satisfies || Op(ψ ε )f (h 2 H)|| L 2 →L q ≤ C q h −d(1/2−1/q) ,
where C q > 0 is independent of h ∈ (0, 1]. Remark 2.3. If V, A ≡ 0, then Proposition 2.1 and Corollary 2.2 hold without the additional term Op(ψ ε ). We refer to [6] (for the case on compact manifolds without boundary) and to [4] (for the case with metric perturbations on R d ). For more general cases with Laplace-Beltrami operators non-compact manifolds with ends, we refer to [2,1].
Proof of Proposition 2.1. We begin with the well-known Helffer-Sjöstrand formula [17]:
f (h 2 H) = − 1 2πi C ∂ f ∂z (z)(h 2 H − z) −1 dz ∧ dz,
where f (z) is an almost analytic extension of f (λ). Since f ∈ C ∞ 0 (R), f (z) is also compactly supported and satisfies ∂z f (z) = O(| Im z| M ) for any M > 0. We shall construct a semiclassical approximation of Op(ψ ε )(h 2 H − z) −1 for z ∈ C \ [0, ∞). Although the method is based on the standard semiclassical parametrix construction (see, e.g., [27,6]), we give the details of the proof since we consider the composition of the PDO, Op(ψ ε ), which is not in the semiclassical regime, with the semiclassical resolvent (h 2 H − z) −1 .
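One concrete choice of the almost analytic extension above (any choice with the stated properties works equally well) is the standard one: fixing an auxiliary cut-off χ 0 ∈ C ∞ 0 (R) with χ 0 ≡ 1 near 0, and an integer M , one may take
\[
\widetilde f(x+iy)\;=\;\chi_0(y)\sum_{k=0}^{M} f^{(k)}(x)\,\frac{(iy)^k}{k!},
\]
which is compactly supported, coincides with f on the real axis and satisfies \(\bar\partial\widetilde f(z)=O(|\operatorname{Im}z|^{M})\); these are the only properties used in the Helffer-Sjöstrand formula.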
p(x, ξ) and p 1 (x, ξ) denote the principal symbol and the subsymbol of H, respectively, i.e.,
H = p(x, D) + p 1 (x, D). (2.1) and the support property of ψ ε imply |∂ α x ∂ β ξ p(x, ξ)| ≤ C αβ x −|α| ξ 2−|β| , |∂ α x ∂ β ξ p 1 (x, ξ)| ≤ C αβ x −1−|α| ξ 1−|β| , (x, ξ) ∈ supp ψ ε . (2.5) Moreover, we obtain j,k (|g jk (x)ξ j A k (x)| + |g jk (x)A j (x)A k (x)|) + |V (x)| ≤ Cε|ξ| 2 , (x, ξ) ∈ supp ψ ε ,
where C > 0 is independent of x, ξ and ε. This estimate and the uniform ellipticity of k imply that p(x, ξ) is also uniformly elliptic on supp ψ ε :
C −1 1 |ξ| 2 ≤ p(x, ξ) ≤ C 1 |ξ| 2 , (x, ξ) ∈ supp ψ ε ,
provided that ε > 0 is small enough. Then, for any integer N ≥ 0, we can find symbols q h,j (z, x, ξ), j = 0, 1, ..., N − 1, and r h,N (z, x, ξ), depending holomorphically on z ∈ C \ R, such that
Op(ψ ε ) = N −1 j=0 Op(q h,j (z))(h 2 H − z) + Op(r h,N (z)).
More precisely, q h,0 is given explicitly by
q h,0 (z, x, ξ) = ψ ε (x, ξ) h 2 p(x, ξ) − z .
Using (2.5) and the fact that ψ ε ∈ S(1, g), we obtain
|∂ α x ∂ β ξ q h,0 (z, x, ξ)| ≤ C αβ 0≤l≤|β|+|α| x −|α| ξ 2l−|β| h 2l |h 2 p(x, ξ) − z| l+1 .
On the other hand, assuming |z| ≤ 1 without loss of generality and using the uniform ellipticity of p(x, ξ) (on supp ψ ε ), we learn that for (x, ξ) ∈ supp ψ ε ,
1 |h 2 p(x, ξ) − z| l ≤ | Im z| −l if h|ξ| ≤ 2C 1 , |hξ| −2l if h|ξ| ≥ 2C 1 . (2.6)
These two estimates imply
|∂ α x ∂ β ξ q h,0 (z, x, ξ)| ≤ C αβ x −|α| ξ −|β| | Im z| −1−|α+β| if h|ξ| ≤ 2C 1 , x −|α| h |β| if h|ξ| ≥ 2C 1 . (2.7)
We next consider q h,1 which is defined by
q h,1 (z, x, ξ) = h 2 i∂ ξ q h,0 (z, x, ξ) · ∂ x p(x, ξ) − h 2 q h,0 (z, x, ξ) · p 1 (x, ξ) h 2 p(x, ξ) − z .
A similar calculation as that for q h,0 yields
|∂ α x ∂ β ξ q h,1 (z, x, ξ) ≤ C αβ x −1−|α| ξ −|β| h| Im z| −2−|α+β| if h|ξ| ≤ 2C 1 , x −1−|α| ξ −1 h |β| if h|ξ| ≥ 2C 1 .
For j ≥ 2, q h,j are defined inductively by
q h,j (h 2 p − z) = − |α|+k=j, |α|≥1 i −|α| h 2 α! ∂ α ξ q h,k · ∂ α x p − |α|+k=j−1, i −|α| h 2 α! ∂ α ξ q h,k · ∂ α x p 1 .
Iterating the above procedure, we have
|∂ α x ∂ β ξ q h,j (z, x, ξ)| ≤ C αβ x −j−|α| ξ −|β| h j | Im z| −n(j)−|α+β| if h|ξ| ≤ 2C 1 , x −j−|α| ξ −j h |β| if h|ξ| ≥ 2C 1 , (2.8)
for some integer n(j) > 0. Moreover, q h,j are of the forms
q h,j = N j k=0 q h,jk (x, ξ) (h 2 (p, x, ξ) − z) k+1 , where N j ≤ 2j − 1 and q h,jk (x, ξ) satisfy supp q h,jk ⊂ supp ψ ε and |∂ α x ∂ β ξ q h,jk (x, ξ)| ≤ C αβjk x −|α| ξ 2k h 2k max( ξ −|β| h j , ξ −j h |β| ),(2.9)
uniformly with respect to h ∈ (0, 1]. By virtue of (2.8), for any 0 ≤ γ ≤ N and for some integer
n(N ) > 0, the remainder r h,N (z) satisfies |∂ α x ∂ β ξ r h,N (z, x, ξ)| ≤ C αβγ x −N −|α| ξ −γ−|β| h N −γ | Im z| −n(N )−|α+β| if h|ξ| ≤ 2C 1 , x −N −|α| ξ −N +γ h γ+|β| if h|ξ| ≥ 2C 1 . (2.10)
By the Helffer-Sjöstrand formula, Op(ψ ε )f (h 2 H) can be brought to the form
Op(ψ ε )f (h 2 H) = N −1 j=0 Op( a h,j ) + R(h, N ), where a h,0 (x, ξ) = ψ ε (x, ξ)f (h 2 p(x, ξ)), a h,j (x, ξ) = N j k=1 (−1) k k! q h,jk (x, ξ) d l f dλ l (h 2 p(x, ξ)), 1 ≤ j ≤ N − 1, R(h, N ) = − 1 2πi C ∂ f ∂z (z) Op(r h,N (z))(h 2 H − z) −1 dz ∧ dz. By definition, a h,j are supported in {(x, ξ); |x| < ε|ξ|, C −1 0 /h ≤ |ξ| ≤ C 0 /h} with some C 0 > 0. Moreover, it follows from (2.9) that a h,j ∈ S( x −j ξ −j , g). We now define a h,j (x, ξ) = h −j a h,j (x, ξ/h). Taking ε > 0 smaller if necessary, we see that supp a h,j ⊂ ψ ε (·, ·/h)f (p h ) ⊂ {(x, ξ); |x| < 1/h, C −1 0 ≤ |ξ| ≤ C 0 }, and that |∂ α x ∂ β ξ a h,j | ≤ C αβ x −j−|α| h −j−|β| ξ/h −j−|β| ≤ C αβ x −|α| (h + |ξ|) −j−|β| ≤ C αβ (2C 0 ) j+|β| x −|α| ξ −|β| , uniformly in h ∈ (0, 1], since |ξ| ≥ C −1 0 on supp a h,j . In particular, {a h,j } h∈(0,1] are bounded in S( x −j ξ −j , g). By virtue of (2.2), we obtain || Op h (a h,j )|| L q ′ →L q ≤ C jqq ′ h −d(1/q ′ −1/q) , 1 ≤ q ′ ≤ q ≤ ∞,
uniformly in h ∈ (0, 1]. Finally, we shall check the estimate on the remainder. Choosing N > 2d + 1 and γ = N/2 and using (2.10), we have
|| Op(r h,N (z))|| L 2 →L q ≤ C q h N/2 | Im z| −n(N ) , 2 ≤ q ≤ ∞. Using the bound ||(h 2 H − z) −1 || L 2 →L 2 ≤ | Im z| −1 , we conclude that ||R(h, N )|| L 2 →L q ≤ C N q h N/2 C ∂ f ∂z (z) 1 | Im z| n(N )+1 dz ∧ dz ≤ C N q h N/2 , which completes the proof.
Consider a 4-adic partition of unity:
f 0 (λ) + h f (h 2 λ) = 1, λ ∈ R, where f 0 , f ∈ C ∞ 0 (R) with supp f 0 ⊂ [−1, 1], supp f ⊂ [1/4, 4]
and h means that, in the sum, h takes all negative powers of 2 as values, i.e., h = h=2 −j ,j≥0 . Let F ∈ C ∞ 0 (R) be such that supp F ⊂ [1/8, 8] and F ≡ 1 on supp f . The spectral decomposition theorem implies
1 = f 0 (H) + h f (h 2 H) = f 0 (H) + h F (h 2 H)f (h 2 H).
Let a h ∈ S(1, g) be as in Proposition 2.1 with f = F . Using Proposition 2.1, we obtain Littlewood-Paley type estimates on the range of Op(ψ ε ).
Proposition 2.4. For any 2 ≤ q < ∞, || Op(ψ ε )u|| L q (R d ) ≤ C q ||u|| L 2 (R d ) + C q h || Op h (a h )f (h 2 H)u|| L q (R d ) 1/2 .
Proof. The proof is the same as that of [6, Corollary 2.3], and we omit the details.
Corollary 2.5. Let ε > 0 and ψ ε be as above and
χ ε = 1 − ψ ε . Let ρ ∈ C ∞ (R d ) be such that |∂ α x ρ(x)| ≤ C α x −|α| , α ∈ Z d + .
Then, for any T > 0 and any (p, q) satisfying p ≥ 2, q < ∞ and 2/p = d(
1/2 − 1/q), there exists C T > 0 such that ||ρe −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ C T ||ϕ|| L 2 (R d ) + C|| Op(χ ε )e −itH ϕ|| L p ([−T,T ];L q (R d )) + C h || Op h (a h )e −itH f (h 2 H)ϕ|| 2 L p ([−T,T ];L q (R d )) 1/2 ,
where a h is given by Proposition 2.1 with ψ ε replaced by ρψ ε . In particular,
a h (x, ξ) is supported in supp ρ(x)ψ(x, ξ/h)F (p h (x, ξ)).
Proof of Theorem 1.2
In this section we prove Theorem 1.2 under Assumption 1.1 with µ > 0. We first state two key estimates which we will prove in later sections. For R > 0, an open interval I ⋐ (0, ∞) and σ ∈ (−1, 1), we define the outgoing and incoming regions Γ ± (R, I, σ) by
Γ ± (R, I, σ) := (x, ξ) ∈ R 2d ; |x| > R, |ξ| ∈ I, ± x · ξ |x||ξ| > −σ ,
respectively. We then have the following (local-in-time) dispersive estimates:
Proposition 3.1.
Suppose that H satisfies Assumption 1.1 with µ > 0. Let I ⋐ (0, ∞) and σ ∈ (−1, 1). Then, for sufficiently large R ≥ 1, small h 0 > 0 and any symbols a ± h ∈ S(1, g)
supported in Γ ± (R, I, σ) ∩ {x; |x| < 1/h}, we have || Op h (a ± h )e −itH Op h (a ± h ) * || L 1 →L ∞ ≤ C|t| −d/2 , 0 < |t| ≤ 1, uniformly with respect to h ∈ (0, h 0 ].
We prove this proposition in Section 4. In the region {|x| ≳ |ξ|}, we have the following (short-time) dispersive estimates:
Proposition 3.2.
Suppose that H satisfies Assumption 1.1 with µ ≥ 0. Let us fix arbitrarily ε > 0. Then, there exists t ε > 0 such that, for any symbol
χ ε ∈ S(1, g) supported in {(x, ξ); x ≥ ε|ξ|}, we have || Op(χ ε )e −itH Op(χ ε ) * || L 1 →L ∞ ≤ C ε |t| −d/2 , 0 < |t| ≤ t ε .
We prove this proposition in Section 5.
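As a point of comparison (not used in the proofs), recall that in the flat, potential-free case g jk = δ jk , A = V = 0, the dispersive estimate holds globally in time and without any microlocal cut-off, since the free propagator has the explicit kernel
\[
e^{it\Delta/2}f(x)=(2\pi i t)^{-d/2}\int_{\mathbb{R}^d}e^{\,i|x-y|^2/(2t)}f(y)\,dy,
\qquad\text{so that}\qquad
\|e^{it\Delta/2}\|_{L^1\to L^\infty}\le(2\pi|t|)^{-d/2}.
\]
Propositions 3.1 and 3.2 recover this |t| −d/2 decay for the variable coefficient operator H, locally in time and after suitable microlocal localizations.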
Proof of Theorem 1.2. We first note that, for any T > T 0 > 0,
||e −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ CT T −1 0 ||e −itH ϕ|| L p ([−T 0 ,T 0 ];L q (R d )) , since e −itH is unitary on L 2 (R d ). Taking ρ ∈ C ∞ (R d ) so that 0 ≤ ρ(x) ≤ 1, ρ(x) = 1 for |x| ≥ 1 and ρ(x) = 0 for |x| ≤ 1/2, we set ρ R (x) = ρ(x/R). In order to prove Theorem 1.2, it suffices to show ||ρ R e −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ C||ϕ|| L 2 (R d ) ,
for sufficiently large R ≥ 1 and small T > 0. Let a h be as in Proposition 2.1. Replacing ψ ε with ρ R ψ ε and taking ε > 0 smaller if necessary, we may assume without loss of generality
that supp a h ⊂ {(x, ξ); R < |x| < 1/h, |ξ| ∈ I} for some open interval I ⋐ (0, ∞). Choosing θ ± ∈ C ∞ ([−1, 1]) so that θ + + θ − = 1, θ + = 1 on [1/2, 1] and θ + = 0 on [−1, −1/2], we set a ± h (x, ξ) = a h (x, ξ)θ ± (x·ξ), where x̂ = x/|x|. It is clear that {a ± h } h∈(0,1] is bounded in S(1, g) and supp a ± h ⊂ Γ ± (R, I, 1/2) ∩ {x; |x| < 1/h}, and that a h = a + h + a − h . We now apply Proposition 3.1 to a ± h and obtain the local-in-time dispersive estimate for Op h (a ± h )e −itH Op h (a ± h ) * (uniformly in h ∈ (0, h 0 ]), which, combined with the L 2 -boundedness of Op(a ±
h )e −itH and the abstract Theorem due to Keel-Tao [20], implies Strichartz estimates for Op(a h )e −itH :
|| Op h (a h )e −itH ϕ|| L p ([−1,1];L q (R d )) ≤ ± || Op h (a ± h )e −itH ϕ|| L p ([−1,1];L q (R d )) ≤ C||ϕ|| L 2 (R d ) , uniformly with respect to h ∈ (0, h 0 ]. Since Op h (a h ) is bounded from L 2 (R d ) to L q (R d ) with the bound of order O(h −d(1/2−1/q) ), for h 0 < h ≤ 1 we have h 0 <h≤1 || Op h (a h )e −itH f (h 2 H)ϕ|| 2 L p ([−1,1];L q (R d )) ≤ C(h 0 )||ϕ|| 2 L 2 (R d ) .
with some C(h 0 ) > 0. Using these two bounds, we obtain
h || Op h (a h )e −itH f (h 2 H)ϕ|| 2 L p ([−1,1];L q (R d )) ≤ C 0<h<h 0 ||f (h 2 H)ϕ|| 2 L 2 (R d )) + C(h 0 )||ϕ|| 2 L 2 (R d ) ≤ C||ϕ|| 2 L 2 (R d ) .
On the other hand, Strichartz estimates for Op(χ ε )e −itH are an immediate consequence of Proposition 3.2. By virtue of Corollary 2.5, we complete the proof.
Semiclassical approximations for outgoing propagators
Throughout this section we assume Assumption 1.1 with µ > 0. We here study the behavior of e −itH Op h (a ± h ) * , where a ± h ∈ S(1, g) are supported in Γ ± (R, I, σ) ∩ {|x| < 1/h}, respectively. The main goal of this section is to prove Proposition 3.1. For simplicity, we consider the outgoing propagator e −itH Op h (a + h ) * for 0 ≤ t ≤ 1 only, and the proof for the incoming case is analogous.
In order to prove dispersive estimates, we construct a semiclassical approximation for the outgoing propagator e −itH Op h (a + h ) * by using the method of Isozaki-Kitada. Namely, rescaling t → th and setting H h = h 2 H, H h 0 = −h 2 ∆/2, we consider an approximation for the semiclassical propagator e −itH h /h Op h (a + h ) * of the following form
e −itH h /h Op h (a + h ) * = J h (S + h , b + h )e −itH h 0 /h J h (S + h , c + h ) * + O(h N ), 0 ≤ t ≤ h −1 ,
where S + h solves suitable Eikonal equation in the outgoing region and J(S + h , w) is the corresponding semiclassical Fourier integral operator (h-FIO for short):
J h (S + h , w)f (x) = (2πh) −d e i(S + h (x,ξ)−y·ξ)/h w(x, ξ)f (y)dydξ.
Such approximations (uniformly in time) have been studied by [29] for Schrödinger operators with long-range potentials, and by [27,28,4] for the case of long-range metric perturbations.
We also refer to the original paper by Isozaki-Kitada [18] in which the existence and asymptotic completeness of modified wave operators (with time-independent modifiers) were established for the case of Schrödinger operators with long-range potentials. We note that, in these cases, we do not need the additional restriction of the initial data in {|x| < 1/h}. The recent paper [25] concerns such approximations (locally in time) for the case of long-range metric perturbations, combined with potentials growing subquadratically at infinity, under the additional restriction in {|x| < 1/h}. Although the construction is similar to that in the previous papers, we give the details of the proof for the reader's convenience.
As we mentioned in the outline of the paper, we first construct an approximation for the modified propagator e −it H h /h , where H h is defined as follows. Taking arbitrarily a cut-off function ψ ∈ C ∞ 0 (R d ) such that 0 ≤ ψ ≤ 1, ψ ≡ 1 for |x| ≤ 1/2 and ψ ≡ 0 for |x| ≥ 1, we define truncated electric and magnetic potentials,
V h and A h = (A h,j ) j by V h (x) := ψ(hx/L)V (x), A h,j (x) = ψ(hx/L)A j (x), respectively. It is easy to see that V h ≡ V, A h,j ≡ A j on {|x| ≤ L/(2h)}, supp A h,j , supp V h ⊂ {|x| ≤ L/h},
and that, for any α ∈ Z d + there exists C L,α > 0, independent of x, h, such that
h 2 |∂ α x V h (x)| + h|∂ α x A h (x)| ≤ C α,L x −µ−|α| . (4.1) Let us define H h by H h = 1 2 d j,k=1 (−ih∂ j − hA h,j (x))g jk (x)(−ih∂ k − hA h,k (x)) + h 2 V h (x).
We consider H h as a "semiclassical" Schrödinger operator with h-dependent electromagnetic potentials h 2 V h and hA h . By virtue of the estimates on g jk , A h and V h , H h can be regarded as a long-range perturbation of the semiclassical free Schrödinger operator H h 0 = −h 2 ∆/2. Such a modification has been used to prove Strichartz estimates and local smoothing effects for Schrödinger equations with superquadratic potentials (see Yajima-Zhang [38, Section 4]). Let us denote by p h the corresponding energy:
p h (x, ξ) = 1 2 d j,k=1 g jk (x)(ξ j − hA h,j (x))(ξ k − hA h,k (x)) + h 2 V h (x).
The following proposition, which was proved by [28], provides the existence of the phase function of h-FIO's.
Proposition 4.1. There exist h 0 ∈ (0, 1] and R 0 ≥ 1 such that one can construct a family of real-valued functions {S + h ; 0 < h ≤ h 0 , R ≥ R 0 } ⊂ C ∞ (R 2d ; R) satisfying the Eikonal equation associated to p h : p h (x, ∂ x S + h (x, ξ)) = |ξ| 2 /2, (x, ξ) ∈ Γ + (R, I, σ), (4.2) such that |S + h (x, ξ) − x · ξ| ≤ C x 1−µ , x, ξ ∈ R d . (4.3)
Moreover, for any |α + β| ≥ 1,
|∂ α x ∂ β ξ (S + h (x, ξ) − x · ξ)| ≤ C αβ min{R 1−µ−|α| , x 1−µ−|α| }, x, ξ ∈ R d . (4.4)
Here C, C αβ > 0 are independent of x, ξ, R and h.
Proof. We only give the sketch of the proof and refer to [28, Section 4] for more details. Let us fix
I ⋐ I 0 ⋐ (0, ∞) and σ < σ 0 < 1. Let (x h (t, x, ξ), ξ h (t, x, ξ)) = exp tH p h (x, ξ) be the Hamilton flow generated by p h , i.e., the solution to dx h dt = ∂ p h ∂ξ , dξ h dt = − ∂ p h ∂x ; (x h , ξ h )| t=0 = (x, ξ). (4.5)
Using (4.1), we have the following a priori bounds:
|x h (t, x, ξ)| ≥ C(|x| + |t|), |∂ α x ∂ β ξ (x h (t, x, ξ) − x − tξ)| ≤ C αβ x −µ−|α| |t|, |∂ α x ∂ β ξ (ξ h (t, x, ξ) − ξ)| ≤ C αβ x −µ−|α| ,
uniformly in t ≥ 0, (x, ξ) ∈ Γ + (R/2, I 0 , σ 0 ) and h ∈ (0, h 0 ], provided that R ≥ 1 is large enough and h 0 > 0 is small enough. Using these bounds, we see that, for any fixed t ≥ 0, the map (x, ξ) → (x, ξ h (t, x, ξ)) is a diffeomorphism from Γ + (R/2, I 0 , σ 0 ) onto its range and has the inverse map (x, ξ) → (x, η h (t, x, ξ)) which is well-defined on [0, ∞) × Γ + (R, I, σ). We note that η h satisfies the same estimates as that for ξ h . It is easy to see that, for any t ≥ s ≥ 0, the flow
s → (x h (s, x, η h (t, x, ξ)), ξ h (s, x, η h (t, x, ξ)))
is a solution to (4.5) with the conditions
x h (0, x, η h (t, x, ξ)) = x, ξ h (t, x, η h (t,
x, ξ)) = ξ. By the same argument as that in [14, Lemma 2.4], we have, for any 0 < µ ′ < µ,
∂ x [x h (s, x, η h (t, x, ξ))] = O(1), ∂ x [ξ h (s, x, η h (t, x, ξ))] = O( x −1−µ ′ ),
uniformly in t ≥ s ≥ 0 and (x, ξ) ∈ Γ + (R, I, σ). Then, by the standard Hamilton-Jacobi theory, we can find the corresponding generator Ψ + h (t, x, ξ), that is a solution to the Hamilton-Jacobi equation: Rξ, ξ). Then, using a priori bounds for x h (t), ξ h (t), η h (t) and x h (t, x, η h (t)), we have ∂ t F h ∈ L 1 ([0, ∞) t ) and hence can define S + h (x, ξ) on Γ + (R, I, σ) by
∂ t Ψ + h (t, x, ξ) = p h (x, ∂ x Ψ + h (t, x, ξ)); Ψ + h (0, x, ξ) = x · ξ, satisfying (∂ x Ψ + h (t, x, ξ), ∂ ξ Ψ + h (t, x, ξ)) = (η h (t, x, ξ), x h (t, x, η h (t, x, ξ))). We set F h (t, x, ξ) = Ψ + h (t, x, ξ) − Ψ + h (t,S + h (x, ξ) = x · ξ + ∞ 0 ∂ t F h (t, x, ξ)dt.
Since ∂ x S + h = lim t→+∞ ∂ x Ψ + h (t) and ∂ ξ Ψ + h (t) → +∞ as t → +∞, by using the energy conservation
p h (x, ∂ x Ψ + h (t, x, ξ)) = p h (∂ ξ Ψ + h (t,
x, ξ), ξ), we can check that S + h satisfies the Eikonal equation (4.2). Moreover, using a priori bounds on x h , ξ h and η h , we obtain (4.3) and (4.4). Finally, we extend S + h to the whole space R 2d so that S + h = x · ξ outside Γ + (R/2, I 0 , σ 0 ) and complete the proof.
Remark 4.2. We remark that, in the proof of Proposition 4.1, we did not use the support properties of A h and V h . Suppose that A and V satisfy Assumption 1.1 with µ ≥ 0, i.e.,
x −1 |∂ α x A(x)| + x −2 |∂ α x V (x)| ≤ C αβ x −|α| .
If g jk is of long-range type, then we can still construct the solution S + h to (4.2), by using the support properties of A h and V h , provided that L > 0, which is independent of h, is small enough. However, in this case, S + h − x · ξ behaves like x 1−µ h −1 as h → 0, and we cannot obtain the uniform L 2 -boundedness of the corresponding h-FIO. This is one of the reasons why we exclude the critical case µ = 0.
To the phase S + h as in Proposition 4.1 and an amplitude a ∈ S(1, g), we associate the h-FIO defined by
J h (S + h , a)f (x) = (2πh) −d e i(S + h (x,ξ)−y·ξ)/h a(x, ξ)f (y)dydξ.
Using (4.4), for sufficiently large R > 0, we have
|∂ ξ ⊗ ∂ x S + h (x, ξ) − Id | ≤ C R −µ < 1/2, |∂ α x ∂ β ξ S + h (x, ξ)| ≤ C αβ for |α + β| ≥ 2,
uniformly in h ∈ (0, h 0 ]. Therefore, the standard L 2 -boundedness of FIO implies that J h (S + h , a) is uniformly bounded on L 2 (R d ) with respect to h ∈ (0, h 0 ].
We now construct the outgoing approximation for e −it H h /h .
Theorem 4.3. (1) For any integer N ≥ 0, sufficiently large R ≥ 1 and sufficiently small h 0 > 0, one can construct b + h = N −1 j=0 h j b + h,j with b + h,j ∈ S(1, g), supp b + h,j ⊂ Γ + (R 1/3 , I 1 , σ 1 ),
such that, for any a + ∈ S(1, g) with supp a + ⊂ Γ + (R, I, σ), we can find
c + h = N −1 j=0 h j c + h,j with c + h,j ∈ S(1, g), supp c + h,j ⊂ Γ + (R 1/2 , I 0 , σ 0 ), such that, for all 0 ≤ t ≤ h −1 , e −it H h /h Op h (a + ) * can be brought to the form e −it H h /h Op h (a + ) * = J h (S + h , b + h )e −itH h 0 /h J h (S + h , c + h ) * + Q + IK (t, h, N ), where J h (S + h , w), w = b + h , c + h ,
are h-FIO's associated to the phase S + h defined in Proposition 4.1 with R, I and σ replaced by R 1/4 , I 2 , σ 2 , respectively. Moreover, for any integer s ≥ 0 with 2s ≤ N − 1, the remainder Q + IK (t, h, N ) satisfies
|| D s Q + IK (t, h, N ) D s || L 2 →L 2 ≤ C N s h N −2s−1 , (4.6) uniformly with respect to h ∈ (0, h 0 ] and 0 ≤ t ≤ h −1 . (2) Let K S + h (t, x, y) be the distribution kernel of J h (S + h , b + h )e −itH h 0 /h J h (S + h , c + h ) * . Then, K S + h
satisfies dispersive estimates:
|K S + h (t, x, y)| ≤ C|th| −d/2 ,(4.7)
uniformly with respect to h ∈ (0, h 0 ], x, y ∈ R d and 0 ≤ t ≤ h −1 .
Proof. We only give the details of the construction of the amplitudes and the proof of (4.6). Dispersive estimates can be verified by the same argument as that in [4, Lemma 4.4]. By virtue of (2.4), there exist a + ∈ S(1, g) and q 0 (N ) ∈ S( x −N ξ −N , g) such that supp a + ⊂ supp a + and Op h (a + )
* = Op h ( a + ) + h N Op h (q 0 (N )). Setting J a = J h (S + h , b + h ) and J b = J h (S + h , c + h )
, we have the Duhamel formula:
e −it H h /h J a J * b = J a e −itH h 0 /h J * b − i h t 0 e −i(t−τ ) H h /h H h J a − J a H h 0 e −isH h 0 /h J * b dτ.
We shall construct the amplitudes b + h , c + h so that b + h satisfies transport equations associated to S + h , and that J a J * b is a microlocal approximation of Op h ( a + ) in the following sense:
|| Op h ( a + ) − J a J * b || L 2 →L 2 ≤ C N h N . (4.8)
The estimates (4.6) can be proved by using the method of non-stationary phase. Construction of the amplitudes. b + h,j can be constructed by a standard method of characteristics as follows. Recall that K and k(x, ξ) denote the kinetic part of H and the corresponding energy, respectively: ξ)) and consider the flow generated by X + h :
K = − 1 2 j,k ∂ j g jk (x)∂ k , k(x, ξ) = 1 2 j,k g jk (x)ξ j ξ k . We let X + h (x, ξ) = (∂ ξ k)(x, ∂ x S + h (x,(ẋ + h (t, x, ξ)) = X + h (x + h (t, x, ξ), ξ); x + h (0) = x.
By (notice that 1/3 > 5/16 > 1/4), some I 1 ⋐ I ⋐ I 2 and σ 1 < σ < σ 2 . Moreover, we have
|x + h (t, x, ξ)| ≥ C −1 (|x| + |t|), |∂ α x ∂ β ξ (x + h (t, x, ξ) − x − tξ)| ≤ C αβ x −µ−|α| |t|,(4.9)
uniformly in t ≥ 0 and (x, ξ) ∈ Γ + ( R, I, σ) (see, [28]). We let Y + h (x, ξ) = −(KS + h )(x, ξ). By virtue of (4.3), (4.4) and (4.9), we see that
Y + h (x + h (t, x, ξ), ξ) is integrable with respect to t ∈ [0, ∞), and that ∂ α x ∂ β ξ ∞ 0 Y + h (x + h (t, x, ξ), ξ)dt ≤ C αβ x −µ−|α| , (x, ξ) ∈ Γ + ( R, I, σ). Let us define b + h,j ∈ S(1, g), 0 ≤ j ≤ N − 1, inductively by b + h,0 = exp ∞ 0 Y + h (x + h (t, x, ξ), ξ)dt , b + h,j = i ∞ 0 (K b + h,j−1 )(x + h (t, x, ξ), ξ) exp t 0 Y + h (x + h (s, x, ξ), ξ)ds dt, j ≥ 1,
A direct computation shows that b + h,j solve the following transport equations:
X + h · ∂ x b + h,0 + Y + h b + h,0 = 0, X + h · ∂ x b + h,j + Y + h b + h,j + iK b + h,j−1 = 0, 1 ≤ j ≤ N − 1.
(4.10)
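For instance, the first equation in (4.10) can be checked directly from the group property x + h (t, x + h (s, x, ξ), ξ) = x + h (t + s, x, ξ) of the (autonomous) flow: writing b for b + h,0 ,
\[
b\big(x_h^{+}(s,x,\xi),\xi\big)
=\exp\Big(\int_0^{\infty}Y_h^{+}\big(x_h^{+}(t+s,x,\xi),\xi\big)\,dt\Big)
=\exp\Big(\int_s^{\infty}Y_h^{+}\big(x_h^{+}(u,x,\xi),\xi\big)\,du\Big),
\]
and differentiating at s = 0 gives X + h · ∂ x b = −Y + h b, which is the first line of (4.10). The equations for j ≥ 1 are verified in the same way, using in addition the formula defining b + h,j .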
Taking ρ ∈ S(1, g) satisfying ρ ≡ 1 on Γ + ( R, I, σ) and supp ρ ⊂
Γ + (R 1/3 , I 1 , σ 1 ), we define b + h,j ∈ S(1, g) by b + h,j = ρ b + h,j . We next construct c + h,j . By definition, b + h,0 is elliptic on Γ + ( R, I, σ)(⊃ Γ + (R 1/2 , I 0 , σ 0 )) in the following sense: b + h,0 > c, (x, ξ) ∈ Γ + ( R, I, σ)
, with some c > 0 being independent of h. The standard FIO theory then shows that, for any a + ∈ S(1, g) with supp a + ⊂ Γ + (R, I, σ), there exist symbols c + h,j ∈ S(1, g) with supp c + h,j ⊂ Γ + (R 1/2 , I 0 , σ 0 ), j = 0, 1, ..., N − 1, such that (4.8) holds true. Indeed, c + h,j can be determined by the following triangular system:
c + h,j (x, ξ) = 1 b + h,0 (x, ξ) r + h,j (x, S + h (x, y, ξ))J 1 y=x , j = 0, 1, ..., N − 1.
Here S + h and J 1 are defined by
S + h (x, y, ξ) = 1 0 ∂ x S + h (y + θ(x − y), ξ)dθ, J 1 = | det ∂ ξ S + h (x, y, ξ)|, and r + h,0 = a + (x, S + h (x, y, ξ)). Moreover, if we denote the inverse map of ξ → S + h (x, y, ξ) by ξ → [ S + h ] −1 (x, y, ξ), then r + h,j is a linear combination of 1 i |α| α! ∂ α ξ ∂ α y b + h,k 0 (x, [ S + h ] −1 (x, y, ξ))c + h,k 1 (y, [ S + h ] −1 (x, y, ξ))J 2 y=x , where J 2 = | det ∂ ξ [ S + h ] −1 (
x, y, ξ)|, α ∈ Z d + and k 0 , k 1 = 0, 1, ..., j so that 0 ≤ |α| ≤ j, k 0 + k 1 = j − |α| and k 1 ≤ j − 1. The symbolic calculus then shows that J a J * b is a h-PDO and satisfies
J a J * b = Op h ( a + ) + h N Op h (q 1,h (N )) for some {q 1,h (N ); h ∈ (0, h 0 ]} ⊂ S( x −N ξ −N , g)
, which implies (4.8).
Estimates of the remainder. By virtue of (4.2) and (4.10) and the support properties of b + h,j , we see that there exist d + h,j ∈ S( x −j ξ −j , g) supported in Γ + (R 1/3 , I 1 , σ 1 ) \ Γ + ( R, I, σ) and q 2,h (N ) ∈ S( x −N ξ −N , g) such that
e iS + h /h H h e −iS + h /h N −1 j=0 h j b + h,j − h 2 2 ρ 2 N −1 j=0 h j b + h,j = N j=1 h j d + h,j + h N +1 q 2,h (N ). Setting d + h = N j=1 h j d + h,j , we have H h J a − J a H h 0 = J h (S + h , d + h ) + h N +1 J h (S + h , q 2,h (N )). We denote the distribution kernel of J h (S + h , d + h )e iτ h∆/2 J * b by q 3,h (τ, x, y): q 3,h (τ, x, y) = e i(S + h (x,ξ)−τ |ξ| 2 /2−S + h (y,ξ))/h d + h (x, ξ)c + h (y, ξ)dξ.
Then, by the same argument as that in [4, Lemma 3.4], we have
|∂ ξ S + h (x, ξ) − τ ξ − ∂ ξ S + h (y, ξ)| ≥ c(1 + τ + |x| + |y|), τ ≥ 0, h ∈ (0, h 0 ],
on the support of the amplitude d + h (x, ξ)c + h (y, ξ), where c > 0 is independent of t, x, y and h. Therefore, integrating by parts, we obtain that
|∂ α x ∂ β ξ q 3,h (τ, x, y)| ≤ C αβM h M −|α+β| (1 + τ + |x| + |y|) −M +|α+β| , τ ≥ 0,
for any M ≥ 0, uniformly in τ ≥ 0 and h ∈ (0, h 0 ]. We now turn to the proof of (4.6). Combining the ellipticity of g jk with (4.1), we learn that there exists C 1 > 0, independent of h, such that
H h + C 1 ≥ C −1 1 hD 2 , h ∈ (0, h 0 ], which implies || D s ( H h + C 1 ) −s/2 || L 2 →L 2 ≤ C s || D s hD −s || L 2 →L 2 ≤ C s h −s , uniformly in h ∈ (0, h 0 ]. Therefore, it remains to show sup 0≤t≤1/h ||( H h + C 1 ) s/2 Q + IK (t, h, N ) D s || L 2 →L 2 ≤ C N,s h N −2s−1 , h ∈ (0, h 0 ].
The remainder is of the form:
Q + IK (t, h, N ) = − h N e −it H h /h Op h (q 0 (N ) + q 1,h (N )) − ih N t 0 e −i(t−τ ) H h /h J h (S + h , q 2,h (N ))e −iτ H h 0 /h J * b ds − i h t 0 e −i(t−s) H h /h Q 3 (τ, h)ds,
where Q 3 (τ, h) is an integral operator with the kernel q 3,h (τ, x, y). Since the total symbol of ( H h + C 1 ) s/2 Op h (q 0 (N ) + q 1,h (N )) hD s belongs to S( x −N ξ −N +2s , g), the standard L 2 -boundedness of PDO implies
||( H h + C 1 ) s/2 Op h (q 0 (N ) + q 1,h (N )) D s || L 2 →L 2 ≤ C N,s h −2s .
Using the L 2 -boundedness of J * b hD s = ( hD s J b ) * (see, [27]), we similarly obtain
||( H h + C 1 ) s/2 J h (S + h , q 2,h (N ))e −iτ H h 0 /h J * b D s || L 2 →L 2 ≤ C N,s h −2s .
A direct computation yields, for any M ≥ 0,
||( H h + C 1 ) s/2 Q 3 (τ, h) D s || L 2 →L 2 ≤ C M,s h M −2s
Since ( H h +C 1 ) s/2 commutes with e −it H h /h , these three estimates imply the desired estimate.
The following lemma, which has been essentially proved by [25], tells us that one can still construct the semiclassical approximation for the original propagator e −itH h /h if we restrict the support of initial data in the region Γ + (R, J, σ) ∩ {x; |x| < h −1 }.
Lemma 4.4. For any M, s ≥ 0, h ∈ (0, h 0 ] and 0 ≤ t ≤ h −1 , we have ||(e −itH h /h − e −it H h /h ) Op h (a + h ) * D s || L 2 →L 2 ≤ C M,s h M −s ,
where C M,s > 0 is independent of h and t.
In order to prove this lemma, we need the following.
Lemma 4.5. Let f h ∈ C ∞ (R d ) be such that for any α ∈ Z d + , |∂ α x f h (x)| ≤ C α uniformly with respect to h ∈ (0, h 0 ], and suppose that supp f h ⊂ {x ∈ R d ; |x| ≥ L/h}. Then, for any M, s, γ ≥ 0, ||f h (x) D γ e −it H h /h Op h (a + h ) * D s || L 2 →L 2 ≤ C M,s,γ h M , uniformly with respect to h ∈ (0, h 0 ] and 0 ≤ t ≤ h −1 .
Proof. By Theorem 4.3, we can write e −it H h /h Op h (a + h ) * = J h (S + h , b + h )e −itH h 0 /h J h (S + h , c + h ) * + Q + IK (t, h, N ).
By virtue of (4.6), the remainder f h (x) D γ Q + IK (t, h, N ) D s is bounded on L 2 (R d ) with the norm dominated by C N sγ h N −γ−s−1 , uniformly with respect h ∈ (0, h 0 ] and t ∈ [0, 1/h]. On the other hand, by virtue of (4.4), the phase of distribution kernel of
J h (S + h , b + h )e −itH h 0 /h J h (S + h , c + h ) * , which is given by Φ + h (t, x, y, ξ) = S + h (x, ξ) − 1 2 t|ξ| 2 − S + h (y, ξ), satisfies ∂ ξ Φ + h (t, x, y, ξ) = (x − y)(Id +O(R −µ/4 )) − tξ.
We here note that supp c + h ⊂ {(y, ξ) ∈ R 2d ; a + h (y, ∂ ξ S + h (y, ξ)) ≠ 0}. In particular, c + h (y, ξ) vanishes in the region {y; |y| ≥ 1/h}. We now set L = 2 √ sup I 2 + 2, where I 2 is given in Theorem 4.3. Since |x| ≥ L/h, |y| < 1/h and |ξ| 2 ∈ I 2 on the support of the amplitude
f h (x)b + h (x, ξ)c + h (y, ξ), we obtain |∂ ξ Φ + h (t, x, y, ξ)| > c(1 + |x| + |y| + |ξ| + t + h −1 ), 0 ≤ t ≤ h −1 ,
for some universal constant c > 0. The assertion now follows from an integration by parts and the L 2 -boundedness of h-FIO's.
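The integrations by parts used here and in the estimate of q 3,h above are instances of the standard non-stationary phase argument, which we recall for convenience. If Φ denotes the relevant phase and the amplitude is supported in a region where |∂ ξ Φ| admits a lower bound of the type displayed above, one introduces the first order differential operator
\[
L\;=\;\frac{h}{i}\,\frac{\partial_\xi\Phi\cdot\partial_\xi}{|\partial_\xi\Phi|^{2}},
\qquad\text{so that}\qquad
L\,e^{\,i\Phi/h}=e^{\,i\Phi/h},
\]
writes the kernel as \(\int (L^{M}e^{i\Phi/h})\,(\text{amplitude})\,d\xi\) and integrates by parts M times; each application of the transpose of L produces, thanks to the derivative bounds on the phase and on the amplitude, a factor of order h divided by the lower bound on |∂ ξ Φ|, which yields the rapid decay stated above.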
Proof of Lemma 4.4. For simplicity, we use the notation S(m, g 0 ) with the metric g 0 = dx 2 / x 2 , i.e., f ∈ S(m, g 0 ) if and only if f ∈ C ∞ (R d ) and ∂ α x f (x) = O(m(x) x −|α| ). At first we note that H h ∈ S( ξ 2 + x 1−µ ξ , g) + S( x 2−µ , g 0 ). The Duhamel formula yields
(e −itH h /h − e −it H h /h ) = − i h t 0 e −i(t−s)H h /h W h 0 e −is H h /h ds = − i h t 0 e −i(t−s)H h /h e −is H h /h W h 0 ds + 1 h 2 t 0 e −i(t−s)H h /h s 0 e −i(s−τ )H h /h [ H h , W h 0 ]e −iτ H h /h dτ ds,
where W h 0 = H h − H h consists of the following two parts:
ih 2 2 j,k ∂ j g jk (1 − ψ(hx/L))A k + (1 − ψ(hx/L))A j g jk ∂ k , h 2 2 j,k (1 − ψ(hx/L)) 2 g jk (1 − ψ(hx/L))A j A k + (1 − ψ(hx/L))V.
In particular, W h 0 ∈ S( x 1−µ ξ , g) + S( x 2−µ , g 0 ) and its coefficients are supported in {|x| ≥ L/h}. By the support properties of a + h and W h 0 , we have
||W h 0 Op h (a + h ) * || L 2 →L 2 ≤ C M h M ,
for any M ≥ 0, provided that L > 0 is large enough. Moreover, h −1 [ H h , W h 0 ] can be written as W h 11 + W h 12 + W h 13 , where W h 11 ∈ S( x −µ ξ 2 , g), W h 12 ∈ S( x 1−2µ ξ , g) and W h 13 ∈ S( x 2−3µ , g 0 ) (notice that an additional decay factor x −µ is in the second and third terms).
Setting W h 2 = W h 12 +W h 13 , we iterate this procedure. Then, we can find a positive number N µ , depending only on µ, such that (e −itH h /h − e −it H h /h ) Op h (a + h ) * can be written in the following form (modulo O(h ∞ ) on L 2 (R d )):
t≥s 1 ≥···≥s Nµ ≥0 e −i(t−s 1 )H h /h e −i(s 1 −s Nµ ) H h /h W Nµ e −is Nµ H h /h Op h (a + h ) * ds Nµ · · · ds 1 ,
where W h Nµ ∈ S( x −µ ξ 2 + x −ν ξ , g) + S( x −ν , g 0 ) with some ν = ν(µ, N µ ) > 0. Moreover, the coefficients of W h Nµ are supported in {x; |x| > L/h}. We now apply Lemma 4.5 to W Nµ e −is Nµ H h /h Op h (a + h ) * and obtain the assertion.
We now turn to the proof of Proposition 3.1.
Proof of Proposition 3.1. Rescaling t → th, it suffices to show
|| Op h (a + h )e −itH h /h Op h (a + h ) * || L 1 →L ∞ ≤ C ε |th| −d/2 , 0 < |t| ≤ h −1 , where H h = h 2 H. Let A h (x, y) be the distribution kernel of Op h (a + h ): A h (x, y) = (2πh) −d e i(x−y)·ξ/h a + h (x, ξ)dξ.
Since a + h ∈ S(1, g) is compactly supported in I with respect to ξ, we easily see that
sup x |A h (x, y)|dy + sup y |A h (x, y)|dx ≤ C, h ∈ (0, 1].
On the other hand, since ξ s a + h ξ γ ∈ S(1, g) for any s, γ, we have
|| D s Op h (a + h ) D γ || L 2 →L 2 ≤ C s h −s−γ .
Combining these two estimates with Theorem 4.3 and Lemma 4.4, we can write
Op h (a + h )e −itH h /h Op h (a + h ) * = K 1 (t, h, N ) + K 2 (t, h, N ), where K 1 (t, h, N ) = Op h (a + h )J h (S + h , b + h )e −itH h 0 /h J h (S + h , c +
h ) * and its distribution kernel, which we denote by K 1 (t, x, ξ), satisfies dispersive estimates:
|K 1 (t, x, y)| ≤ |A h (x, z)||K S + h (t, z, y)|dz ≤ C N |th| −d/2 , 0 < t ≤ h −1 , while the remainder K 2 (t, h, N ) satisfies || D s K 2 (t, h, N ) D s || L 2 →L 2 ≤ C N,s h N −2s−1 .
If we choose N ≥ d/2 + 2, then it follows from the Sobolev embedding that the distribution kernel of K 2 (t, h, N ) is uniformly bounded in R 2d with respect to h ∈ (0, h 0 ] and 0 < t ≤ h −1 . Therefore, Op h (a + h )e −itH h /h Op h (a + h ) * has the distribution kernel K(t, x, y) satisfying dispersive estimates for 0 < t ≤ h −1 :
|K(t, x, y)| ≤ C N |th| −d/2 , x, y ∈ R d .
(4.11)
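The Sobolev embedding step used just above (and again for the remainder in Theorem 5.8 below) is the following elementary factorization: for any γ > d/2 and any operator Q such that D γ Q D γ is bounded on L 2 (R d ),
\[
\|Q\|_{L^1\to L^\infty}
\le \|\langle D\rangle^{-\gamma}\|_{L^2\to L^\infty}\,
\big\|\langle D\rangle^{\gamma}Q\langle D\rangle^{\gamma}\big\|_{L^2\to L^2}\,
\|\langle D\rangle^{-\gamma}\|_{L^1\to L^2}
\le C_\gamma\,\big\|\langle D\rangle^{\gamma}Q\langle D\rangle^{\gamma}\big\|_{L^2\to L^2},
\]
the outer factors being finite by the Sobolev embedding H γ (R d ) ⊂ L ∞ (R d ) and duality. In particular, choosing s > d/2 and N large enough, the O(h N −2s−1 ) bound on D s K 2 (t, h, N ) D s yields a uniformly bounded distribution kernel for K 2 (t, h, N ).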
Finally, using the following relation,
Op h (a + h )e −itH h /h Op h (a + h ) * = Op h (a + h )e itH h /h Op h (a + h ) * * ,
we learn that |K(t, x, y)| = |K(−t, y, x)|, and hence (4.11) also holds for 0 < −t ≤ h −1 . For the incoming case, the proof is analogous and we omit it.
Fourier integral operators with time dependent phase
Throughout this section we assume Assumption 1.1 with µ ≥ 0. Consider a symbol χ ε ∈ S(1, g) supported in a region Ω(ε) := {(x, ξ) ∈ R 2d ; x > ε|ξ|/2}, where ε > 0 is an arbitrarily small fixed constant. In this section we prove the following dispersive estimate:
|| Op(χ ε )e −itH Op(χ ε ) * || L 1 →L ∞ ≤ C ε |t| −d/2 , 0 < |t| ≤ t ε ,
where t ε > 0 is a small constant depending on ε. This estimate, combined with the L 2boundedness of Op(χ ε ) and e −itH , implies Strichartz estimates for Op(χ ε )e −itH . Let us give a short summary of the steps of proof. Recall that p(x, ξ) denotes the total energy. Choose χ * ε ∈ S(1, g) so that supp χ * ε = supp χ ε and Op(χ ε ) * = Op(χ * ε ) + Op(r N ) with some r N ∈ S( x −N ξ −N , g) for sufficiently large N > d/2. We first construct an approximation for e −itH Op(χ * ε ) in terms of the FIO with a time dependent phase:
J(Ψ, b N )f (x) = 1 (2π) d e i(Ψ(t,x,ξ)−y·ξ) b(t, x, ξ)f (y)dydξ,
where Ψ is a generating function of the Hamilton flow associated to p(x, ξ) and (∂ ξ Ψ, ξ) → (x, ∂ x Ψ) is the corresponding canonical map, and the amplitude b = b 0 + b 1 + · · · + b N −1 solves the corresponding transport equations. Although such parametrix constructions are well known as WKB approximations (at least if χ * ε is compactly supported in ξ and the time scale depends on the size of frequency), we give the details of the proof since, in the present case, supp χ * ε is not compact with respect to ξ and t ε is independent of the size of frequency. The crucial point is that p(x, ξ) is of quadratic type on Ω(ε):
|∂ α x ∂ β ξ p(x, ξ)| ≤ C αβ , (x, ξ) ∈ Ω(ε), |α + β| ≥ 2,
which allows us to follow a classical argument (due to, e.g., [21]) and construct the approximation for |t| < t ε if t ε > 0 is small enough. The composition Op(χ ε )J(Ψ, b) is also a FIO with the same phase, and a standard stationary phase method can be used to prove dispersive estimates for 0 < |t| < t ε . It remains to obtain the L 1 → L ∞ bounds of the remainders Op(χ ε )e −itH Op(r N ) and Op(χ ε )e −itH (Op(χ * ε ) − J(Ψ, b N )). If e −itH mapped the Sobolev space H d/2 (R d ) into itself, then the L 1 → L ∞ bounds would be direct consequences of the Sobolev embedding and the L 2 -boundedness of PDO. However, our Hamiltonian H is not bounded below (on {|x| ≳ |ξ|}) and such a mapping property does not hold in general. To overcome this difficulty, we use an Egorov type lemma as follows. By the Sobolev embedding and the Littlewood-Paley decomposition, the proof is reduced to that of the following estimates:
j≥0 ||2 jγ S j (D) Op(χ ε )e −itH Op(r N ) D γ f || 2 L 2 ≤ C||f || 2 L 2 , (5.1)
where γ > d/2 and S j is a dyadic partition of unity. Then, we will prove that there exists η j (t, ·, ·) ∈ S(1, g) such that 2 j ≤ C(1 + |x| + |ξ|) on supp η j (t), and that
S j (D) Op(χ ε )e −itH = e −itH Op(η j (t)) + O L 2 →L 2 (2 −jN ), |t| < t ε ≪ 1.
Choosing δ > 0 with γ + δ ≤ N/2, we learn that 2 j(γ+δ) η j (t)r N ξ γ ∈ S(1, g) and hence (5.1). Op(χ ε )e −itH (Op(χ * ε ) − J(Ψ, b)) can be controlled similarly.
Short-time behavior of Hamilton flow
This subsection discusses the classical mechanics generated by p(x, ξ). We denote the solution to the following Hamilton equations by (X(t), Ξ(t)) = (X(t, x, ξ), Ξ(t, x, ξ)):
Ẋ j = ∂p ∂ξ j (X, Ξ) = k g jk (X)(Ξ k − A k (X)), Ξ j = − ∂p ∂x j (X, Ξ) = − 1 2 k,l ∂g kl ∂x j (X)(Ξ k − A k (X))(Ξ l − A l (X)) + k,l g kl (X) ∂A k ∂x j (X)(Ξ l − A l (X)) − ∂V ∂x j (X)
with the initial condition (X(0), Ξ(0)) = (x, ξ), whereḟ = ∂ t f . We first observe that the flow conserves the energy:
p(x, ξ) = p(X(t), Ξ(t)), which implies k(X, Ξ) = k(x, ξ) + O(|ξ| x + x 2 + |Ξ| X + X 2 )
. Combining with the uniform ellipticity of k(x, ξ), we have
|Ẋ(t)| + |Ξ(t)| ≤ C(1 + |ξ| + |x| + |X(t)| + |Ξ(t)|).
Applying Gronwall's inequality to this estimate, we obtain an a priori bound:
|X(t) − x| + |Ξ(t) − ξ| ≤ C T |t|(1 + |x| + |ξ|), |t| ≤ T, x, ξ ∈ R d .
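The Gronwall step here is the following short computation (spelled out for completeness): setting u(t) := |X(t) − x| + |Ξ(t) − ξ| and integrating the previous differential inequality,
\[
u(t)\le \int_0^{|t|}C\big(1+|x|+|\xi|+u(s)\big)\,ds
\le C|t|\big(1+|x|+|\xi|\big)+C\int_0^{|t|}u(s)\,ds,
\]
so Gronwall's inequality yields u(t) ≤ C|t|(1 + |x| + |ξ|)e C|t| ≤ C T |t|(1 + |x| + |ξ|) for |t| ≤ T .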
Using this estimate, we obtain more precise behavior of the flow with initial conditions in Ω(ε).
Lemma 5.1. Let ε > 0. Then, for sufficiently small t ε > 0 and all α, β ∈ Z d + ,
|∂ α x ∂ β ξ (X(t, x, ξ) − x)| + |∂ α x ∂ β ξ (Ξ(t, x, ξ) − ξ| ≤ C αβε |t| x 1−|α+β| , uniformly with respect to (t, x, ξ) ∈ (−t ε , t ε ) × Ω(ε).
Proof. We only consider the case when t ≥ 0, the proof for the case t ≤ 0 is similar. Let (x, ξ) ∈ Ω(ε). At first we remark that for sufficiently small t ε > 0,
|x|/2 ≤ |X(t, x, ξ)| ≤ 2|x|, |t| ≤ t ε . (5.2)
For |α + β| = 0, the assertion is obvious. We let |α + β| = 1 and differentiate the Hamilton equations with respect to ∂ α x ∂ β ξ :
d dt ∂ α x ∂ β ξ X ∂ α x ∂ β ξ Ξ = ∂ x ∂ ξ p(X, Ξ) ∂ 2 ξ p(X, Ξ) −∂ 2 x p(X, Ξ) −∂ ξ ∂ x p(X, Ξ) ∂ α x ∂ β ξ X ∂ α x ∂ β ξ Ξ . (5.3)
Using (5.2), we learn that p(X(t), Ξ(t)) is of quadratic type in Ω(ε):
(∂ α x ∂ β ξ p)(X(t), Ξ(t)) ≤ C αβε x 2−|α+β| , (t, x, ξ) ∈ (−t ε , t ε ) × Ω(ε).
All entries of the above matrix hence are uniformly bounded in (t, x, ξ) ∈ (−t ε , t ε )×Ω(ε). Taking t ε > 0 smaller if necessary, integrating (5.3) with respect to t and applying Gronwall's inequality, we have the assertion with |α + β| = 1. For |α + β| ≥ 2, we prove the estimate for ∂ 2 ξ 1 X(t) and ∂ 2 ξ 1 Ξ(t) only, where ξ = (ξ 1 , ξ 2 , ..., ξ d ). Proofs for other cases are similar, and proofs for higher derivatives follow from an induction on |α + β|. By the Hamilton equation, we learn
d dt ∂ 2 ξ 1 X(t) = ∂ x ∂ ξ p(X(t), Ξ(t))∂ 2 ξ 1 X(t) + ∂ 2 ξ p(X(t), Ξ(t))∂ 2 ξ 1 Ξ(t) + Q(X(t), Ξ(t)),
where Q(X(t), Ξ(t)) satisfies
Q(X(t), Ξ(t)) ≤ C ε |α+β|=3,|β|≥1 (∂ α x ∂ β ξ p)(X(t), Ξ(t))(∂ ξ 1 X(t)) |α| (∂ ξ 1 Ξ(t)) |β| ≤ C ε x −1 .
We similarly obtain
d dt ∂ 2 ξ 1 Ξ(t) = −∂ 2 x p(X(t), Ξ(t))∂ 2 ξ 1 X(t) − ∂ ξ ∂ x p(X(t), Ξ(t))∂ 2 ξ 1 Ξ(t) + O( x −1 ).
Applying Gronwall's inequality, we have the desired estimates.
Lemma 5.2.
(1) Let t ε > 0 be small enough. Then, for any |t| < t ε , the map
g(t) : (x, ξ) → (X(t, x, ξ), ξ)
is a diffeomorphism from Ω(ε/2) onto its range, and satisfies Ω(ε) ⊂ g(t, Ω(ε/2)) for all |t| < t ε .
(2) Let Ω(ε) ∋ (x, ξ) → (Y (t, x, ξ), ξ) ∈ Ω(ε/2) be the inverse map of g(t). Then, Y (t, x, ξ) and Ξ(t, Y (t, x, ξ), ξ) satisfy the same estimates as that for X(t, x, ξ) and Ξ(t, x, ξ) of Lemma 5.1, respectively:
|∂ α x ∂ β ξ (Y (t, x, ξ) − x)| + |∂ α x ∂ β ξ (Ξ(t, Y (t, x, ξ), ξ) − ξ| ≤ C αβε |t| x 1−|α+β| , uniformly with respect to (t, x, ξ) ∈ (−t ε , t ε ) × Ω(ε).
Proof. Choosing a cut-off function ρ ∈ S(1, g) such that 0 ≤ ρ ≤ 1, supp ρ ⊂ Ω(ε/3) and ρ ≡ 1 on Ω(ε/2), we modify g(t) as follows:
g ρ (t, x, ξ) = (X ρ (t, x, ξ), ξ), X ρ (t, x, ξ) = (1 − ρ(x, ξ))x + ρ(x, ξ)X(t, x, ξ).
g ρ (t, x, ξ) is then obviously smooth with respect to (x, ξ) and Lemma 5.1 implies
|∂ α x ∂ β ξ g ρ (t, x, ξ)| ≤ C αβε , |α + β| ≥ 1, |J(g ρ )(t, x, ξ) − Id | ≤ Ct ε ,
where J(g ρ ) is the Jacobi matrix with respect to (x, ξ). Choosing t ε > 0 so small that Ct ε < 1/2, and applying the Hadamard global inverse mapping theorem, we see that, for any fixed |t| < t ε , g ρ (t) is a diffeomorphism from R 2d onto itself. By definition, g(t) is then a diffeomorphism from Ω(ε/2) onto its range. Since g ρ (t) is bijective, it remains to check that
Ω(ε) c ⊃ g ρ (t, Ω(ε/2) c ), |t| < t ε .
Suppose that (x, ξ) ∈ Ω(ε/2) c . If (x, ξ) ∈ Ω(ε/3) c , then the assertion is obvious since g ρ (t) ≡ Id outside Ω(ε/3). If (x, ξ) ∈ Ω(ε/3) \ Ω(ε/2), then, by Lemma 5.1 and the support property of ρ, we have
|X ρ (t, x, ξ)| ≤ |x| + ρ(x, ξ)|(X(t, x, ξ) − x)| ≤ (ε/2 + C 0 t ε ) ξ
for some C 0 > 0 independent of x, ξ and t ε . Choosing t ε < ε/(2C 0 ), we obtain the assertion.
We next prove the estimates on Y (t). Since (Y (t, x, ξ), ξ) ∈ Ω(ε/2), we learn
|Y (t, x, ξ) − x| = |X(0, Y (t, x, ξ), ξ) − X(t, Y (t, x, ξ), ξ)| ≤ sup (x,ξ)∈Ω(ε/2) |X(t, x, ξ) − x| ≤ C ε |t| x .
For α, β ∈ Z d + with |α + β| = 1, apply ∂ α x ∂ β ξ to the equality x = X(t, Y (t, x, ξ), ξ). We then have the following equality
A(t, Z(t, x, ξ))∂ α x ∂ β ξ (Y (t, x, ξ) − x) = ∂ α y ∂ β η (y − X(t, y, η))| (y,η)=Z(t,x,ξ) ,
where Z(t, x, ξ) = (Y (t, x, ξ), ξ) and A(t, Z) = (∂ x X)(t, Z) is a d × d-matrix. By Lemma 5.1 and a similar argument as that in the proof of Lemma 5.2 (1), we learn that A(t, Z(t, x, ξ)) is invertible if t ε > 0 is small enough, and that A(t, Z(t, x, ξ)) and A(t, Z(t, x, ξ)) −1 are bounded uniformly in (t, x, ξ) ∈ (−t ε , t ε ) × Ω(ε/2) . Therefore,
∂ α x ∂ β ξ (Y (t, x, ξ) − x) ≤ C αβ sup (x,ξ)∈Ω(ε/2) |∂ α x ∂ β ξ (x − X(t, x, ξ))| ≤ C αβ |t| x 1−|α+β| .
Proofs for higher derivatives are obtained by an induction with respect to |α + β| and proofs for Ξ(t, Y (t, x, ξ), ξ) are similar.
The parametrix for Op(χ ε )e −itH Op(χ ε ) *
Before starting the construction of the parametrix, we prepare two lemmas. The following is an Egorov type theorem which will be used to control the remainder terms. We write exp tH p (x, ξ) = (X(t, x, ξ), Ξ(t, x, ξ)).
Lemma 5.3.
For h ∈ (0, 1], consider a h-dependent symbol η h ∈ S(1, g) such that supp η h ⊂ Ω(ε) ∩ {1/(2h) < |ξ| < 2/h}. Then, for sufficiently small t ε > 0, independent of h, and any integer N ≥ 0, there exists a bounded family of symbols {η N h (t, ·, ·); |t| < t ε , 0 < h ≤ 1} ⊂ S(1, g) such that supp η N h (t, ·, ·) ⊂ exp tH p (supp η h ), and that
||e itH Op(η h )e −itH − Op(η N h (t))|| L 2 →L 2 ≤ C N ε h N ,
uniformly with respect to 0 < h ≤ 1 and |t| < t ε .
Proof. Let η 0 h (t, x, ξ) = η h (exp tH p (x, ξ)) = η h (X(t, x, ξ), Ξ(t, x, ξ)). It is easy to see that supp η 0 h ⊂ exp tH p (supp η h ). Moreover, Lemma 5.1 implies that {η 0 h ; 0 ≤ t < t ε , 0 < h ≤ 1} is a bounded subset of S(1, g). By a direct computation, η 0 h solves
∂ t η 0 h + {p, η 0 h } = 0; η 0 h | t=0 = η h ,
where {·, ·} is the Poisson bracket. By a standard pseudodifferential calculus, there exists a bounded set {r 0
h (t, ·, ·); 0 ≤ t < t ε , 0 < h ≤ 1} ⊂ S(1, g) with supp r 0 h ⊂ supp η 0 h such that d dt Op(η 0 h ) + i[H, Op(η 0 h )] = h Op(r 0 h ).
We next set
η 1 h (t, x, ξ) = t 0 r 0 h (s, X(s − t, x, ξ), Ξ(s − t,
x, ξ))ds.
Again, we learn that {η 1 h (t, ·, ·); 0 ≤ t < t ε , 0 < h ≤ 1} ⊂ S(1, g) is bounded and that supp η 1 h ⊂ exp tH p (supp η h ) for all 0 ≤ t < t ε and 0 < h ≤ 1. Moreover, η 1 h solves
∂ t η 1 h + {p, η 1 h } = r 0 h ; η 1 h | t=0 = 0, which implies d dt Op(η 0 h + hη 1 h ) + i[H, Op(η 0 h + hη 1 h )] = h 2 Op(r 1 h ).
with some {r 1 h ; 0 ≤ t < t ε , 0 < h ≤ 1} ⊂ S(1, g) and supp r 1 h ⊂ supp η 0 h . Iterating this procedure and putting η N h = N −1 j=0 h j η j h , we obtain the assertion. For −t ε < t ≤ 0, the proof is analogous.
Using this lemma, we have the following.
Lemma 5.4. Let ε > 0. Then, for any symbol χ ε ∈ S(1, g) with supp χ ε ⊂ Ω(ε) and any integer N ≥ 0, there exists χ * ε ∈ S(1, g) with supp χ * ε ⊂ Ω(ε) such that for 2γ < N ,
sup |t|<tε || Op(χ ε )e −itH Op(χ ε ) * − Op(χ ε )e −itH Op(χ * ε )|| H −γ (R d )→H γ (R d ) ≤ C N γε
Proof. By the expansion formula (2.4), there exists χ * ε ∈ S(1, g) with supp χ * ε ⊂ Ω(ε) such that
Op(χ ε ) * = Op(χ * ε ) + Op(r 0 (N )) with some r 0 (N ) ∈ S( x −N ξ −N , g). For δ > 0 with 2γ + δ ≤ N , we split D γ Op(χ ε )e −itH Op(r 0 (N )) D γ = D γ Op(χ ε )e −itH D −γ−δ x −γ−δ · x γ+δ D γ+δ Op(r 0 (N )) D γ .
Since x γ+δ ξ γ+δ r 0 (N ) ξ γ ∈ S(1, g), x γ+δ D γ+δ Op(r 0 (N )) D γ is bounded on L 2 . In order to prove the L 2 -boundedness of the first term, we use the Littlewood-Paley decomposition and Lemma 5.3 as follows. Consider a dyadic partition of unity with respect to the frequency:
∞ j=0 S j (D) = 1, where S j (ξ) = S(2 −j ξ), j ≥ 1, with some S ∈ C ∞ 0 (R d ) supported in {1/2 < |ξ| < 2} and S 0 ∈ C ∞ 0 (R d ) supported in {|ξ| < 1}. Then, || D γ Op(χ ε )e −itH D −γ−δ x −γ−δ f || L 2 ≤ C ∞ j=0 ||2 jγ S j (D) Op(χ ε )e −itH D −γ−δ x −γ−δ f || 2 L 2 1/2 .
By the expansion formula (2.3), there exists η j ∈ S(1, g) such that supp
η j ⊂ Ω(ε) ∩ {2 j−1 < |ξ| < 2 j+1 }, and that S j (D) Op(χ ε ) = Op(η j ) + O L 2 →L 2 (2 −jN ). By Lemma 5.3 with h = 2 −j , there exists a symbol η N j (t) ∈ S(1, g) such that Op(η j )e −itH = e −itH Op(η N j (t)) + O L 2 →L 2 (2 −jN ).
Since N ≥ γ + δ, the remainders satisfy
||2 −j(N −γ) e −itH D −γ−δ x −γ−δ f || 2 L 2 ≤ C2 −2jδ ||f || 2 L 2 . Suppose that (x, ξ) ∈ supp η N j (t). Since supp η N j (t) ⊂ exp tH p (supp η j ), we have |X(−t, x, ξ)| > ε Ξ(−t, x, ξ) , 2 j−1 < |Ξ(−t, x, ξ)| < 2 j+1 .
Using Lemma 5.1 with the initial data (X(−t), Ξ(−t)), we learn |x − X(−t, x, ξ)| + |ξ − Ξ(−t, x, ξ)| ≤ Ct ε X(−t, x, ξ) .
Combining these two estimates, we see that (x, ξ) ∈ Ω(ε/2) and
2 j ≤ C(1 + |x| + |ξ|), |t| < t ε ,
with some C > 0 independent of x, ξ and t, provided that t ε > 0 is small enough. Therefore, 2 j(γ+δ) η N j (t) ξ −γ−δ x −γ−δ ∈ S(1, g) and the corresponding PDO is bounded on L 2 . Finally, we conclude
∞ j=0 ||2 jγ Op(η j )e −itH D −γ−δ x −γ−δ f || 2 L 2 ≤ C ∞ j=0 ||2 −jδ 2 j(γ+δ) Op(η N j (t)) D −γ−δ x −γ−δ f || 2 L 2 + 2 −2jδ ||f || 2 L 2 ≤ C ∞ j=0 2 −2jδ ||f || 2 L 2 ≤ C||f || 2 L 2 ,
which completes the proof.
We next consider a parametrix construction of Op(χ ε )e −itH Op(χ * ε ). Let us first make the following ansatz:
v(t, x) = 1 (2π) d e i(Ψ(t,x,ξ)−y·ξ) b N (t, x, ξ)f (y)dydξ, where b N = N −1 j=0 b j .
In order to approximately solve the Schrödinger equation
i∂ t v(t) = Hv(t); v| t=0 = Op(χ * ε )ϕ,
the phase function Ψ and the amplitude b N should satisfy the following Hamilton-Jacobi equation and transport equations, respectively:
∂ t Ψ + p(x, ∂ x Ψ) = 0; Ψ| t=0 = x · ξ, (5.4) ∂ t b 0 + X · ∂ x b 0 + Yb 0 = 0; b 0 | t=0 = χ * ε , ∂ t b j + X · ∂ x b j + Yb j + iKb j−1 = 0; b j | t=0 = 0, 1 ≤ j ≤ N − 1, (5.5)
where K is the kinetic energy part of H and a vector field X and a function Y are defined by
X(t, x, ξ) := (∂ ξ k)(x, ∂ x Ψ(t, x, ξ)), Y(t, x, ξ) := −(KΨ)(t, x, ξ).
We first construct the phase function Ψ.
Proposition 5.5. Let us fix ε > 0 arbitrarily. Then, for sufficiently small t ε > 0, we can construct a smooth and real-valued function Ψ ∈ C ∞ ((−t ε , t ε ) × R 2d ; R) which solves the Hamilton-Jacobi equation (5.4) for (x, ξ) ∈ Ω(ε) and |t| ≤ t ε . Moreover, for all α, β ∈ Z d + , x, ξ ∈ R d and |t| ≤ t ε ,
|∂ α x ∂ β ξ (Ψ(t, x, ξ) − x · ξ + tp(x, ξ)| ≤ C αβε |t| 2 x 2−|α+β| , (5.6)
where C αβε > 0 is independent of x, ξ and t.
Proof. We consider the case when t ≥ 0, and the proof for t ≤ 0 is similar. We first define the action integral Ψ(t, x, ξ) on [0, t ε ) × Ω(ε/2) by
Ψ(t, x, ξ) := x · ξ + t 0 L(X(s, Y (t, x, ξ), ξ), Ξ(s, Y (t, x, ξ), ξ))ds,
where L(x, ξ) = ξ · ∂ ξ p(x, ξ) − p(x, ξ) is the Lagrangian associated to p(x, ξ), and X, Ξ and Y are given by Lemma 5.2 (2) with ε replaced by ε/2. The smoothness of Ψ(t, x, ξ) follows from corresponding properties of X(t), Ξ(t) and Y (t). It is well known that Ψ(t, x, ξ) solves the Hamilton-Jacobi equation
∂ t Ψ(t, x, ξ) + p(x, ∂ x Ψ(t, x, ξ)) = 0; Ψ| t=0 = x · ξ,
for (x, ξ) ∈ Ω(ε/2), and satisfies
∂ x Ψ(t, x, ξ) = Ξ(t, Y (t, x, ξ), ξ), ∂ ξ Ψ(t, x, ξ) = Y (t, x, ξ).
Lemma 5.2 (2) shows that p(Y (t, x, ξ), ξ) is of quadratic type:
|∂ α x ∂ β ξ p(Y (t, x, ξ), ξ)| ≤ C αβε x 2−|α+β| , (t, x, ξ) ∈ [0, t ε ) × Ω(ε/2),
which, combined with the energy conservation
p(x, ∂ x Ψ(t, x, ξ)) = p(Y (t, x, ξ), ξ), implies |∂ α x ∂ β ξ ( Ψ(t, x, ξ) − x · ξ)| ≤ C αβε |t| x 2−|α+β| , (t, x, ξ) ∈ [0, t ε ) × Ω(ε/2). We similarly obtain, for (t, x, ξ) ∈ [0, t ε ) × Ω(ε/2), |p(x, ∂ x Ψ(t, x, ξ)) − p(x, ξ)| = |(∂ x Ψ(t, x, ξ) − ξ) · 1 0 (∂ ξ p)(x, θ∂ x Ψ(t, x, ξ) + (1 − θ)ξ)dθ| ≤ C ε |t| x 2 ,
and, more generally,
|∂ α x ∂ β ξ (p(x, ∂ x Ψ(t, x, ξ)) − p(x, ξ))| ≤ C αβε |t| x 2−|α+β| .
Therefore, integrating the Hamilton-Jacobi equation with respect to t, we have
|∂ α x ∂ β ξ Ψ(t, x, ξ) − x · ξ + tp(x, ξ) | ≤ C αβε |t| 2 x 2−|α+β| .
Finally, choosing a cut-off function ρ ∈ S(1, g) so that 0 ≤ ρ ≤ 1, ρ ≡ 1 on Ω(ε) and supp ρ ⊂ Ω(ε/2), we define
Ψ(t, x, ξ) := x · ξ − tp(x, ξ) + ρ(x, ξ)( Ψ(t, x, ξ) − x · ξ + tp(x, ξ)).
Ψ(t, x, ξ) clearly satisfies the statement of Proposition 5.5.
Using the phase function constructed in Proposition 5.5, we can define the FIO, J(Ψ, a) :
S → S ′ by J(Ψ, a)f (x) = 1 (2π) d e i(Ψ(t,x,ξ)−y·ξ) a(x, ξ)f (y)dydξ, f ∈ S(R d ),
where a ∈ S(1, g). Moreover, we have the following.
Lemma 5.6. Let t ε > 0 be small enough. Then, for any bounded family of symbols {a(t); |t| < t ε } ⊂ S(1, g), J(Ψ, a) is bounded on L 2 (R d ) uniformly with respect to |t| < t ε :
sup |t|≤tε ||J(Ψ, a)|| L 2 →L 2 ≤ C ε .
Proof. For sufficiently small t ε > 0, the estimates (5.6) imply
|(∂ ξ ⊗ ∂ x Ψ)(t, x, ξ) − Id | ≤ C ε t ε < 1/2, |∂ α x ∂ β ξ Ψ(t, x, ξ)| ≤ C αβε for |α + β| ≥ 2,
uniformly with respect to (t, x, ξ) ∈ (−t ε , t ε ) × R 2d . Therefore, the assertion is a consequence of the standard L 2 -boundedness of FIO, or equivalently Kuranishi's trick and the L 2 -boundedness of PDO (see, e.g., [27] or [25,Lemma 4.2]).
We next construct the amplitude.
Proposition 5.7. Let Ψ be the phase function as in Proposition 5.5 with ε replaced by ε/3. For any integer N ≥ 0, there exist families of symbols {b j (t, ·, ·); |t| < t ε } ⊂ S( x −j ξ −j , g), j = 0, 1, 2, ..., N − 1, such that supp b j (t, ·, ·) ⊂ Ω(ε/2) and b j solve the transport equations (5.5).
Proof. We consider the case t ≥ 0 only. Recall that a vector field X and function Y are defined by
X(t, x, ξ) := (∂ ξ k)(x, ∂ x Ψ(t, x, ξ)), Y(t, x, ξ) := −(KΨ)(t, x, ξ),
respectively. Symbols b j can be constructed in terms of a standard method of characteristics, along the flow generated by X, as follows. For all 0 ≤ s, t ≤ t ε , we consider the solution to the following ODE: ∂ t z(t, s, x, ξ) = X(z(t, s, x, ξ), ξ); z(s, s) = x.
Since X(t, x, ξ) is of linear type on Ω(ε/3), that is
|∂ α x ∂ β ξ X(t, x, ξ)| ≤ C αβε x 1−|α+β| , (x, ξ) ∈ Ω(ε/3),
by a same argument as that in the proof Lemma 5.1, z(t, s) is well defined on Ω(ε/3) and satisfies
|∂ α x ∂ β ξ (z(t, s, x, ξ) − x)| ≤ C αβε t ε x 1−|α+β| , (x, ξ) ∈ Ω(ε/3). (5.7) b j (t), 0 ≤ j ≤ N , are defined inductively by b 0 (t, x, ξ) = χ * ε (z(0, t, x, ξ), ξ) exp t 0 Y(s, z(s, t, x, ξ), ξ)ds , b j (t, x, ξ) = − t 0 (iKb j−1 )(s, z(s, t, x, ξ), ξ) exp t u Y(u, z(u, t, x, ξ), ξ)du ds.
Since supp χ * ε ⊂ Ω(ε), using (5.7) and a same argument as that in the proof of Lemma 5.2 (1), we see that ∂ α x ∂ β ξ b j (t, x, ξ) are supported in Ω(ε/2) for all α, β ∈ Z d + . Thus, if we extend b j on R 2d so that b j (t, x, ξ) = 0 outside Ω(ε/2), then b j is still smooth in (x, ξ). Since Y(t, x, ξ) satisfies
|∂ α x ∂ β ξ Y(t, x, ξ)| ≤ C αβ x −|α+β| , (x, ξ) ∈ Ω(ε/2), 0 ≤ t ≤ t ε ,
{b j (t, ·, ·); t ∈ [0, t ε ], 0 ≤ j ≤ N − 1} is a bounded set in S( x −j ξ −j , g). Finally, a standard Hamilton-Jacobi theory shows that b j (t) solve the transport equations (5.5).
We now state the main result in this section.
Theorem 5.8. Let us fix ε > 0 arbitrarily. Then, for sufficiently small t ε > 0, any nonnegative integer N ≥ 0 and any symbol χ ε ∈ S(1, g) with supp χ ε ⊂ Ω(ε), we can find a bounded family of symbols {a N (t, ·, ·); |t| < t ε } ⊂ S(1, g) such that Op(χ ε )e −itH Op(χ ε ) * can be brought to the form
Op(χ ε )e −itH Op(χ ε ) * = J(Ψ, a N ) + Q(t, N ),
where J(Ψ, a N ) is the FIO with phase Ψ(t, x, ξ) constructed in Proposition 5.5 with ε replaced by ε/3. The distribution kernel of J(Ψ, a N ), which we denote by K Ψ,a N (t, x, y), satisfies the dispersive estimate:
|K Ψ,a N (t, x, y)| ≤ C N,ε |t| −d/2 , 0 < |t| < t ε , x, y ∈ R d .
Moreover, for any γ ≥ 0 with N > 2γ, the remainder Q(t, N ) satisfies
|| D γ Q(t, N ) D γ || L 2 →L 2 ≤ C N γε |t|, |t| < t ε . (5.8)
In particular, if we choose N ≥ d + 1, then the distribution kernel of Q(t, N ) is uniformly bounded in R 2d with respect to |t| < t ε , and hence
|| Op(χ ε )e −itH Op(χ ε ) * || L 1 →L ∞ ≤ C ε |t| −d/2 , 0 < |t| < t ε .
Proof of Theorem 5.8. We consider the case when t ≥ 0 and the proof for t < 0 is similar. By virtue of Lemma 5.4, we may replace Op(χ ε ) * by Op(χ * ε ) without loss of generality.
Let b N = N −1 j=0 b j with b j constructed in Proposition 5.7. Since J(Ψ, b N )| t=0 = Op(χ * ε ), we have the Duhamel formula Op(χ ε )e −itH Op(χ * ε ) = Op(χ ε )J(Ψ, b N ) + i t 0 Op(χ ε )e −i(t−s)H (D t + H)J(Ψ, b N )| t=s ds.
Estimates on the remainder. It suffices to show that
sup |t|<tε || D γ Op(χ ε )e −itH (D t + H)J(Ψ, b N ) D γ || L 2 →L 2 ≤ C N γε .
Since Ψ, b j solve the Hamilton-Jacobi equation (5.4) and transport equations (5.5), respectively, a direct computation yields
e −iΨ(t,x,ξ) (D t + H) e iΨ(t,x,ξ) N −1 j=0 b j (t, x, ξ) = r N (t, x, ξ),
with some {r N (t, ·, ·); 0 ≤ t ≤ t ε } ⊂ S( x −N ξ −N , g). In particular,
(D t + H)J(Ψ, b N ) = J(Ψ, r N ).
A standard L 2 -boundedness of FIO then implies
sup |t|<tε || x γ+δ D γ+δ J(Ψ, r N ) D γ || L 2 →L 2 ≤ C N γδ ,
for any γ, δ ≥ 0 with 2γ + δ ≤ N . Since we already proved that
sup |t|≤tε || D γ Op(χ ε )e −itH D −γ−δ x −γ−δ || L 2 →L 2 ≤ C γδ ,
we obtain the desired estimate. Dispersive estimates. By the composition formula of PDO and FIO (cf. [27]), the composition Op(χ ε )J(Ψ, b N ) is also a FIO with the same phase Ψ and the amplitude
a N (t, x, ξ) = 1 (2π) d e iy·η χ ε (x, η + Ξ(t, x, y, ξ))b N (t, x + y, ξ)dydη,
where Ξ(t, x, y, ξ) = 1 0 (∂ x Ψ)(t, y + λ(x − y), ξ)dλ. By virtue of (5.6), Ξ satisfies
|∂ α x ∂ α ′ y ∂ β ξ ( Ξ(t, x, y, ξ) − ξ)| ≤ C αα ′ β |t|, |α + α ′ + β| ≥ 1.
Combining with the fact that χ ε , b N ∈ S(1, g), supp χ ε ⊂ Ω(ε) and supp b N (t, ·, ·) ⊂ Ω(ε/2), we see that {a N ; 0 ≤ t < t ε } is bounded in S(1, g). The distribution kernel of J(Ψ, a N ) is given by
K Ψ,a N (t, x, y) = 1 (2π) d e i(Ψ(t,x,ξ)−y·ξ) a N (t, x, ξ)dξ.
By virtue of Proposition 5.5, we have
sup |t|≤tε |∂ α x ∂ β y ∂ γ ξ (Ψ(t, x, ξ) − y · ξ)| ≤ C αβ , |α + β + γ| ≥ 2, ∂ 2 ξ Ψ(t, x, ξ) = −t(g jk (x)) j,k + O(t 2 ), |t| → 0.
As a consequence, if t ε > 0 is small enough, then the phase function Ψ(t, x, ξ) − y · ξ has a unique non-degenerate critical point for all |t| < t ε , and we can apply the stationary phase method to K Ψ,a N (t, x, y). Therefore,
|K Ψ,a N (t, x, y)| ≤ C|t| −d/2 , 0 < |t| ≤ t ε , x, ξ ∈ R d ,
which completes the proof.
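As a side remark, the power |t| −d/2 can be read off directly from the stationary phase bound; a minimal sketch (assuming, as part of Assumption 1.1, that (g jk (x)) is uniformly elliptic) is

|K Ψ,a N (t, x, y)| ≲ |det ∂ 2 ξ (Ψ(t, x, ξ) − y · ξ)| −1/2 ≈ ( |t| d det(g jk (x)) ) −1/2 ≲ |t| −d/2 ,

since ∂ 2 ξ Ψ = −t(g jk (x)) + O(t 2 ) and the derivatives of the amplitude and the phase entering the stationary phase lemma are controlled uniformly in t.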
6 Proof of Theorem 1.4
Suppose that H satisfies Assumption 1.1 with µ ≥ 0. In this section we give the proof of Theorem 1.4. In view of Corollary 2.5, (1.4) is a consequence of the following proposition.
Proposition 6.1. For any symbol a ∈ C ∞ 0 (R 2d ) and T > 0,
|| Op h (a)e −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ C T h −1/p ||ϕ|| L 2 (R d ) ,
uniformly with respect to h ∈ (0, 1], provided that (p, q) satisfies (1.1).
Proof. This proposition follows from the standard WKB approximation for e −itH Op h (a) up to time scales of order 1/h. The proof is essentially the same as in the case of the Laplace-Beltrami operator on compact manifolds without boundary (see [6, Section 2]), and we omit the details.
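For the reader's convenience, here is the bookkeeping behind the factor h −1/p (a rough sketch, under the assumption, which is what the WKB approximation of [6, Section 2] provides, that Strichartz estimates without loss hold on every time interval of length comparable to h): split [−T, T ] into O(T /h) intervals I k with |I k | ∼ h; then

|| Op h (a)e −itH ϕ|| p L p ([−T,T ];L q ) = Σ k || Op h (a)e −itH ϕ|| p L p (I k ;L q ) ≤ C p (2T /h) ||ϕ|| p L 2 ,

by the unitarity of e −itH on each interval, and taking pth roots gives the loss h −1/p .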
Using this proposition, we have the semiclassical Strichartz estimates with inhomogeneous error terms:
Proposition 6.2. Let a ∈ C ∞ 0 (R 2d ).
Then, for any T > 0 and any (p, q) satisfying the admissible condition (1.1),
|| Op h (a)e −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ C T || Op h (a)ϕ|| L 2 (R d ) + C T h||ϕ|| L 2 (R d ) + Ch −1/2 || Op h (a)e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) + Ch 1/2 ||[Op h (a), H]e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) ,
uniformly with respect to h ∈ (0, 1]. This proposition has been proved by [4] for the case with V, A ≡ 0. We give a refinement of this proposition with its proof in Section 7.
Next, we shall prove that if k(x, ξ) satisfies the nontrapping condition (1.3), then the missing 1/p derivative can be recovered. We first recall the local smoothing effects for Schrödinger operators proved by Doi [11]. For any s ∈ R, we set B s := {f ∈ L 2 (R d ); ⟨x⟩ s f, ⟨D⟩ s f ∈ L 2 (R d )}.
Define a symbol e s (x, ξ) by e s (x, ξ) := (k(x, ξ) + |x| 2 + L(s)) s/2 ∈ S((1 + |x| + |ξ|) s , g),
where L(s) > 1 is a large constant depending on s. We denote by E s its Weyl quantization:
E s f (x) = Op w (e s )f (x) = 1 (2π) d e i(x−y)·ξ e s x + y 2 , ξ f (y)dydξ.
Then, for any s ∈ R, there exists L(s) > 0 such that E s is a homeomorphism from B r+s to B r for all r ∈ R, and (E s ) −1 is still a Weyl quantization of a symbol in S((1 + |x| + |ξ|) −s , g) (see, [11,Lemma 4.1]).
Proposition 6.3 (The local smoothing effects [11]). Suppose that k(x, ξ) satisfies the nontrapping condition (1.3). Then, for any T > 0 and σ > 0, there exists C T,σ > 0 such that
|| x −1/2−σ E 1/2 e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) ≤ C T,σ ||ϕ|| L 2 (R d ) . (6.1)
Remark 6.4. (6.1) implies a standard local smoothing effect:
|| x −1/2−σ D 1/2 e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) ≤ C T,σ ||ϕ|| L 2 (R d ) . (6.2)
Indeed, we compute
x −1/2−σ D 1/2 = D 1/2 x −1/2−σ + [ D 1/2 , x −1/2−σ ] = D 1/2 (E 1/2 ) −1 E 1/2 x −1/2−σ + [ D 1/2 , x −1/2−σ ] = D 1/2 (E 1/2 ) −1 x −1/2−σ E 1/2 + [E 1/2 , x −1/2−σ ] + [ D 1/2 , x −1/2−σ ].
It is easy to see that D 1/2 (E 1/2 ) −1 , [E 1/2 , x −1/2−σ ] and [ D 1/2 , x −1/2−σ ] are bounded on L 2 (R d ) since their symbols belong to S(1, g). Therefore, (6.1) implies (6.2).
Proof of (1.5) of Theorem 1.4. It is clear that (1.5) follows from Proposition 6.2, (6.2) and Corollary 2.5, since a is compactly supported with respect to x and {a, p} ∈ S( ξ , g), where p = p(x, ξ).
7 Strichartz estimates with loss without asymptotic flatness

This section is devoted to the proof of Theorem 1.5. We may assume µ = 0 without loss of generality. We begin with the following proposition.
Proposition 7.1. Let I ⋐ (0, ∞) be an open interval and C 0 > 0. Then, there exist δ 0 , h 0 > 0 such that for any 0 < δ ≤ δ 0 , 0 < h ≤ h 0 , 1 ≤ R ≤ 1/h and any symbol a h ∈ S(1, g) supported in {(x, ξ); R < |x| < C 0 /h, |ξ| ∈ I}, we have
|| Op h (a h )e −itH Op h (a h ) * || L 1 →L ∞ ≤ C δ |t| −d/2 , 0 < |t| < δhR, (7.1)
where C δ > 0 may be taken uniformly with respect to h and R.
Remark 7.2. When the time |t| > 0 in (7.1) is restricted to be small and independent of R, Proposition 7.1 is well known and the proof is given by the standard method of the short-time WKB approximation for e −it H h /h Op h (a h ) * (see, e.g., [6]).
For h ∈ (0, 1], R ≥ 1, an open interval I ⋐ (0, ∞) and C 0 > 0, we set
Γ(R, h, I) := {(x, ξ) ∈ R 2d ; R < |x| < C 0 /h, |ξ| ∈ I}.
Proposition 7.1 is a consequence of the same argument as in the proof of Proposition 3.1 and the following proposition, which is a refinement of the standard WKB approximation for the semiclassical Schrödinger propagator:

Proposition 7.3. Let I ⋐ I 1 ⋐ (0, ∞) and C 0 > 0. Then, there exist δ 0 , h 0 > 0 such that the following hold for any 0 < δ ≤ δ 0 , 0 < h ≤ h 0 and 1 ≤ R ≤ C 0 /h.
(1) There exists Φ h (t, x, ξ) ∈ C ∞ ((−δR, δR) × R 2d )
such that Φ h solves the following Hamilton-Jacobi equation:
∂ t Φ h (t, x, ξ) = −p h (x, ∂ x Φ h (t, x, ξ)), |t| < δR, (x, ξ) ∈ Γ(R/2, h/2, I 1 ), Φ h (0, x, ξ) = x · ξ, (x, ξ) ∈ Γ(R/2, h/2, I 1 ), (7.2)
where p h is defined in the beginning of Section 2. Moreover, we have
|∂ α x ∂ β ξ (Φ h (t, x, ξ) − x · ξ + tp h (x, ξ)) | ≤ C αβ R −|α| h|t| 2 , α, β ∈ Z d + ,(7.3)
uniformly with respect to x, ξ ∈ R d , h ∈ (0, h 0 ], 0 ≤ R ≤ C 0 /h and |t| < δR.
(2) For any a ∈ S(1, g) with supp a ∈ Γ(R, h, I) and any integer N ≥ 0, we can find b N h (t, ·, ·) ∈ S(1, g) such that
e −it H h /h Op h (a) * = J h (Φ h , b N h ) + Q WKB (t, h, N ), where J h (Φ h , b N h )
is the h-FIO with the phase function Φ h and the amplitude b N h , and its distribution kernel satisfies
|K WKB (t, h, x, y)| ≤ C|th| −d/2 , h ∈ (0, h 0 ], 0 < |t| ≤ δR, x, ξ ∈ R d . (7.4)
Moreover the remainder Q WKB (t, h, N ) satisfies
|| D s Q WKB (t, h, N ) D s || L 2 →L 2 ≤ C N,s h N −2s |t|, h ∈ (0, h 0 ], |t| ≤ δR.
Sketch of the proof. The proof is similar to that of Theorem 5.8 and, in particular, the proof of the second claim is completely the same. Thus, we give only the outline of the construction of Φ h . We may assume C 0 = 1 without loss of generality. Let us denote by (X h , Ξ h ) the Hamilton flow generated by p h . To construct the phase function, the most important step is to study the inverse map of (x, ξ) → (X h (t, x, ξ), ξ). Choose an open interval Ĩ 1 so that I 1 ⋐ Ĩ 1 ⋐ (0, ∞).
The following bounds have been proved by [25]:
|∂ α x ∂ β ξ (X h (t, x, ξ) − x)| + x |∂ α x ∂ β ξ (Ξ h (t, x, ξ) − ξ)| ≤ C αβ x −|α| |t|,
for (x, ξ) ∈ Γ(R/3, h/3, I 1 ) and |t| ≤ δR. For sufficiently small δ > 0 and for any fixed |t| ≤ δR, the above estimates imply |∂ x X h (t) − Id | ≤ CR −1 |t| ≤ Cδ < 1/2.
By the same argument as in the proof of Lemma 5.2, the map (x, ξ) → (X h (t, x, ξ), ξ) is a diffeomorphism from Γ(R/3, h/3, I 1 ) onto its range, and the corresponding inverse (x, ξ) → (Y h (t, x, ξ), ξ) is well-defined for |t| < δR and (x, ξ) ∈ Γ(R/2, h/2, I 1 ). Moreover, Y h (t) satisfies the same estimates as those for X h (t):
|∂ α x ∂ β ξ (Y h (t, x, ξ) − x)| ≤ C αβ x −|α| |t|, |t| < δR, (x, ξ) ∈ Γ(R/2, h/2, I 1 )
We now define Φ h by

Φ h (t, x, ξ) := x · ξ + ∫_0^t L h ( X h (s, Y h (t, x, ξ), ξ), Ξ h (s, Y h (t, x, ξ), ξ) ) ds,
where L h = ξ · ∂ ξ p h − p h . By the standard Hamilton-Jacobi theory, Φ h solves (7.2). Moreover, using the estimates on X h , Ξ h and Y h , we see that

|p h (∂ x Φ h , ξ) − p h (x, ξ)| ≤ |Y h (t) − x| ∫_0^1 |(∂ x p h )(λ Y h (t) + (1 − λ)x, ξ)| dλ ≤ C|Y h (t) − x|(h + h 2 ⟨x⟩ 2 )
≤ Ch|t|, and that |∂ α x ∂ β ξ (p h (∂ x Φ h , ξ) − p h (x, ξ))| ≤ C αβ ⟨x⟩^{−|α|} h|t|. Using these estimates, we can check that Φ h satisfies (7.3). Finally, we extend Φ h to the whole space so that Φ h (t, x, ξ) = x · ξ − t p h (x, ξ) outside Γ(R/3, h/3, I 1 ).

Using Proposition 7.1, we obtain a refinement of Proposition 6.2:

Proposition 7.4. Let 0 < R ≤ 1/h and let a h ∈ S(1, g) be supported in {(x, ξ); R < |x| < 1/h, |ξ| ∈ I}. Then, for any T > 0 and (p, q) satisfying the admissible condition (1.1),

|| Op h (a h )e −itH ϕ|| L p ([−T,T ];L q (R d )) ≤ C T || Op h (a h )ϕ|| L 2 (R d ) + C T h ||ϕ|| L 2 (R d ) + C T (hR) −1/2 || Op h (a h )e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) + C T (hR) 1/2 || [H, Op h (a h )]e −itH ϕ|| L 2 ([−T,T ];L 2 (R d )) ,

uniformly with respect to h ∈ (0, h 0 ].

Proof. The proof is similar to that of [4, Proposition 5.4]. By time reversal invariance we can restrict our considerations to the interval [0, T ]. We may assume T ≥ hR without loss of generality and split [0, T ] as follows: [0, T ] = J 0 ∪ J 1 ∪ · · · ∪ J N , where J j = [jhR, (j + 1)hR], 0 ≤ j ≤ N − 1, and J N = [T − δhR, T ]. For j = 0, we have the Duhamel formula

Op h (a h )e −itH = e −itH Op h (a h ) − i ∫_0^t e −i(t−s)H [Op h (a h ), H]e −isH ds, t ∈ J 0 .

We here choose b h ∈ S(1, g) so that b h ≡ 1 on supp a h and b h is supported in a sufficiently small neighborhood of supp a h . By Proposition 7.1, Op h (b h )e −i(t−s)H Op h (b h ) * satisfies the dispersive estimate (7.1) for 0 < |t − s| < δhR with some δ > 0 small enough. Using the Keel-Tao theorem [20] and the unitarity of e −itH , we then learn that for any interval J R of size |J R | ≤ 2hR, the following homogeneous and inhomogeneous Strichartz estimates hold uniformly with respect to h ∈ (0, h 0 ]:

|| Op h (b h )e −itH ϕ|| L p (J R ;L q (R d )) ≤ C ||ϕ|| L 2 (R d ) , (7.5)

|| ∫_0^t F(s ∈ J R ) Op h (b h )e −i(t−s)H Op h (b h ) * g(s) ds || L p (J R ;L q (R d )) ≤ C ||g|| L 1 (J R ;L 2 (R d )) . (7.6)

On the other hand, using the expansions (2.3) and (2.4), we see that for any integer M ≥ 0,

Op h (a h ) = Op h (b h ) Op h (a h ) + h M Op h (r 1 ) = Op h (b h ) * Op h (a h ) + h M Op h (r 2 ),
[Op h (a h ), H] = Op h (b h ) * [Op h (a h ), H] + h M Op h (r 3 ),

with some r 1 , r 2 , r 3 ∈ S(⟨x⟩ −M ⟨ξ⟩ −M , g). Therefore, we can write

Op h (a h )e −itH = Op h (b h )e −itH Op h (a h ) − i ∫_0^t Op h (b h )e −i(t−s)H Op h (b h ) * [Op h (a h ), H]e −isH ds + Q(t, h, M ),

where the remainder Q(t, h, M ) satisfies

||Q(t, h, M )|| L 2 →L q ≤ C M h M −1−d(1/2−1/q) , 2 ≤ q ≤ ∞,

uniformly in h ∈ (0, 1]. Combining this estimate with (7.5) and (7.6), we obtain

|| Op h (a h )e −itH ϕ|| L p (J 0 ;L q ) ≤ C || Op h (a h )ϕ|| L 2 + Ch ||ϕ|| L 2 + C ||[Op h (a h ), H]e −itH ϕ|| L 1 (J 0 ;L 2 )
≤ C || Op h (a h )ϕ|| L 2 + Ch ||ϕ|| L 2 + C(hR) 1/2 ||[Op h (a h ), H]e −itH ϕ|| L 2 (J 0 ;L 2 ) .

We similarly obtain the same bound for j = N :

|| Op h (a h )e −itH ϕ|| L p (J N ;L q ) ≤ C || Op h (a h )ϕ|| L 2 + Ch ||ϕ|| L 2 + C(hR) 1/2 ||[Op h (a h ), H]e −itH ϕ|| L 2 (J N ;L 2 ) .

For j = 1, 2, ..., N − 1, taking θ ∈ C ∞ 0 (R) so that θ ≡ 1 on [−1/2, 1/2] and supp θ ⊂ [−1, 1], we set θ j (t) = θ(t/(hR) − j − 1/2). It is easy to see that θ j ≡ 1 on J j and supp θ j ⊂ J̃ j = J j + [−hR/2, hR/2]. We consider v j = θ j (t) Op h (a h )e −itH ϕ, which solves

i∂ t v j = Hv j + θ ′ j Op h (a h )e −itH ϕ + θ j [Op h (a h ), H]e −itH ϕ; v j | t=0 = 0.

The Duhamel formula then yields, up to a remainder which is handled exactly as in the case j = 0,

v j (t) = −i ∫_0^t Op h (b h )e −i(t−s)H Op h (b h ) * ( θ ′ j (s) Op h (a h ) + θ j (s)[Op h (a h ), H] ) e −isH ϕ ds, t ∈ J j ,

which, combined with the same argument as above, implies

||v j || L p (J j ;L q ) ≤ Ch 2 ||ϕ|| L 2 + C(hR) −1 || Op h (a h )e −itH ϕ|| L 1 ( J̃ j ;L 2 ) + C ||[Op h (a h ), H]e −itH ϕ|| L 1 ( J̃ j ;L 2 )
≤ Ch 2 ||ϕ|| L 2 + C(hR) −1/2 || Op h (a h )e −itH ϕ|| L 2 ( J̃ j ;L 2 ) + C(hR) 1/2 ||[Op h (a h ), H]e −itH ϕ|| L 2 ( J̃ j ;L 2 ) .

Summing over j = 0, 1, ..., N , since N ≤ T /h and p ≥ 2, we have the assertion by Minkowski's inequality.
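A brief sketch of this last summation (using only the bounds displayed above): the term C|| Op h (a h )ϕ|| L 2 occurs only for j = 0 and j = N ; the terms Ch 2 ||ϕ|| L 2 , summed over at most N ≤ T /h intervals and measured in ℓ p , contribute at most C(T /h) 1/p h 2 ≤ C T h ||ϕ|| L 2 since p ≥ 2 and h ≤ 1; and the L 2 ( J̃ j ; L 2 ) terms are summed via the embedding ℓ 2 ⊂ ℓ p (p ≥ 2) together with Σ j ||F || 2 L 2 ( J̃ j ;L 2 ) ≤ 3 ||F || 2 L 2 ([0,T ];L 2 ) , which holds because every point of [0, T ] lies in at most three of the intervals J̃ j .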
Proof of Theorem 1.5. In view of Corollary 2.5, Theorem 1.4 and Proposition 3.2, it suffices to show that, for any a h ∈ S(1, g) with supp a h ⊂ {(x, ξ); 2 ≤ |x| ≤ 1/h, |ξ| ∈ I} and any ε > 0,

Σ_h || Op h (a h )e −itH f (h 2 H)ϕ|| 2 L p ([−T,T ];L q ) ≤ C T,ε || ⟨H⟩ ε ϕ|| 2 L 2 .

Let us consider a dyadic partition of unity subordinate to the region {2 ≤ |x| ≤ 1/h}, built from the cutoffs χ j (x) = χ(2 −j x), 1 ≤ j ≤ j h , where χ ∈ C ∞ 0 (R d ) with supp χ ⊂ {1/2 < |x| < 2} and j h ≤ [log(1/h)] + 1. Proposition 7.4 then implies

||χ j Op h (a h )e −itH ϕ|| L p ([−T,T ];L q ) ≤ C T ||χ j Op h (a h )ϕ|| L 2 + C T h ||ϕ|| L 2 + C T (h2 j ) −1/2 ||χ j Op h (a h )e −itH ϕ|| L 2 ([−T,T ];L 2 ) + C T (h2 j ) 1/2 ||[χ j Op h (a h ), H]e −itH ϕ|| L 2 ([−T,T ];L 2 ) .

Since 2 j−1 ≤ |x| ≤ 2 j+1 and |x| ≤ 1/h on supp χ j a h , we have, for any ε ≥ 0,

(h2 j ) −1/2 ||χ j Op h (a h )e −itH ϕ|| L 2 ([−T,T ];L 2 ) ≤ C ||χ j ⟨x⟩ −1/2−ε h −1/2−ε Op h (a h )e −itH ϕ|| L 2 ([−T,T ];L 2 ) .

Since {χ j a h , p} ∈ S(⟨x⟩ −1 ⟨ξ⟩, g), we similarly obtain

(h2 j ) 1/2 ||χ j [Op h (a h ), H]e −itH ϕ|| L 2 ([−T,T ];L 2 ) ≤ ||χ̃ j ⟨x⟩ −1/2−ε h −1/2−ε Op h (b h )e −itH ϕ|| L 2 ([−T,T ];L 2 ) + C T h ||ϕ|| L 2 ,

where χ̃ j (x) = χ̃(2 −j x) for some χ̃ ∈ C ∞ 0 (R d ) satisfying χ̃ ≡ 1 on [1/2, 2] and supp χ̃ ⊂ [1/4, 4], and b h ∈ S(1, g) is supported in a neighborhood of supp a h so that b h ≡ 1 on supp a h . Summing over 1 ≤ j ≤ j h and using the local smoothing effects (6.2), since p, q ≥ 2, we obtain

|| Op h (a h )e −itH ϕ|| 2 L p ([−T,T ];L q ) ≤ Σ_{1≤j≤j h } ||χ j Op h (a h )e −itH ϕ|| 2 L p ([−T,T ];L q )
≤ C T Σ_{1≤j≤j h } ( ||χ j Op h (a h )ϕ|| 2 L 2 + h ||ϕ|| 2 L 2 ) + C Σ_{1≤j≤j h } ||χ̃ j ⟨x⟩ −1/2−ε h −1/2−ε Op h (a h + b h )e −itH ϕ|| 2 L 2 ([−T,T ];L 2 )
≤ C T ||ϕ|| 2 L 2 + C ||⟨x⟩ −1/2−ε h −1/2−ε Op h (a h + b h )e −itH ϕ|| 2 L 2 ([−T,T ];L 2 )
≤ C T,ε h −2ε ||ϕ|| 2 L 2 ,

which implies

Σ_h || Op h (a h )e −itH f (h 2 H)ϕ|| 2 L p ([−T,T ];L q ) ≤ C T,ε Σ_h h −2ε ||f (h 2 H)ϕ|| 2 L 2 ≤ C T,ε ||⟨H⟩ ε/2 ϕ|| 2 L 2 .

This completes the proof.
Remark 1.6. (1) The estimates of the forms (1.2), (1.4) and (1.5) have been proved by
Proposition 4.1. Let us fix arbitrary open intervals I ⋐ (0, ∞), −1 < σ < 1 and L > 0. Then, there exist R 0 , h 0 > 0 and a family of smooth and real-valued functions
Theorem 4.3. Let us fix arbitrary open intervals I ⋐ I 0 ⋐ I 1 ⋐ I 2 ⋐ (0, ∞), −1 < σ < σ 0 < σ 1 < σ 2 < 1 and L > 0. Let R 0 and h 0 be as in Proposition 4.1 with I, σ replaced by I 2 , σ 2 , respectively. Then, for every integer N ≥ 0, the following hold uniformly with respect to R ≥ R 0 and h ∈ (0, h 0 ]. (1) There exists a symbol
(4.3) and (4.4), we learn that x + h (t) is well-defined on [0, ∞) × Γ + (R̃, I, σ) with R̃ = 5R/16
Lemma 4.4. Suppose that {a + h } h∈(0,1] is a bounded set in S(1, g) with symbols supported in Γ + (R, I, σ) ∩ {x; |x| < h −1 }. Then, there exists L ≥ 1 such that, for any M, s
and that supp f h ⊂ {|x| ≥ L/h}. Let L ≥ 1 be large enough. Then, under the conditions in Lemma 4.4, we have

||f h (x) ⟨D⟩ γ e −it H h /h Op h (a + h ) * ⟨D⟩ s || L 2 →L 2 ≤ C M,s,γ h M −s−γ ,

for any s, γ ≥ 0 and M ≥ 0, uniformly with respect to h ∈ (0, h 0 ] and 0 ≤ t ≤ 1/h.

Proof. We apply Theorem 4.3 to e −it H h /h Op h (a + h ) * and obtain

uniformly in h ∈ (0, h 0 ]. On the other hand, since || ⟨D⟩ s Op h (a + h ) ⟨D⟩ s || L 2 →L 2 ≤ C s h −2s , h ∈ (
1. Since the second term of W h 0 commutes with V h , a symbolic calculus yields that [ H h , W h 0 ] is decomposed into three parts: W h 01 ∈ S(⟨x⟩ −µ ⟨ξ⟩ 2 , g), W h 02 ∈ S(⟨x⟩ 1−µ ⟨ξ⟩, g) and W h 03 ∈ S(⟨x⟩ 2−2µ , g 0 ). We set W h 1 = W h 02 + W h 03 . By the Duhamel formula, we have

W h 1 e −iτ H h /h = e −iτ H h /h W h 1 + (i/h) ∫_0^τ e −i(τ −u) H h /h [ H h , W h 1 ] e −iu H h /h du.

By the support properties of a + h and W h 1 , we have ||W h 1 Op h (a + h ) * || L 2 →L 2 ≤ C M h M for any M ≥ 0. Again, by the symbolic calculus, we can write [ H h , W h 1 ] = W h 11 + W h 12 + W h 13 , where W h 11
Proof. This proposition follows from the L 2 -boundedness of e −itH , Propositions 2.1 and 2.4 (with ψ ε replaced by ρψ ε ) and the Minkowski inequality.

3 Proof of Theorem 1.2
Acknowledgements. The author would like to express his sincere thanks to Erik Skibsted for valuable discussions and for his hospitality at Institut for Matematiske Fag, Aarhus Universitet, where a part of this work was carried out.
References

[1] Bouclet, J.-M.: Littlewood-Paley decompositions on manifolds with ends. Bull. Soc. Math. France 138 (2010), 1-37.
[2] Bouclet, J.-M.: Semi-classical calculus on non compact manifolds with ends and weighted L p estimates. Ann. Inst. Fourier 61 (2011), 1181-1223.
[3] Bouclet, J.-M.: Strichartz estimates on asymptotically hyperbolic manifolds. Anal. PDE 4 (2011), 1-84.
[4] Bouclet, J.-M., Tzvetkov, N.: Strichartz estimates for long range perturbations. Amer. J. Math. 129 (2007), 1565-1609.
[5] Bouclet, J.-M., Tzvetkov, N.: On global Strichartz estimates for non trapping metrics. J. Funct. Analysis 254 (2008), 1661-1682.
[6] Burq, N., Gérard, P., Tzvetkov, N.: Strichartz inequalities and the nonlinear Schrödinger equation on compact manifolds. Amer. J. Math. 126 (2004), 569-605.
[7] Burq, N., Guillarmou, C., Hassell, A.: Strichartz estimates without loss on manifolds with hyperbolic trapped geodesics. Geom. Funct. Anal. 20 (2010), 627-656.
[8] Cazenave, T.: Semilinear Schrödinger equations. Courant Lect. Notes Math. vol. 10, AMS, Providence, RI, 2003.
[9] D'Ancona, P., Fanelli, L.: Smoothing estimates for the Schrödinger equation with unbounded potentials. J. Differential Equations 246 (2009), 4552-4567.
[10] D'Ancona, P., Fanelli, L., Vega, L., Visciglia, N.: Endpoint Strichartz estimates for the magnetic Schrödinger equation. J. Funct. Analysis 258 (2010), 3227-3240.
[11] Doi, S.: Smoothness of solutions for Schrödinger equations with unbounded potentials. Publ. Res. Inst. Math. Sci. 41 (2005), 175-221.
[12] Erdogan, M., Goldberg, M., Schlag, W.: Strichartz and smoothing estimates for Schrödinger operators with almost critical magnetic potentials in three and higher dimensions. Forum Math. 21 (2009), 687-722.
[13] Fujiwara, D.: Remarks on convergence of the Feynman path integrals. Duke Math. J. 47 (1980), 559-600.
[14] Gerard, C., Martinez, A.: Prolongement méromorphe de la matrice de scattering pour des problèmes à deux corps à longue portée. Ann. Inst. H. Poincaré Phys. Théor. 5 (1989), 81-100.
[15] Ginibre, J., Velo, G.: The global Cauchy problem for the non linear Schrödinger equation. Ann. IHP-Analyse non linéaire 2 (1985), 309-327.
[16] Hassell, A., Tao, T., Wunsch, J.: Sharp Strichartz estimates on nontrapping asymptotically conic manifolds. Amer. J. Math. 128 (2006), 963-1024.
[17] Helffer, B., Sjöstrand, J.: Equation de Schrödinger avec champ magnétique et équation de Harper. In Schrödinger Operators, H. Holden and A. Jensen, eds., pp. 118-197, Lecture Notes in Physics 345, Springer-Verlag, 1989.
[18] Isozaki, H., Kitada, H.: Modified wave operators with time independent modifiers. J. Fac. Sci. Univ. Tokyo 32 (1985), 77-104.
[19] Journé, J.-L., Soffer, A., Sogge, C. D.: Decay estimates for Schrödinger operators. Comm. Pure Appl. Math. 44 (1991), 573-604.
[20] Keel, M., Tao, T.: Endpoint Strichartz estimates. Amer. J. Math. 120 (1998), 955-980.
[21] Kitada, H., Kumano-go, H.: A family of Fourier integral operators and the fundamental solution for a Schrödinger equation. Osaka J. Math. 18 (1981), 291-360.
[22] Martinez, A.: An Introduction to Semiclassical and Microlocal Analysis. Universitext, Springer-Verlag, New York, 2002.
[23] Marzuola, J., Metcalfe, J., Tataru, D.: Strichartz estimates and local smoothing estimates for asymptotically flat Schrödinger equations. J. Funct. Analysis 255 (2008), 1497-1553.
[24] Mizutani, H.: Strichartz estimates for Schrödinger equations on scattering manifolds. Comm. Partial Differential Equations 37 (2012), 169-224.
[25] Mizutani, H.: Strichartz estimates for Schrödinger equations with variable coefficients and potentials at most linear at spatial infinity. Submitted. (http://arxiv.org/abs/1108.2103)
[26] Robbiano, L., Zuily, C.: Strichartz estimates for Schrödinger equations with variable coefficients. Mém. SMF, Math. Fr. (N.S.), No. 101-102, 1-208 (2005).
[27] Robert, D.: Autour de l'approximation semi-classique. Progr. Math. 68, Birkhäuser, Basel, 1987.
[28] Robert, D.: Relative time delay for perturbations of elliptic operators and semiclassical asymptotics. J. Funct. Analysis 126 (1994), 36-82.
[29] Robert, D., Tamura, H.: Semi-classical estimates for resolvents and asymptotics for total scattering cross-sections. Ann. Inst. Henri Poincaré 46 (1987), 415-442.
[30] Schlag, W.: Dispersive estimates for Schrödinger operators: a survey. Mathematical aspects of nonlinear dispersive equations, Ann. of Math. Stud. 163, Princeton Univ. Press, Princeton, NJ (2007), 255-285.
[31] Staffilani, G., Tataru, D.: Strichartz estimates for a Schrödinger operator with non smooth coefficients. Comm. Partial Differential Equations 27 (2002), 1337-1372.
[32] Strichartz, R.: Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations. Duke Math. J. 44 (1977), 705-714.
[33] Tataru, D.: Parametrices and dispersive estimates for Schrödinger operators with variable coefficients. Amer. J. Math. 130 (2008), 571-634.
[34] Yajima, K.: Existence of solutions for Schrödinger evolution equations. Comm. Math. Phys. 110 (1987), 415-426.
[35] Yajima, K.: Schrödinger evolution equation with magnetic fields. J. d'Anal. Math. 56 (1991), 29-76.
[36] Yajima, K.: Boundedness and continuity of the fundamental solution of the time dependent Schrödinger equation with singular potentials. Tohoku Math. J. 50 (1988), 577-595.
[37] Yajima, K.: Dispersive estimates for Schrödinger equations with threshold resonance and eigenvalue. Comm. Math. Phys. 259 (2005), 475-509.
[38] Yajima, K., Zhang, G.: Local smoothing property and Strichartz inequality for Schrödinger equations with potentials superquadratic at infinity. J. Differential Equations 202 (2004), 81-110.
Infinite transitivity and special automorphisms

Ivan Arzhantsev

Ark. Mat. 56 (2018). DOI: 10.4310/ARKIV.2018.v56.n1.a1. arXiv:1610.09115

Abstract. It is known that if the special automorphism group SAut(X) of a quasiaffine variety X of dimension at least 2 acts transitively on X, then this action is infinitely transitive. In this paper we question whether this is the only possibility for the automorphism group Aut(X) to act infinitely transitively on X. We show that this is the case, provided X admits a nontrivial G a - or G m -action. Moreover, 2-transitivity of the automorphism group implies infinite transitivity.
Introduction
Consider a set X, a group G and a positive integer m. An action G×X →X is said to be m-transitive if it is transitive on ordered m-tuples of pairwise distinct points in X, and is infinitely transitive if it is m-transitive for all positive integers m.
It is easy to see that the symmetric group S n acts n-transitively on a set of order n, while the action of the alternating group A n is (n−2)-transitive. A generalization of a classical result of Jordan [21] based on the classification of finite simple groups claims that there are no other m-transitive finite permutation groups with m>5.
Clearly, the group S(X) of all permutations of an infinite set X acts infinitely transitively on X. The first explicit example of an infinitely transitive and faithful action of the free group F n with the number of generators n≥2 was constructed in [33]; see [16], [24] and references therein for recent results in this direction.
Infinite transitivity on real algebraic varieties was studied in [9], [22], [23] and [30]. For multiple transitive actions of real Lie groups on real manifolds, see [10] and [29].
A classification of multiple transitive actions of algebraic groups on algebraic varieties over an algebraically closed field is obtained in [27]. It is shown there that the only 3-transitive action is the action of PGL(2) on the projective line P 1 . Moreover, for reductive groups the only 2-transitive action is the action of PGL(m+1) on P m .
In this paper we consider highly transitive actions in the category of algebraic varieties over an algebraically closed field K of characteristic zero. By analogy with the full permutation group S(X) it is natural to ask about transitivity properties for the full automorphism group Aut(X) of an algebraic variety X. The phenomenon of infinite transitivity for Aut(X) in affine and quasiaffine settings was studied in many works, see [2], [3], [6], [7], [17], [26] and [37]. The key role here is played by the special automorphism group SAut(X).
More precisely, let G a (resp. G m ) be the additive (resp. multiplicative) group of the ground field K. We let SAut(X) denote the subgroup of Aut(X) generated by all algebraic one-parameter unipotent subgroups of Aut(X), that is, subgroups in Aut(X) coming from all regular actions G a ×X →X.
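To give a concrete illustration, for X = A 2 with coordinates (x, y) the maps (x, y) → (x + s p(y), y) and (x, y) → (x, y + s q(x)), s ∈ K, with arbitrary p ∈ K[y] and q ∈ K[x], are algebraic one-parameter unipotent subgroups of Aut(A 2 ); they are the regular G a -actions exp(s p(y) ∂/∂x) and exp(s q(x) ∂/∂y), so all of them lie in SAut(A 2 ).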
Let X be an irreducible affine variety of dimension at least 2 and assume that the group SAut(X) acts transitively on the smooth locus X reg . Then [2, Theorem 0.1] claims that the action is infinitely transitive. This result can be extended to quasiaffine varieties; see [7,Theorem 2] and [17,Theorem 1.11].
We address the question whether transitivity of SAut(X) is the only possibility for the automorphism group Aut(X) of an irreducible quasiaffine variety X to act infinitely transitively on X. We show that 2-transitivity of the group Aut(X) implies transitivity of the group SAut(X) provided X admits a nontrivial G a -or G m -action; see Theorem 11 and Corollary 13. We conjecture that the assumption on existence of a nontrivial G a -or G m -action on X is not essential and 2-transitivity of Aut(X) always implies transitivity of SAut(X) and thus infinite transitivity of Aut(X) (Conjecture 16).
The quasiaffine case differs from the affine one at least by two properties: the algebra of regular functions K[X] need not be finitely generated and not every locally nilpotent derivation on K[X] gives rise to a G a -action on X. These circumstances require new ideas when transferring the proofs obtained in the affine case. Our interest in the quasiaffine case, especially when the algebra K[X] is not finitely generated, is motivated by several reasons. Homogeneous quasiaffine varieties appear naturally as homogeneous spaces X =G/H of an affine algebraic group G. By Grosshans' Theorem, the question whether the algebra K[G/H] is finitely generated is crucial for the Hilbert's fourteenth problem, see [20] and [35,Section 3.7]. The group Aut(X) acts infinitely transitively on X provided the group G is semisimple [2,Proposition 5.4]. On the other hand, quasiaffine varieties, including the ones with not finitely generated algebra of regular functions, appear as universal torsors X →X over smooth rational varieties X in the framework of the Cox ring theory, see e.g. [1,Propositions 1.6.1.6,4.3.4.5]. By [7,Theorem 3], for a wide class of varieties X arising in this construction, the special automorphism group SAut( X) acts infinitely transitively on X.
Let us give a short overview of the content of the paper. In Section 2, we recall basic facts on the correspondence between G a -actions on an affine variety X and locally nilpotent derivations of the algebra K[X]. Proposition 1 extends this correspondence to the case when X is quasiaffine.
In Section 3, we generalize the result of [4] on the automorphism group of a rigid affine variety to the quasiaffine case. Recall that an irreducible algebraic variety X is called rigid if X admits no nontrivial G a -action. Theorem 5 states that the automorphism group of a rigid quasiaffine variety contains a unique maximal torus; the proof is an adaptation of the method of [18, Section 3] to our setting.
Also, we describe all affine algebraic groups, which can be realized as a full automorphism group of a quasiaffine variety (Proposition 8); the list of such groups turns out to be surprisingly short.
Section 4 contains our main results, Theorem 11 and Corollary 13. In Corollary 14 we observe that if an irreducible quasiaffine variety X admits a nontrivial G a -or G m -action, the group Aut(X) acts on X with an open orbit O, and the action of Aut(X) is 2-transitive on O, then X is unirational. This result follows also from [34,Corollary 3].
In the last section, we discuss some questions related to Conjecture 16. We pose a problem on transitivity properties for the automorphism group on a quasiaffine variety with few locally finite automorphisms (Problem 20) and ask about classification of homogeneous algebraic varieties (Problem 21).
The author would like to thank Sergey Gaifullin, Alexander Perepechko, Andriy Regeta and Mikhail Zaidenberg for helpful comments and remarks. Also he is grateful to the anonymous referee for valuable suggestions.
Locally nilpotent derivations and G a -actions
In this section we discuss basic facts on locally nilpotent derivations and G a -actions on quasiaffine varieties; see [17, Section 1.1], [7, Section 2], and [15] for related results.
Let Every locally nilpotent derivation defines a one-parameter subgroup {exp(s∂), s∈K} of automorphisms of the algebra A. This subgroup gives rise to an algebraic action of the group G a on the algebra A. The latter means that every element a∈A is contained in a finite dimensional G a -invariant subspace U of A, and the G a -module U is rational. Conversely, the differential of an algebraic G a -action on A is a locally nilpotent derivation; see [19, Section 1.5] for details.
Assume that the domain A is finitely generated and X =Spec(A) is the corresponding irreducible affine variety. The results mentioned above establish a bijection between locally nilpotent derivations on A and algebraic actions G a ×X →X. Moreover, the algebra of invariants A Ga coincides with the kernel of the corresponding locally nilpotent derivation.
If X is an irreducible quasiaffine variety, then again every action G a ×X →X defines a locally nilpotent derivation of A:=K [X]. Since regular functions separate points on X, such a derivation determines a G a -action uniquely. At the same time, not every locally nilpotent derivation of A corresponds to a G a -action on X. For example, the derivation ∂ ∂x2 of the polynomial algebra
K[x 1 , x 2 ] does not correspond to a G a -action on X :=A 2 \{(0, 0)}, while the derivation x 1 ∂ ∂x2 does.
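Concretely, exp(s ∂/∂x 2 ) is the translation (x 1 , x 2 ) → (x 1 , x 2 + s); it sends the point (0, −s) to the removed point (0, 0), so it does not preserve X. By contrast, exp(s x 1 ∂/∂x 2 ) is the shear (x 1 , x 2 ) → (x 1 , x 2 + s x 1 ), which fixes the line {x 1 = 0} pointwise and therefore maps X = A 2 \{(0, 0)} onto itself.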
The following result seems to be known, but for lack of a precise reference we give it with a complete proof.
Proposition 1. Let X be an irreducible quasiaffine variety and A=K[X]. Then
(i) for every ∂ ∈LND(A) there exists a nonzero f ∈Ker(∂) such that the locally nilpotent derivation f∂ corresponds to a G a -action on X;
(ii) if ∂ ∈LND(A) corresponds to a G a -action on X, then for every f ∈Ker(∂) the derivation f∂ corresponds to a G a -action on X.
Proof. We begin with (i). Fix a derivation ∂ ∈LND(A) and the corresponding G a -action on A. Consider an open embedding X →Z into an irreducible affine variety Z.
Fix a finite dimensional G a -invariant subspace U in A containing a set of generators of K[Z]. Let B be the subalgebra in A generated by U and Y be the affine variety Spec(B). Since B is G a -invariant, we have the induced G a -action on Y . The inclusion B ⊆A defines an open embedding X →Y . Claim 2. Every divisor D⊆Y contained in Y \X is G a -invariant.
Proof. Assume that the variety Y is normal and take a function f ∈K(Y ) which has a pole along the divisor D. Multiplying f by a suitable function from B we may suppose that f has no pole outside D. Then f is contained in A. If the divisor D is not G a -invariant, there is an element g∈G a such that g·D intersects X. It shows that the function g·f has a pole on X and thus is not in A, a contradiction.
If Y is not normal, we lift the G a -action to the normalization of Y and apply the same arguments to integral closures of A and B.
Claim 3. There is an open
G a -invariant subset W ⊆Y which is contained in X.
Proof. Let F be the union of irreducible components of Y \X of codimension at least 2. Then the closure G a ·F is a proper closed G a -invariant subset whose complement intersected with X is the desired subset W .
Let Y 0 :=Y \W . This is a closed G a -invariant subvariety in Y and its ideal I(Y 0 ) in B is a G a -invariant subspace.
Applying the Lie-Kolchin Theorem, we find a nonzero G a -invariant function f ∈I(Y 0 ). Then f ∈Ker(∂) and the G a -action on Y corresponding to the derivation f∂ fixes all points outside W . In particular, this action induces a G a -action on X. This proves (i).
Now we come to (ii). Consider the action G a ×X →X corresponding to ∂.
Torus actions on rigid quasiaffine varieties
In this section we generalize the results of [18, Section 3] and [4, Theorem 1] to the case of a quasiaffine variety. Let us recall that an irreducible algebraic variety X is called rigid, if it admits no nontrivial G a -action.
Theorem 5. Let X be a rigid quasiaffine variety. There is a subtorus T⊆ Aut(X) such that for every torus action T ×X →X the image of T in Aut(X) is contained in T. In other words, T is a unique maximal torus in Aut(X).
Let us begin with some preliminary results.
Z \X in K[Z]. Since I is T -invariant, there is a non-constant T -semi-invariant f ∈I. The principal open subset Z f is contained in X. Since the algebra K[Z f ] is the localization K[Z] f and K[X] is contained in K[Z f ], we conclude that the algebra K[X] f =K[Z] f is finitely generated.
Let A=⊕ i∈Z A i be a graded K-algebra and ∂ : A→A a derivation. We define a linear map ∂ k : A→A by setting ∂ k (a) to be the homogeneous component ∂(a) deg(a)+k of the element ∂(a) for every homogeneous element a∈A. It is easy to check that ∂ k is a derivation for all k∈Z. We call it the kth homogeneous component of the derivation ∂.
Proof of Theorem 5. Assume that there are two torus actions T i ×X →X, i= 1, 2, such that the images of T i in Aut(X) are not contained in some torus T. The latter means that the actions do not commute. We may assume that T 1 and T 2 are one-dimensional.
··· f k a k /f a − a ∂′(f ) f 1 a 1 ··· f k a k /f a+1 .
It shows that the shift of degree with respect to the first grading from h to ∂′(h) does not exceed the maximal shift of degree for f 1 , ..., f k , f. Hence the shift is bounded and we obtain the claim. Let ∂′ m be a nonzero homogeneous component of ∂′ with maximal absolute value of the weight m. Since the derivations ∂ and ∂′ do not commute, we have m ≠ 0. Then (∂′ m ) r (a) is the highest (or the lowest) homogeneous component of the element (∂′) r (a) for every homogeneous a∈A. Since a is contained in a finite dimensional ∂′-invariant subspace in A, the elements (∂′) r (a) cannot have nonzero projections to infinitely many components A u . Thus (∂′ m ) r (a)=0 for r ≫ 0. We conclude that ∂′ m is a nonzero locally nilpotent derivation of the algebra A. By Corollary 4, we obtain a contradiction with the condition that X is rigid.
Corollary 7.
In the setting of Theorem 5, the maximal torus T is a normal subgroup of Aut(X).
Let us finish this section with a description of affine algebraic groups which can be realized as automorphism groups of quasiaffine varieties. When this paper was already written, I found the same result in [28, Theorem 1.3], cf. also [32, Theorem 4.10 (a)].
Proposition 8. Let X be an irreducible quasiaffine variety. Assume that the automorphism group Aut(X) admits a structure of an affine algebraic group such that the action Aut(X)×X →X is a morphism of algebraic varieties. Then either Aut(X) is finite, or isomorphic to a finite extension of a torus, or isomorphic to the linear group
G = { ( 1, 0; a, t ) : a ∈ K, t ∈ K × }, i.e., the group of lower triangular 2 × 2 matrices with first row (1, 0) and second row (a, t).
Proof. We assume first that X is a rational curve. If X =A 1 then Aut(X) is isomorphic to the group G. If X is A 1 with one point removed, then Aut(X) is an extension of 1-torus. If we remove more than one point from A 1 , the group Aut(X) becomes finite. For a singular rational curve X, the automorphism group Aut(X) lifts to normalization and preserves the preimage of the singular locus. Thus Aut(X) is contained in an extension of 1-torus.
It follows from the description of the automorphism group of an elliptic curve and from Hurwitz's Theorem that the automorphism group of an affine curve X of positive genus is finite. Now let us assume that dim X ≥2. If X is rigid then the affine algebraic group Aut(X) contains no one-parameter unipotent subgroup. It means that the unipotent radical and the semisimple part of Aut(X) are trivial. Hence Aut(X) is either finite or a finite extension of a torus.
Finally, let G a ×X →X be a non-trivial action and ∂ ∈LND(K[X]) the corresponding locally nilpotent derivation. By [19,Principle 11], the transcendence degree of the algebra Ker(∂) equals dim(X)−1≥1. Let U be a subspace in Ker(∂). Proposition 1, (ii) implies that the automorphisms exp(f∂), f ∈U , form a commutative unipotent subgroup in Aut(X) of dimension dim(U ). Since dim(U ) may be arbitrary, the group Aut(X) does not admit a structure of an affine algebraic group.
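For instance, for X = A 2 and ∂ = ∂/∂y one may take U ⊂ Ker(∂) = K[x] to be the space of polynomials of degree less than n; the corresponding automorphisms exp(f∂) : (x, y) → (x, y + f (x)), f ∈ U , form a commutative unipotent subgroup of Aut(A 2 ) of dimension n, for every n.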
Remark 9. Many examples of affine algebraic varieties whose automorphism group is a finite extension of a torus are provided by trinomial hypersurfaces, see [4,Theorem 3].
Remark 10. The class of affine algebraic groups which can be realized as the automorphism groups of complete varieties is much wider. For example, the automorphism group of a complete toric variety is always an affine algebraic group of type A. A description of such groups is given in [13] and [14]. Some other affine algebraic groups appear as the automorphism groups of Mori Dream Spaces; see e.g. [5,Theorem 7.2]. It is shown in [11,Theorem 1] that any connected algebraic group over a perfect field is the neutral component of the automorphism group scheme of some normal projective variety.
Main results
We come to a characterization of transitivity properties for the automorphism group Aut(X) in terms of the special automorphism group SAut(X). ( 1 ) Assume first that there is a nontrivial G a -action on X. Let us take two distinct points x 1 and x 2 in O on one G a -orbit. By assumption, for every distinct points y 1 , y 2 ∈O there exists an automorphism ϕ∈Aut(X) with ϕ(x i )=y i , i=1, 2. Then the points y 1 and y 2 lie in the same orbit for the G a -action obtained from the initial one by conjugation with ϕ. It means that the group SAut(X) acts transitively on O.
Now assume that X is rigid and admits a nontrivial G m -action. If the maximal torus T from Theorem 5 acts transitively on O, then O is isomorphic to the torus T and Aut(X) acts on O transitively, but not 2-transitively. Indeed, let us fix an isomorphism between O and (K × ) n . The group Aut(O) is isomorphic to a semidirect product of T and the group GL n (Z). It shows that the stabilizer in Aut(O) of the unit in (K × ) n preserves the set of points with rational coordinates. Consequently, the group Aut(O), and thus the group Aut(X), cannot act 2-transitively on O.
Now assume that the action of T is not transitive on O. Let us take points x 1 , x 2 , x 3 ∈O such that x 1 =x 2 lie in the same T-orbit and x 3 belongs to other T-orbit. By Corollary 4, every automorphism of X permutes T-orbits on X and thus there is no automorphism preserving x 1 and sending x 2 to x 3 , a contradiction with 2-transitivity.
This completes the proof of Theorem 11.
Remark 12. Implication (1)⇒(3) for an affine variety X admitting a nontrivial G a -action was observed earlier in [12].
Corollary 13. Let X be an irreducible quasiaffine variety of dimension at least 2. Assume that X admits a nontrivial G a -or G m -action. Then the following conditions are equivalent.
(1) The group Aut(X) acts 2-transitively on X.
(2) The group Aut(X) acts infinitely transitively on X.
(3) The group SAut(X) acts transitively on X.
(4) The group SAut(X) acts infinitely transitively on X.
We recall that the Makar-Limanov invariant ML(A) of an algebra A is the intersection of kernels of all locally nilpotent derivations on A. Using Proposition 1, one can easily show that the Makar-Limanov invariant ML(K[X]) of the algebra of regular functions on an irreducible quasiaffine variety X coincides with the algebra of invariants K[X] SAut(X) of the special automorphism group. We denote ML(K[X]) just by ML(X). Note that a quasiaffine variety X is rigid if and only if ML(X)= K[X].
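For example, ML(A 2 ) = K: the kernels of the locally nilpotent derivations ∂/∂x and ∂/∂y on K[x, y] are K[y] and K[x], respectively, and they intersect in K. Thus the affine plane is as far from being rigid as possible.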
In [31], a field version of the Makar-Limanov invariant is introduced. Namely, the field Makar-Limanov invariant FML(X) of an irreducible quasiaffine variety X is the subfield of K(X) consisting of all rational SAut(X)-invariants. The condition FML(X)=K implies ML(X)=K, but the converse is not true in general. By [ [34,Theorem 5]. The latter theorem claims that if X is an irreducible variety, the group Aut(X) acts generically 2-transitive on X, and Aut(X) contains a non-trivial connected algebraic subgroup, then X is unirational. Moreover, if X is irreducible, complete, and the group Aut(X) acts generically 2-transitive on X, then X is unirational [34,Corollary 3].
Let us finish this section with the following conjecture. Remark 17. Jelonek [25] has proved that every quasiaffine variety X with an infinite automorphism group is uniruled, i.e., for a generic point in X there exists a rational curve in X through this point.
Concluding remarks and questions
In this section we discuss some results and questions related to Conjecture 16. Let φ be an automorphism of a quasiaffine variety X and φ * be the induced automorphism of the algebra K[X]. We say that φ is locally finite if every element of K[X] is contained in a finite dimensional φ * -invariant subspace.
The following fact is well known to experts, but for the convenience of the reader we give it with a short proof. Proposition 18. Let X be an irreducible quasiaffine variety and φ an automorphism of X. The following conditions are equivalent.
(1) There exists a regular action G×X →X of an affine algebraic group G on X such that φ is contained in the image of G in the group Aut(X).
(2) The automorphism φ is locally finite.
Proof. For implication (1)⇒(2), see e.g. [35, Lemma 1.4]. Conversely, assume that φ is locally finite and let U be a finite-dimensional φ * -invariant subspace in K[X] which generates a subalgebra A in K[X] such that the morphism X →Z := Spec(A) is an open embedding. Let G be the subgroup of all automorphisms of X that preserve the subspace U . Since U generates the field K(X), the group G is a subgroup of the general linear group GL(U ). Moreover, every element of G induces an automorphism of Z. The subgroup G′ of all elements of GL(U ) which induce an automorphism of Z is closed in GL(U ), and G is the subgroup of G′ consisting of the automorphisms of Z which preserve the (closed) subvariety Z \X. This proves that G is an affine algebraic group.
Remark 19. For further characterizations of automorphisms belonging to algebraic subgroups of Aut(X), see [36].
Clearly, every automorphism of finite order is locally finite. The condition that a quasiaffine variety X admits no nontrivial actions of the groups G a and G m means that every locally finite automorphism of X has finite order. is an automorphism of X and the function T 1 is not contained in a finite dimensional φ * -invariant subspace of K[X]. An automorphism of the affine plane A 2 which is not locally finite may be given as (x, y) −→ (x+y 2 , x+y+y 2 ).
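Indeed, for the automorphism φ(t 1 , t 2 ) = (t 1 t 2 , t 2 ) of the 2-torus one computes φ * (T 1 ) = T 1 T 2 and, inductively, (φ * ) n (T 1 ) = T 1 T 2 n ; these monomials are linearly independent, so no finite dimensional φ * -invariant subspace of K[X] can contain T 1 .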
More examples of automorphisms which are not locally finite can be found in [8]. The authors describe a family of rational affine surfaces S such that the normal subgroup Aut(S) alg of Aut(S) generated by all algebraic subgroups of Aut(S) is not generated by any countable family of such subgroups, and the quotient Aut(S)/ Aut(S) alg contains a free group over an uncountable set of generators. A description of automorphisms in [8] is given in a purely geometric terms. It seems to be an important problem to find more methods for constructing automorphisms of quasiaffine varieties which are not locally finite.
Working with Conjecture 16, one may wish to replace an arbitrary quasiaffine variety by a quasiaffine variety admitting a nontrivial G a -or G m -action. For example, let X be an irreducible quasiaffine variety such that the group Aut(X) is 2-transitive on X. Is it true that the group Aut(X ×A 1 ) is 2-transitive on X ×A 1 ? This question is related to algebraic families of automorphisms in the sense of [36].
Let us finish this section with a general problem on transitivity for algebraic varieties. We say that an algebraic variety X is homogeneous if the group Aut(X) acts transitively on X. A wide class of homogeneous varieties form homogeneous spaces of algebraic groups. At the same time, not every homogeneous variety is homogeneous with respect to an algebraic group; an example of a homogeneous quasiaffine toric surface which is not a homogeneous space of an algebraic group is given in [6, Example 2.2]. More generally, it follows from [6, Theorem 2.1] that every smooth quasiaffine toric variety is homogeneous. We plan to describe all homogeneous toric varieties in a forthcoming publication.
Problem 21. Describe all homogeneous algebraic varieties.
Conjecture 16 can be considered as a first step towards the solution of this problem.
Let A be a K-domain and ∂ : A→A a derivation, i.e., a linear map satisfying the Leibniz rule ∂(ab)=∂(a)b+a∂(b) for all a, b∈A. The derivation ∂ is called locally nilpotent if for any a∈A there exists a positive integer m such that ∂ m (a)=0. Let us denote the set of all locally nilpotent derivations of A by LND(A). Clearly, if ∂ ∈LND(A) and f ∈Ker(∂), then f∂∈LND(A).
By [35, Theorem 1.6], there is an open equivariant embedding X →Y into an affine variety Y . For any f ∈Ker(∂), the orbits of the G a -action on Y corresponding to f∂ coincide with the orbits of the original actions on Y \{f =0}, while all points of the set {f =0} become fixed. In particular, this action leaves the set X invariant. This completes the proof of Proposition 1.
Corollary 4. Let X be an irreducible quasiaffine variety and A=K[X]. The variety X admits a nontrivial G a -action if and only if there is a nonzero locally nilpotent derivation on A.
Lemma 6. Let X be an irreducible quasiaffine variety and T ×X →X be an action of a torus. Then there is a T -semi-invariant f ∈K[X] such that the localization K[X] f is finitely generated.

Proof. By [35, Theorem 1.6], there exists an open equivariant embedding X →Z into an irreducible affine T -variety Z. Let I be the ideal of the subvariety
Theorem 11. Let X be an irreducible quasiaffine variety of dimension at least 2. Assume that X admits a nontrivial G a - or G m -action and the group Aut(X) acts on X with an open orbit O. Then the following conditions are equivalent.
(1) The group Aut(X) acts 2-transitively on O.
(2) The group Aut(X) acts infinitely transitively on O.
(3) The group SAut(X) acts transitively on O.
(4) The group SAut(X) acts infinitely transitively on O.

Proof. Let us prove implications (1)⇒(3)⇒(4)⇒(2)⇒(1). Implications (4)⇒(2)⇒(1) are obvious. Implication (3)⇒(4) is proved in [2, Theorem 2.2] for X affine and in [7, Theorem 2], [17, Theorem 1.11] for X quasiaffine. It remains to prove (1)⇒(3).
Conjecture 16. Conditions (1)-(4) of Theorem 11 are equivalent for any irreducible quasiaffine variety X of dimension at least 2.
Problem 20. Let X be an irreducible quasiaffine variety such that every locally finite automorphism of X has finite order. Can the group Aut(X) act transitively (2-transitively, infinitely transitively) on X?

Let us give examples of automorphisms which are not locally finite. Let X be a 2-torus with the algebra of regular functions K[X]=K[T 1 , T 1 −1 , T 2 , T 2 −1 ]. Then the map φ : (t 1 , t 2 ) −→ (t 1 t 2 , t 2 )
Let A:=K[X] and let A = ⊕ u∈Z A u and A = ⊕ u∈Z A′ u be the gradings corresponding to the actions of T 1 and T 2 , respectively. Consider semisimple derivations ∂ and ∂′ on A defined by ∂(a)=ua for every a∈A u and ∂′(b)=ub for every b∈A′ u . Let ∂′ k be the kth homogeneous component of ∂′ with respect to the first grading. We claim that there are only finitely many nonzero homogeneous components and thus the sum ∂′ = Σ k∈Z ∂′ k has only a finite number of nonzero terms. Consider a localization K[X] f from Lemma 6, where f is homogeneous with respect to the first grading. The algebra K[X] f is generated by some elements f 1 , ..., f k ∈K[X], which are homogeneous with respect to the first grading, and the element 1/f. Since K[X] is contained in K[X] f , every element h∈K[X] is a linear combination of elements of the form f 1 a 1 ··· f k a k /f a , and the image ∂′(h) is a linear combination of the elements Σ s a s ∂′(f s ) f 1 a 1 ··· f s a s −1 ···
2 ,
[2, Corollary 1.14], we have FML(X)=K if and only if the group SAut(X) acts on X with an open orbit. In this case the variety X is unirational [2, Proposition 5.1]. Together with Theorem 11 this yields the following result.

Corollary 14. Let X be an irreducible quasiaffine variety. Assume that X admits a nontrivial G a - or G m -action and the group Aut(X) acts on X with an open orbit O. If the group Aut(X) is 2-transitive on O, then X is unirational.

Remark 15. Corollary 14 is a particular case of
( 1 ) This is the only implication where we use the condition on the existence of a G a - or G m -action.
. I Arzhantsev, U Derenthal, J Hausen, A Laface, Adv. Math. 144Cambridge University PressArzhantsev, I., Derenthal, U., Hausen, J. and Laface, A., Cox Rings, Cam- bridge Studies in Adv. Math. 144, Cambridge University Press, New York, 2015.
Flexible varieties and automorphism groups. I Arzhantsev, H Flenner, S Kaliman, F Kutzschebauch, M Zaidenberg, Duke Math. J. 162Arzhantsev, I., Flenner, H., Kaliman, S., Kutzschebauch, F. and Zaiden- berg, M., Flexible varieties and automorphism groups, Duke Math. J. 162 (2013), 767-823.
Infinite transitivity on affine varieties. I Arzhantsev, H Flenner, S Kaliman, F Kutzschebauch, M Zaidenberg, Birational Geometry, Rational Curves, and Arithmetic, Simons Symposium. New YorkSpringerArzhantsev, I., Flenner, H., Kaliman, S., Kutzschebauch, F. and Zaiden- berg, M., Infinite transitivity on affine varieties, in Birational Geometry, Ra- tional Curves, and Arithmetic, Simons Symposium, 2012, pp. 1-13, Springer, New York, 2013.
The automorphism group of a rigid affine variety. I Arzhantsev, S Gaifullin, Math. Nachr. 290Arzhantsev, I. and Gaifullin, S., The automorphism group of a rigid affine variety, Math. Nachr. 290 (2017), 662-671.
The automorphism group of a variety with torus action of complexity one. I Arzhantsev, J Hausen, E Herppich, A Liendo, Mosc. Math. J. 14Arzhantsev, I., Hausen, J., Herppich, E. and Liendo, A., The automorphism group of a variety with torus action of complexity one, Mosc. Math. J. 14 (2014), 429-471.
Flag varieties, toric varieties, and suspensions: three instances of infinite transitivity. I Arzhantsev, K Kuyumzhiyan, M Zaidenberg, Sb. Math. 203Arzhantsev, I., Kuyumzhiyan, K. and Zaidenberg, M., Flag varieties, toric vari- eties, and suspensions: three instances of infinite transitivity, Sb. Math. 203 (2012), 923-949.
Infinite transitivity on universal torsors. I Arzhantsev, A Perepechko, H Suess, J. Lond. Math. Soc. 89Arzhantsev, I., Perepechko, A. and Suess, H., Infinite transitivity on universal torsors, J. Lond. Math. Soc. 89 (2014), 762-778.
Affine surfaces with a huge group of automorphisms. J Blanc, A Dubouloz, Int. Math. Res. Not. Blanc, J. and Dubouloz, A., Affine surfaces with a huge group of automorphisms, Int. Math. Res. Not. 2015 (2015), 422-459.
Geometrically rational real conic bundles and very transitive actions. J Blanc, F Mangolte, Compos. Math. 147Blanc, J. and Mangolte, F., Geometrically rational real conic bundles and very transitive actions, Compos. Math. 147 (2011), 161-187.
Les bouts des espaces homogènes de groupes de Lie. A Borel, Ann. of Math. 2Borel, A., Les bouts des espaces homogènes de groupes de Lie, Ann. of Math. (2) 58 (1953), 443-457.
On automorphisms and endomorphisms of projective varieties, in Automorphisms in Birational and Affine Geometry. M Brion, Springer Proc. Math. Stat. 79SpringerBrion, M., On automorphisms and endomorphisms of projective varieties, in Auto- morphisms in Birational and Affine Geometry, Springer Proc. Math. Stat. 79, Levico Terme, Italy, October 2012, pp. 59-81, Springer, 2014.
Affine multiplicative flexible varieties. R Budylin, S Gaifullin, A Trushin, in preparationBudylin, R., Gaifullin, S. and Trushin, A., Affine multiplicative flexible varieties, in preparation, 2016.
The homogeneous coordinate ring of a toric variety. D Cox, J. Algebraic Geom. 4Cox, D., The homogeneous coordinate ring of a toric variety, J. Algebraic Geom. 4 (1995), 17-50.
Sous-groupes algebriques de rang maximum du groupe de Cremona. M Demazure, Ann. Sci. Éc. Norm. Supér. 3Demazure, M., Sous-groupes algebriques de rang maximum du groupe de Cremona, Ann. Sci. Éc. Norm. Supér. 3 (1970), 507-588.
Rationally integrable vector fields and rational additive group actions. A Dubouloz, A Liendo, Int. J. Math. 27Dubouloz, A. and Liendo, A., Rationally integrable vector fields and rational addi- tive group actions, Int. J. Math. 27 (2016), 19 pages.
Highly transitive actions of groups acting on trees. P Fima, S Moon, Y Stalder, Proc. Amer. Math. Soc. 143Fima, P., Moon, S. and Stalder, Y., Highly transitive actions of groups acting on trees, Proc. Amer. Math. Soc. 143 (2015), 5083-5095.
The Gromov-Winkelmann theorem for flexible varieties. H Flenner, S Kaliman, M Zaidenberg, J. Eur. Math. Soc. 18JEMS)Flenner, H., Kaliman, S. and Zaidenberg, M., The Gromov-Winkelmann theorem for flexible varieties, J. Eur. Math. Soc. (JEMS) 18 (2016), 2483-2510.
On the uniqueness of C * -actions on affine surfaces, in Affine Algebraic Geometry. H Flenner, M Zaidenberg, Contemp. Math. 369Amer. Math. SocFlenner, H. and Zaidenberg, M., On the uniqueness of C * -actions on affine surfaces, in Affine Algebraic Geometry, Contemp. Math. 369, pp. 97-111, Amer. Math. Soc., Providence, RI, 2005.
Algebraic Theory of Locally Nilpotent Derivations. G Freudenburg, Encyclopaedia Math. Sci. 136SpringerFreudenburg, G., Algebraic Theory of Locally Nilpotent Derivations, Encyclopaedia Math. Sci. 136, Springer, Berlin, 2006.
Observable groups and Hilbert's fourteenth problem. F Grosshans, Amer. J. Math. 95Grosshans, F., Observable groups and Hilbert's fourteenth problem, Amer. J. Math. 95 (1973), 229-253.
Théorèmes sur les groupes primitifs. C Jordan, J. Math. Pures Appl. 6Jordan, C., Théorèmes sur les groupes primitifs, J. Math. Pures Appl. 6 (1871), 383-408.
The group of automorphisms of a real rational surface is n-transitive. J Huisman, F Mangolte, Bull. Lond. Math. Soc. 41Huisman, J. and Mangolte, F., The group of automorphisms of a real rational surface is n-transitive, Bull. Lond. Math. Soc. 41 (2009), 563-568.
Automorphisms of real rational surfaces and weighted blow-up singularities. J Huisman, F Mangolte, Manuscripta Math. 132Huisman, J. and Mangolte, F., Automorphisms of real rational surfaces and weighted blow-up singularities, Manuscripta Math. 132 (2010), 1-17.
Transitivity degrees of countable groups and acylindrical hyperbolicity. M Hull, D Osin, Israel J. Math. 216Hull, M. and Osin, D., Transitivity degrees of countable groups and acylindrical hyperbolicity, Israel J. Math. 216 (2016), 307-353.
On the group of automorphisms of a quasi-affine variety. Z Jelonek, Math. Ann. 362Jelonek, Z., On the group of automorphisms of a quasi-affine variety, Math. Ann. 362 (2015), 569-578.
Affine modifications and affine hypersurfaces with a very transitive automorphism group. S Kaliman, M Zaidenberg, Transform. Groups. 4Kaliman, S. and Zaidenberg, M., Affine modifications and affine hypersurfaces with a very transitive automorphism group, Transform. Groups 4 (1999), 53-95.
Mehrfach transitive Operationen algebraischer Gruppen. F Knop, Arch. Math. 41Knop, F., Mehrfach transitive Operationen algebraischer Gruppen, Arch. Math. 41 (1983), 438-446.
Automorphism groups of affine varieties and a characterization of affine n-space. H Kraft, Trans. Moscow Math. Soc. 78Kraft, H., Automorphism groups of affine varieties and a characterization of affine n-space, Trans. Moscow Math. Soc. 78 (2017), 171-186.
Two-transitive Lie groups. L Kramer, J. Reine Angew. Math. 563Kramer, L., Two-transitive Lie groups, J. Reine Angew. Math. 563 (2003), 83-113.
Infinitely transitive actions on real affine suspensions. K Kuyumzhiyan, F Mangolte, J. Pure Appl. Algebra. 216Kuyumzhiyan, K. and Mangolte, F., Infinitely transitive actions on real affine suspensions, J. Pure Appl. Algebra 216 (2012), 2106-2112.
Ga-actions of fiber type on affine T-varieties. A Liendo, J. Algebra. 324Liendo, A., Ga-actions of fiber type on affine T-varieties, J. Algebra 324 (2010), 3653-3665.
Configuration spaces of the affine line and their automorphism groups. V Lin, M Zaidenberg, Automorphisms in Birational and Affine Geometry. Levico Terme, ItalySpringer79Lin, V. and Zaidenberg, M., Configuration spaces of the affine line and their au- tomorphism groups, in Automorphisms in Birational and Affine Geometry, Springer Proc. Math. Stat. 79, Levico Terme, Italy, October 2012, pp. 431- 467, Springer, 2014.
A permutation representation of a free group. T Mcdonough, Quart. J. Math. Oxford Ser. 28McDonough, T., A permutation representation of a free group, Quart. J. Math. Oxford Ser. 28 (1977), 353-356.
On infinite dimensional algebraic transformation groups. V Popov, Transform. Groups. 19Popov, V., On infinite dimensional algebraic transformation groups, Transform. Groups 19 (2014), 549-568.
Invariant Theory. V Popov, E Vinberg, Encyclopaedia Math. Sci. 55SpringerPopov, V. and Vinberg, E., Invariant Theory, Encyclopaedia Math. Sci. 55, Springer, Berlin, 1994.
A note on automorphism group of algebraic varieties. C Ramanujam, Math. Ann. 156Ramanujam, C., A note on automorphism group of algebraic varieties, Math. Ann. 156 (1964), 25-33.
On automorphisms of matrix invariants. Z Reichstein, Trans. Amer. Math. Soc. 340Reichstein, Z., On automorphisms of matrix invariants, Trans. Amer. Math. Soc. 340 (1993), 353-371.
| []
|
[
"Non-relativistic M-Theory solutions based on Kähler-Einstein spaces",
"Non-relativistic M-Theory solutions based on Kähler-Einstein spaces"
]
| [
"Eoinó Colgáin \nKorea Institute for Advanced Study\n130-722SeoulKorea\n",
"Oscar Varela \nMax-Planck-Institut für Gravitationsphysik\nAEI\nAm Mühlenberg 1D-14476PotsdamGermany\n",
"Hossein Yavartanoo \nKorea Institute for Advanced Study\n130-722SeoulKorea\n\nJefferson Physical Laboratory\nHarvard University\n02138CambridgeMAUSA\n"
]
| [
"Korea Institute for Advanced Study\n130-722SeoulKorea",
"Max-Planck-Institut für Gravitationsphysik\nAEI\nAm Mühlenberg 1D-14476PotsdamGermany",
"Korea Institute for Advanced Study\n130-722SeoulKorea",
"Jefferson Physical Laboratory\nHarvard University\n02138CambridgeMAUSA"
]
| []
| We present new families of non-supersymmetric solutions of D = 11 supergravity with non-relativistic symmetry, based on six-dimensional Kähler-Einstein manifolds. In constructing these solutions, we make use of a consistent reduction to a five-dimensional gravity theory coupled to a massive scalar and vector field. This theory admits a non-relativistic CFT dual with dynamical exponent z = 4, which may be uplifted to D = 11 supergravity. Finally, we generalise this solution and find new solutions with various z, including z = 2. | 10.1088/1126-6708/2009/07/081 | [
"https://arxiv.org/pdf/0906.0261v3.pdf"
]
| 14,033,637 | 0906.0261 | 938a328e6132c5f5b8c1a7b1ac3a3f31e7f1886f |
Non-relativistic M-Theory solutions based on Kähler-Einstein spaces
21 Jul 2009
Eoinó Colgáin
Korea Institute for Advanced Study
130-722SeoulKorea
Oscar Varela
Max-Planck-Institut für Gravitationsphysik
AEI
Am Mühlenberg 1D-14476PotsdamGermany
Hossein Yavartanoo
Korea Institute for Advanced Study
130-722SeoulKorea
Jefferson Physical Laboratory
Harvard University
02138CambridgeMAUSA
Non-relativistic M-Theory solutions based on Kähler-Einstein spaces
21 Jul 2009
We present new families of non-supersymmetric solutions of D = 11 supergravity with non-relativistic symmetry, based on six-dimensional Kähler-Einstein manifolds. In constructing these solutions, we make use of a consistent reduction to a five-dimensional gravity theory coupled to a massive scalar and vector field. This theory admits a non-relativistic CFT dual with dynamical exponent z = 4, which may be uplifted to D = 11 supergravity. Finally, we generalise this solution and find new solutions with various z, including z = 2.
Introduction
Over the past year, non-relativistic conformal (NRC) field theories have attracted a lot of attention, primarily driven by the prospect of tailoring the AdS/CFT correspondence so that it may be used as a tool to describe condensed matter systems in a laboratory environment. These systems are described by Schrödinger symmetry, which is a non-relativistic version of conformal symmetry. The corresponding algebra is generated by Galilean transformations, an anisotropic scaling of space, x → λx, and time, x + → λ z x + , where z > 0 is a real number usually referred to as the dynamical exponent, and an additional special conformal transformation when z = 2. For NRC field theories with one time and d spatial dimensions, the corresponding symmetry algebra will be denoted Sch z (1, d).
Gravity duals for NRC field theories were initially proposed in [1,2] and were subsequently embedded in type IIB in [3,4,5] and D = 11 supergravity in [6]. The IIB solutions of [3,4,5] with z = 2 are obtained by coordinate transformations which deform the three-form flux, but in the process break supersymmetry. Other techniques that have been employed in the construction of NRC gravity duals in type IIB and D = 11 supergravity include metric deformations [7] and uplift of suitable solutions of the lower dimensional theories to which the D = 10, 11 supergravities on Sasaki-Einstein manifolds consistently truncate [5,6]. Some solutions obtained by these two methods do preserve supersymmetry [7,8]. Solutions pursued via uplift turn out to permit only set dynamical exponents, whereas more general constructions, still based on Sasaki-Einstein spaces [8,9,10], allow for richer classes of solutions with many different values of z, including z = 2 . For a selection of other works on gravity duals of NRC field theories in various dimensions, both supersymmetric and non-supersymmetric, see [11].
In all these cases, the D = 10 or D = 11 metric dual to an NRC field theory in spatial dimension d corresponds to a deformation of a given D-dimensional solution containing (d + 3)-dimensional Anti-de Sitter space, that breaks the original AdS d+3 isometry so(2, d + 2) down to its Sch z (1, d) subalgebra. The purpose of this paper is to obtain D = 11 supergravity solutions with Sch z (1, 2) symmetry, associated to the AdS 5 × KE 6 class of D = 11 supergravity solutions with KE 6 a six-dimensional
Kähler-Einstein space of positive curvature [12,13]. Interestingly enough, despite the lack of supersymmetry of the general AdS 5 × KE 6 solution for arbitrary KE 6 (see [14] for the classification of the supersymmetric M-Theory solutions containing AdS 5 ), the special case when KE 6 is CP 3 has recently been shown to be classically stable
[15]. We expect our Sch z (1, 2)-invariant solutions, dual to NRC field theories in spatial dimension d = 2, to inherit the non-supersymmetric character of the original AdS 5 × KE 6 solutions.
As mentioned earlier, the first examples of gravitational solutions dual to NRC field theories were found in lower-dimensional theories of gravity coupled to a massive vector field [1]. One benefit of much recent work on consistent Kaluza-Klein (KK) truncations [16,17,18] is that these solutions may be uplifted to type IIB [5] and D = 11 supergravity settings [6]. In a similar fashion, we will first show, in section 2, that there exists a consistent KK truncation of D = 11 supergravity on KE 6 to a D = 5 theory involving a massive vector and a massive scalar. We subsequently uplift, in section 3, a solution to the D = 5 theory to eleven-dimensions to find a new M-Theory solution with dynamical exponent z = 4. In section 4 we perform a generalisation to a class of NRC solutions obtained as deformations of the original AdS 5 × KE 6 solution that, in general, cannot be obtained from uplift. In this class, we will find new Sch z (1, 2)-invariant M-Theory solutions with different dynamical exponents z, including z = 2. Like the analog constructions in [7,8,9,10], the metric of all these solutions will maintain the KE 6 part of the original AdS 5 × KE 6 . Further generalisations should be possible allowing for more general internal geometries [19].
The AdS 5 × KE 6 geometries that we take as starting point for our analysis are solutions to the equations of motion of D = 11 supergravity,
dG_4 = 0 , (1.1)
d *_{11} G_4 + (1/2) G_4 ∧ G_4 = 0 , (1.2)
R_{AB} − (1/12) G_{AC_1C_2C_3} G_B{}^{C_1C_2C_3} + (1/144) g_{AB} G_{C_1C_2C_3C_4} G^{C_1C_2C_3C_4} = 0 , (1.3)
with metric and four-form given, respectively, by
ds 2 11 = ds 2 (AdS 5 ) + ds 2 (KE 6 ), (1.4) G 4 = cJ ∧ J . (1.5)
Here, c is a constant, J is the Kähler form on KE 6 , and the metrics g µν and g mn for AdS 5 and KE 6 , respectively, are normalised so that their Ricci tensors are R µν = −2c 2 g µν , R mn = 2c 2 g mn .
(1.6)
Note. While we were in the process of completing this paper, [20] appeared which, although supersymmetric in the main, section 5 therein has some overlap with our analysis.
2 Consistent truncation of D = 11 supergravity on KE 6
For every general supersymmetric solution AdS n × w M D−n , where × w denotes warped product, of a D-dimensional supergravity theory, there exists a consistent truncation of the D-dimensional theory down to a suitable n-dimensional pure, massless gauged supergravity [16,17,18]. For supersymmetric Freund-Rubin backgrounds, the massive supermultiplet containing the breathing mode of the internal space M D−n can also be retained consistently, together with the supergravity multiplet [6]. In all these cases, the G-structure on M D−n specified by supersymmetry plays a crucial role in constructing the KK ansatz which describes the embedding of the retained n-dimensional fields into the D-dimensional ones. In the case at hand here, despite the lack of supersymmetry of the AdS 5 × KE 6 background (1.4), (1.5), the Kähler form J of KE 6 will still allow us to build a KK ansatz that consistently includes massive modes, along the lines of [6].
At any rate, there is an argument about which modes one should expect to be able to keep in the truncation of D = 11 supergravity on KE 6 . Consider first the particular case when the internal KE 6 is CP 3 , which has isometry group SU(4), and for which the KK spectrum is explicitly known [15]. Following [21], one should be able to truncate consistently the KK tower of D = 11 supergravity on CP 3 to its SU(4) singlet sector. This contains the massless graviton, one massive real scalar and one massive real vector [15], both with mass 12c 2 . Now, it is precisely the singlet character of these modes under the relevant SU(4) symmetry of the particular KE 6 = CP 3 that makes them expected to be universal for all KE 6 spaces. We can thus predict a consistent truncation of D = 11 supergravity on any KE 6 to a D = 5 theory with the field content quoted above. In particular, no massless vector that could enter the D = 5 N = 2 supergravity multiplet along with the metric should be expected to survive the truncation, so the resulting D = 5 theory should not correspond to a supergravity 2 .
Without much further ado, consider the following KK ansatz
ds 2 11 = ds 2 5 + e 2U ds 2 (KE 6 ), (2.1) G 4 = H 4 + H 2 ∧ J + cJ ∧ J , (2.2)
where U, H 4 and H 2 are, respectively, a scalar (the breathing mode of the internal KE 6 ), a four-form and a two-form on the external five-dimensional spacetime, with line element ds 2 5 , and J is again the Kähler form on KE 6 . By choosing the coefficient in the J ∧ J term to be the same constant c that appears in the background flux (1.5) we are anticipating that this coefficient cannot be turned into a dynamical D = 5 field without violating the D = 11 Bianchi identity for G 4 . Also, one could have tried to add to the KK ansatz (2.2) terms involving the holomorphic (3,0)-form Ω defining the complex structure on KE 6 , but it is unclear how to deal with those terms when plugging the ansatz into the D = 11 equations of motion.
The KK ansatz (2.1), (2.2) reproduces the AdS 5 × KE 6 background (1.4), (1.5) when U = H 4 = H 2 = 0, ds 2 5 = ds 2 (AdS 5 ). More generally, direct substitution of (2.1), (2.2) into (1.1)-(1.3) shows that the KK ansatz also solves the D = 11 supergravity field equations provided the D = 5 fields satisfy
dH 4 = 0 , (2.3) dH 2 = 0 , (2.4) d(e 6U * H 4 ) + 6cH 2 = 0 , (2.5) d(e 2U * H 2 ) + 2cH 4 + H 2 ∧ H 2 = 0 , (2.6) d(e 6U * dU) + 2c 2 (e −2U − e 4U )vol 5 − 1 6 e 6U H 4 ∧ * H 4 = 0 , (2.7) R αβ = −2c 2 e −8U η αβ + 6 (∇ β ∇ α U + ∂ α U∂ β U) + 3 2 e −4U H αλ H β λ − 1 6 η αβ H λµ H λµ + 1 12 H αλµν H β λµν − 1 12 η αβ H λµνρ H λµνρ . (2.8)
All the dependence on the internal KE 6 drops out, leaving fully-fledged D = 5 equations of motion for the D = 5 fields. This shows the consistency of the truncation.
We can now introduce the Lagrangian of the D = 5 theory and work out the masses of the various fields. First of all, the Bianchi identities (2.3), (2.4) for H 4 and H 2 can be trivially solved by introducing a three-form and a one-form potential such that
H 4 = dB 3 , (2.9) H 2 = dB 1 . (2.10)
The Lagrangian that gives rise to the D = 5 equations of motion (2.5)-(2.8) upon variation of B_3 , B_1 , U and the D = 5 metric g_µν can then be worked out. It reads

L = e^{6U} R vol_5 + 30 e^{6U} dU ∧ * dU − (1/2) e^{6U} H_4 ∧ * H_4 − (3/2) e^{2U} H_2 ∧ * H_2 + 6c^2 (2 e^{4U} − e^{−2U}) vol_5 − B_1 ∧ (6c H_4 + H_2 ∧ H_2 ) , (2.11)

or, in terms of the Einstein frame metric \bar{g}_µν = e^{4U} g_µν ,

L_Einstein = \bar{R} \bar{vol}_5 − 18 dU ∧ \bar{*} dU − (1/2) e^{12U} H_4 ∧ \bar{*} H_4 − (3/2) H_2 ∧ \bar{*} H_2 + 6c^2 (2 e^{−6U} − e^{−12U}) \bar{vol}_5 − B_1 ∧ (6c H_4 + H_2 ∧ H_2 ) , (2.12)
with barred quantities referring to the Einstein frame metric.
It is useful to dualise B 3 into a scalar B. In order to do this, define H 5 = dH 4 and add the piece
L ′ = −BH 5 (2.13)
to the Lagrangian (2.12). Integrating out H 4 we find that it is now given as
H 4 = −e −12U * H 1 ,(2.14)
where we have found it convenient to define
H 1 = dB − 6cB 1 . (2.15)
Substituting this back into L Einstein + L ′ we find the dual Lagrangian
L_dual = \bar{R} \bar{vol}_5 − 18 dU ∧ \bar{*} dU − (1/2) e^{−12U} H_1 ∧ \bar{*} H_1 − (3/2) H_2 ∧ \bar{*} H_2 + 6c^2 (2 e^{−6U} − e^{−12U}) \bar{vol}_5 − B_1 ∧ H_2 ∧ H_2 . (2.16)
The masses of the D = 5 fields can now be computed by expanding the Lagrangian (2.16) about the AdS 5 vacuum, keeping up to quadratic terms. Doing this, for U and
B 1 we find m 2 U = m 2 B 1 = 12c 2 ,(2.17)
while B (the scalar dual to B 3 ) is just a Stückelberg field that can be gauged away to give B 1 its mass. As anticipated, the D = 5 theory obtained upon consistent KK truncation of D = 11 supergravity on KE 6 , and described by the Lagrangian (2.12) or (2.16), contains the D = 5 metric, one massive scalar and one massive vector with mass (2.17). When KE 6 = CP 3 , the SU(4)-neutrality (table 2 of [15]) and the masses (tables 3 and 4 of [15]) of U and B 1 show that these are the modes in the k = 0 level of the (k + 3)(k + 4)c 2 towers of real scalars and real one-forms, respectively.
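The value quoted in (2.17) for the scalar can be reproduced with a short sympy check; the sketch below assumes the Einstein-frame potential read off from (2.16), V(U) = −6c^2 (2 e^{−6U} − e^{−12U}), and the canonically normalised fluctuation φ = 6U implied by the −18 dU ∧ * dU kinetic term.

    # Consistency check of m_U^2 = 12 c^2 in (2.17), under the stated assumptions.
    import sympy as sp

    c, phi = sp.symbols('c phi', positive=True)

    # potential from (2.16), written directly in terms of the canonical field phi = 6 U
    V = -6*c**2*(2*sp.exp(-phi) - sp.exp(-2*phi))

    quad = sp.series(V, phi, 0, 3).removeO().coeff(phi, 2)   # coefficient of phi^2
    print(sp.simplify(2*quad))                               # V ~ V0 + (m^2/2) phi^2  ->  12*c**2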
We are interested in solutions to the D = 5 field equations (2.3)-(2.8) displaying NRC symmetry. Rather than working with the full theory, we will consider a suitable further truncation. There are three further consistent truncations, apparently no longer explained by a group theory argument as the one above. The first is obtained by setting H 4 = H 2 = 0, leaving only the five-dimensional metric and the breathing mode U. The second, leading to five-dimensional General Relativity with a cosmological constant, is trivially obtained by insisting on H 4 = H 2 = 0 and further setting U = 0. The third, which is the one we are interested in, will be described in the next section.
NRC solutions from uplift
It is consistent with the D = 5 equations of motion to set H 4 = 6ce −6U * B 1 , where the Hodge dual here refers again to the metric appearing in the Lagrangian (2.11), and B 1 is defined in (2.10) . Rather than a further truncation, this just corresponds to gauging away B 3 or, alternatively, the Stückelberg scalar B, as can be seen from equations (2.14), (2.15) . The third possible further truncation referred to above is obtained (having gauged away B 3 ) by further setting U = 0 (and, thus, H 4 = 6c * B 1 ) while restricting B 1 to light-like configurations,
B 1 ∧ * B 1 = 0 , H 2 ∧ H 2 = 0 . (3.1)
In this case, the equations of motion (2.5)-(2.8) reduce to (3.1) together with
d * H 2 + 12c 2 * B 1 = 0 , (3.2) R αβ = −2c 2 η αβ + 3 2 H αλ H β λ + 18c 2 B α B β (3.3) (with H 2 = dB 1L = R vol 5 + 6c 2 vol 5 − 3 2 H 2 ∧ * H 2 − 18c 2 B 1 ∧ * B 1 , (3.4)
which was argued in [1] to allow for solutions with metric displaying Schrödinger symmetry. These solutions should be supported by a light-like massive vector of the form B 1 ∝ r z dx + (see [5]), where z is the dynamical exponent, thus immediately satisfying (3.1). Specifically, we look for solutions to (3.1), (3.2), (3.3) of the form
ds^2_5 = −α^2 r^{2z} (dx^+)^2 + (2/c^2) dr^2/r^2 + (2/c^2) r^2 (−dx^+ dx^- + dx_1^2 + dx_2^2 ) , B_1 = β r^z dx^+ . (3.5)
where α, β and the dynamical exponent z are constants to be determined. The configuration (3.5) does satisfy the conditions (3.1) and turns out to also solve the
equations (3.2), (3.3) provided that

z(z + 2) = 24 , (3.6)
α^2 (z^2 − 1) = β^2 ( (3/4) z^2 + 18 ) . (3.7)

Thus, as in [5], we indeed find solutions for z = 4 (and β = α/√2) and z = −6 (and β = (√7/3) α). By convention z > 0, so we ignore the latter possibility. The z = 4 solution can now be uplifted to D = 11 with the help of the KK ansatz (2.1), (2.2). We find
ds^2_11 = −α^2 r^8 (dx^+)^2 + (2/c^2) dr^2/r^2 + (2/c^2) r^2 (−dx^+ dx^- + dx_1^2 + dx_2^2 ) + ds^2 (KE_6 ) ,
G_4 = (12α/c^2) r^5 dx^+ ∧ dr ∧ dx_1 ∧ dx_2 − 2√2 α r^3 dx^+ ∧ dr ∧ J + c J ∧ J . (3.8)
This is a new (non-supersymmetric) M-Theory solution dual to a NRC field theory in spatial dimension d = 2 with dynamical exponent z = 4. One can generalise this solution and consider more general ansatze for D = 11 supergravity solutions dual to d = 2 non-relativistic conformal field theories with dynamical exponent z, where the internal directions still correspond to a KE 6 space. We now turn to this point.
3 This D = 5 theory, with even the same mass for the vector B 1 if we choose c = √2, was first discussed in section 4.2 of [5], but the D = 5 parent theories with Lagrangian (2.16) above and (4.21) of [5] are very different.
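The conditions (3.6) and (3.7) can be solved directly; the following short sympy check (an illustration of the statements above, nothing more) reproduces the roots z = 4, z = −6 and the corresponding values of β/α.

    # Roots of (3.6) and the ratio beta/alpha from (3.7).
    import sympy as sp

    z = sp.symbols('z', positive=True)
    print(sp.solve(sp.Eq(z*(z + 2), 24), z))        # [4]; the second root, z = -6, is negative

    for zv in (4, -6):
        ratio_sq = sp.Integer(zv**2 - 1) / (sp.Rational(3, 4)*zv**2 + 18)   # (beta/alpha)^2
        print(zv, sp.sqrt(ratio_sq))                 # 4 -> sqrt(2)/2,  -6 -> sqrt(7)/3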
Some generalisations
As we have just mentioned, the D = 11 solution (3.8) is locally invariant under Sch 4 (1, 2). In particular, the scale invariance acts on coordinates as [2]
(x + , x − , x i , r) → (λ z x + , λ 2−z x − , λx i , λ −1 r) , i = 1, 2 (4.1)
(with z = 4 in (3.8)), while leaving the KE 6 coordinates unchanged. Following [7,8],
we can generalise the metric in (3.8) as:
ds^2_11 = (2/c^2) [ −f_0 r^{2z} (dx^+)^2 − r^2 dx^+ (dx^- + r^{z−2} C_1 ) + dr^2/r^2 + r^2 (dx_1^2 + dx_2^2 ) ] + ds^2 (KE_6 ) , (4.2)
where C 1 is a one-form and f 0 a function, both of them defined on the internal KE 6 . Both C 1 and r^{2z} f 0 serve the same role of breaking the SO(2, 4) isometry of the original AdS 5 × KE 6 metric (1.4) down to Sch z (1, 2).
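That the line element (4.2) is indeed invariant under (4.1) can be checked term by term; the sketch below simply book-keeps the scaling weights (C_1 and f_0 are assigned weight zero since they live on KE_6, which is untouched by the scaling).

    # Scaling weights under (4.1): x+ -> l^z x+, x- -> l^{2-z} x-, x_i -> l x_i, r -> l^{-1} r.
    # Every term of the metric (4.2) should carry total weight zero.
    import sympy as sp

    z = sp.Symbol('z')
    w = {'dx+': z, 'dx-': 2 - z, 'dxi': 1, 'r': -1, 'dr': -1, 'C1': 0, 'f0': 0}

    terms = {
        'f0 r^{2z} (dx+)^2'  : w['f0'] + 2*z*w['r'] + 2*w['dx+'],
        'r^2 dx+ dx-'        : 2*w['r'] + w['dx+'] + w['dx-'],
        'r^2 dx+ r^{z-2} C1' : 2*w['r'] + w['dx+'] + (z - 2)*w['r'] + w['C1'],
        'dr^2 / r^2'         : 2*w['dr'] - 2*w['r'],
        'r^2 dx_i^2'         : 2*w['r'] + 2*w['dxi'],
    }
    for name, weight in terms.items():
        print(name, sp.expand(weight))   # all weights print 0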
An ansatz for the accompanying four-form flux may be constructed by considering the forms invariant under Sch z (1, 2) symmetry (see [22]), though the equations of motion constrain the candidate forms. The specific ansatz we then consider for the four-form flux is
G 4 = − 1 z+2 d(µ 0 r z+2 dx + ∧ dx 1 ∧ dx 2 ) − 1 z d(µ 2 ∧ r z dx + ) + cJ ∧ J ,(4.3)
where, in general, µ 0 is a function and µ 2 a two-form, both defined on KE 6 . The latter can be taken to be proportional to the Kähler form on KE 6 , as for the uplifted z = 4 solution (3.8), but other choices are also possible (see subsection 4.2 below). Indeed, the solution (3.8) is recovered from (4.2), (4.3) by setting C 1 = 0, f 0 = (1/2) c^2 α^2 , µ 0 = 12α/c^2 and µ 2 = −2√2 α J, for some constant α. More generally, the non-trivial mixing of external and KE 6 coordinates in the metric (4.2) will prevent it from being obtainable as the uplift of any D = 5 metric. The requirement that (4.2), (4.3) do solve the equations of motion (1.1)-(1.3) of D = 11 supergravity leads to restrictions and relations for f 0 , C 1 , µ 0 and µ 2 . In the following, we will spell out several interesting cases.
A solution with z = 2
We can find a D = 11 supergravity solution with dynamical exponent z = 2 by setting, for some constant α, f 0 = 13α 4c 4 , choosing C 1 such that dC 1 = αJ, while writing µ 0 = 12α √ 2 c 5 , µ 2 = − 2α c 3 so that the flux (4.3) reads
G 4 = 12α √ 2 c 5 r 3 dx + ∧ dr ∧ dx 1 ∧ dx 2 − 2α c 3 rdx + ∧ dr ∧ J + cJ ∧ J . (4.4)
A generalisation of this solution appeared previously in [20], where the internal space is a variant of CP 3 [13].
A class of solutions with z ≥ √ 3
Setting C 1 = 0 in the metric (4.2) and µ 0 = 0, µ 2 = 0 in (4.3) (which takes the flux back to its background value (1.5)), some calculation reveals that the resulting combination of metric and four-form provides a solution of D = 11 supergravity if f 0 is an eigenfunction of the Laplacian ∆ KE ≡ * d * d + d * d * on KE 6 with eigenvalue 2(z 2 − 1)c 2 :
∆ KE f 0 = 2(z 2 − 1)c 2 f 0 . (4.5)
This class of solutions thus provides a D = 11 counterpart of the Type IIB solutions first discussed in [7]. For the particular case KE 6 = CP 3 , these eigenvalues are k(k + 3)c 2 , k = 0, 1, . . ., with the corresponding eigenfunctions transforming in the (k0k) irrep of SU(4) [23,15]. Ruling out k = 0, which just corresponds to a space locally isometric to AdS 5 × KE 6 , we have a sequence of families of solutions with dynamical exponents
z_k = √( 1 + (1/2) k(k + 3) ) , k = 1, 2, . . . , (4.6)
thus obeying the bound
z k ≥ √ 3 . (4.7)
For each k = 1, 2, 3 . . ., this class contains a family of dim(k0k) = 15, 84, 300, . . .
supergravity solutions with the dynamical exponent z k in (4.6). As noted in [7], this class of solutions should be unstable. Stability could be restored in [7] by appropriately turning on fluxes. We can try to do the same here by setting, for simplicity, µ 2 to be proportional to the Kähler form J. In this case, only for z = 4 do we find a solution with metric (4.2) (with C 1 = 0), supported by the flux
G 4 = αr 5 dx + ∧ dr ∧ dx 1 ∧ dx 2 − αc 2 3 √ 2 r 3 dx + ∧ dr ∧ J + cJ ∧ J ,(4.8)
for any constant α. In this case, f 0 gets shifted by a positive term proportional to α 2 , which can be tuned to render the solution stable [7]. The shifted f 0 still fulfils (4.5), now with eigenvalue 30c 2 , corresponding to z = 4. We are unaware, however, of any KE 6 space for which this eigenvalue is permissible.
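For reference, the first few dynamical exponents allowed by (4.6) are easily tabulated (using the CP 3 eigenvalues k(k + 3)c^2 quoted above):

    # First few exponents z_k = sqrt(1 + k(k+3)/2) from (4.6).
    import math
    for k in range(1, 5):
        print(k, round(math.sqrt(1 + k*(k + 3)/2), 4))
    # k = 1 gives z = sqrt(3) ~ 1.7321, saturating the bound (4.7)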
Alternatively, following [8,9,10], rather than setting µ 2 to be proportional to the Kähler form, one may take it to be primitive and transverse 4 . Setting, for convenience, µ 0 = C 1 = 0, a calculation shows that the configuration (4.2), (4.3) is a solution to D = 11 supergravity provided
∆ KE f 0 + 2(z 2 − 1)c 2 f 0 = c 4 4 |µ 2 | 2 + c 2 2z 2 |dµ 2 | 2 , ∆ KE µ 2 = 1 2 z(z + 2)c 2 µ 2 ,(4.9)
where |µ 2 | 2 = 1 2! µ 2 ab µ ab 2 , etc. Now, f 0 has devolved the Laplacian eigenvector character upon µ 2 , which corresponds to a two-form eigenfunction with eigenvalue 1 2 z(z + 2)c 2 . In the special case KE 6 = CP 3 , the eigenvalues of the Laplacian acting on transverse, primitive (1, 1)-forms (respectively, (2, 0)-forms) are (k + 2)(k + 3)c 2 (respectively, (k + 3)(k + 4)c 2 ), for k = 0, 1, . . . [23,15]. We thus see that solutions to (4.9) correspond to NRC gravity duals with dynamical exponents bounded below by z ≥ −1 + √ 13 (respectively, z ≥ 4), if µ 2 is a chosen to be (the real part of) a (1, 1)-form (respectively, (2, 0)-form). See [10] for a discussion of a solving technique for systems of equations like (4.9). It would be interesting to study the stability of this class of solutions.
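The two lower bounds quoted here follow from the eigenvalue equation for µ 2 in (4.9) evaluated on the lowest (k = 0) CP 3 eigenvalues; a short check:

    # z from (1/2) z (z+2) c^2 = eigenvalue, with the k = 0 CP^3 two-form eigenvalues
    # (k+2)(k+3) c^2 = 6 c^2 for primitive (1,1)-forms and (k+3)(k+4) c^2 = 12 c^2 for (2,0)-forms.
    import sympy as sp

    z = sp.symbols('z', positive=True)
    for ev, label in [(6, '(1,1)-forms, k=0'), (12, '(2,0)-forms, k=0')]:
        print(label, sp.solve(sp.Eq(sp.Rational(1, 2)*z*(z + 2), ev), z))
    # -> [-1 + sqrt(13)] and [4], i.e. the bounds z >= -1 + sqrt(13) and z >= 4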
Final comments
We have constructed solutions of D = 11 supergravity dual to NRC field theories in 2 spatial dimensions and with different values of the dynamical exponent z. They correspond to suitable deformations of the class of solutions AdS 5 × KE 6 , that break the SO(2, 4) symmetry down to its Schrödinger subalgebra Sch z (1, 2). Important insight was obtained by first dealing with a simpler, particular solution with z = 4.
Specifically, D = 11 supergravity reduced on the internal KE 6 truncates consistently to a D = 5 gravity theory involving a massive vector. A suitable solution of this theory, with z = 4, was found and subsequently uplifted to eleven-dimensions. We also discussed a more general class of D = 11 supergravity solutions, locally invariant under Sch z (1, 2), that contains this solution, along with other examples that can no longer be obtained upon uplift. We are able to find explicitly a solution with z = 2, a class of solutions with dynamical exponents z ≥ √ 3, and implicitly, solutions with z ≥ −1 + √ 13 and z ≥ 4.
The Schrödinger algebra Sch z (1, d) is not the only NRC symmetry one may consider. In fact, there also exists a conformal version of the Galilean algebra that, unlike Sch z (1, d), can be obtained as an Inönü-Wigner contraction of the relativistic conformal algebra so(2, d + 2). Some issues regarding the Galilean conformal algebra have been recently discussed, including its supersymmetrisation [24,25,26] and its implementation, both in the dual field theories and the gravity bulk [27,28]. As pointed out in [28], a drawback of backgrounds with this conformal Galilean symmetry is that, in contrast to Sch z (1, d)-invariant ones, their metrics exhibit a non-Lorentzian signature. While this would require better understanding, progress on the way NRC symmetries are implemented in gravity duals may be achieved by a systematic characterisation [19] of Type IIB and M-Theory backgrounds with Sch z (1, d) symmetry, for generic values of z and d.
As in [5,6], the Lagrangian (3.4) only reproduces the equations (3.2), (3.3) and not the light-like condition (3.1). Since (3.1), (3.2), (3.3) can be consistently obtained upon truncation of D = 11 supergravity on KE 6 , any choice of five-dimensional metric and lightlike B 1 (thus subject to (3.1)) which also solves the equations of motion (3.2), (3.3) that derive from the Lagrangian (3.4), can be safely uplifted to D = 11.
This is to be contrasted with the analogous situation for skew-whiffed Freund-Rubin backgrounds: in spite of also breaking all supersymmetry, they do allow for a consistent truncation to a supergravity theory [6].
A (p, q)-form Y p,q on a Kähler space is said to be primitive if its contraction with the Kähler form vanishes, J mn Y p,q mn... = 0, and transverse if * d * Y p,q = 0.
Acknowledgements. We would like to thank Ido Adam, Dumitru Astefanesei, José A.
. D T Son, arXiv:0804.3972Phys. Rev. D. 7846003hep-thD. T. Son, Phys. Rev. D 78 (2008) 046003 [arXiv:0804.3972 [hep-th]].
. K Balasubramanian, J Mcgreevy, arXiv:0804.4053Phys. Rev. Lett. 10161601hep-thK. Balasubramanian and J. McGreevy, Phys. Rev. Lett. 101, 061601 (2008) [arXiv:0804.4053 [hep-th]].
. A Adams, K Balasubramanian, J Mcgreevy, arXiv:0807.1111JHEP. 081159hep-thA. Adams, K. Balasubramanian and J. McGreevy, JHEP 0811, 059 (2008) [arXiv:0807.1111 [hep-th]].
. C P Herzog, M Rangamani, S F Ross, arXiv:0807.1099JHEP. 081180hep-thC. P. Herzog, M. Rangamani and S. F. Ross, JHEP 0811, 080 (2008) [arXiv:0807.1099 [hep-th]].
. J Maldacena, D Martelli, Y Tachikawa, arXiv:0807.1100JHEP. 081072hep-thJ. Maldacena, D. Martelli and Y. Tachikawa, JHEP 0810, 072 (2008) [arXiv:0807.1100 [hep-th]].
. J P Gauntlett, S Kim, O Varela, D Waldram, arXiv:0901.0676hep-thJ. P. Gauntlett, S. Kim, O. Varela and D. Waldram, arXiv:0901.0676 [hep-th].
. S A Hartnoll, K Yoshida, arXiv:0810.0298JHEP. 081271hepthS. A. Hartnoll and K. Yoshida, JHEP 0812, 071 (2008) [arXiv:0810.0298 [hep- th]].
. A Donos, J P Gauntlett, arXiv:0901.0818JHEP. 0903138hepthA. Donos and J. P. Gauntlett, JHEP 0903, 138 (2009) [arXiv:0901.0818 [hep- th]].
. N Bobev, A Kundu, K Pilch, arXiv:0905.0673hep-thN. Bobev, A. Kundu and K. Pilch, arXiv:0905.0673 [hep-th].
. A Donos, J P Gauntlett, arXiv:0905.1098hep-thA. Donos and J. P. Gauntlett, arXiv:0905.1098 [hep-th].
. S Kachru, X Liu, M Mulligan, arXiv:0808.1725Phys. Rev. D. 78106005hep-thS. Kachru, X. Liu and M. Mulligan, Phys. Rev. D 78, 106005 (2008) [arXiv:0808.1725 [hep-th]];
. S , Sekhar Pal, arXiv:0808.3232hep-thS. Sekhar Pal, arXiv:0808.3232 [hep-th];
. C Duval, M Hassaine, P A Horvathy, arXiv:0809.3128Annals Phys. 3241158hep-thC. Duval, M. Hassaine and P. A. Horvathy, Annals Phys. 324, 1158 (2009) [arXiv:0809.3128 [hep-th]];
. M Schvellinger, arXiv:0810.3011JHEP. 08124hep-thM. Schvellinger, JHEP 0812, 004 (2008) [arXiv:0810.3011 [hep-th]];
. L Mazzucato, Y Oz, S Theisen, arXiv:0810.3673JHEP. 090473hep-thL. Mazzucato, Y. Oz and S. Theisen, JHEP 0904, 073 (2009) [arXiv:0810.3673 [hep-th]];
. A Adams, A Maloney, A Sinha, S E Vazquez, arXiv:0812.0166JHEP. 090397hep-thA. Adams, A. Maloney, A. Sinha and S. E. Vazquez, JHEP 0903, 097 (2009) [arXiv:0812.0166 [hep-th]];
. M Taylor, arXiv:0812.0530hep-thM. Taylor, arXiv:0812.0530 [hep-th];
. S S , arXiv:0901.0599hep-thS. S. Pal, arXiv:0901.0599 [hep-th];
. M Alishahiha, A Davody, A Vahedi, arXiv:0903.3953hep-thM. Alishahiha, A. Davody and A. Vahedi, arXiv:0903.3953 [hep-th];
. N Bobev, A Kundu, arXiv:0904.2873hep-thN. Bobev and A. Kundu, arXiv:0904.2873 [hep-th];
. S S , arXiv:0904.3620hep-thS. S. Pal, arXiv:0904.3620 [hep-th].
. B Dolan, Phys. Lett. B. 140304B. Dolan, Phys. Lett. B 140 (1984) 304.
. C N Pope, P Van Nieuwenhuizen, Commun. Math. Phys. 122281C. N. Pope and P. van Nieuwenhuizen, Commun. Math. Phys. 122 (1989) 281.
. J P Gauntlett, D Martelli, J Sparks, D Waldram, arXiv:hep-th/0402153Class. Quant. Grav. 214335J. P. Gauntlett, D. Martelli, J. Sparks and D. Waldram, Class. Quant. Grav. 21 (2004) 4335 [arXiv:hep-th/0402153].
. J E Martin, H S Reall, arXiv:0810.2707JHEP. 09032hep-thJ. E. Martin and H. S. Reall, JHEP 0903, 002 (2009) [arXiv:0810.2707 [hep-th]].
. J P Gauntlett, O Varela, arXiv:0707.2315Phys. Rev. D. 76126007hep-thJ. P. Gauntlett and O. Varela, Phys. Rev. D 76 (2007) 126007 [arXiv:0707.2315 [hep-th]].
. J P Gauntlett, E Colgain, O Varela, arXiv:hep-th/0611219JHEP. 070249J. P. Gauntlett, E. O Colgain and O. Varela, JHEP 0702, 049 (2007) [arXiv:hep-th/0611219].
. J P Gauntlett, O Varela, arXiv:0712.3560JHEP. 080283hep-thJ. P. Gauntlett and O. Varela, JHEP 0802 (2008) 083 [arXiv:0712.3560 [hep-th]].
Work in progress. Work in progress.
. H Ooguri, C S Park, arXiv:0905.1954hep-thH. Ooguri and C. S. Park, arXiv:0905.1954 [hep-th].
. M J Duff, C N Pope, Nucl. Phys. B. 255355M. J. Duff and C. N. Pope, Nucl. Phys. B 255 (1985) 355.
. E O Colgain, H Yavartanoo, arXiv:0904.0588hep-thE. O. Colgain and H. Yavartanoo, arXiv:0904.0588 [hep-th].
. A Ikeda, Y Taniguchi, Osaka J. Math. 15515A. Ikeda and Y. Taniguchi. Osaka J. Math 15 515 (1978).
. M Sakaguchi, arXiv:0905.0188hep-thM. Sakaguchi, arXiv:0905.0188 [hep-th].
. J A De Azcarraga, J Lukierski, arXiv:0905.0141math-phJ. A. de Azcarraga and J. Lukierski, arXiv:0905.0141 [math-ph].
. A Bagchi, I , arXiv:0905.0580hep-thA. Bagchi and I. Mandal, arXiv:0905.0580 [hep-th].
. A Bagchi, R Gopakumar, arXiv:0902.1385hep-thA. Bagchi and R. Gopakumar, arXiv:0902.1385 [hep-th].
. D Martelli, Y Tachikawa, arXiv:0903.5184hep-thD. Martelli and Y. Tachikawa, arXiv:0903.5184 [hep-th].
| []
|
[
"Recent advancements of the NEWS-G experiment",
"Recent advancements of the NEWS-G experiment"
]
| [
"\nSchool of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTUnited Kingdom\n"
]
| [
"School of Physics and Astronomy\nUniversity of Birmingham\nB15 2TTUnited Kingdom"
]
| []
| NEWS-G (New Experiments With Spheres-Gas) is an experiment aiming to shine a light on the dark matter conundrum with a novel gaseous detector, the spherical proportional counter. It uses light gases, such as hydrogen, helium, and neon, as targets to expand dark matter searches to the sub-GeV/c 2 mass region. NEWS-G produced its first results with a 60 cm in diameter detector installed at LSM (France), excluding at 90% C.L. cross-sections above 4.4 · 10 −37 cm 2 for dark matter candidates of 0.5 GeV/c 2 mass. Currently, a 140 cm in diameter detector is being built at LSM and a commissioning run is underway, prior to its installation at SNOLAB (Canada) at the end of the year. Presented here are developments incorporated in this new detector: a) sensor technologies using resistive materials and multianode read-out that allow high gain and high pressure operation; b) gas purification techniques to remove contaminants (H2O, O2); c) reduction of 210 Pb induced background through copper electroforming methods; d) utilisation of UV-lasers for detector calibration, detector response monitoring and estimation of gas related fundamental properties. This next phase of NEWS-G will allow searches for low mass dark matter with unprecedented sensitivity. | 10.1088/1742-6596/1468/1/012058 | [
"https://arxiv.org/pdf/2004.12795v1.pdf"
]
| 216,553,766 | 2004.12795 | 516057bb6a2b3ce4eb54b5382dafb7932f5d0dfc |
Recent advancements of the NEWS-G experiment
School of Physics and Astronomy
University of Birmingham
B15 2TTUnited Kingdom
Recent advancements of the NEWS-G experiment
Ioannis Katsioulas on behalf of the NEWS-G collaboration
NEWS-G (New Experiments With Spheres-Gas) is an experiment aiming to shine a light on the dark matter conundrum with a novel gaseous detector, the spherical proportional counter. It uses light gases, such as hydrogen, helium, and neon, as targets to expand dark matter searches to the sub-GeV/c 2 mass region. NEWS-G produced its first results with a 60 cm in diameter detector installed at LSM (France), excluding at 90% C.L. cross-sections above 4.4 · 10 37 cm 2 for dark matter candidates of 0.5 GeV/c 2 mass. Currently, a 140 cm in diameter detector is being built at LSM and a commissioning run is underway, prior to its installation at SNOLAB (Canada) at the end of the year. Presented here are developments incorporated in this new detector: a) sensor technologies using resistive materials and multianode read-out that allow high gain and high pressure operation; b) gas purification techniques to remove contaminants (H2O, O2); c) reduction of 210 Pb induced background through copper electroforming methods; d) utilisation of UV-lasers for detector calibration, detector response monitoring and estimation of gas related fundamental properties. This next phase of NEWS-G will allow searches for low mass dark matter with unprecedented sensitivity.
Introduction
The NEWS-G collaboration is searching for light dark matter (DM) [1], a mass region where many new theoretical approaches, such as asymmetric dark matter and dark-sector models, predict DM candidates. These searches are performed using a gaseous particle detector, the Spherical Proportional Counter (SPC) [2], filled with light gases [3] such as neon, methane, and helium.
The SPC consists of a grounded metallic spherical shell, shown in Fig. 1. A small sensor is placed at the center of the sphere, supported by a grounded metallic rod, and is held at positive high voltage. The resulting electric field is mostly radial, except near the sensor rod which disturbs the field, and falls as 1/r 2 . The low capacitance of the sensor, which allows for low electronic noise, in combination with the large amplification of the signal, provides single-electron detection and therefore makes the SPC a powerful detector for low-energy nuclear recoils. NEWS-G operated its prototype, SEDINE, a 60-cm diameter SPC made from pure copper, at the Laboratoire Souterrain de Modane (LSM) in France, primarily to prove the concept of using large SPCs to search for low-mass dark matter. To further mitigate background from external radiation, SEDINE was put inside a multi-layered cubic shielding composed of, from the inside to the outside, 8 cm of copper, 15 cm of lead and 30 cm of polyethylene. At the center of SEDINE, a grounded copper rod holds a 6.3-mm silicon sensor at high voltage. Between April and May 2015, while it was filled to a pressure of 3.1 bar with a mixture of 99.3% neon and 0.7% methane, SEDINE ran in dark matter search mode uninterruptedly for 42.7 days. In 2017 NEWS-G set new constraints on the spin-independent WIMP-nucleon scattering cross-section below 0.6 GeV/c 2 and excluded at 90% C.L. a cross-section of 4.4 · 10 −37 cm 2 for a 0.5 GeV/c 2 light DM candidate mass. The details of the experimental setup, results, pulse treatment and data analysis are given in [4]. The NEWS-G collaboration is planning the installation of a 140-cm diameter SPC made from ultra-pure copper (commercial C10100). This detector is the largest SPC to date and has been approved for installation at SNOLAB in Canada. The design of the shielding is much more advanced and compact than SEDINE's, comprising, from the outside to the inside, 40 cm of borated polyethylene and 22 cm of low-activity lead (including the innermost 3 cm made from archaeological lead). The lead shield is placed into a stainless steel envelope that will be flushed with pure nitrogen to mitigate the presence of radon. All of the shielding and the detector will sit on a seismic platform as a precaution against seismic events. A commissioning of the experiment took place during summer 2019 at LSM, while the polyethylene shielding is being fabricated in Canada. The construction of a neutron shielding based on a concentric cylindrical water tank at LSM will allow for a first short dark matter search run before the detector is shipped to SNOLAB. It is expected that the dark matter search at SNOLAB will begin in Winter 2020.
ACHINOS -The multi-anode SPC sensor
The ACHINOS (Greek for sea-urchin) multi-anode sensor consists of a set of anode balls uniformly distributed around a central sphere at an equal distance from the center of the detector, supported by insulated wires through which high voltage (HV1) can be applied to them. The central sphere is used as a bias electrode by applying a voltage (HV2) on its surface, which helps to optimise the electric field configuration. An example of an 11-anode ACHINOS is displayed in Fig. 2. The motivation for the development of such an instrument is to provide an increased electric field at large radii of large spherical proportional counters, which in the case of single-anode sensors can be below 0.1 V/cm. By increasing the electric field magnitude, electrons and ions are collected faster, making operation of the detector less sensitive to attachment induced by the presence of O 2 and H 2 O. This is achieved with ACHINOS without burdening high-gain operation: small-diameter anodes (below 2 mm) are retained, while the strength of the electric field in the detector volume is increased by increasing the number of anodes and their distance from the central secondary electrode. The effect of using multiple anodes in an ACHINOS sensor versus a single-anode sensor is displayed in Fig. 3. The electric field close to the surface of the shell of the detector is higher in the case of an ACHINOS sensor (approximately 9 times higher for an 11-ball ACHINOS) than in the case of the single-ball sensor, an effect reflected in the measured reduction of the maximum risetime of pulses [5].
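The quoted "< 0.1 V/cm" figure for a single-anode sensor can be illustrated with the ideal spherical-capacitor field; the anode radius, cathode radius and voltage used below are assumed for the sake of the estimate only and are not NEWS-G operating parameters.

    # Rough estimate of the radial field in an ideal single-anode SPC,
    # E(r) = V / (r^2 (1/r_a - 1/r_c)).  All numbers are assumed for illustration.
    r_a = 1.0e-3   # anode radius [m] (assumed)
    r_c = 0.70     # cathode radius [m] (140 cm diameter vessel)
    V   = 2000.0   # anode voltage [V] (assumed)

    def field_V_per_cm(r):
        return V / (r**2 * (1.0/r_a - 1.0/r_c)) / 100.0

    for r in (0.05, 0.20, 0.69):
        print(f"r = {r:4.2f} m  ->  E ~ {field_V_per_cm(r):.3f} V/cm")
    # near the cathode the field is a few times 0.01 V/cm, i.e. well below 0.1 V/cm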
Gas purification
Gas contaminants containing electronegative molecules lead to signal reduction and to degradation of the energy resolution and background discrimination capabilities.
Their effect is particularly pronounced in regions of low electric field. Gas filtering using a Messer Oxisorb or Saes MicroTorr purifier was introduced to ensure that oxygen- and water-induced effects were minimised. Fig. 4 shows the pulse amplitude for 5.9 keV photons measured with a spherical proportional counter filled with filtered and unfiltered gas. Filtering improved the recorded resolution (σ/E) from 21.3 ± 0.7% to 9.4 ± 0.3%. During the process, it was found that non-negligible amounts of 222 Rn were introduced. This has previously been reported by several experiments [7], and work is ongoing to incorporate a carbon filter, inserted after the oxygen filter in the gas system, to remove any emanated 222 Rn.
Pulsed laser for detector monitoring and calibration
The NEWS-G collaboration has recently reported on a novel precision laser-based calibration that allowed for the measurement of the single-electron response (SER) in SPCs. A monochromatic UV laser beam with variable intensity was used to extract single photo-electrons from the cathode of the SPC. The SPC data acquisition is triggered using the laser signal in a photo detector. This allows for the precise measurement of electron transport parameters such as drift time, diffusion coefficients, and electron avalanche gain. A schematic of the experimental setup is shown in Fig. 5. These studies are complemented with an internal 37 Ar source calibration for measurements of the gas W-value and Fano factor. The calibration system can be used during the dark matter search to monitor the detector response. Additionally, the trigger efficiency can be measured using events triggered by the laser photo detector. The details of this technique and results are presented in [8].
Figure 5. Schematic of laser-based calibration showing the principle of operation and example pulses [8].
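The kind of information extracted from laser-triggered events can be illustrated with a toy simulation; the exponential single-electron response and all numbers below are invented for the sketch and do not reproduce the analysis of [8].

    # Toy sketch of a laser-based calibration: single-electron amplitudes are drawn
    # from an exponential (Polya with theta = 0) response, and arrival times relative
    # to the laser trigger carry the drift time and its spread.  Numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    true_gain, true_drift = 40.0, 250.0            # ADU per electron, microseconds (invented)
    amplitudes = rng.exponential(true_gain, 5000)
    times      = rng.normal(true_drift, 10.0, 5000)

    print(f"estimated gain  : {amplitudes.mean():.1f} ADU")
    print(f"estimated drift : {times.mean():.1f} us (spread {times.std():.1f} us)")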
Background reduction
One strength of the SPC is that it allows for simple construction using solely radiopure materials. NEWS-G built the new 140-cm in diameter detector for SNOLAB out of 4N (99.99% pure) Aurubis copper [6]. Pure copper has no long-lived radioisotopes, making it an ideal construction material for a NEWS-G detector. Recent measurements demonstrated that the 4N copper contained unacceptable amounts of 210 Po and 210 Pb, coming from the same decay chain as 222 Rn [9], reducing the sensitivity of the experiment due to a contribution of 4.6 dru below 1 keV in the background rate from decays of 210 Pb and 210 Bi, an order of magnitude larger contribution than any other background. Thus a 500 μm layer of ultra-pure copper was electroplated onto the detector inner surface (with a rate of 0.036 mm/day), which was estimated to reduce this background to 2.0 dru below 1 keV.
Summary
The NEWS-G SPC filled with light gases provides a window to search for light DM in the 0.1-10 GeV/c 2 range. Recent results from SEDINE provide competitive constraints on the WIMP-nucleon cross section below 1 GeV/c 2 . The future operation of the 140-cm low background SPC at SNOLAB, with novel sensor technology and detector monitoring, will extend the sensitivity by orders of magnitude and could shed light on the nature of dark matter.
Figure 1. SPC design and principle of operation.
Figure 2. The design of an 11-anode ACHINOS and the implementation covering the 3D printed central electrode with a resistive paste.
Figure 3. Magnitude of electric field of ACHINOS sensors with 5, 11, and 33 anodes compared to the electric field of a single ball sensor with its anode in the center of the detector, and in the same potential [5].
Figure 4. Signal comparison of 5.9 keV X-rays in a spherical proportional counter filled with 600 mbar of He:CH 4 (90%:10%) gas with and without filtering [6].
Acknowledgements. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no 841261.
. J L Feng, Annu. Rev. Astron. Astrophys. 495545J.L. Feng 2010 Annu. Rev. Astron. Astrophys 495 -545
. I Giomataris, J. Instrum. 39007I. Giomataris et al. 2008 J. Instrum. 3 P09007
. G Gerbier, arXiv:1401.7902[astro-ph.IMG. Gerbier et al. 2014 arXiv:1401.7902 [astro-ph.IM]
. News-G, Astroparticle Physics. 10NEWS-G collaboration 2018 Astroparticle Physics 10 54 -62
. A Giganon, J. Instrum. 1212031A. Giganon et al. 2017 J. Instrum. 12 P12031
. P Knights, NEWS-G collaborationJ. Phys.: Conf. Ser. 131212009P. Knights and NEWS-G collaboration 2019 J. Phys.: Conf. Ser. 1312 012009
. J Calvo, J. Cosmol. Astropart. Phys. 3003Calvo J et al. 2017 J. Cosmol. Astropart. Phys 003003
. Q Arnaud, Physical Review D. 9910102003Q. Arnaud et al. 2019 Physical Review D, 99(10) 102003
. K Abe, Nucl. Instrum. Methods Phys. Res. A. 884161Abe K et al. 2018 Nucl. Instrum. Methods Phys. Res. A 884 157 161
| []
|
[
"Prepared for submission to JHEP The two-loop six-point amplitude in ABJM theory",
"Prepared for submission to JHEP The two-loop six-point amplitude in ABJM theory"
]
| [
"S Caron-Huot \nSchool of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA\n\nNiels Bohr International Academy and Discovery Center\nThe Niels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark\n",
"Yu-Tin Huang [email protected] \nSchool of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA\n\nDepartment of Physics and Astronomy\nUCLA\n90095-1547Los AngelesCAUSA\n\nMichigan Center for Theoretical Physics\nRandall Laboratory of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA\n"
]
| [
"School of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA",
"Niels Bohr International Academy and Discovery Center\nThe Niels Bohr Institute\nBlegdamsvej 17DK-2100CopenhagenDenmark",
"School of Natural Sciences\nInstitute for Advanced Study\n08540PrincetonNJUSA",
"Department of Physics and Astronomy\nUCLA\n90095-1547Los AngelesCAUSA",
"Michigan Center for Theoretical Physics\nRandall Laboratory of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA"
]
| []
| In this paper we present the first analytic computation of the six-point two-loop amplitude of ABJM theory. We show that the two-loop amplitude consist of corrections proportional to two distinct local Yangian invariants which can be identified as the treeand the one-loop amplitude respectively. The two-loop correction proportional to the treeamplitude is identical to the one-loop BDS result of N = 4 SYM plus an additional remainder function, while the correction proportional to the one-loop amplitude is finite. Both the remainder and the finite correction are dual conformal invariant, which implies that the twoloop dual conformal anomaly equation for ABJM is again identical to that of one-loop N = 4 super Yang-Mills, as was first observed at four-point. We discuss the theory on the Higgs branch, showing that its amplitudes are infrared finite, but equal, in the small mass limit, to those obtained in dimensional regularization.Amidst the shadow of tremendous progress in N = 4 super Yang-Mills (SYM 4 ) amplitudes, three-dimensional Chern-Simons matter (CSM) theory has recently enjoyed a quiet surge of interest. This reflects an interesting dual aspect of the latter: On the one hand it is a close cousin to SYM theory in four-dimensions and thus provides a fruitful arena to apply the methods that was developed there-in. On the other, while scattering amplitudes of SYM 4 theory, both perturbative and non-perturbative, are closely related to string theory scattering amplitudes, such relations for CSM theory are either obscure or in some cases simply absent as the proper correspondence is with M-theory instead. The latter is intriguing in that it implies that certain novel properties that is shared between the scattering amplitudes of both Yang-Mills and CSM may in fact have a deeper purely field theoretical origin.A prominent example is the N = 6 theory constructed by Aharony, Bergman, Jafferis and Maldacena (ABJM)[1]. Being dual to type IIA string theory in AdS 4 ×CP 3 background, it is very similar to SYM 4 in terms of providing an exact AdS/CFT pair. This similarity inspired the discovery of many common features between the two theories such as the presence of a hidden Yangian symmetry [2] (or equivalently dual superconformal symmetry [3-5]) of the tree-and planar loop-amplitudes [6], 1 as well as the realization that the leading singularities of both theories are encoded by the residues of a contour integral over Grassmaniann manifolds[9,10].In many aspects, ABJM amplitudes are simpler than its four-dimensional relative. This simplicity is already reflected in the fact that only even legged amplitudes are non-trivial[11]. Furthermore, all one-loop amplitudes consist solely of rational functions[12][13][14][15](multiplied by π, in a natural normalization) while the two-loop amplitudes are of transcendentality-twofunctions[16,17]. This should be compared to transcendentality -two-and four-functions for one and two-loop amplitudes respectively in SYM 4 . As all one-loop amplitudes can be conveniently expressed in terms of a basis of massive triangle integrals, whose coefficients can be directly computed via recursion relations [18], the one-loop amplitude for ABJM theory with arbitrary multiplicity is effectively "solved".On the other hand some properties of CSM theory, while shared with YMs theory, demand an alternative explanation other than the stringy origin currently available for the latter. 
Consider the color-kinematic duality[19], which leads to non-trivial amplitude relations for YMs and relates the amplitudes of the gauge theory to that of the corresponding gravity theory to all order in perturbation theory[20,21]. For CSM, it was shown that similar duality, although based on three-algebra [22], is also present for the N = 8 [23] and N < 8 [35] theory. While the relations implied by the duality in YMs can be traced back to monodromy relations of string amplitudes [24], such correspondence does not exist for CSM theory since the amplitudes are not directly related to any open string amplitudes in a flat back-ground. 1 At this stage it is unclear what role, if any, AdS/CFT plays in the existence of these symmetries, as explicit attempts at proving self-T-duality [7] on the string or supergravity side have encounter technical difficulties and have not been fully carried out [8]. | 10.1007/jhep03(2013)075 | [
"https://arxiv.org/pdf/1210.4226v1.pdf"
]
| 118,444,935 | 1210.4226 | f0035cb89d8a63c977557f1214fc1647621335de |
Prepared for submission to JHEP The two-loop six-point amplitude in ABJM theory
16 Oct 2012
S Caron-Huot
School of Natural Sciences
Institute for Advanced Study
08540PrincetonNJUSA
Niels Bohr International Academy and Discovery Center
The Niels Bohr Institute
Blegdamsvej 17DK-2100CopenhagenDenmark
Yu-Tin Huang [email protected]
School of Natural Sciences
Institute for Advanced Study
08540PrincetonNJUSA
Department of Physics and Astronomy
UCLA
90095-1547Los AngelesCAUSA
Michigan Center for Theoretical Physics
Randall Laboratory of Physics
University of Michigan
48109Ann ArborMIUSA
Prepared for submission to JHEP The two-loop six-point amplitude in ABJM theory
16 Oct 2012
In this paper we present the first analytic computation of the six-point two-loop amplitude of ABJM theory. We show that the two-loop amplitude consist of corrections proportional to two distinct local Yangian invariants which can be identified as the treeand the one-loop amplitude respectively. The two-loop correction proportional to the treeamplitude is identical to the one-loop BDS result of N = 4 SYM plus an additional remainder function, while the correction proportional to the one-loop amplitude is finite. Both the remainder and the finite correction are dual conformal invariant, which implies that the twoloop dual conformal anomaly equation for ABJM is again identical to that of one-loop N = 4 super Yang-Mills, as was first observed at four-point. We discuss the theory on the Higgs branch, showing that its amplitudes are infrared finite, but equal, in the small mass limit, to those obtained in dimensional regularization.Amidst the shadow of tremendous progress in N = 4 super Yang-Mills (SYM 4 ) amplitudes, three-dimensional Chern-Simons matter (CSM) theory has recently enjoyed a quiet surge of interest. This reflects an interesting dual aspect of the latter: On the one hand it is a close cousin to SYM theory in four-dimensions and thus provides a fruitful arena to apply the methods that was developed there-in. On the other, while scattering amplitudes of SYM 4 theory, both perturbative and non-perturbative, are closely related to string theory scattering amplitudes, such relations for CSM theory are either obscure or in some cases simply absent as the proper correspondence is with M-theory instead. The latter is intriguing in that it implies that certain novel properties that is shared between the scattering amplitudes of both Yang-Mills and CSM may in fact have a deeper purely field theoretical origin.A prominent example is the N = 6 theory constructed by Aharony, Bergman, Jafferis and Maldacena (ABJM)[1]. Being dual to type IIA string theory in AdS 4 ×CP 3 background, it is very similar to SYM 4 in terms of providing an exact AdS/CFT pair. This similarity inspired the discovery of many common features between the two theories such as the presence of a hidden Yangian symmetry [2] (or equivalently dual superconformal symmetry [3-5]) of the tree-and planar loop-amplitudes [6], 1 as well as the realization that the leading singularities of both theories are encoded by the residues of a contour integral over Grassmaniann manifolds[9,10].In many aspects, ABJM amplitudes are simpler than its four-dimensional relative. This simplicity is already reflected in the fact that only even legged amplitudes are non-trivial[11]. Furthermore, all one-loop amplitudes consist solely of rational functions[12][13][14][15](multiplied by π, in a natural normalization) while the two-loop amplitudes are of transcendentality-twofunctions[16,17]. This should be compared to transcendentality -two-and four-functions for one and two-loop amplitudes respectively in SYM 4 . As all one-loop amplitudes can be conveniently expressed in terms of a basis of massive triangle integrals, whose coefficients can be directly computed via recursion relations [18], the one-loop amplitude for ABJM theory with arbitrary multiplicity is effectively "solved".On the other hand some properties of CSM theory, while shared with YMs theory, demand an alternative explanation other than the stringy origin currently available for the latter. 
Consider the color-kinematic duality [19], which leads to non-trivial amplitude relations for YM and relates the amplitudes of the gauge theory to those of the corresponding gravity theory to all orders in perturbation theory [20, 21]. For CSM, it was shown that a similar duality, although based on a three-algebra [22], is also present for the N = 8 [23] and N < 8 [35] theories. While the relations implied by the duality in YM can be traced back to monodromy relations of string amplitudes [24], such a correspondence does not exist for CSM theory, since the amplitudes are not directly related to any open string amplitudes in a flat background.

1 At this stage it is unclear what role, if any, AdS/CFT plays in the existence of these symmetries, as explicit attempts at proving self-T-duality [7] on the string or supergravity side have encountered technical difficulties and have not been fully carried out [8].
As the color-kinematic identity allows one to obtain the amplitudes of the gravity-matter theory from those of CSM theory,2 the fact that gravity amplitudes can be extracted from closed string amplitudes renders the role of string theory even more mysterious.
In this paper our main focus is the loop amplitudes of ABJM theory, in particular the six-point one- and two-loop amplitudes, in the planar ('t Hooft) limit. It was shown in refs. [16, 17] that the two-loop four-point amplitude has the same functional dependence as that of the one-loop four-point SYM4 amplitude. This equivalence was later shown to persist to all orders in the $\epsilon$ expansion [25]. As the four-point amplitude can be uniquely determined by the dual conformal anomaly equation [26, 27], this result states that the anomaly equations for both ABJM and SYM4, up to four points, are identical. However, taking into account the fact that the theory is conformal, the simplicity of four-point kinematics and the transcendentality requirement on the finite function, this result might be deemed accidental (although not for the all-order correspondence). At six points, it is nontrivial that the anomaly equations should match. Furthermore, the anomaly equations fix the result only up to homogeneous terms, and six points is the first place where non-trivial invariant remainder functions might appear. Thus the six-point computation is an important piece of data to clarify these issues.
As ABJM theory consists of matter fields transforming in the bi-fundamental representation of the gauge group SU(N)$_k$ x SU(N)$_{-k}$, the amplitude has a definite parity under the exchange of the Chern-Simons level $k \leftrightarrow -k$. More precisely, an L-loop amplitude is weighted by a factor of $(4\pi/k)^{L+1}$, and hence (odd-) even-loop amplitudes are parity (even) odd. Assuming that parity is non-anomalous, in order for odd loops to give an acceptable contribution they must compensate for their opposite parity. Since the exchange $k \leftrightarrow -k$ can be translated into an exchange of the gauge groups, this implies that a non-vanishing odd-loop all-scalar amplitude must pick up a minus sign when cyclically shifted by one site. This is indeed the case.
We construct the six-point integrand using leading singularity methods. Since it was shown in ref. [6] that there is only one pair of leading singularities at six points, it is straightforward to construct the integrand by choosing an integral basis consisting of integrands with uniform leading singularities. At one loop there are two types, the one-loop box and massive triangle integrals, with loop-momentum dependent and independent numerators respectively. There are two distinct combinations of the leading singularity pair, the difference and the sum. The former is simply the tree amplitude, while the latter is the conjugate tree amplitude, with even and odd sites now belonging to the conjugate multiplet, denoted as $A^{\rm tree}_{6,\rm shifted}$ since the identification of multiplets is shifted by one site. We find that both of these objects do appear in the integrand.
In refs. [13-15], the one-loop amplitude was given solely in terms of triangle integrals, proportional to $A^{\rm tree}_{6,\rm shifted}$. This is valid up to order $O(\epsilon)$, as the box integrals integrate to zero at $O(1)$. Having a result at one loop that is valid to all orders in $\epsilon$ will be extremely important for the construction of the two-loop amplitude.3 The integrated result is proportional to a step function, which as we will see nicely captures the non-trivial topology of 3d massless kinematics. More precisely, massless kinematics in three dimensions can be parameterized by points on $S^1$. For color-ordered amplitudes, distinct kinematic configurations can be categorized by a "winding number" which can be unambiguously defined. The sign function then simply distinguishes the configurations with even or odd winding number, for a given kinematic channel.
With the one-loop integrand in hand, we construct the two-loop amplitude by simply requiring that on the maximal cut of one of the sub-loops, one obtains the full one-loop integrand. This fixes the integrand up to possible double triangles, which are further fixed by soft-collinear constraints. We compute the integrals using both dimensional reduction regularization as well as mass regularization. This mass regulator can actually be given a physical interpretation in terms of moving to the Coulomb branch of the theory and giving the scalars a vev, similar to that used for SYM4 [28]. Interestingly, while the results for the individual integrals differ between the two schemes, they give, up to an additive constant, identical results when combined into the final physical amplitude.
Using the five-dimensional embedding formalism, the tree amplitude is multiplied by five-dimensional parity-even integrals, while the conjugate tree amplitude is multiplied by parity-odd integrals. Introducing the cross-ratios (only two of these are algebraically independent)
u_1 = \frac{(1\cdot3)(4\cdot6)}{(1\cdot4)(3\cdot6)}\,,\qquad u_2 = \frac{(2\cdot4)(5\cdot1)}{(2\cdot5)(4\cdot1)}\,,\qquad u_3 = \frac{(3\cdot5)(6\cdot2)}{(3\cdot6)(5\cdot2)}\,, \qquad (1.1)

the two-loop amplitude takes the form of a correction proportional to the tree amplitude, given by the one-loop BDS expression plus a remainder, together with a finite correction proportional to the one-loop amplitude. Here BDS$_6$ is the one-loop MHV amplitude of N = 4 SYM [29, 30], with a proper rescaling of the regulator to account for the fact that this is at two loops, and the remainder function $R_6$ is given as
R_6 = -2\pi^2 + \sum_{i=1}^{3}\Big[\operatorname{Li}_2(1-u_i) + \tfrac{1}{2}\log u_i \log u_{i+1} + \big(\arccos\sqrt{u_i}\big)^2\Big]\,.
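To get a concrete feel for this remainder, the following minimal Python sketch (not part of the paper) evaluates $R_6$ numerically with mpmath; it assumes real cross-ratios $0 < u_i < 1$, for which $\arccos\sqrt{u_i}$ is real, and the cyclic identification $u_4 = u_1$.

```python
# Illustrative numerical evaluation of the remainder function R_6 above,
# assuming real cross-ratios 0 < u_i < 1 (indices understood cyclically).
from mpmath import mp, polylog, log, acos, sqrt, pi

mp.dps = 30

def R6(u):
    total = -2 * pi**2
    for i in range(3):
        ui, uip1 = u[i], u[(i + 1) % 3]
        total += polylog(2, 1 - ui) + log(ui) * log(uip1) / 2 + acos(sqrt(ui))**2
    return total

print(R6((0.25, 0.5, 0.7)))   # sample kinematic point
```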
The χ_i are little-group-odd cross-ratios defined in (7.4); we warn the reader that these variables may require some care when analytically continuing to Minkowski kinematics. An alternative form of the amplitude with explicit dependence on conventional invariants is given in eq. (7.3). The presence of the BDS result demonstrates that the infrared divergence and the dual conformal anomaly equation of the two-loop ABJM theory are identical to those of one-loop SYM4. Furthermore, similar to SYM4, using the mass regulator we show how the anomaly equation can be converted into a statement of exact dual conformal symmetry in higher dimensions, with the mass playing the role of the extra dimension. The relevance of the $O(\epsilon)$ pieces can also be understood from unitarity cuts, where such terms may combine with collinear singularities of the tree amplitudes to give non-trivial two-loop contributions.

This paper is organized as follows. In section (2) we lay out some basic conventions, while in section (3) we begin with the discussion of the general one-loop dual conformal integrand and its integration in the embedding formalism. We then explicitly construct the one-loop six-point integrand as well as the integrated result. We end with a more detailed discussion of the properties of the one-loop amplitude in terms of the topological properties of three-dimensional kinematics. In section (4), we employ leading singularity methods and soft-collinear constraints to fix the two-loop integrand. In section (5) we briefly discuss two regularization schemes, dimensional reduction regularization and Higgs mass regularization, with special emphasis on the latter. In section (6) we use mass regularization to explicitly compute the integrals. In section (7) we combine the integrated expressions and give the complete six-point two-loop amplitude. We give a brief conclusion and a discussion of future directions in section (8).
Conventions
Since we will be interested in planar amplitudes, it is useful to define the dual coordinates
x_{i+1} - x_i = p_i\,. \qquad (2.1)
Special interest in the x i coordinates resides in the fact that planar amplitudes in ABJM theories are invariant under the so-called dual conformal transformations, which act as conformal transformations of the x i . To make the action of this symmetry simplest, and at the same time trivialize several operations which occur when doing loop computations, we will systematically use the so-called embedding formalism [31] (for more recent discussion, see [32]). The idea is to uplift three-dimensional x i 's to (projectively identified) null five-vectors
y_i := (x_i,\, 1,\, x_i^2) \qquad (2.2)
such that inverse propagators become the (2,3)-signature inner product
(i\cdot j) := y_i\cdot y_j := (x_i - x_j)^2\,. \qquad (2.3)
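As a quick sanity check of this embedding map, the short Python sketch below (an illustration, not from the paper) assumes a mostly-plus 3d metric diag(-1,+1,+1) and the (2,3)-signature pairing $y\cdot y' = -2\,x\cdot x' + u\,v' + v\,u'$ for $y = (x, u, v)$; it verifies that $y_i = (x_i, 1, x_i^2)$ is null and that $(i\cdot j) = (x_i - x_j)^2$.

```python
# Minimal check of the embedding-space conventions (2.2)-(2.3),
# under the stated metric/pairing assumptions.
import numpy as np

eta3 = np.diag([-1.0, 1.0, 1.0])            # 3d metric, signature (-,+,+)

def dot3(x, y):
    return x @ eta3 @ y

def embed(x):
    return np.concatenate([x, [1.0, dot3(x, x)]])   # y = (x, 1, x^2)

def dot5(y1, y2):
    # (2,3)-signature product reproducing (x1 - x2)^2
    return -2.0 * dot3(y1[:3], y2[:3]) + y1[3] * y2[4] + y1[4] * y2[3]

x1 = np.array([0.3, 1.2, -0.7])
x2 = np.array([-0.5, 0.4, 2.1])
y1, y2 = embed(x1), embed(x2)
assert abs(dot5(y1, y1)) < 1e-12                     # y_i is null
assert abs(dot5(y1, y2) - dot3(x1 - x2, x1 - x2)) < 1e-12
```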
The group of conformal transformations SO(2,3) of three-dimensional Minkowski spacetime is then realized linearly as the transformations of the y_i which preserve this inner product. It was shown in ref. [6] that the tree-level amplitude and loop-level integrand in ABJM invert homogeneously under dual conformal inversion:
I\,[A_n] = \Big(\prod_{i=1}^{n} x_i^2\Big)\, A_n\,. \qquad (2.4)
Due to the fact that at weak coupling the theory only has N = 6 supersymmetry, the on-shell states are organized into two different multiplets:
\Phi(\eta) = \phi^4 + \eta_I\,\psi^I + \tfrac{1}{2}\,\epsilon^{IJK}\eta_I\eta_J\,\phi_K + \tfrac{1}{3!}\,\epsilon^{IJK}\eta_I\eta_J\eta_K\,\psi^4\,,\qquad
\bar\Psi(\eta) = \bar\psi_4 + \eta_I\,\bar\phi^I + \tfrac{1}{2}\,\epsilon^{IJK}\eta_I\eta_J\,\bar\psi_K + \tfrac{1}{3!}\,\epsilon^{IJK}\eta_I\eta_J\eta_K\,\bar\phi_4\,, \qquad (2.5)
where $\eta_I$ are Grassmann variables in the fundamental of U(3) $\subset$ SU(4). The kinematic information is encoded in terms of SL(2,R) spinors $\lambda_\alpha$, with

s_{ij} = -\langle ij\rangle^2\,,\qquad \langle ij\rangle := \lambda_i^{\alpha}\lambda_j^{\beta}\,\epsilon_{\alpha\beta}\,, \qquad (2.6)

where $s_{ij} = x^2_{i,i+2}$ when $j = i+1$.
Note that $x^2_{ij}$ is positive when the corresponding momentum is spacelike, while $\langle ij\rangle^2$ is negative in that case. For a more detailed discussion of the on-shell variables $(\lambda^\alpha_i, \eta^I_i)$ see ref. [2]. In this paper, we will use the convention where the barred multiplet sits on the odd sites. The four-point amplitude is given as [2]:
A_4(1\,2\,3\,4) = \frac{4\pi}{k}\,\frac{\delta^3(P)\,\prod_{I=1}^{3}\delta^{(2)}(Q^I)}{\langle12\rangle\langle23\rangle}\qquad\text{with}\qquad \delta^{(2)}(Q^I) := \sum_{1\le i<j\le 4}\eta^I_i\,\langle ij\rangle\,\eta^I_j\,. \qquad (2.7)
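The following small Python sketch (illustrative, not from the paper) makes the spinor-bracket conventions concrete: using the null parametrization $p_i = E_i(1,\sin\theta_i,\cos\theta_i)$ introduced later in section 3.3, and assuming a mostly-plus metric diag(-1,+1,+1), it checks that $s_{ij} = (p_i+p_j)^2 = -\langle ij\rangle^2$ with $\langle ij\rangle = 2\sin\tfrac{\theta_j-\theta_i}{2}\sqrt{E_iE_j}$.

```python
# Numerical check of s_ij = -<ij>^2 for 3d massless kinematics,
# under the stated parametrization and metric assumptions.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0])
sq = lambda p: p @ eta @ p

def momentum(E, th):
    return E * np.array([1.0, np.sin(th), np.cos(th)])

def bracket(E1, th1, E2, th2):
    return 2.0 * np.sin((th2 - th1) / 2.0) * np.sqrt(E1 * E2)

E1, th1, E2, th2 = 1.3, 0.4, 2.1, 2.0
s12 = sq(momentum(E1, th1) + momentum(E2, th2))
print(np.isclose(s12, -bracket(E1, th1, E2, th2) ** 2))   # True
```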
One-loop integrand and amplitude
Dual conformal symmetry restricts the integral basis to be constructed of SO(2,3)-invariant projective integrals. At one loop, this restricts the n-point amplitude to be expanded on a basis of scalar triangles with appropriate numerators, as well as scalar box integrals with numerators constructed from the five-dimensional Levi-Civita tensor:

I_{\rm box}(i,j,k,l) = \int_a \frac{\epsilon(a,i,j,k,l)}{(a\cdot i)(a\cdot j)(a\cdot k)(a\cdot l)}\,. \qquad (3.1)
The integral in eq. (3.1) is analogous to the four-dimensional pentagon integral described in ref. [33], and it integrates to zero up to order $\epsilon$ in dimensional regularization [16]. To demonstrate how dimensional regularization is employed in the embedding formalism, we explicitly demonstrate this result in the following. We first note that eq. (3.1) can be rewritten using Feynman parametrization as
I_{\rm box}(i,j,k,l) = -\int dF\;\epsilon(i,j,k,l,\partial_Y)\int_a \frac{\Gamma[3]}{(a\cdot Y)^3}\,, \qquad (3.2)

where $dF := \prod_{i=1}^{4} d\alpha_i\,\delta(1-\sum_i\alpha_i)$ and $Y := \alpha_1 y_i + \alpha_2 y_j + \alpha_3 y_k + \alpha_4 y_l$. We now focus on the inner integral, which, for the purpose of dimensional regularization, we define in D dimensions:
I_0 = \Gamma[3]\int_a \frac{1}{(a\cdot Y)^3} := \Gamma[3]\int \frac{d^{D+2}a\;\delta(a^2)}{i(2\pi)^D\,\mathrm{Vol}(\mathrm{GL}(1))}\,\frac{1}{(a\cdot Y)^3\,(a\cdot I)^{D-3}}\,. \qquad (3.3)
Let us illuminate this definition of $\int_a$ by comparing it with (2.2). First, the GL(1) symmetry can be gauge-fixed by setting the next-to-last component of a to 1, at the price of a unit Jacobian. Then the $\delta(a^2)$ factor forces the last component of a to equal $x^2$, thus reducing $\int_a$ to the usual loop integration $\int \frac{d^Dx}{i(2\pi)^D}$.
Finally, the factor of i is removed by the Wick rotation from Minkowski to Euclidean space.
The key feature away from D = 3 is the factor $(a\cdot I)$, where $y_I := (\vec 0_D,\, 0,\, 1)$ is the infinity point. This signals the breaking of dual conformal symmetry, and is required to maintain the projective nature of the integrand (the GL(1) invariance) for arbitrary D. This feature remains clearly visible when switching to the easily obtained integrated expression:
I_0 = \frac{\Gamma\!\big(3-\tfrac{D}{2}\big)}{(4\pi)^{D/2}}\,\frac{1}{(I\cdot Y)^{D-3}\,\big(\tfrac12 Y^2\big)^{3-\frac{D}{2}}}\,. \qquad (3.4)
Plugging this into eq. (3.2), we find that the box integral gives:
I_{\rm box}(i,j,k,l) = \int\frac{dF}{(4\pi)^{D/2}}\left[\frac{\Gamma\!\big(4-\tfrac{D}{2}\big)\,\epsilon(i,j,k,l,Y)}{(I\cdot Y)^{D-3}\,\big(\tfrac12 Y^2\big)^{4-\frac{D}{2}}} + (D-3)\,\frac{\Gamma\!\big(3-\tfrac{D}{2}\big)\,\epsilon(i,j,k,l,I)}{(I\cdot Y)^{D-2}\,\big(\tfrac12 Y^2\big)^{3-\frac{D}{2}}}\right]. \qquad (3.5)
The first term vanishes due to the fact that Y is a linear combination of the four external coordinates, while the second term is at least $O(\epsilon)$ with $D = 3 - 2\epsilon$.
As the one-loop box integral vanishes, dual conformal symmetry implies that the amplitude, up to $O(\epsilon)$, can be expressed solely in terms of scalar triangles. However, as discussed in the introduction, for the purpose of constructing the two-loop integrand it will be extremely useful (and actually essential) to have a one-loop integrand valid beyond $O(\epsilon)$. In the following, we will derive the full one-loop six-point integrand that includes both the scalar triangle and the tensor box integrals. We note that the form of the amplitude in terms of scalar triangles was given in [13, 14].
Leading singularity and the one-loop integrand
At six points there are three possible box integrals: the one-mass, the two-mass-easy and the two-mass-hard box integrals.4 Using the five-term Schouten identity of the five-dimensional Levi-Civita tensor one finds the following linear identity among the box integrals:
I_{\rm box}(1,3,4,6) = I_{\rm box}(3,4,5,6) + I_{\rm box}(4,5,6,1) + I_{\rm box}(1,3,5,6) + I_{\rm box}(1,3,4,5)\,. \qquad (3.6)
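The algebraic fact behind such Schouten-type relations is simply that any six vectors in five dimensions are linearly dependent. The Python sketch below (an illustration, not from the paper) checks the corresponding determinant identity for generic vectors, with $\epsilon(\cdots)$ realized as the 5x5 determinant of its arguments.

```python
# Check of the linear-dependence identity underlying eq. (3.6):
#   sum_n (-1)^n eps(v_0,...,v_{n-1},v_{n+1},...,v_5) v_n = 0
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=(6, 5))           # six generic 5-vectors

total = np.zeros(5)
for n in range(6):
    minor = np.delete(v, n, axis=0)   # drop the n-th vector
    total += (-1) ** n * np.linalg.det(minor) * v[n]

print(np.allclose(total, 0.0))        # True
```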
Thus the two-mass-easy integral can be expressed as a linear combination of the two-mass-hard and one-mass integrals. We will use the latter two as the basis of box integrals. The relative coefficients of the box integrals can be easily fixed by requiring that the two-particle cuts which factorize the amplitude into two five-point tree amplitudes must vanish. Cutting in the $x^2_{14}$-channel, shown in fig. (1), this requires four box integrals to come in the following combination:

I_{\rm box}(3,4,5,1) + I_{\rm box}(1,2,3,4) - I_{\rm box}(4,5,6,1) - I_{\rm box}(6,1,2,4)\,. \qquad (3.7)
Figure 1. The particular combination of tensor box integrals in eq. (3.7) combines to give a vanishing two-particle cut $x^2_{a1} = x^2_{a4} = 0$. This cut must vanish, as it factorizes the amplitude into five-point tree amplitudes, which vanish.
Using the Schouten identity, one can show that this combination is actually invariant under cyclic permutation by one site up to overall sign. This extra sign will be important as we will discuss shortly.
The other allowed scalar integrals are the massive triangles. Their coefficients, along with those of the boxes, can be fixed by the two triple cuts $C_{1,2}$ (and their conjugates $C^*_{1,2}$), where the subscripts correspond to the two distinct maximal cuts, indicated as channels (1) and (2) in fig. (2). Explicitly they are given by:
C_1 = \int\prod_{I=1}^{3} d\eta^I_{l_1}d\eta^I_{l_2}d\eta^I_{l_3}\; A_4(1,2,l_2,-l_1)\,A_4(3,4,l_3,-l_2)\,A_4(5,6,l_1,-l_3)\,,
C_2 = \int\prod_{I=1}^{3} d\eta^I_{l_1}d\eta^I_{l_2}d\eta^I_{l_3}\; A_4(-l_1,2,3,l_2)\,A_4(-l_2,4,5,l_3)\,A_4(-l_3,6,1,l_1)\,.
We note that there is always an ambiguity in distinguishing C 1 versus C * 1 , since they arise from the two solutions of a quadratic equation. However, two convention-independent combinations always exist. One is the average of the two cuts C 1 + C * 1 and the other is the average of the leading singularities, LS 1 + LS * 1 = (C 1 − C * 1 )/[4 det(l 1 , l 2 , l 3 )(C 1 )], e.g., the numerators weighted by the Jacobian. Independence of the second combination follows from sign flip of the Jacobian on the two solutions, det(l 1 , l 2 , l 3 )(C 1 ) = − det(l 1 , l 2 , l 3 )(C * 1 ). The leading singularities have the following analytic form [6]:
LS_1 = \frac{\delta^3(P)\,\delta^6(Q)\,\prod_{I=1}^{3}(\alpha^{+I})}{2\,c^+_{25}\,c^+_{41}\,c^+_{63}}\,,\qquad LS_1^* = LS_1(+\to-)\,. \qquad (3.8)
The functions c ± ij and α ±I are defined as
c^{\pm}_{ij} := \frac{\langle i|p_{135}|j\rangle \mp i\,\langle i{+}2,\,i{-}2\rangle\langle j{-}2,\,j{+}2\rangle}{\sqrt{p^2_{135}}}\,,\qquad
\alpha^{\pm I} := \frac{-\big(\epsilon_{\bar i\bar j\bar k}\,\langle\bar i,\bar j\rangle\,\eta^I_{\bar k} \pm i\,\epsilon_{lmn}\,\langle l,m\rangle\,\eta^I_{n}\big)}{\sqrt{p^2_{135}}}\,,
where in the definition of $\alpha^{\pm I}$, the barred (un-barred) indices indicate even (odd) labels. One can conveniently fix the convention of $C_1$ and $C_1^*$ as

C_1 := 2\,\langle12\rangle\langle34\rangle\langle56\rangle\, LS_1\,,\qquad C_1^* = -C_1(+\to-)\,. \qquad (3.9)

As one can check, the two combinations $LS_1 + LS_1^*$ and $C_1 + C_1^*$ both have the correct little-group weights for an amplitude.
The leading singularities of ABJM have a dual presentation as residues of an integral over the orthogonal Grassmannian [10]. As discussed in ref. [6], at n = 2k points there are (k-2)(k-3)/2 integration variables in the orthogonal Grassmannian. This implies that at six points there are no integrals to be done, and one has a unique leading singularity from the Grassmannian (plus its complex conjugate due to the orthogonality condition). This implies that the second maximal cut $C_2$ and $C_2^*$ must be related to $C_1$ and $C_1^*$. Indeed one can check that
\frac{C_1 + C_1^*}{\langle12\rangle\langle34\rangle\langle56\rangle} = \frac{C_2 + C_2^*}{\langle23\rangle\langle45\rangle\langle61\rangle} = -2i\,A^{\rm tree}_{6,\rm shifted}\,, \qquad (3.10)
where we have further identified the combination as the tree amplitude rotated by one, A tree 6,shifted (123456) := A tree (234561). Note that all objects in this equation have the same little group weights (odd under reversal of the even λ's) so the identification makes sense.
A remarkable feature of 6-point kinematics is that the expressions for C 1 are explicit in terms of angle brackets, that is they contain no square roots. This reflects the fact that at six-points the cut solutions can be expressed explicitly in terms of angle brackets. Let us see this explicitly. At the same time, this will make apparent the following connection between the leading singularities and the BCFW form of the six-point tree amplitude,
A^{\rm tree}_6 = LS_1 + LS_1^* = \frac{C_1 - C_1^*}{2\,\langle12\rangle\langle34\rangle\langle56\rangle} = LS_2 + LS_2^*\,, \qquad (3.11)
in line with the original BCF logic [34] and as explained recently in [15]. The main point is that the on-shell condition $l_1^2 = l_2^2 = 0$ in channel (1) of fig. (2) indicates that the loop momentum spinors can be parameterized as

\lambda_{l_1} = \lambda_1\sin\theta + \lambda_2\cos\theta\,,\qquad \lambda_{l_2} = i(\lambda_1\cos\theta - \lambda_2\sin\theta)\,. \qquad (3.12)

This is precisely the BCFW parameterization discussed in [6]. On the double cut there are three poles as a function of $\cos\theta$, whose residues are respectively $LS_1$, $LS_1^*$, and $-A^{\rm tree}_6$. (The latter is located at $\cos\theta = 0$ and a computation of its residue is detailed in subsection (4.3), as part of our determination of the two-loop integrand.) The desired relation then follows from the fact that the three residues must sum to zero by Cauchy's theorem. For completeness, we record here the explicit solution which corresponds to $C_1$:
\sin\theta = i\,c^+_{45}\big/\sqrt{(c^+_{36})^2 - (c^+_{45})^2}\qquad\text{and}\qquad \cos\theta = c^+_{36}\big/\sqrt{(c^+_{36})^2 - (c^+_{45})^2}\,. \qquad (3.13)
From eq. (3.9), one also sees that C i has a non-uniform weight under conformal inversion:
I\,[C_1] = C_1\,\Big(\prod_{i=1}^{6}x_i^2\Big)\,x_1^2\,x_3^2\,x_5^2\,,\qquad I\,[C_2] = C_2\,\Big(\prod_{i=1}^{6}x_i^2\Big)\,x_2^2\,x_4^2\,x_6^2\,. \qquad (3.14)
We are now ready to use the maximal cut to completely fix the integrand. Two types of integrals contribute to the cut in channel (1) in fig. (2), the massive triangles as well as the "two-mass-hard" box integrals. As there are two solutions for the maximal cut in channel (1), giving different cut results C 1 and C * 1 , the massive triangle by itself cannot simultaneously reproduce both. This implies the need for the box integrals. On the cut the box integrals give:
I_{\rm box}(3,4,5,1)\big|_{C_1} = \sqrt{2}\,\langle12\rangle\langle34\rangle\langle56\rangle\,,\qquad I_{\rm box}(3,4,5,1)\big|_{C_1^*} = -\sqrt{2}\,\langle12\rangle\langle34\rangle\langle56\rangle\,, \qquad (3.15)

where $|_{C_1}$ indicates the maximal cut on which the integral is evaluated. A simple way to verify these formulas, up to a common sign, is to compare their square with the square of $\epsilon(a,3,4,5,1)/(a\cdot4)$ on the cut, using the identity

\epsilon(i_1,\ldots,i_5)\,\epsilon(j_1,\ldots,j_5) := \det\,(i_i\cdot j_j)\,, \qquad (3.16)

which in fact defines our normalization of the Levi-Civita tensor. The sign can be computed by a judicious use of eq. (A.2). Since the one-mass box must combine with the two-mass-hard box in the combination given in eq. (3.7), this fixes the final integrand that reproduces the correct maximal cut to be (stripping a loop factor $4\pi N/k$):
A^{\text{1-loop}}_6 = \frac{A^{\rm tree}_6}{\sqrt{2}}\Big[I_{\rm box}(3,4,5,1) + I_{\rm box}(1,2,3,4) - I_{\rm box}(4,5,6,1) - I_{\rm box}(6,1,2,4)\Big] + \frac{C_1+C_1^*}{2}\,I_{\rm tri}(1,3,5) + \frac{C_2+C_2^*}{2}\,I_{\rm tri}(2,4,6)\,. \qquad (3.17)

Note that under a cyclic shift by one site the cut functions pick up an additional sign, e.g.

C_1(\phi^4\bar\phi^4\phi^4\bar\phi^4\phi^4\bar\phi^4)\big|_{i\to i+1} = -C_2^*(\phi^4\bar\phi^4\phi^4\bar\phi^4\phi^4\bar\phi^4)\,. \qquad (3.18)
These additional signs are important for a non-vanishing one-loop amplitude, as we now discuss. In ABJM, the tree and even-loop six-point amplitudes are parity even under $k \to -k$, while odd loops are parity odd. As parity is believed to be non-anomalous, this naively forbids non-trivial corrections from odd loops unless these are odd under parity. Due to the change from $k \to -k$, we are really exchanging the two gauge groups U(N)$_k$ x U(N)$_{-k}$, thus inducing a cyclic shift in the identification of the barred and unbarred multiplets. Thus if the one-loop amplitude picks up a minus sign under the cyclic shift, this will compensate for its parity-odd nature and give an acceptable one-loop correction. This aspect of the one-loop amplitude has been discussed previously in refs. [12-14].
The one-loop amplitude
The box integrals integrate to zero, thus the one-loop amplitude, at order $O(\epsilon^0)$, is given solely by the massive triangles:5
A^{\text{1-loop}}_6 = \frac{N}{k}\left[\frac{\pi\,(C_1+C_1^*)}{4\sqrt{(1\cdot3)(5\cdot3)(1\cdot5)}} + \frac{\pi\,(C_2+C_2^*)}{4\sqrt{(2\cdot4)(4\cdot6)(6\cdot2)}}\right].
The fact that $(i\cdot i{+}2) = -\langle i\;i{+}1\rangle^2$ motivates the following definition [14]:

\operatorname{sgn}\,c_{ij} := \frac{\langle ij\rangle}{i\,\sqrt{-\langle ij\rangle^2 - i\epsilon}} = \pm 1\,.

Thus the one-loop amplitude is proportional to the tree amplitude shifted by one site, multiplied by a step function. This result has been obtained previously in [12-14].
In closing, we note that at six points there are only two distinct Yangian invariants, the sum and the difference of the leading singularity and its conjugate. Interestingly, both combinations are local quantities, with the difference appearing as the tree amplitude, while the sum appears as the one-loop amplitude. From eq. (3.8) this property is rather obscure; however it follows from the non-trivial identity, equivalent to eq. (A.4),

\langle i|j+k|l\rangle^2 + (p_i+p_j+p_k+p_l)^2\,\langle jk\rangle^2 = (p_i+p_j+p_k)^2\,(p_j+p_k+p_l)^2\,.

Thus the denominators of the leading singularities are in fact local propagators.6 However, only the sum of the leading singularities has the correct little-group weights to appear in an amplitude. The difference does not, unless it is multiplied by sign functions, which explains why it can appear only at loop level.
Analytic properties of the one-loop amplitude
The one-loop result (3.20) displays some remarkable properties which are worth spending some time on. In particular, step functions are rarely seen in loop amplitudes, so we need to understand well why they are allowed to appear in three space-time dimensions. First, we would like to give a topological interpretation to the region where the amplitude is nonzero. In Minkowski space, as null momenta can be parameterized as $p_i = E_i(1, \sin\theta_i, \cos\theta_i)$, the kinematic configuration of the scattering can be projected to a set of points on $S^1$. The first thing to notice is that the invariant $\langle ij\rangle$ flips sign whenever the points i, j on $S^1$ cross each other, as was also noted in [12-14]. This is easy to see by writing the invariants in terms of coordinates on $S^1$:
\langle ij\rangle = 2\,\sin\frac{\theta_2-\theta_1}{2}\,\sqrt{E_i + i\epsilon}\,\sqrt{E_j + i\epsilon}\,. \qquad (3.23)
Thus the function changes sign whenever point j crosses point i on $S^1$. It is thus natural to divide the phase space into chambers depending on the ordering of the angles of the particles; the one-loop amplitude is locally constant in each of these chambers. Given that the product of sign functions changes sign whenever two angles cross, the angular dependence can be given a simple topological interpretation in terms of a "winding number" counting the number of angle crossings compared to the color ordering. This can be defined as follows: if multiples of $2\pi$ are added to the angles such that they are strictly increasing, $0 < \theta_{i+1} - \theta_i < 2\pi$ for $i = 1,\ldots,6$, then $w := (\theta_7 - \theta_1)/(2\pi)$. Then one can show

\operatorname{sgn} c_{12}\,\operatorname{sgn} c_{34}\,\operatorname{sgn} c_{56}\;\operatorname{sgn} c_{23}\,\operatorname{sgn} c_{45}\,\operatorname{sgn} c_{61} = (-1)^w\,(-1)^k\,. \qquad (3.24)
The second factor $(-1)^k$ has a kinematical origin, coming from the factors $\sqrt{(i\cdot j) - i\epsilon}$, which can be real or imaginary depending on whether the given channel is space-like or time-like, respectively. The number k then simply equals the number of positive-energy timelike two-particle channels. We see that the 1-loop amplitude is a highly intricate function of the kinematical configuration.
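As a small illustration of the winding-number definition just given (not from the paper), the Python helper below lifts a list of angles on $S^1$ so that they are strictly increasing and reads off $w$ from the total increase of the lifted angles.

```python
# Illustrative computation of the "winding number" w of a color-ordered
# configuration of angles on S^1, as defined above.
import math

def winding_number(thetas):
    lifted = [thetas[0]]
    for th in list(thetas[1:]) + [thetas[0]]:
        # lift th so that it is strictly greater than the previous angle
        while th <= lifted[-1]:
            th += 2 * math.pi
        lifted.append(th)
    return round((lifted[-1] - lifted[0]) / (2 * math.pi))

# angular order matching the color order gives w = 1
print(winding_number([0.1, 0.9, 1.7, 2.8, 4.0, 5.5]))   # -> 1
# swapping two neighbours changes the winding
print(winding_number([0.9, 0.1, 1.7, 2.8, 4.0, 5.5]))   # -> 2
```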
Figure 3. (a) The amplitude $F_\delta$ in the presence of a small mass. It can be continued from $\theta < 0$ to $\theta > 0$ through a narrow window of size $\sim\delta$, which shrinks to zero size in the massless limit. (b) The advocated behavior in the massless setup, at a small but finite value of the coupling. A branch cut covers the whole imaginary axis but the discontinuity across it tends to zero at the origin.

As discussed in ref. [14], the fact that the one-loop amplitude is a step function can be readily understood from superconformal anomaly equations. Using the free representation for
the OSp(6|4) superconformal generators, it was shown that acting on the one-loop six-point amplitude with the linear generators, one must obtain an anomalous term proportional to $\delta(\langle ij\rangle)$, i.e. with support on regions where two external legs become collinear. As the generators are linear, single derivatives in the on-shell variables, this implies that the amplitude must be proportional to step functions, or equivalently, sign functions. However, we are rather disturbed by the notion of an amplitude vanishing in an open set but nonzero elsewhere; this would seem to clash with the amplitude being an analytic function of the external momenta. In the rest of this section, we will propose that the step-function behavior is not actually incompatible with analyticity of the amplitude, but is likely only an artifact of fixed-order perturbation theory.
It is useful to first ask what would happen if we added small masses to the internal propagators, still keeping the external lines massless. This could arise naturally by giving a vacuum expectation value to some of the scalars of the theory, as discussed in section (5). In that case, the sign-function singularity would split into two threshold singularities at $\theta = \pm\delta$ with $\delta = \sqrt{4m^2/(E_1E_2)}$. Schematically,

\operatorname{sgn}(\theta_2-\theta_1) \;\to\; F_\delta(\theta_2-\theta_1)\,, \qquad (3.25)
where $F_\delta(\theta_2-\theta_1)$ is an analytic function with an analytic window of width $2\delta$ around the origin.7 This amplitude is plotted in the complex $\theta$ plane in fig. (3). We see that as long as $m \neq 0$ there exists a small window of width $\delta$ around the origin along which the amplitude can be rightfully continued.

7 The precise form of $F_\delta$ can be worked out from the following exact expression for the internally massive loop integral, writing $\Delta = \big[x^2_{ij}x^2_{ik}x^2_{jk} + m^2(2x^2_{ij}x^2_{ik} + 2x^2_{ij}x^2_{jk} + 2x^2_{ik}x^2_{jk} - x^4_{ij} - x^4_{ik} - x^4_{jk})\big]^{1/2}$:

\int_a \frac{1}{[(a\cdot i)+m^2][(a\cdot j)+m^2][(a\cdot k)+\mu^2]} = \frac{1}{8\pi i\,\Delta}\,\log\frac{i\Delta + m\,(x^2_{ij}+x^2_{ik}+x^2_{jk}+8m^2)}{-i\Delta + m\,(x^2_{ij}+x^2_{ik}+x^2_{jk}+8m^2)}\,. \qquad (3.26)

In the collinear regime $x^2_{ij} \sim m^2$, this exhibits on the first sheet a pair of logarithmic branch points at the threshold $x^2_{ij} + 4m^2 = 0$. However, on the second sheet there is also a square-root branch point at $x^2_{ij} = m^2(x^2_{ik}-x^2_{jk})^2/(x^2_{ik}x^2_{jk})$. The latter could be visible with physical Minkowski space kinematics, depending on whether the $x_{ik}$ and $x_{jk}$ channels are time-like or not.
We also see clearly why such behavior is possible in three space-time dimensions but not in higher dimensions. In three dimensions the physical (real) phase space for a set of massless particles splits into chambers which are separated by singular, collinear configurations. To analytically continue from one chamber to the next one must avoid the singularity, since the amplitude is not required to be analytic around that point. But attempts to avoid the singularity by passing through the complex plane may fail: the singularity can be surrounded by cuts.
At the massless point but at the nonperturbative level, we expect an analogous situation but with a nonperturbatively small window of width $\delta \sim e^{-\#k/N}$. Indeed, in a theory where soft and collinear quanta are copiously produced, as is ABJM, we find it unlikely for a sharp feature such as a sign function to remain unwashed. Rather, the backreaction of the radiation on the outgoing hard quanta should smear the small-angle behaviour. In perturbation theory this would become visible through large logarithms $(N/k)\log 1/\theta$, which would have to be resummed at small angles. Indeed such logarithms will come out of our two-loop computation. Thus a more faithful model for the small-angle behavior at small but finite coupling should be a function of the sort
\operatorname{sgn}(\theta_2-\theta_1) \;\to\; \frac{\theta_2-\theta_1}{\big((\theta_2-\theta_1)^2\big)^{\frac12-\#N/k}}\,, \qquad (3.27)
which can be happily continued from the left region to the right region. It would be very interesting to investigate the small-angle behavior quantitatively and confirm that the discontinuity across the cut goes to zero as θ 2 − θ 1 → 0.
The two-loop six-point integrand
We shall now proceed to determine the two-loop six-point integrand from a variety of on-shell constraints. In ABJM theory we get a large number of constraints just from the fact that there are no 3- and 5-point on-shell amplitudes. This gives a large number of cuts on which the integrand must vanish. In addition, there are some very simple non-vanishing triple cuts associated with soft gluon exchanges which can be used to fix the remaining freedom. Our first goal in this section is thus to determine the two-loop hexagon integrand using just the following constraints:

0. The integrand is dual conformal invariant.
1. Cuts isolating a five-point amplitude must vanish.
2. Cuts isolating a three-point vertex must vanish.
3. Maximal cuts isolating a soft exchange between two adjacent external legs must reproduce the lower-loop integrand.

4. Absence of non-factorizable collinear divergences.

Figure 4. The cut configuration involving the dual points $x_a$, $x_{i-1}$, $x_i$ and $x_{i+1}$, in which the loop region $y_a$ approaches the external region $y_i$ (the soft-exchange limit discussed in subsection (4.3)).
As an example, we now show that by simply using steps 2 and 3, one completely fixes the four-point two-loop integrand to be that constructed in ref. [16]. This also illustrates the importance of obtaining the one-loop amplitude beyond $O(\epsilon)$. The one-loop four-point integrand is given by
\frac{A^{\rm tree}_4}{\sqrt{2}}\,\frac{\epsilon(a,1,2,3,4)}{(a\cdot1)(a\cdot2)(a\cdot3)(a\cdot4)}\,, \qquad (4.1)
where the unpleasant-looking factor of $\sqrt{2}$ is due to our normalization of the five-dimensional $\epsilon$-symbol as discussed around eq. (3.16). Now consider the triple cut of a double-box integral in fig. (5). On the cut, $x_a$ approaches $x_2$, and in this limit one should recover eq. (4.1). With a little thought one sees that the following double-box numerator does the job:
\frac{A^{\rm tree}_4}{2}\,\frac{\epsilon(a,1,2,3,*)\,\epsilon(b,3,4,1,*)}{(a\cdot1)(a\cdot2)(a\cdot3)(a\cdot b)(b\cdot3)(b\cdot4)(b\cdot1)}\,, \qquad (4.2)
where $\epsilon(a,i,j,k,*)\,\epsilon(b,l,m,n,*) := \epsilon(a,i,j,k,\mu)\,\epsilon(b,l,m,n,{}^{\mu})$. The detailed behavior of such a numerator under the cut condition will be discussed in subsection (4.3). This, however, is not complete, as one sees that there is a non-trivial contribution to the cut $(a\cdot3) = (a\cdot b) = (b\cdot3) = 0$. This cut separates out a three-point tree amplitude and hence must vanish. On this cut, using (3.16) and setting $y_a = y_b$ to restrict to an easy subcase, the double box gives a nontrivial contribution
-\frac{A^{\rm tree}_4}{2}\,\frac{(1\cdot3)^2}{(a\cdot1)(b\cdot1)}\,. \qquad (4.3)
One can easily see that this contribution can be cancelled by a double triangle integral. Thus combining requirements (2) and (3) uniquely fixes the two-loop four-point integrand to be:
A^{\text{2-loop}}_4 = A^{\rm tree}_4\,\cdots

Figure 5. The triple cut of the double-box integral. As $x_a$ approaches $x_2$ on the cut condition, one should obtain the one-loop integrand given in eq. (4.1).
Integrand basis
We begin by constructing the most general algebraic basis of dual-conformal integrals at two loops. In three dimensions, the most general two-loop integral is a double box

I^{ijk;lmn}_{\rm 2box}\big[(v_1\cdot a)(v_2\cdot b)\big] := \int_{a,b}\frac{(v_1\cdot a)(v_2\cdot b)}{(a\cdot i)(a\cdot j)(a\cdot k)(a\cdot b)(b\cdot l)(b\cdot m)(b\cdot n)}\,,
where $v_1$ and $v_2$ are some 5-vectors. Note that the presence of the numerator is required by dual conformal invariance, as the integrand must have scaling weight -3 with respect to both a and b.8 Numerators $v_1$ proportional to $y_i$, $y_j$ or $y_k$ are reducible, which would leave a priori 2 distinct numerators on each side. However, at 6 points constraint 2 above is very powerful, as it requires the numerator to have zeros on any double cut isolating a massless external leg. For dual conformal invariant integrals, this restricts the numerators to be of the $\epsilon$-type
I^{ijk;lmn}_{\rm 2box}\big[\epsilon(a,i,j,k,*)\,\epsilon(b,l,m,n,*)\big] \qquad\text{or}\qquad I^{ijk;kli}_{\rm 2box}\big[\epsilon(a,i,j,k,b)\big]\,,
where the second possibility is allowed only when $(k\cdot l)$ and $(j\cdot l)$ are both nonvanishing. Note that this latter parity-odd double-box integral has excessive weight on (i, k, l). At six points, this can be naturally absorbed by the extra weights of $C_1 + C_1^*$ shown in eq. (3.14). Six-point double-box integrals with a three-legged massive corner, and some with two-legged massive corners, will have two-particle cuts that factorize into a product of five-point amplitudes, as shown in fig. (6). Since five-point amplitudes vanish to all orders in $\epsilon$, the contributions of these double-box integrals must cancel out on such cuts, or they are not allowed in the integral basis. It is straightforward to see that the contributions are distinct and cannot cancel. Thus by imposing conditions 1 and 2 on one-loop subdiagrams, the allowed parity-even double-box integrals are restricted to

8 The absence of pentagon-boxes or more complicated topologies can be easily proved as follows. A pentagon would need a numerator quadratic in a. Let us assume the five propagators involving a are $(a\cdot1),\ldots,(a\cdot4)$ and $(a\cdot b)$. Then we can expand the numerator in terms of products $(a\cdot v_1)(a\cdot v_2)$ where the $(a\cdot v_i)$ are chosen to lie in the basis $(a\cdot1)$, $(a\cdot2)$, $(a\cdot3)$, $(a\cdot4)$ and $\epsilon(a,1,2,3,4)$. All numerators in this basis trivially cancel some propagator, except for $(\epsilon(a,1,2,3,4))^2$, which would appear to be irreducible. However, this can be reduced using the Gram identity (3.16) together with $a^2 = 0$.
Figure 6. Possible box integrals that have a non-trivial two-particle cut corresponding to a factorization channel which factorizes the amplitude into a product of 5-point amplitudes. The contributions to the cut from each box integral are distinct, leading to the conclusion that they will not appear.
I^{\rm 2mh}_{\rm even}(i) := \int_{a,b}\frac{\epsilon(a,i,i{+}1,i{+}2,*)\,\epsilon(b,i{+}2,i{-}2,i,*)}{(a\cdot i)(a\cdot i{+}1)(a\cdot i{+}2)(a\cdot b)(b\cdot i{+}2)(b\cdot i{-}2)(b\cdot i)}
I^{\rm crab}(i) := \int_{a,b}\frac{\epsilon(a,i,i{+}1,i{+}2,*)\,\epsilon(b,i{-}2,i{-}1,i,*)}{(a\cdot i)(a\cdot i{+}1)(a\cdot i{+}2)(a\cdot b)(b\cdot i{-}2)(b\cdot i{-}1)(b\cdot i)}
I^{\rm critter}(i) := \int_{a,b}\frac{\epsilon(a,i,i{+}1,i{+}2,*)\,\epsilon(b,i{+}3,i{+}4,i{+}5,*)}{(a\cdot i)(a\cdot i{+}1)(a\cdot i{+}2)(a\cdot b)(b\cdot i{+}3)(b\cdot i{+}4)(b\cdot i{+}5)}
I^{\rm 2mh}_{\rm odd}(i) := \int_{a,b}\frac{\epsilon(a,i,i{+}1,i{+}2,b)}{(a\cdot i)(a\cdot i{+}1)(a\cdot i{+}2)(a\cdot b)(b\cdot i{+}2)(b\cdot i{-}2)(b\cdot i)} \qquad (4.5)
where the subscripts even and odd denote the two-mass-hard integrals with parity-even and -odd numerators. The same conditions also leave the box-triangle integrals

I^{ijk;lm}_{\rm box;tri}\big[\epsilon(a,i,j,k,l)\big] := \int_{a,b}\frac{\epsilon(a,i,j,k,l)}{(a\cdot i)(a\cdot j)(a\cdot k)(a\cdot b)(b\cdot l)(b\cdot m)}\,. \qquad (4.6)
Other choices for the numerator here, such as the other natural choice $\epsilon(a,i,j,k,m)$, would be related by a Schouten identity plus double-triangle integrals. The box-triangles again have excessive weight, which would imply that they should come with factors of $C_1 + C_1^*$. We will see that they indeed arise in this way.
Finally, the conditions applied so far leave only three double-triangle integrals, namely
I^{i,i+2;\,i+2,i}_{\rm 2tri} := \int_{a,b}\frac{(i\cdot i{+}2)^2}{(a\cdot i)(a\cdot i{+}2)(a\cdot b)(b\cdot i)(b\cdot i{+}2)}
I^{i,i+2;\,i-2,i}_{\rm 2tri} := \int_{a,b}\frac{(i\cdot i{+}2)(i\cdot i{-}2)}{(a\cdot i)(a\cdot i{+}2)(a\cdot b)(b\cdot i{-}2)(b\cdot i)}
I^{i,i+2;\,i-3,i-1}_{\rm 2tri} := \int_{a,b}\frac{(i\cdot i{+}2)(i{-}1\cdot i{-}3)}{(a\cdot i)(a\cdot i{+}2)(a\cdot b)(b\cdot i{-}3)(b\cdot i{-}1)}\,. \qquad (4.7)
We now finish implementing constraint 2, the vanishing of all three-point sub-amplitudes.
Constraints from vanishing three point sub amplitudes
We consider the cut $(a\cdot3) = (a\cdot b) = (b\cdot3) = 0$, which separates out a three-point amplitude and thus must vanish. Two types of double boxes contribute to such a cut, $I^{\rm 2mh}$ and $I^{\rm crab}$, and they contribute:
(1)\quad \frac{(1\cdot3)(b\cdot2)\big[(a\cdot1)(3\cdot5) - (a\cdot5)(3\cdot1)\big]}{(a\cdot2)(a\cdot1)(b\cdot1)(b\cdot5)}\,,\qquad
(2)\quad -\frac{(b\cdot2)(1\cdot3)(a\cdot4)(3\cdot5)}{(a\cdot1)(a\cdot2)(b\cdot5)(b\cdot4)}\,,
(3)\quad \frac{(b\cdot4)(3\cdot5)\big[(a\cdot5)(3\cdot1) - (a\cdot1)(3\cdot5)\big]}{(a\cdot4)(a\cdot5)(b\cdot5)(b\cdot1)}\,,\qquad
(4)\quad 0\,,
where we have indicated the non-vanishing remainder on the cut. This was obtained by using eq. (3.16) to reduce the $\epsilon$-symbols to dot products and dropping terms which vanish on the cut. This can be simplified further when we take into account that the general solution to the cut is parametrized by $y_{a,b} = y_3 + \tau_{a,b}\,v$, where $v$ is any null five-vector such that $v\cdot3 = 0$. Physically, on the cut the two loop momenta are collinear with each other. Then one finds that the following combinations of double-box and double-triangle integrals vanish on the cut and are thus allowed:
I^{\rm 2mh}_{\rm even}(1) + I^{1,3;3,1}_{\rm 2tri} - I^{1,3;3,5}_{\rm 2tri} - I^{1,3;5,1}_{\rm 2tri}\,,\qquad
I^{\rm crab}(1) + I^{1,3;3,5}_{\rm 2tri}\,,

as well as $I^{\rm critter}$, $I^{i,i+2;\,i-1,i-3}_{\rm 2tri}$, $I^{i,i+1,i+2;\,i-1,i-3}_{\rm box;tri}$ and $I^{\rm 2mh}_{\rm odd}$.
In addition, the integral $I^{i,i+1,i+2;\,i+2,i-2}_{\rm box;tri}$ is immediately ruled out. In the following, we will use constraint 3 to fix the remaining coefficients of the double-box integrals.
Constraints from one-loop leading singularity
The particular cut we will be interested in is the maximal cut of one of the sub-loops with adjacent massless legs. This cut corresponds to a kinematic configuration where there is a soft exchange between the two external legs, as can be deduced from eq. (3.12) using the fact that $(a\cdot2)$ only gives a pole at $\cos\theta \to 0$. In terms of dual regions, this corresponds to the loop region $y_a \to y_i$, as illustrated in fig. (4).
More specifically, we compute the leading singularity $(a\cdot1) = (a\cdot2) = (a\cdot3) = 0$ of

\frac{\epsilon(a,1,2,3,*)}{(a\cdot1)(a\cdot2)(a\cdot3)(a\cdot i)}\,.
Normally there are two solutions to such a cut constraint, but let us verify explicitly that here there is only one solution, $y_a = y_2$, as claimed. To do so we expand a over a natural basis, such as $a = a_1 y_1 + y_2 + a_3 y_3 + a_4 y_4 + a_\epsilon y_\epsilon$, where $y_\epsilon := \epsilon(1,2,3,4,*)$. Imposing the two cuts $(a\cdot1) = (a\cdot3) = 0$ gives that $a_1 = 0$ and $a_3/a_4 = -(1\cdot4)/(1\cdot3)$, and thus $a_3 \propto a_\epsilon^2$ due to the $a^2 = 0$ constraint. Then $(a\cdot2) \sim a_\epsilon^2$, so the only solution is $a_\epsilon = 0$. Let us work out the details. Normalizing the leading singularities in a convenient way,
\frac{F(a)}{(a\cdot i)(a\cdot j)(a\cdot k)}\bigg|_{\text{residue}\;i,j,k} := 4\int_a \delta\big((a\cdot i)\big)\,\delta\big((a\cdot j)\big)\,\delta\big((a\cdot k)\big)\,F(a)\,, \qquad (4.8)
we have here

4\int_a = 4\int\frac{d^5a\,\delta(a^2)}{\mathrm{vol}(\mathrm{GL}(1))} = N\int da_1\,da_3\,da_4\,da_\epsilon\,\delta(a^2)\,,

where $N = \sqrt{2}\,\epsilon(y_1,y_2,y_3,y_4,y_\epsilon) = \sqrt{2}\,y_\epsilon^2$. After taking the first two cuts and evaluating the Jacobian from the $\delta$-functions we get

\int_a \frac{\delta\big((a\cdot1)\big)\,\delta\big((a\cdot3)\big)}{(a\cdot2)} = \frac{N}{y_\epsilon^2\,(1\cdot3)^2\,(2\cdot4)}\int\frac{da_\epsilon}{a_\epsilon^2}\,, \qquad (4.9)
where $a(a_\epsilon) = y_2 + a_\epsilon y_\epsilon + \frac{a_\epsilon^2\,y_\epsilon^2}{2(2\cdot4)}\big(\frac{(1\cdot4)}{(1\cdot3)}\,y_3 - y_4\big)$ (this could be mapped to the BCFW parametrization (3.12), since this solves the same cut constraints). Multiplying by $\epsilon(a,1,2,3,*)/(a\cdot i)$ and taking the residue at $a_\epsilon = 0$, we immediately get the box leading singularity

\frac{\epsilon(a,1,2,3,*)}{(a\cdot1)(a\cdot2)(a\cdot3)(a\cdot i)}\bigg|_{\text{residue}\;1,2,3} = \sqrt{2}\,\frac{(2\cdot*)}{(2\cdot i)}\,. \qquad (4.10)
Note that although the numerator vanishes on the cut solution $y_a = y_2$, reflecting the absence of three-point vertices in this theory, a nonvanishing residue remains due to the double pole in (4.9). The residue reflects the exchange of a zero-momentum Chern-Simons field. This physical origin implies that these leading singularities are "universal", and must reduce to the lower-loop integrand with the loop variable $y_a$ omitted. Thus, with a normalization easily fixed from the 1-loop integrand,
A^{\ell\text{-loop}}_n\Big|_{\text{residue}\;1,2,3} = A^{(\ell-1)\text{-loop}}_n\,. \qquad (4.11)
Similarly, but with an opposite sign due to k → −k,
A^{\ell\text{-loop}}_n\Big|_{\text{residue}\;2,3,4} = -A^{(\ell-1)\text{-loop}}_n\,. \qquad (4.12)
These relations can easily be verified to hold for the one-loop integrand (3.17), where the right-hand side reduces to the tree amplitude. However, these relations must hold at any loop order. In a sense they are analogous to the so-called rung rule [36]. At two loops, this requires seeing the one-loop integrand, i.e. eq. (3.17), emerge on the cut. Indeed one finds that the various pieces of the one-loop integrand do appear from the double-box and the box-triangle integrands. More specifically, for the cut $(a\cdot1) = (a\cdot2) = (a\cdot3) = 0$, omitting the common $\sqrt{2}$ factor, we find the following contributions:
+ \sum_{i=1}^{6}\alpha_i\, I^{\,i,i+2;\,i-3,i-1}_{\rm 2tri}\,,
where "cyclic x 2" implies cyclic shifts by two sites, and $C_{1,2}$, $C^*_{1,2}$ are defined as before. The presence of the one-loop integrand on the cut $(a\cdot1) = (a\cdot2) = (a\cdot3) = 0$ is shown in fig. (7). We see that at this point, the only remaining freedom is in the triangle integrals $I^{i,i+2;\,i-3,i-1}_{\rm 2tri}$. However, as we will now see, these integrals are "badly" collinear divergent and so they are constrained by other physical considerations.
Collinear divergences and the ABJM two-loop integrand
As was demonstrated in [37] (in the context of planar N = 4), the exponentiation of divergences leads to constraints which can be formulated in a very simple way at the level of the integrand, i.e. before even performing any integral. We will now formulate similar constraints in ABJM theory, but these will have a somewhat different flavor due to the absence of one-loop divergences.

Figure 7. The terms that contribute to the leading singularity $(a\cdot1) = (a\cdot2) = (a\cdot3) = 0$, which has to reduce to the one-loop integrand. The blue lines indicate the one-loop propagators that remain uncancelled after the cut. The term at the bottom of each diagram is the numerator factor. One can see that the combination is precisely the one-loop answer.
In ABJM theory, the twist-two anomalous dimensions which control the collinear and soft-collinear divergences begin at order $(N/k)^2$, i.e. at two loops. Thus the divergences at two loops are the leading ones and must be proportional to the tree amplitude in a specific way. Some qualitative constraints can be deduced in a simple way as follows. We note that soft divergences can be computed by replacing the external states by Wilson lines. Just this fact imposes two simple constraints. First, the coefficient of proportionality of the $1/\epsilon^2$ divergence must be a pure number, i.e. independent of the kinematics (ultimately, 6 times the so-called cusp anomalous dimension). Second, the kinematic dependence of the subleading $1/\epsilon$ divergence, which can arise from soft wide-angle radiation but not collinear radiation (and hence is controlled by the Wilson lines), can only be through the simple "dipole" invariants of the form $\frac{1}{\epsilon}\log\frac{x^2_{i,i+1}}{\mu^2_{\rm IR}}$. We will call divergences of these forms "factorizable". These are rather general constraints that any physically acceptable amplitude must possess, and we will see that they impose nontrivial constraints on the integrand.
We will consider the collinear divergence from the region collinear to momentum $p_3$. To have a divergence we need both loop momenta to be collinear, thanks to the special $\epsilon$-numerators, so we consider the limit

y_a \to y_3 + \tau_a\, y_4\,,\qquad y_b \to y_3 + \tau_b\, y_4\,. \qquad (4.14)
A first requirement is that the integrals proportional to the parity-odd structure, i.e. $C_i + C_i^*$, be finite. For the box-triangles, we find the following combination is free of divergences:

(C_1 + C_1^*)\, I^{4,\cdots}_{\rm box;tri}[\cdots]

To see that this combination is indeed finite in the collinear region, note that in the limit eq. (4.14), factoring out the divergent factors $1/[(a\cdot3)(a\cdot b)(b\cdot4)]$, one has

(C_1 + C_1^*)\,\frac{\epsilon(3,4,5,6,1)}{(5\cdot1)(a\cdot1)(3\cdot5)(b\cdot6)} \;-\; (C_2 + C_2^*)\,\frac{\epsilon(4,1,2,3,6)}{(2\cdot6)(a\cdot1)(4\cdot2)(b\cdot6)}\,,

where we have symmetrized in $(a \leftrightarrow b)$. The above combination vanishes thanks to the following identity:
\frac{C_1 + C_1^*}{C_2 + C_2^*} = -\frac{\epsilon(6,1,2,3,4)\,(3\cdot5)(5\cdot1)}{\epsilon(3,4,5,6,1)\,(6\cdot2)(2\cdot4)}\,. \qquad (4.15)
This identity is proven in appendix (A). Thus we conclude that the parity-odd part of the integrand in eq. (4.14) is already complete, provided that the box-triangle numerators are chosen as there.
We now turn to the parity-even sector. We need to study the divergences of the yet-unconstrained integral $I^{1,3;4,6}_{\rm 2tri}$ in more detail. Integrating out the remaining variables around the limit (4.14), one easily obtains the divergent contribution from the collinear region

\int_{a,b}\frac{1}{(a\cdot3)(a\cdot b)(b\cdot4)(a\cdot1)(b\cdot6)} \;\propto\; \log\mu^2 \int_{0<\tau_a<\tau_b<\infty}\frac{d\tau_a\,d\tau_b}{\tau_a(\tau_b-\tau_a)\,(a\cdot1)(b\cdot6)}\,,

where a and b are as in (4.14). Such a divergence violates factorizability in two ways: it depends on $y_1$ through $(a\cdot1)$ and on $y_6$ through $(b\cdot6)$. This leads, for instance, to dependence on the cross-ratio $u_1$. A quick look at (4.14) reveals that the only other integral with potentially similar dependence on $\tau_{a,b}$ is $I^{\rm critter}$. However, the divergence cancels exactly, pre-integration, in the combination $I^{\rm critter}(1) + I^{1,3;4,6}_{\rm 2tri}$.
Thus we finally arrive at the complete integrand for the two-loop six-point amplitude in ABJM theory:

A^{\text{2-loop}}_6 = \Big(\frac{4\pi N}{k}\Big)^{2} A\,\cdots
Interlude: Infrared regularization using the Higgs mechanism
The two-loop amplitude is infrared divergent and must be regulated in some way. For inspiration we can look at the four-dimensional sibling of ABJM, N = 4 SYM. In that theory there exists a canonical and self-contained infrared regularization, associated to giving small vacuum expectation values to the scalar fields of the theory [28]. The fields running in loops then acquire masses through the Higgs mechanism, rendering the loop integrations finite. Does a similar regularization exist in ABJM theory? As was shown in the original paper [1], this theory has a moduli space $(\mathbb{C}^4/\mathbb{Z}_k)^N$, where N characterizes the SU(N) x SU(N) gauge group and k is the level. It is described simply by diagonal vacuum expectation values (vevs) for the 4 scalar fields, $\phi^A = \mathrm{diag}(v^A_i)$ (with corresponding vevs for the conjugate fields, $\bar\phi_A = \mathrm{diag}((v^A_i)^\dagger)$). The $\mathbb{Z}_k$ identifications will play no role in what follows, although we will be able to see that our formulas are invariant under them.
The first question to address is what is the spectrum of the theory at a given point on the moduli space. While we have not found the general answer to this question in the literature, this can easily be answered in perturbation theory in the usual way by studying the linearized action for fluctuations around the vacuum. (Due to the amount of supersymmetry, it is plausible that the resulting spectrum is valid for all values of the coupling, although this will not be important for us.) To be safe we have computed the linearized action for both scalar, fermion and gauge field fluctuations, and confirmed that the spectra are related by supersymmetry as required. These computations are reproduced in appendix (B). From the linearized action it is then possible to find the poles in the propagators and read off the spectrum.
The result is very simple. We find that the diagonal fields remain massless, while the off-diagonal fields stretching between i and j acquire the mass squared
m^2_{ij} = (v_i\cdot\bar v_i + v_j\cdot\bar v_j)^2 - 4\,(v_i\cdot\bar v_j)(v_j\cdot\bar v_i)\,. \qquad (5.1)
Note that this vanishes when $v_i = v_j$, as expected.9 Furthermore, when $m^2_{ij}$ is nonzero, the computation in the appendix demonstrates that the corresponding components of the gluon propagator lack a pole at zero momentum. Thus all modes acquire a mass. This is in contrast with the mass-deformed supersymmetric CSM amplitudes discussed in [11].
Following ref. [28], this can be used to regulate planar amplitudes. The idea is to split SU( ... However, as long as none of the SU(M) vevs vanishes, all outermost propagators in a Feynman diagram will be massive, as depicted in fig. (8). This ensures the finiteness of the loop integrations. For the purpose of regularization we will restrict to the simplest setup, taking all nonzero vevs to be aligned in the SU(4) directions: $v^A_i = \delta^A_1 v_i$. Then the mass formula reduces to
m^2_{ij} = (m_i - m_j)^2 \qquad\text{(aligned vevs).} \qquad (5.2)
where m i := |v i | 2 . We see that the ABJM masses behave exactly like the extra-dimensional coordinates in N = 4 SYM discussed in [28]! In the remainder of this section, we discuss how to implement this regulator in a simple way within the embedding formalism. This will be applied to numerous examples in the next section.
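A quick numerical illustration of eqs. (5.1)-(5.2) (not from the paper; the vevs are modelled as complex 4-vectors, with $v_i\cdot\bar v_i = |v_i|^2$):

```python
# Sketch checking the mass formula (5.1) and its aligned-vev limit (5.2).
import numpy as np

def m2(vi, vj):
    nii = np.vdot(vi, vi).real
    njj = np.vdot(vj, vj).real
    nij = np.vdot(vj, vi)              # v_i . bar(v_j)
    return (nii + njj) ** 2 - 4.0 * (nij * np.conj(nij)).real

rng = np.random.default_rng(0)
v1 = rng.normal(size=4) + 1j * rng.normal(size=4)
v2 = rng.normal(size=4) + 1j * rng.normal(size=4)
print(m2(v1, v2) > 0, np.isclose(m2(v1, v1), 0.0))   # generic vevs; vanishes for v_i = v_j

# aligned vevs v_i^A = delta^A_1 v_i: the mass reduces to (|v_1|^2 - |v_2|^2)^2
a, b = 0.7, 1.9
va = np.array([a, 0, 0, 0], dtype=complex)
vb = np.array([b, 0, 0, 0], dtype=complex)
print(np.isclose(m2(va, vb), (a**2 - b**2) ** 2))     # True
```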
Following the extra-dimensional interpretation of the masses it is natural to enlarge the external five-vectors y i to six-vectors
y^{(6)}_i = (x_i,\; 1,\; x_i^2 + m_i^2,\; m_i) \qquad (5.3)
with an inner product defined such that $(i\cdot i)^{(6)} = 0$ for vectors of this form. Then one can verify that $(i\cdot j)^{(6)} = (x_i - x_j)^2 + (m_i - m_j)^2$, automatically generating the correct internal masses provided that the 6-dimensional product is used in propagators. Furthermore, the on-shell constraints are simply $(i\cdot i{+}1)^{(6)} = 0$. Regarding loop integrations, we set the extra-dimensional component of the loop variables to zero, $a = (x, 1, x^2, 0)$, i.e. the loop variables remain 5-dimensional. Then all propagators come out correctly. Since only the five-dimensional components of the vectors $y_i$ couple to the loop variables, it is immediate that all Feynman parametrization formulas of section (3) go through unchanged. One must simply continue to use the five-dimensional inner product $(i\cdot j) := (x_i - x_j)^2 + m_i^2 + m_j^2$ in them. The five-dimensional inner product of the external y's obeys the following identity
(i\cdot i{+}1)^2 - (i\cdot i)\,(i{+}1\cdot i{+}1) = 0\,, \qquad (5.4)
which can be seen to be equivalent to the on-shell relation (5.2) for the external states.
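The six-dimensional uplift can be checked numerically in the same spirit as the earlier embedding sketch. The Python snippet below (illustrative; it re-assumes the 3d metric diag(-1,+1,+1) and redefines the helpers so it is self-contained) verifies the two inner products quoted above and the nullness $(i\cdot i)^{(6)} = 0$.

```python
# Check of the massive embedding products (5.3) and the surrounding text.
import numpy as np

eta3 = np.diag([-1.0, 1.0, 1.0])
dot3 = lambda x, y: x @ eta3 @ y

def embed6(x, m):
    return np.concatenate([x, [1.0, dot3(x, x) + m**2, m]])

def dot5(y1, y2):                       # uses only the first five components
    return -2.0 * dot3(y1[:3], y2[:3]) + y1[3] * y2[4] + y1[4] * y2[3]

def dot6(y1, y2):                       # subtracts the mass-mass piece
    return dot5(y1, y2) - 2.0 * y1[5] * y2[5]

x1, x2 = np.array([0.3, 1.2, -0.7]), np.array([-0.5, 0.4, 2.1])
m1, m2 = 0.2, 0.5
y1, y2 = embed6(x1, m1), embed6(x2, m2)
d2 = dot3(x1 - x2, x1 - x2)
assert abs(dot6(y1, y2) - (d2 + (m1 - m2)**2)) < 1e-12
assert abs(dot5(y1, y2) - (d2 + m1**2 + m2**2)) < 1e-12
assert abs(dot6(y1, y1)) < 1e-12        # (i.i)^(6) = 0
```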
A simple consequence of this procedure is that, as long as integrands written in terms of the $y^{(6)}$ are SO(2,3)-covariant, the resulting amplitude will be invariant under the modified dual conformal generator

K^\mu A_n = 0\,,\qquad \text{where}\quad K^\mu = \sum_{i=1}^{n}\Big[x_i^2\frac{\partial}{\partial x_i^\mu} - 2x_i^\mu\, x_i\cdot\frac{\partial}{\partial x_i} - 2x_i^\mu\, m_i\frac{\partial}{\partial m_i} - x_i^\mu\Big]\,. \qquad (5.5)
This equation is essentially trivial by assumption, and will remain true as long as the integrals are indeed rendered finite by the regularization. An important question is whether the SO(2,3) symmetry is an actual property of the ABJM integrand even for finite values of the masses. We expect this to be the case, although we cannot prove it. The logic is that the dual conformal symmetry SO(2,3) is associated with integrability, which we do not expect to vanish into thin air just because one moves away from the origin of moduli space. Indeed, physically, the Higgs branch can be explored by considering amplitudes at the origin of moduli space but with soft scalars added, as was demonstrated in the context of tree amplitudes in refs. [38]. By exploring such a construction in three dimensions, it might even be possible to establish whether the dual conformal symmetry of tree amplitudes, hence presumably of loop integrands by unitarity, holds away from the origin of moduli space. 10 In the present paper we work only to lowest order in the masses, e.g. we keep only the logarithmic dependence on them. At that level the SO(2,3) symmetry is more or less tautological as it is the same as the existing dual conformal symmetry. Thus to logarithmic accuracy in the masses the validity of (5.5) is already guaranteed by existing results.
Let us elaborate on eq. (5.5). A consequence of it together with the on-shell condition (5.4) is that the dependence on the individual m i can be determined simply from consideration of conformal weights. For instance, suppose an amplitude is known in the case that all internal masses are equal and all external masses vanish. Then, the most general "aligned" case with internal masses m i (and thus generic external masses) can be obtained (to the same order in the small mass expansion) through the simple substitution
\frac{x^2_{ij}}{\mu^2_{\rm IR}} \;\longrightarrow\; \frac{x^2_{ij}}{m_i\, m_j}\,.
The point is that there are no ratios of the masses invariant under (5.5). Therefore, with no loss of generality, the Higgs regulator to logarithmic accuracy (and perhaps more generally) can be summarized by the simple rule
y_i \;\longrightarrow\; y_i + \mu^2_{\rm IR}\, y_I\,,\qquad\text{i.e.}\qquad (x_i,\,1,\,x_i^2) \;\longrightarrow\; (x_i,\,1,\,x_i^2 + \mu^2_{\rm IR})\,, \qquad (5.6)
for each external region momentum, where we only need to keep track of the five-dimensional components of the $y_i$, and where the massless external momenta remain undeformed. This recipe is the main result of this section.
Alternatively, infrared divergences can be regulated using dimensional regularization. Due to the presence of Levi-Civita tensors in Chern-Simons theory, dimensional regularization has always been used with a great deal of caution. However, as one can use tensor algebra in three dimensions to convert the Levi-Civita tensors into Lorentz-invariant scalar dot products and then analytically continue to $D = 3 - 2\epsilon$, this is more similar to dimensional reduction regularization, commonly applied to supersymmetric theories. This regularization scheme has been shown to be gauge invariant up to three loops for Chern-Simons-like theories in ref. [41], and has also been applied to Wilson-loop computations in refs. [42, 43], establishing duality with the amplitude result. We will demonstrate below that the individual dimensionally regulated integrals differ functionally from the mass-regulated results. However, when combined into the physical amplitude, the two regulated results agree.
Computation of two-loop integrals
In this technical section we describe our computation of the two-loop integrals relevant for the two-loop hexagon. Although we feel that some of the tricks employed here can find application elsewhere, the reader not interested in these details can safely skip to the next section.
Certain integrals, or combinations of integrals, are absolutely convergent and can be computed directly in D = 3. Examples are $I^{\rm 2mh}_{\rm odd}$, $I^{\rm critter}(1) + I^{1,3;4,6}_{\rm 2tri}$, or a certain combination of the odd box-triangles described below. It is very convenient to treat these combinations separately, since they can be evaluated without regularization. Furthermore they automatically give rise to functions of the cross-ratios $u_i$. On the other hand, IR-divergent integrals cannot be avoided. In this section we will use the Higgs regulator described in the previous section.
We will describe the various steps in our integration method, starting from the steps common to all integrals.
A Feynman parametrization trick
There is a particular version of Feynman parameterization which is particularly effective for our calculations. It is inspired by a formula obtained in [44] using intuition from Mellin space techniques, but can be derived very simply by a judicious change of variable in standard Feynman parameter space as was demonstrated in ref. [45].
We illustrate it in detail in the case of the double triangle I 1,3;3,1 2tri , the other integrals are entirely similar. The first step starting from the definition (4.7) is the usual Feynman (Schwinger) trick
I^{1,3;3,1}_{\rm 2tri} = \Gamma(3)^2\int_0^\infty \frac{[d^1a_1a_3]}{\mathrm{vol}(\mathrm{GL}(1))}\,\frac{[d^1b_1b_3]}{\mathrm{vol}(\mathrm{GL}(1))}\int_{a,b}\frac{(1\cdot3)^2}{(a\cdot A)^2(a\cdot b)(b\cdot B)^2}\,, \qquad (6.1)

where $A = \sum_{i=1,3}a_i\,y_i$ and $B = \sum_{i=1,3}b_i\,y_i$.
A word about the notation. The 1/vol(GL(1)) symbol means to break the projective invariance (a i , b i ) → α(a i , b i ) by inserting any factor which integrates to 1 on GL(1) orbits.
Standard choices include $\delta(\sum_i a_i - 1)$, which gives the Feynman parameter measure dF of section (3), or $\delta(a_1 - 1)$, which gives rise to Schwinger parameters. The choice of gauge-fixing function will play no role in what follows, and for all practical purposes it can be gleefully ignored until the final step.
The loop integrals over a, b can all be done using only the one-loop integral (3.4). Since the Higgs regulator already renders the integrals finite, we set D = 3 immediately to obtain (to avoid cluttering the formulas in this section, we will strip a factor 1/(4π) for each loop)
\Gamma(3)\int_a\frac{1}{(a\cdot A)^3} \;\to\; \frac{1}{4\,\big(\tfrac12 A\cdot A\big)^{3/2}}\,. \qquad (6.2)
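Eq. (6.2) just above can be checked numerically. The Python sketch below (an illustration, not from the paper) uses the fact that for $a = (x,1,x^2)$ and $A = \sum_i\alpha_i y_i$ one has $(a\cdot A) = A_0(x-\bar x)^2 + C$ with $A_0 = \sum_i\alpha_i$, so after Wick rotation the integral reduces to a one-dimensional radial integral; Euclidean external points $x_i$ are assumed so the integral is manifestly convergent, and a factor $1/(4\pi)$ per loop is stripped, as in the text.

```python
# Radial-integral check of eq. (6.2) in D = 3 (illustrative sketch).
import math
import numpy as np
from scipy.integrate import quad

alpha = np.array([0.7, 1.3, 0.4])                  # Feynman parameters
xs = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, -0.3], [0.4, -1.1, 0.8]])  # Euclidean x_i

A0 = alpha.sum()
xb = (alpha[:, None] * xs).sum(axis=0) / A0
C = (alpha * (xs**2).sum(axis=1)).sum() - A0 * (xb @ xb)
half_AA = A0 * C                                   # (1/2) A.A for these points

lhs = 4 * math.pi * quad(lambda r: 2.0 * r**2 / (A0 * r**2 + C) ** 3, 0, math.inf)[0] \
      / (2 * math.pi) ** 3 * (4 * math.pi)         # Gamma(3) = 2; strip one 1/(4 pi)
rhs = 1.0 / (4.0 * half_AA ** 1.5)
print(np.isclose(lhs, rhs))                        # True
```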
By repeatedly using this formula and its corollary, valid for $b^2 = 0$,

\int_a\frac{1}{(a\cdot A)^2(a\cdot b)} = \Gamma(3)\int_0^\infty df\int_a\frac{1}{\big(a\cdot(A+fb)\big)^3} = \frac{1}{2}\,\frac{1}{\sqrt{\tfrac12 A\cdot A}\;(A\cdot b)}\,, \qquad (6.3)
we derive the following key formula:

\int_{a,b}\frac{1}{(a\cdot A)^2(a\cdot b)(b\cdot B)^2} = \frac{1}{2}\int_b\frac{1}{\sqrt{\tfrac12 A\cdot A}\;(A\cdot b)(b\cdot B)^2}
= \frac{1}{8}\int_0^\infty\frac{de}{\sqrt{\tfrac12 A\cdot A}\;\big(\tfrac12(B+eA)\cdot(B+eA)\big)^{3/2}}
= \int_0^\infty\frac{dc}{4\pi\sqrt{c}}\int_0^\infty\frac{de}{\big(c\,\tfrac12 A\cdot A + \tfrac12(eA+B)\cdot(eA+B)\big)^2}\,. \qquad (6.4)
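The two one-parameter identities that take one from the second to the last form of (6.4) can also be verified numerically. The Python sketch below (illustrative, with arbitrary positive test values X and Y) checks both.

```python
# Numerical sanity check of the parameter identities behind eq. (6.4):
#   1/(X^2 Y)           = Gamma(3) * int_0^inf df (X + f Y)^(-3)
#   1/(sqrt(X) Y^(3/2)) = (2/pi)  * int_0^inf dc c^(-1/2) (c X + Y)^(-2)
import math
from scipy.integrate import quad

X, Y = 1.7, 0.6

lhs1 = 1.0 / (X**2 * Y)
rhs1 = 2.0 * quad(lambda f: (X + f * Y) ** -3, 0, math.inf)[0]

f2 = lambda c: c ** -0.5 * (c * X + Y) ** -2
lhs2 = 1.0 / (math.sqrt(X) * Y ** 1.5)
rhs2 = (2.0 / math.pi) * (quad(f2, 0, 1)[0] + quad(f2, 1, math.inf)[0])

print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)   # True True
```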
The key idea here is the introduction of the new Feynman parameter c in the last step, as done in [45]. Although it could be removed immediately, it will prove advantageous to leave it untouched until the final stage. For example, this will allow us to postpone dealing with square roots until the very end. Upon substituting (6.4) into (6.1), one notes that the variable e is charged under both GL(1) symmetries. Therefore, we are allowed to gauge-fix one of them by setting e = 1, which effectively locks the two GL(1)'s together. This will always be the case: the variable e is always removable in this way. Thus we have
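As a consistency check on the normalization of the dc/(4π√c) measure (our own addition, not part of the original derivation), the last step of (6.4) is just the elementary Beta-function integral:

\int_0^\infty \frac{dc}{\sqrt{c}\,(cX+Y)^2}
 \;\overset{c = tY/X}{=}\; \frac{1}{\sqrt{X}\,Y^{3/2}} \int_0^\infty \frac{dt}{\sqrt{t}\,(1+t)^2}
 = \frac{B(\tfrac12,\tfrac32)}{\sqrt{X}\,Y^{3/2}}
 = \frac{\pi}{2\,\sqrt{X}\,Y^{3/2}},

so that, with X = \tfrac12 A\cdot A and Y = \tfrac12 (eA+B)\cdot(eA+B),

\frac{1}{8\,\sqrt{X}\,Y^{3/2}} \;=\; \int_0^\infty \frac{dc}{4\pi\sqrt{c}}\, \frac{1}{(cX+Y)^2}.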
I 13;31 2tri = ∞ 0 dc 4π √ c [d 3 a 1 a 3 b 1 b 3 ] vol(GL(1)) (1 · 3) 2 (1 + c) 1 2 A · A + A · B + 1 2 B · B 2 . (6.5)
As mentioned, this is similar to the formula for the double-box obtained in [44]. So far all we have done is rewrite the standard Feynman parameter integral in some specific form. As we will now see, in all cases the variables a i , b i can be integrated out rather straightforwardly, and will generate some logarithms or dilogarithms to be integrated over c. The c integration in the final step then poses no particular difficulty.
Divergent double-triangles
Let us carry out the remainder of this procedure for I^{1,3;3,1}_{2tri}, starting from (6.5). Notice that we haven't said anything about the regularization yet. This is because everything is fully accounted for by the rules (5.6). According to them, we simply have to take (i · j) → x^2_{ij} + 2µ^2_{IR}. After evaluating the dot products and doing a simple rescaling of the integration variables, the double-triangle is thus easily seen to depend only on the ratio ε := µ^2_{IR}/x^2_{13}:

I^{13;31}_{2tri} = \int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int \frac{[d^3 a_1 a_3 b_1 b_3]}{vol(GL(1))}\, \frac{1}{\big[(a_1+b_1)(a_3+b_3) + c\,a_1 a_3 + \varepsilon\big((a_1+a_3+b_1+b_3)^2 + c(a_1+a_3)^2\big)\big]^2}.
We are interested in the small mass limit ε ≪ 1. The only sensitivity to ε comes from the two regions where a_1, b_1 ∼ ε or a_3, b_3 ∼ ε; these correspond physically to collinear configurations. However, everywhere else we can ignore ε. Consequently, let us parametrize the variables as
a 1 = 1, b 1 = x, a 3 = a, b 3 = ay, [d 3 a 1 a 3 b 1 b 3 ] vol(GL(1)) = adadxdy
such that the dangerous regions are a = 0 and a = ∞. Since the region a > 1 contributes the same as a < 1, by symmetry, we need only consider the former and multiply it by 2. Furthermore, in that region we can neglect a in the terms proportional to ε, since they are only needed when a → 0. Thus

I^{13;31}_{2tri} = 2\int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int_0^1 a\,da \int_0^\infty dx\,dy\, \frac{1}{\big[a\big((1+x)(1+y)+c\big) + \varepsilon\big((1+x)^2+c\big)\big]^2}
 = \int_0^\infty \frac{dc}{2\pi\sqrt{c}} \int_0^\infty dx\,dy\, \frac{\log\frac{(1+x)(1+y)+c}{\varepsilon((1+x)^2+c)} - 1}{\big((1+x)(1+y)+c\big)^2}
 = -\log\frac{4\mu^2_{IR}}{x^2_{13}} + O(\mu_{IR}).

(This was most readily done by evaluating the integrals in the following order: y, x and c.) The second type of double-triangle, I^{13;35}_{2tri}, is entirely similar. The general formula (6.4) gives directly, after a simple rescaling of the variables,

I^{13;35}_{2tri} = \int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int \frac{[d^3 a_1 a_3 b_3 b_5]}{vol(GL(1))}\, \frac{1}{\big[(a_1+b_5)(a_3+b_3) + a_1 b_5 + c\,a_1 a_3 + \varepsilon\big((a_1+a_3+b_3+b_5)^2 + c(a_1+a_3)^2\big)\big]^2}

with ε = µ^2_{IR}\, x^2_{15}/(x^2_{13}\, x^2_{35}).
The dangerous region is the collinear region a_1 → 0 and b_5 → 0, so up to power corrections in ε we can drop a_1 and b_5 in the terms multiplying ε. These can then be easily integrated out, leaving:
I^{13;35}_{2tri} = \int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int_0^\infty db_3\, \frac{\log\frac{(1+b_3)(1+b_3+c)}{(1+b_3)^2+c} - \log\varepsilon}{(1+b_3)(1+b_3+c)} = 1 - \frac12 \log\frac{4\mu^2_{IR}\, x^2_{15}}{x^2_{13}\, x^2_{35}} + O(\mu_{IR}). \qquad (6.7)
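As a quick numerical sanity check of (6.7) (our own addition, not part of the original text), the following Python sketch evaluates the double integral above for a small value of ε := µ^2_{IR} x^2_{15}/(x^2_{13} x^2_{35}), with the integrand grouped as transcribed here, and compares it against 1 − ½ log(4ε):

# Numerical check of eq. (6.7); the grouping of the integrand follows the
# transcription above, so treat this as a consistency sketch only.
import mpmath as mp

mp.mp.dps = 10
eps = mp.mpf("1e-4")   # stands in for mu_IR^2 x15^2/(x13^2 x35^2), assumed small

def c_integrand(c, b3):
    B = 1 + b3
    num = mp.log(B * (B + c) / (B**2 + c)) - mp.log(eps)
    return num / (B * (B + c) * 4 * mp.pi * mp.sqrt(c))

def inner(b3):
    # integrable 1/sqrt(c) endpoint singularity; tanh-sinh quadrature handles it
    return mp.quad(lambda c: c_integrand(c, b3), [0, mp.inf])

lhs = mp.quad(inner, [0, mp.inf])
rhs = 1 - mp.log(4 * eps) / 2

print("double integral  :", lhs)
print("1 - log(4*eps)/2 :", rhs)   # the two numbers should agree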
A dual conformal integral: I critter
As our next example, we turn to the integral I_critter(1). This integral is collinear divergent, but it becomes absolutely convergent after combining it with I^{1,3;4,6}_{2tri}, as explained in section (4). Therefore, we will only consider the sum

\tilde I_critter(1) := I_critter(1) + I^{1,3;4,6}_{2tri} = 4\int_0^\infty \frac{[d^2 a_1 a_2 a_3]}{vol(GL(1))} \frac{[d^2 b_4 b_5 b_6]}{vol(GL(1))} \int_{a,b} \frac{\varepsilon(a,1,2,3,*)\,\varepsilon(b,4,5,6,*) + (1\cdot 3)(4\cdot 6)(a\cdot 2)(b\cdot 5)}{(a\cdot A)^3 (a\cdot b)(b\cdot B)^3}. \qquad (6.8)
Note that we have combined the two integrals into a common Feynman parameter integral, by inserting the inverse propagators (a.2)(b.5) into the numerator of the double-triangle. This allows us to immediately set the regulating masses to zero, since we are dealing with an absolutely convergent integral.
To apply the formula (6.4), we use the familiar fact that numerators turn into derivatives in Feynman parameter space. Thus for instance for the first term in (6.8)
\frac{4\,\varepsilon(a,1,2,3,*)\,\varepsilon(b,4,5,6,*)}{(a\cdot A)^3 (a\cdot b)(b\cdot B)^3} = \varepsilon(\partial_A,1,2,3,*)\,\varepsilon(\partial_B,4,5,6,*)\, \frac{1}{(a\cdot A)^2 (a\cdot b)(b\cdot B)^2}.
Thus, after setting e = 1 in (6.4) to remove one of the GL(1) symmetries as done previously, we obtain

\tilde I_critter(1) = \int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int_0^\infty \frac{[d^5 a_1 a_2 a_3 b_4 b_5 b_6]}{vol(GL(1))} \Big[\varepsilon(\partial_A,1,2,3,*)\,\varepsilon(\partial_B,4,5,6,*) + (1\cdot 3)(4\cdot 6)(2\cdot\partial_A)(5\cdot\partial_B)\Big]\, \frac{1}{\big((c+1)\tfrac12 A\cdot A + A\cdot B + \tfrac12 B\cdot B\big)^2}. \qquad (6.9)
To proceed from here, we simply integrate over the variables a i , b i one at a time. This can be done in an essentially automated way using the method described in detail in a fourdimensional context in [45]. The idea is that at each stage the integral can be decomposed into a rational factor which takes the form dx/(x − x i ) n with n ≥ 1, times logarithms or polylogarithms with arguments that are rational functions of x. Such integrals can be performed, at the level of the symbol, in a completely automated way. After this is done, we integrate the symbol and obtain the c-integrand as described in [45].
We applied this method, doing the integrals in the order a_2, b_5, a_1, b_6 and a_3, to obtain the symbol of a function to be integrated over c. After a step of integration by parts in c to remove degree-three components, we obtained the symbol of a degree-two function, which could easily be promoted to a function
\tilde I_critter(1) = 2\int_0^\infty \frac{dc}{4\pi\sqrt{c}}\, \frac{\tfrac{\pi^2}{3} - \mathrm{Li}_2\big(1 - u_1(c+1)\big) - \mathrm{Li}_2(1-u_2) - \mathrm{Li}_2(1-u_3) - \log u_2 \log u_3}{c+1}
 = -\tfrac12 \mathrm{Li}_2(1-u_2) - \tfrac12 \mathrm{Li}_2(1-u_3) - \tfrac12 \log u_2 \log u_3 - (\arccos\sqrt{u_1})^2 + \tfrac{\pi^2}{3}. \qquad (6.10)
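Since the first line of (6.10) is a single one-dimensional c-integral, it is easy to test numerically. The short Python sketch below is our own cross-check (not from the original text); it evaluates the integral with mpmath at sample Euclidean cross-ratios with u_1 ≤ 1 and compares with the quoted closed form.

# Cross-check of eq. (6.10): the c-integral versus the dilogarithmic closed form.
import mpmath as mp

mp.mp.dps = 15
Li2 = lambda x: mp.polylog(2, x)

def critter_integral(u1, u2, u3):
    const = mp.pi**2 / 3 - Li2(1 - u2) - Li2(1 - u3) - mp.log(u2) * mp.log(u3)
    f = lambda c: (const - Li2(1 - u1 * (c + 1))) / ((c + 1) * 4 * mp.pi * mp.sqrt(c))
    return 2 * mp.quad(f, [0, mp.inf])

def critter_closed(u1, u2, u3):
    return (-(Li2(1 - u2) + Li2(1 - u3) + mp.log(u2) * mp.log(u3)) / 2
            - mp.acos(mp.sqrt(u1))**2 + mp.pi**2 / 3)

for u in [(0.3, 0.7, 1.2), (0.9, 0.4, 0.5)]:
    print(u, critter_integral(*u), critter_closed(*u))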
Here all non-constant terms come out of the symbol computation, while the π²/3 term is a beyond-the-symbol ambiguity. We have fixed it by an analytic computation at the symmetrical point u_1 = u_2 = u_3 = 1, where the integral simplifies dramatically. Assuming the principle of maximal transcendentality for this integral, this is the only possible ambiguity. As a cross-check, we have verified that this result agrees with a direct numerical evaluation of eq. (6.9), to 6 digit numerical accuracy at several random kinematical points with Euclidean kinematics, which we take to confirm our assumptions.^11

Another divergent integral: I^{2mh}_even
We now consider a somewhat more nontrivial divergent integral,
I 2mh even (1) = ∞ 0 dc 4π √ c [d 5 a 1 a 2 a 3 b 3 b 5 b 1 ] vol(GL(1)) ( (∂ A , 1, 2, 3, * ) (∂ B , 3, 5, 1, * )) (c + 1) 1 2 A · A + A · B + 1 2 B · B 2 (6.11)
where the derivative operators are understood to act on the rational function underneath, to avoid an unnecessary lengthening of the formula. This integral requires regularization, and as in the rest of this section we use the Higgs regularization described in section (5). The procedure has the following precise meaning here. In both the numerator and denominator, we use the shifted five-vectors defined in eq. (5.6), so the formula amounts to (i · j) −→ (x i − x j ) 2 + 2µ 2 IR . A first observation is that in all divergent regions b 5 → 0. Thus we can drop b 5 from terms multiplying the mass in the denominator, which allows us to integrate out b 5 explicitly: 12
I 2mh even (1) = ∞ 0 dc 4π √ c [d 4 a 1 a 2 a 3 b 1 b 3 ] vol(GL(1)) a 2 (2 · 5) − 2((A + B) · 5) (2 · 5)/(1 · 3)/((A + B) · 5) 2 (1 + c)a 1 a 3 + a 1 b 3 + a 3 b 1 + b 1 b 3 + µ 2 IR (1·3) X 2 where X := ( a + b) 2 + c( a) 2 .
To proceed further, we need a small bit of physical intuition about this integral. It has collinear divergences in the region a_3, b_3 → 0 (both loop momenta collinear to p_1) and in the region a_1, b_1 → 0 (both loop momenta collinear to p_2). In addition, there are soft-collinear divergences where these two regions meet. Thus a reasonable strategy is to subtract something which has the same divergent behavior as µ^2_{IR} → 0 but which is simpler to integrate. A good candidate is

\tilde I^{2mh}_even(1) := I^{2mh}_even(1)\big|_{X \to a_2^2(1+c)}, \qquad (6.12)

since this remains finite and has identical soft and soft-collinear regions. But thanks to the simplified denominator, it can be integrated more easily. Indeed, after a shift a_1 + b_1 → b_1, a_3 + b_3 → b_3 together with a simple rescaling of the variables, it can be seen to depend only on a single parameter ε:

\tilde I^{2mh}_even(1) = -\int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int_{a_1<b_1,\, a_3<b_3} \frac{[d^4 a_1 a_2 a_3 b_1 b_3]}{vol(GL(1))}\, \frac{a_2 + 2b_1 + 2b_3}{(a_2 + b_1 + b_3)^2\,\big(b_1 b_3 + a_1 a_3 c + \varepsilon\, a_2^2(1+c)\big)^2}
 = 2 - \frac{7\pi^2}{12} - \frac14 \log^2\varepsilon + O(\varepsilon). \qquad (6.13)

^11 Numerics with this level of accuracy can be easily obtained starting directly from (6.9) and performing the a_2, b_5 and c integrals analytically, which are readily done using computer algebra software such as Mathematica. The remaining 3-fold numerical integration poses no particular problem.
^12 Strictly speaking, the numerator derived from eq. (6.11) contains terms proportional to µ^2_{IR}. However, due to the special properties of the ε-symbol numerators, one can see that these terms only give rise to power-suppressed contributions. That is, they are never accompanied by compensating 1/µ^2_{IR} power infrared divergences which would render them relevant. We have verified that the same is true also for the integral I_crab considered below.
From this point we omit further details on the computation of integrals, as they proceed using the same strategy as in previous examples. It remains to correct for the error introduced by eq. (6.12) in the hard collinear regions. At fixed a 2 , a 1 , b 1 ∼ 1 one can see that the region a 3 , b 3 → 0 produces a logarithm whose cutoff depends on X. The error is given by the change in the logarithmic cutoff
I_coll(y) := \int_0^\infty \frac{dc}{4\pi\sqrt{c}} \int_{a_1<b_1} \frac{[d^2 a_1 a_2 b_1]}{vol(GL(1))}\, \frac{y(a_2 y + 2b_1)\, \log\frac{(a_2+b_1)^2 + c(a_1+a_2)^2}{a_2^2(1+c)}}{b_1(b_1 + a_1 c)(a_2 y + b_1)^2} = \frac{\pi^2}{6} - \mathrm{Li}_2(1-y), \qquad (6.14)
so that
I^{2mh}_even(1) = \tilde I^{2mh}_even(1) + I_coll(x^2_{25}/x^2_{15}) + I_coll(x^2_{25}/x^2_{35}). \qquad (6.15)
This gives the result quoted in appendix (C).
The integral I crab
The final divergent integral we have to compute is
I crab (1) = ∞ 0 dc 4π √ c [d 5 a 1 a 2 a 3 b 5 b 6 b 1 ] vol(GL(1)) ( (∂ A , 1, 2, 3, * ) (∂ B , 5, 6, 1, * )) (c + 1) 1 2 A · A + A · B + 1 2 B · B 2 . (6.16)
Its evaluation is extremely similar to that in the previous subsection. The regions which diverge as µ 2 IR → 0 are the p 6 -collinear and p 1 -collinear regions, and their intersection, the soft-collinear region A, B → y 1 . Therefore, if we denote the µ 2 IR -containing terms in the denominator by µ 2 IR X, we see that we can neglect a 3 and b 5 in X:
X = (a 1 + a 2 + b 6 + b 1 ) 2 + c(a 1 + a 2 ) 2 .
Then we proceed as in the previous example: we replace the integral by the simpler one

\tilde I_crab(1) := I_crab(1)\big|_{X \to (a_1+b_1)^2 + c\, a_1^2}, \qquad (6.17)

which evaluates to

\tilde I_crab(1) = -1 - \frac{\pi^2}{12} + \frac12(1+\log u_3)\log\frac{1}{\varepsilon} - \frac14 \log^2\frac{1}{\varepsilon} + \frac12 \mathrm{Li}_2(1 - 1/u_3).

The error introduced by (6.17) is by construction localized to the hard collinear regions and turns out to be given by the same eq. (6.14): I_crab(1) = \tilde I_crab(1) + I_coll(x^2_{15}/x^2_{25}) + I_coll(x^2_{13}/x^2_{36}). Collecting the terms gives the result recorded in appendix (C).
Parity odd box-triangles
As shown in section (4), parity odd box-triangles appear in the six-point amplitude only in absolutely convergent combinations, of the form I_odd box;tri(1) defined in eq. (6.18) with an overall prefactor (1·4)(3·6). The Feynman parametrization formula (6.4) reads in this case
I odd box;tri (1) = ∞ 0 dc 4π √ c [d 5 a 1 a 2 a 3 b 4 b 5 b 6 ] vol(GL(1)) (∂ A , 1, 2, 3, 6) (1, 2, 3, 4, 6) (2 · 4)(5 · ∂ B ) + (2 · ∂ A )(3 · 5) (∂ B , 4, 5, 6, 1) (4, 5, 6, 1, 3) (1 · 4)(3 · 6) (c + 1) 1 2 A 2 + A · B + 1 2 B 2 2 .
where we have combined the two integrals under a common Feynman parameter integral sign. This may now be evaluated directly without regularization. One can see that the integrations over a_2, b_5 do not produce any transcendental functions, which suggests doing them first. In the process, the ε-symbols neatly cancel out:
I odd box;tri (1) = ∞ 0 dc 4π √ c [d 3 a 1 a 3 b 4 b 6 ] vol(GL(1)) a 3 (3 · 5) a 1 (1 · 5) + a 3 (3 · 5) − b 4 (2 · 4) b 4 (2 · 4) + b 6 (2 · 6) ×
(1 · 4)(3 · 6) (c + 1)a 1 a 3 (1 · 3) + a 1 b 4 (1 · 4) + a 3 b 6 (3 · 6) + b 4 b 6 (4 · 6) 2 .
The three integrations over a i , b i remain elementary, and we obtain a pleasingly simple result
I_odd box;tri(1) = \int_0^\infty \frac{dc}{4\pi\sqrt{c}}\, \log\frac{u_2}{u_3}\, \frac{\log\big(u_1(c+1)\big)}{1 - u_1(c+1)} = \log\frac{u_3}{u_2}\, \frac{\arccos(\sqrt{u_1})}{2\sqrt{u_1(1-u_1)}}. \qquad (6.19)
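The u_1-dependent factor in (6.19) can also be verified numerically. The Python sketch below is our own check, with the sign conventions as transcribed above; the prefactors log(u_2/u_3) and log(u_3/u_2) are stripped, so the two u_1-dependent factors printed below should agree including the relative sign.

# Check of the c-integral in eq. (6.19), with the log(u2/u3) prefactor removed.
import mpmath as mp

mp.mp.dps = 15

def lhs(u1):
    # the integrand is regular at u1*(1+c) = 1 because the log vanishes there;
    # we still split the integration range at that point, for safety
    f = lambda c: mp.log(u1 * (1 + c)) / ((1 - u1 * (1 + c)) * 4 * mp.pi * mp.sqrt(c))
    c_star = 1 / u1 - 1
    pts = [0, c_star, mp.inf] if c_star > 0 else [0, mp.inf]
    return mp.quad(f, pts)

def rhs(u1):
    return -mp.acos(mp.sqrt(u1)) / (2 * mp.sqrt(u1 * (1 - u1)))

for u1 in [mp.mpf("0.25"), mp.mpf("0.5"), mp.mpf("0.8")]:
    print(u1, lhs(u1), rhs(u1))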
The six-point two-loop amplitude of ABJM

We now construct the final integrated result. We first consider the parity even part, i.e. terms in eq. (4.16) proportional to the tree amplitude A^tree_6. We begin by summing all the divergent integrals, or more specifically \sum_{i=1}^6 \big( I^{2mh}_even(i) + I_crab(i) + I^{i,i+2;i+2,i}_{2tri} - I^{i,i+2;i+2,i-2}_{2tri} \big), using the formulas recorded in appendix (C). This gives
\frac12 \sum_{i=1}^6 \Big[ \log^2\frac{x^2_{i,i+3}}{x^2_{i+1,i+3}} - \log^2\frac{x^2_{i,i+3}\,\mu^2_{IR}}{x^2_{i,i+2}\, x^2_{i+1,i+3}} \Big] + \sum_{i=1}^3 \Big[ -\mathrm{Li}_2(1-u_i) + \log u_i \log\frac{x^2_{i-1,i+2}}{\mu^2_{IR}} \Big] := \mathrm{BDS}_6 - \pi^2. \qquad (7.1)
Pleasingly, we find that all terms of non-uniform transcendentality have canceled in the sum! Furthermore, the sum gives nothing but the BDS Ansatz [29, 30] in the Higgs regulator, up to the constant term and the substitution µ^2_{IR} → 4µ^2_{IR}. The evaluation of the parity even terms will be complete upon adding the dual conformal integrals −\sum_{i=1}^3 (I_critter(i) + I^{i,i+2;i+3,i-1}_{2tri}). As a cross-check on our evaluation of the integrals, we have evaluated the above combination of integrals using dimensional regularization, which has been successfully implemented in obtaining the two-loop four-point result [16, 17]. While we find that the results for individual integrals differ functionally, we find perfect agreement for the combination just considered, up to an expected scheme-dependent constant. This constant is given in (D.3).
We next consider the parity odd part, i.e. terms in eq. (4.16) proportional to the sum of one-loop maximal cuts. The I^{2mh}_odd integrals integrate to zero at order O(ε). Using the identity (A.1) together with the definition (6.18), the remaining parity odd contribution can be expressed in terms of the dual conformal invariant finite integral I_odd box;tri:

\Big[ -\frac{C_1 + C_1^*}{2\sqrt{2}}\, \frac{\varepsilon(3,4,5,6,1)}{(1\cdot 4)(3\cdot 6)(1\cdot 5)(3\cdot 5)}\, I_odd box;tri(1) + \text{cyclic} \Big] \times 2
 = \Big[ \frac{C_1 + C_1^*}{2\sqrt{2}}\, \frac{\varepsilon(3,4,5,6,1)}{(1\cdot 4)(3\cdot 6)(1\cdot 5)(3\cdot 5)}\, \log\frac{u_2}{u_3}\, \frac{\arccos(\sqrt{u_1})}{2\sqrt{u_1(1-u_1)}} + \text{cyclic} \Big] \times 2. \qquad (7.2)
Due to Yangian invariance, the coefficients of the transcendental functions must be expressible in terms of the leading singularities LS_1 and LS_1^* defined in section (3.1). This can be verified thanks to the remarkable identity (A.5), together with (3.10), applied to the coefficient

\frac{(C_1 + C_1^*)\, \varepsilon(3,4,5,6,1)}{2\sqrt{2}\,(1\cdot 4)(3\cdot 6)(1\cdot 5)(3\cdot 5)\,\sqrt{u_1(1-u_1)}}.

Assembling the even and odd pieces gives the final result (7.3), where the "remainder" function R_6 is given as

R_6 = -2\pi^2 + \sum_{i=1}^3 \Big[ \mathrm{Li}_2(1-u_i) + \tfrac12 \log u_i \log u_{i+1} + (\arccos\sqrt{u_i})^2 \Big].
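For convenience, the remainder function R_6 quoted above is trivial to evaluate numerically; the following Python sketch (our own addition) does so and reproduces R_6 = −2π² at the symmetric point u_1 = u_2 = u_3 = 1.

# Evaluate the two-loop ABJM remainder function R6(u1, u2, u3) quoted above.
import mpmath as mp

Li2 = lambda x: mp.polylog(2, x)

def R6(u1, u2, u3):
    u = [mp.mpf(u1), mp.mpf(u2), mp.mpf(u3)]
    total = -2 * mp.pi**2
    for i in range(3):
        ui, unext = u[i], u[(i + 1) % 3]
        total += Li2(1 - ui) + mp.log(ui) * mp.log(unext) / 2 + mp.acos(mp.sqrt(ui))**2
    return total

print(R6(1, 1, 1))         # equals -2*pi^2 at the symmetric point
print(R6(0.2, 0.5, 0.9))   # a generic sample with all u_i <= 1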
We would like to stress that the remainder function, up to an additive constant, is given entirely by the dual-conformal finite integral \tilde I_critter(i) for the parity even structure (and by I_odd box;tri(i) for the parity-odd structure). This is in contrast with N = 4 SYM, where the remainder function is mixed with BDS and spread across a number of divergent integrals.
The part proportional to A^tree_{6,shifted} can be written in a more compact way if we assume certain restrictions on the kinematics. We will assume so-called Euclidean kinematics, e.g. all non-vanishing invariants (i·j) are spacelike. (This is a nonempty region even for real Minkowski momenta.) In that case, it is correct to naively rewrite the original expression in terms of angle-brackets: the combination arccos(√u_1)/√(u_1(1−u_1)) = (1/2i) log[(√u_1 + i√(1−u_1))/(√u_1 − i√(1−u_1))]/√(u_1(1−u_1)) is traded for a logarithm log χ_1 of a ratio χ_1 built from the angle-bracket combinations ⟨12⟩⟨45⟩ and (⟨34⟩⟨46⟩ + ⟨35⟩⟨56⟩). Indeed, in that region, u_1 > 0 and the first expression is always real and positive. This is also the case for the second expression, as can be seen from the fact that ⟨12⟩ and ⟨45⟩ are real, while (⟨34⟩⟨46⟩ + ⟨35⟩⟨56⟩) is either real or smaller in magnitude than ⟨12⟩⟨45⟩. Note that, defining cross-ratios χ_2 (χ_3) from cyclic shifts by minus 2 (plus 2) of this expression, it can be shown that χ_1 χ_2 χ_3 = 1. This allows us, in these kinematics, to simplify the answer to

A^{2-loop}_6 = \Big(\frac{N}{k}\Big)^2 \Big[ \frac{A^tree_6}{2}\big(\mathrm{BDS}_6 + R_6\big) + \frac{A^tree_{6,shifted}}{4i} \Big( \log\frac{u_2}{u_3}\, \log\chi_1 + \text{cyclic} \Big) \times 2 \Big]. \qquad (7.5)
While strictly derived from eq. (7.3) in Euclidean kinematics, we expect this expression to be valid in other kinematic regions for a suitable analytic continuation of the variables χ i . Note that as χ i → χ −1 i under the Z 2 little group transformation of any external leg, the little group weight of log χ 1 is exactly what is needed to compensate the little group mismatch of A tree 6,shifted .
Conclusions
In this paper, we construct the two-loop six-point amplitude of ABJM theory. The result can be separated into a two-loop correction proportional to the tree amplitude, and a correction proportional to the shifted tree amplitude, which are distinct Yangian invariants. The correction proportional to the first is infrared divergent and we use mass regularization. The result shows that the infrared divergence is identical to that of N = 4 super Yang-Mills and is thus completely captured by the BDS result. This establishes that the dual conformal anomaly equation is identical between one-loop SYM_4 and two-loop ABJM, which was first observed at four points and which we conjecture will persist to all multiplicities. The correction multiplying the shifted tree amplitude is completely finite.
As a comparison, we also computed the divergent integrals using dimensional reduction. We find that the individual integrals give functionally different answers in the two regularization schemes. However, when combined into the amplitude, they give the same result up to a physically expected constant.
We find in addition to the BDS result a nonzero (dual-conformal invariant) remainder function. This implies that the six-point ABJM amplitude cannot be dual to a bosonic Wilson-loop, which only captures the BDS part [43] 13 . This does not rule out a possible duality with a suitable supersymmetric Wilson loop, however. The reason is that, if SYM 4 is to be of any guidance [46], the correct Wilson loop dual for amplitudes with n ≥ 6 particles should reproduce, at lowest order in the coupling, the n-point tree amplitude 14 . Since no candidate Wilson loop with this property, or even just the correct quantum numbers, are presently available in the literature, we find it hard to say anything conclusive about the duality. Our results demonstrate that the dual conformal symmetry persists at the quantum level up to an anomaly which is identical to that of a Wilson loop. We interpret this as strong evidence for the existence of a dual Wilson loop which remains to be constructed.
We list a number of open questions for future work. A first one concerns the status of the dual conformal symmetry away from the origin of moduli space, e.g. in the Higgsed theory. As demonstrated in section (5), to lowest order in the masses (logarithmic accuracy), the Higgsed theory enjoys an exact dual conformal symmetry under which the masses transform in a nontrivial way. It is not clear whether this symmetry extends all the way into the moduli space; for one thing, the origin of the symmetry is mysterious and the original string theory argument in [28] does not apply in ABJM due to difficulties with the T-duality. As discussed in the main text, a key step here would be to settle this question for the tree amplitudes.
We note that a 3-loop computation of the 4-point and 6-point amplitude in ABJM would probably be feasible with the same techniques, although a more sustained effort would be required. For instance, we expect only degree-3 transcendental functions in the result. Furthermore, the only divergences should be double-logarithms multiplying the 1-loop amplitude. Given the absence of overlapping divergences, the integration technology developed in section (6) might thus plausibly be sufficient.
An interesting property of our remainder function R_6 is that it does not vanish in collinear limits, contrary to the case in SYM_4. In fact, it even diverges logarithmically in the 'simple' collinear limit (six point goes to five), even though the five-point amplitude is zero. This does not violate any physical principle, since the A^tree_6 and A^tree_{6,shifted} prefactors do not have any pole in this limit. In the absence of a pole, there is no need for the amplitude to factorize into a product of lower-point amplitudes. In other words, the leading term in the collinear limit in ABJM is similar to subleading, power-suppressed terms in the collinear limits in D = 4. The factorization theory for these terms is more complicated, and in fact it has only been worked out recently in the dual Wilson loop language [47]. It would be very interesting to work out the general structure of this limit, using field theory arguments, as this should place strong constraints on the amplitudes. In subsection (3.3), for instance, we have conjectured from analyticity of the scattering amplitudes that a certain discontinuity of the amplitude should vanish in the collinear limit, but we have no idea how this could be established. Also interesting are the 'double-collinear' limit (six point goes to four), or factorization limits (p^2_{123} goes to zero, but momenta p_{1,2,3} do not become collinear). Since it was not clear to the authors what kind of field theory predictions are available for these limits, we did not discuss them for our 6-point result. However, it is possible that this could shed further light on our result itself, for instance by giving a physical interpretation for the relative signs between different terms. These limits may also yield some interesting constraints on the higher-point amplitudes.
^13 Note that the vanishing of 1-loop Wilson loops was obtained numerically in [42]. Given the subtle analytic properties discussed in section (3.3), it could be worthwhile to supplement this by an analytic computation.
^14 At least up to δ³(P)δ⁶(Q) and a purely bosonic factor, akin to the Parke-Taylor denominator in SYM_4.
Another interesting direction for future work concerns the rest of the Yangian algebra at loop level. As one easily sees from [3], the Yangian algebra in ABJM is generated by the bosonic dual conformal symmetry together with the (ordinary) superconformal symmetry. Since the former is presently conjectured to become anomaly-free to all loops after dividing by the BDS Ansatz, the crux is the superconformal anomaly. By analogy with SYM 4 , the properly understood symmetry at the quantum level should uniquely determine the amplitudes, providing for an efficient way to compute them. Our two-loop result (7.5) should thus provide an important data point to understand the quantum symmetries of ABJM, perhaps combining the 1-loop ABJM analysis in [14] with the all-loop SYM 4 analysis in [48].
Acknowledgements
We would like to thank Johannes Henn for many enlightening discussions. Y-t would like to thank N. Arkani-Hamed for invitation as visiting member at the Institute for Advanced Study at Princeton, where this work was initiated. SCH gratefully acknowledges support from the Marvin L. Goldberger Membership and from the National Science Foundation under grant PHY-0969448. This work was supported in part by the US Department of Energy under contract DEFG03 91ER40662.
A Identities
In this appendix, we aim to prove a series of identities used in the text. First consider the following identity:
\frac{C_1 + C_1^*}{C_2 + C_2^*} = -\frac{\varepsilon(6,1,2,3,4)\,(3\cdot 5)(5\cdot 1)}{\varepsilon(3,4,5,6,1)\,(6\cdot 2)(2\cdot 4)}. \qquad (A.1)
The strategy is to express the five-dimensional ε-symbol in terms of angle brackets. To do so, we start from the definition of the ε-symbol as a determinant and use the manifest translation invariance of the formula to set x_2 = 0. In doing so, we must remember to normalize the determinant such that ε(i,j,k,l,m)^2 agrees with the Gram determinant formula (3.16), since this is the convention used in the main text; this requires an extra factor of 2i√2. Thus

ε(6,1,2,3,4) := 2i\sqrt{2}\, \det(y_6, y_1, y_2, y_3, y_4) = 2i\sqrt{2}\, \det\begin{pmatrix} -p_6-p_1 & -p_1 & 0 & p_2 & p_2+p_3 \\ 1 & 1 & 1 & 1 & 1 \\ -\langle 61\rangle^2/2 & 0 & 0 & 0 & -\langle 23\rangle^2/2 \end{pmatrix},

where the first three rows are real in Minkowski signature. This determinant can now be evaluated in terms of three-dimensional ones, which in turn give two-brackets: det(p_i, p_j, p_k) := \tfrac12 \langle ij\rangle\langle jk\rangle\langle ik\rangle.
B ABJM theory on the Higgs branch
The action of ABJM takes the form (see for instance [49] for an explicit component form):
L = k 4π L kin + L 4 + L 6 . (B.1)
To describe the spectrum of the theory on the Higgs branch, we begin by describing the fermion mass matrix. The interactions of the fermions can be written, following [49] but as can also be verified directly by comparing against various components of the four-point amplitude (2.7),
L 4 = Tr[ψ Aψ B φ Cφ D ] − Tr[ψ AφD φ CψB ] (2δ A C δ D B − δ A B δ D C ) + ABCD Tr[φ AψB φ CψD ] + ABCD Tr[ψ AφB ψ CφD ]. (B.2)
As already mentioned in the main text, the moduli space (C 4 /Z k ) N of this theory is characterized by diagonal vacuum expectation values for the scalar fields. Let us denote the fields above (below) the diagonal with a plus (minus) superscript, so that (ψ ± A ) † =ψ A∓ . As one can easily see from the action (B.2), the diagonal fermions remain massless while ψ + B and ψ B+ mix with each other. Upon inserting a diagonal vev for the scalars, the mass term thus takes the form (ψ A− , ψ − A )M f (ψ + B ,ψ B+ ) T where M f is the 8 × 8 Hermitian matrix
M f = 2(xx − yȳ) B A − δ B A (x·x − y·ȳ) 2 ABCD x C y D 2 ABCDȳ CxD 2(yȳ − xx) A B − δ A B (y·ȳ − x·x)
.
In this appendix, (x, y) A := (v i , v j ) A will denote the diagonal vevs coupled to off-diagonal components under consideration. As one can verify M 2 f = 1 8 m 2 with
m 2 = (x·x + y·ȳ) 2 − 4x·ȳy·x,
showing that all 8 off-diagonal fermions acquire the same mass. The scalar potential was described in detail in ref. [1],
L 6 = Tr[φ Aφ [A φ Bφ C] φ Cφ B ] − 1 3 Tr[φ Aφ [A φ Bφ B φ Cφ C] ]
where here the square bracket means antisymmetrization in the indices. Again one can see that the diagonal fluctuations remain massless while off-diagonal ones δφ A+ and δφ + A mix with each other. It follows that the mass term takes the form (δφ − A , δφ A− )M 2 s (δφ B+ , δφ + B ) T , and a computation gives the 8 × 8 Hermitian matrix as where P 8 = x T y T −ȳ T −x T · x·x + y·ȳ −2x·ȳ −2y·xȳ x·x + y·ȳ · x −ȳ y −x /m 2 is an orthogonal projector onto the two would-be Goldstone bosons (δφ + , δφ + ) ∼ (x, −ȳ) and (δφ + , δφ + ) ∼ (y, −x). We conclude that six of the eight scalars acquire the same mass squared as the fermions, while the remaining two acquire no mass, although they are soon to be "eaten" by the gauge fields through the Higgs mechanism. Finally, we consider the gauge fields, which acquire mass terms through the scalar kinetic term ∼ TrD µ φD µφ . First we discuss the off-diagonal components. As one can see again the fields A + 1,2 from the two gauge groups mix with each other, so the mass term is characterized by a 2 × 2 Hermitian matrix (A − 1 , A − 2 )M g (A + 1 , A + 2 ). However, in this case the kinetic term is also characterized by a nontrivial matrix d(A − 1 , A − 2 ) ∧ K g (A + 1 , A + 2 ). These two matrices are K g = 1 0 0 −1 and M g = x·x + y·ȳ −2x·ȳ −2y·x x·x + y·ȳ .
Fortunately, to obtain the propagator it is not necessary to diagonalize these two matrices simultaneously -as pointed out in [50]
D Integrals using dimensional regularization
In this appendix, we present the integrated results for the infrared divergent integrals using dimensional regularization. Here all integrals are again multiplied by 16π². After obtaining the integrals in terms of Feynman parameters, we integrate by converting the integrand into a Mellin-Barnes representation, and use the Mathematica package MB.m [51] to obtain the result up to O(ε). The result is expressed in terms of zero-, one- and two-dimensional integrals in Mellin space. The one- and two-dimensional integrals are analytically evaluated by performing sums over residues. That such sums can be carried out analytically is simply due to the fact that the two-loop amplitude should consist of functions of transcendentality two. The two mass hard integral gives:
I^{2mh}_even(1) = \int_{a,b} \frac{\varepsilon(a,1,2,3,*)\,\varepsilon(b,3,5,1,*)}{(a\cdot 1)(a\cdot 2)(a\cdot 3)(a\cdot b)(b\cdot 3)(b\cdot 5)(b\cdot 1)}, \qquad
I_crab(1) = \int_{a,b} \frac{\varepsilon(a,1,2,3,*)\,\varepsilon(b,5,6,1,*)}{(a\cdot 1)(a\cdot 2)(a\cdot 3)(a\cdot b)(b\cdot 5)(b\cdot 6)(b\cdot 1)},

whose ε-expansions carry the overall prefactor e^{-2\gamma\varepsilon}(4\pi)^{-2\varepsilon}. While the integrated results appear to be regularization scheme dependent (compare with eqs. (C.1)), when combined into amplitudes they give identical results up to additive constants. In particular, the arcsin functions completely cancel. Considering the sum of infrared divergent integrals in eq. (7.1) one obtains:

\sum_{i=1}^6 \Big[ I^{2mh}_even(i) + I_crab(i) + I^{i,i+2;i+2,i}_{2tri} - I^{i,i+2;i+4,i}_{2tri} \Big]
 = \sum_{i=1}^6 \Big\{ -\frac{e^{-2\gamma\varepsilon}(8\pi)^{-2\varepsilon}(x^2_{i,i+2})^{-2\varepsilon}}{(2\varepsilon)^2} - \log\frac{x^2_{i,i+2}}{x^2_{i,i+3}}\, \log\frac{x^2_{i+1,i+3}}{x^2_{i,i+3}} + \frac14 \log^2\frac{x_{i,i+3}}{x_{i+1,i+4}} - \frac12 \mathrm{Li}_2\Big(1 - \frac{x^2_{i,i+4}\, x^2_{i+1,i+3}}{x^2_{i,i+3}\, x^2_{i+1,i+4}}\Big) \Big\} - \pi^2\Big(\frac{23}{8} - 12a\Big)
 = \mathrm{BDS}_6(\varepsilon \to 2\varepsilon) - \pi^2\Big(\frac{31}{8} - 12a\Big), \qquad (D.3)

where BDS_6 is the one-loop six-point MHV amplitude of N = 4 sYM [30] with ε replaced by 2ε, reflecting the two-loop nature of the result. Thus the dimensionally regulated infrared divergent integrals combine to give the BDS answer, just as in the mass regulated result.
Figure 2. The two maximal cuts at one-loop six-point.
Figure 4. The triple cut of consecutive massless corners corresponds to soft exchange between the two external lines. In dual space, this corresponds to the loop region x_a approaching x_i.
Figure 7. The terms that contribute to the leading singularity (a·1) = (a·2) = (a·3) = 0, which has to reduce to the one-loop integrand. The blue lines indicate the one-loop propagators that remain uncancelled after the cut. The term at the bottom of each diagram is the numerator factor. One can see that the combination is precisely the one-loop answer.
Figure 8. Pattern of masses for the Higgsed theory following [28]. The loop propagators in the interior of the graph remain massless while those at the boundary, represented in bold, acquire a mass. External states can be chosen to remain massless or not, depending on whether the m_i are equal or not.
this may not even be possible in general. For the four-point double-box, using the same regularization, we find a result that exhibits arcsin functions with non-conformal cross-ratios as arguments. Such functions did not appear in the mass regulated result and mark a stark distinction between the two regularizations. The arcsin functions always come in the combination arcsin(√m) − arcsin²(√m)/π. This particular combination is necessary for the integral to remain real. For completeness, we also list the double triangle result in this scheme.
This way we obtain

ε(6,1,2,3,4) = i\sqrt{2}\, \langle 61\rangle\langle 12\rangle\langle 23\rangle\big(\langle 31\rangle\langle 16\rangle + \langle 32\rangle\langle 26\rangle\big). \qquad (A.2)

Performing a similar computation for ε(3,4,5,6,1) and using that (3·5) = −⟨34⟩²/2 etc., we thus find

\frac{ε(6,1,2,3,4)\,(3\cdot 5)(5\cdot 1)}{ε(3,4,5,6,1)\,(6\cdot 2)(2\cdot 4)} = \frac{\langle 12\rangle\langle 34\rangle\langle 56\rangle}{\langle 23\rangle\langle 45\rangle\langle 61\rangle}\, \frac{\langle 31\rangle\langle 16\rangle + \langle 32\rangle\langle 26\rangle}{\langle 34\rangle\langle 46\rangle + \langle 35\rangle\langle 56\rangle}. \qquad (A.3)

Using momentum conservation, the parenthesis can be shown to equal −1, proving the desired formula using (3.10). Another remarkable algebraic identity is

x^2_{14}\, x^2_{36} - x^2_{13}\, x^2_{46} = \big(\langle 34\rangle\langle 46\rangle + \langle 35\rangle\langle 56\rangle\big)^2, \qquad (A.4)

which one might call a Dirac matrix trace identity; it follows from squaring eq. (A.2) and using that the square should give the Gram determinant. Using this identity we have that

\frac{\langle 12\rangle\langle 34\rangle\langle 56\rangle\, ε(3,4,5,6,1)}{(1\cdot 4)(3\cdot 6)(1\cdot 5)(3\cdot 5)\,\sqrt{u_1(1-u_1)}} = \frac{i\sqrt{2}\, \langle 12\rangle\langle 45\rangle\big(\langle 34\rangle\langle 46\rangle + \langle 35\rangle\langle 56\rangle\big)}{\sqrt{\langle 12\rangle^2\langle 45\rangle^2\big(\langle 34\rangle\langle 46\rangle + \langle 35\rangle\langle 56\rangle\big)^2}}, \qquad (A.5)

which was used around eq. (7.2).
Both pure Chern-Simons and gravity in three-dimensions are topological.3 This was already seen for the four-point amplitude[16] where the one-loop result vanishes up to O( ), yet it has a nontrivial box integrand. This integrand later becomes the seed of the two-loop integrand. The
Here we borrow the nomenclature of four-dimensional box integrals to denote the propagator structure.
The basic integral with massless internal lines, which follows easily from (3.4) with D = 3, is a 1 (a·i)(a·j)(a·k) = 1/[8 (i · j) (i · k) (j · k)].
The locality of the leading singularities at six-point has been recently understood as a special property of the orthogonal Grassmaniann[35].
2 a,b (a123 * )(b341 * ) + (a · 2)(b · 4)(1 · 3) 2 (a · 1)(a · 2)(a · 3)(a · b)(b · 3)(b · 4)(b · 1) + (s ↔ t) .One can see that the above also satisfy requirement(1) and is equivalent to that of[16].
More generally, this vanishes whenever the (Z k ) N -invariant combination vi ⊗vi − vj ⊗vj vanishes.
Dual conformal symmetry of maximal super-Yang-Mills at finite values of the masses can also be established by considering the symmetry as a property of the higher dimensional parent theory[39,40].
Note that in refs.[50] it was further shown that the field (A1 − A2) can be integrated out in a systematic expansion in 1/m, yielding a Yang-Mills term kF 2 /m for the remaining gauge field plus other terms. But since for us m ∼ |vi| 2 is an infrared scale, not an ultraviolet scale, such a (in any case not strictly necessary) procedure would be inappropriate in our context.
In the present case, one can verify that (K_g M_g)² = m² 𝟙_2, and this suffices in order to write down the propagator in a simple way. To see this, let us first add a gauge-fixing term to the action, (∂_µ A^{µ−} − ξ v·(δφ^−)) K_g (∂_µ A^{µ+} − ξ v·(δφ^+))/ξ, designed to remove the mixing between the gauge bosons and the scalar fields, where ξ is some arbitrary scale. Then the two unphysical scalars acquire masses squared ∼ ξm, and a short computation gives the gluon propagator. In particular, this formula shows that there are no singularities at zero momentum provided m² ≠ 0, as required in the main text. Finally, we discuss the diagonal gauge fields. Since only the combination (A_1 − A_2) receives a mass term in this case, we have that M_g ∝ (1, −1) ⊗ (1, −1)^T, which is effectively nilpotent: (K_g M_g)² = 0. As the above propagator shows, even though the mass matrix is nonzero, no massive states appear in the spectrum (as required by supersymmetry). This situation has been discussed in detail in [50].^15

C Integrals using the mass regularization

Here we summarize the results obtained in section (6) for the integrals defined in eqs. (4.5), multiplied by 16π², evaluated using a small internal mass to regulate infrared divergences as defined in section (5). In addition, we have the following two absolutely-convergent integrals:

\tilde I_critter(1) = I_critter(1) + I^{1,3;4,6}_{2tri} = -\tfrac12 \mathrm{Li}_2(1-u_2) - \tfrac12 \mathrm{Li}_2(1-u_3) - \tfrac12 \log u_2 \log u_3 - (\arccos\sqrt{u_1})^2 + \tfrac{\pi^2}{3}.
. O Aharony, O Bergman, D L Jafferis, J Maldacena, arXiv:0806.1218JHEP. 081091hep-thO. Aharony, O. Bergman, D. L. Jafferis and J. Maldacena, gravity duals," JHEP 0810, 091 (2008) [arXiv:0806.1218 [hep-th]].
. T Bargheer, F Loebbert, C Meneghelli, arXiv:1003.6120Phys. Rev. D. 8245016hep-thT. Bargheer, F. Loebbert and C. Meneghelli, Phys. Rev. D 82, 045016 (2010) [arXiv:1003.6120 [hep-th]].
. Y. -T Huang, A E Lipstein, arXiv:1008.0041JHEP. 101176hep-thY. -t. Huang and A. E. Lipstein, JHEP 1011, 076 (2010) [arXiv:1008.0041 [hep-th]].
. J M Drummond, J Henn, G P Korchemsky, E Sokatchev, arXiv:0807.1095Nucl. Phys. B. 828317hep-thJ. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, Nucl. Phys. B 828, 317 (2010) [arXiv:0807.1095 [hep-th]];
. J M Drummond, J Henn, V A Smirnov, E Sokatchev, hep-th/0607160JHEP. 070164J. M. Drummond, J. Henn, V. A. Smirnov and E. Sokatchev, JHEP 0701, 064 (2007) [hep-th/0607160];
. J M Drummond, J M Henn, J Plefka, arXiv:0902.2987JHEP. 090546hep-thJ. M. Drummond, J. M. Henn and J. Plefka, JHEP 0905, 046 (2009) [arXiv:0902.2987 [hep-th]].
. D Gang, Y Huang, E Koh, S Lee, A E Lipstein, arXiv:1012.5032JHEP. 1103116hep-thD. Gang, Y. -t. Huang, E. Koh, S. Lee and A. E. Lipstein, JHEP 1103, 116 (2011) [arXiv:1012.5032 [hep-th]].
. N Berkovits, J Maldacena, arXiv:0807.3196JHEP. 080962hep-thN. Berkovits and J. Maldacena, JHEP 0809, 062 (2008) [arXiv:0807.3196 [hep-th]];
. N Beisert, R Ricci, A A Tseytlin, M Wolf, arXiv:0807.3228Phys. Rev. D. 78126004hep-thN. Beisert, R. Ricci, A. A. Tseytlin and M. Wolf, Phys. Rev. D 78, 126004 (2008) [arXiv:0807.3228 [hep-th]].
. I Adam, A Dekel, Y Oz, arXiv:0902.3805arXiv:1008.0649JHEP. 0904110JHEP. hep-thI. Adam, A. Dekel, Y. Oz, JHEP 0904, 120 (2009). [arXiv:0902.3805 [hep-th]], JHEP 1010, 110 (2010) [arXiv:1008.0649 [hep-th]];
. P A Grassi, D Sorokin, L Wulff, arXiv:0903.5407JHEP. 090860hep-thP. A. Grassi, D. Sorokin, L. Wulff, JHEP 0908, 060 (2009). [arXiv:0903.5407 [hep-th]].
. I Bakhmatov, arXiv:1011.0985Nucl. Phys. B. 84738hep-thI. Bakhmatov, Nucl. Phys. B 847, 38 (2011) [arXiv:1011.0985 [hep-th]].
. N Arkani-Hamed, F Cachazo, C Cheung, J Kaplan, arXiv:0907.5418JHEP. 100320hep-thN. Arkani-Hamed, F. Cachazo, C. Cheung and J. Kaplan, JHEP 1003, 020 (2010) [arXiv:0907.5418 [hep-th]].
. S Lee, arXiv:1007.4772Phys. Rev. Lett. 105151603hep-thS. Lee, Phys. Rev. Lett. 105, 151603 (2010) [arXiv:1007.4772 [hep-th]].
. A Agarwal, N Beisert, T Mcloughlin, arXiv:0812.3367JHEP. 090645hep-thA. Agarwal, N. Beisert and T. McLoughlin, JHEP 0906, 045 (2009) [arXiv:0812.3367 [hep-th]].
Recent Advances in Scattering Amplitude. Y. -T Huang, INI Cambridge. Y. -t. Huang, "Recent Advances in Scattering Amplitude" INI Cambridge, http://www.newton.ac.uk/programmes/BSM/seminars/040409001.html
. M S Bianchi, M Leoni, A Mauri, S Penati, A Santambrogio, arXiv:1204.4407hep-thM. S. Bianchi, M. Leoni, A. Mauri, S. Penati and A. Santambrogio, arXiv:1204.4407 [hep-th].
. T Bargheer, N Beisert, F Loebbert, T Mcloughlin, arXiv:1204.4406hep-thT. Bargheer, N. Beisert, F. Loebbert and T. McLoughlin, arXiv:1204.4406 [hep-th].
. A Brandhuber, C Wen, G Travaglini, arXiv:1205.6705hep-thA. Brandhuber, C. Wen and G. Travaglini, arXiv:1205.6705 [hep-th].
. W. -M Chen, Y. -T Huang, arXiv:1107.2710JHEP. 111157hep-thW. -M. Chen and Y. -t. Huang, JHEP 1111, 057 (2011) [arXiv:1107.2710 [hep-th]].
. M S Bianchi, M Leoni, A Mauri, S Penati, A Santambrogio, arXiv:1107.3139JHEP. 120156hep-thM. S. Bianchi, M. Leoni, A. Mauri, S. Penati and A. Santambrogio, JHEP 1201, 056 (2012) [arXiv:1107.3139 [hep-th]].
. A Brandhuber, G Travaglini, C Wen, arXiv:1207.6908hep-thA. Brandhuber, G. Travaglini and C. Wen, arXiv:1207.6908 [hep-th].
. Z Bern, J J M Carrasco, H Johansson, arXiv:0805.3993Phys. Rev. D. 7885011hep-phZ. Bern, J. J. M. Carrasco and H. Johansson, Phys. Rev. D 78, 085011 (2008) [arXiv:0805.3993 [hep-ph]].
. Z Bern, J J M Carrasco, H Johansson, arXiv:1004.0476Phys. Rev. Lett. 10561602hep-thZ. Bern, J. J. M. Carrasco and H. Johansson, Phys. Rev. Lett. 105, 061602 (2010) [arXiv:1004.0476 [hep-th]].
. Z Bern, T Dennen, Y Huang, M Kiermaier, arXiv:1004.0693Phys. Rev. D. 8265003hep-thZ. Bern, T. Dennen, Y. -t. Huang and M. Kiermaier, Phys. Rev. D 82, 065003 (2010) [arXiv:1004.0693 [hep-th]].
. J Bagger, N Lambert, arXiv:0711.0955Phys. Rev. D. 7765008hep-thJ. Bagger and N. Lambert, Phys. Rev. D 77, 065008 (2008) [arXiv:0711.0955 [hep-th]];
. arXiv:0807.0163Phys. Rev. D. 7925002hep-thPhys. Rev. D 79, 025002 (2009) [arXiv:0807.0163 [hep-th]];
. A Gustavsson, arXiv:0709.1260Nucl. Phys. B. 81166hep-thA. Gustavsson, Nucl. Phys. B 811, 66 (2009) [arXiv:0709.1260 [hep-th]].
. T Bargheer, S He, T Mcloughlin, arXiv:1203.0562hep-thT. Bargheer, S. He and T. McLoughlin, arXiv:1203.0562 [hep-th].
. N E J Bjerrum-Bohr, P H Damgaard, P Vanhove, arXiv:0907.1425Phys. Rev. Lett. 103161602hep-thN. E. J. Bjerrum-Bohr, P. H. Damgaard and P. Vanhove, Phys. Rev. Lett. 103, 161602 (2009) [arXiv:0907.1425 [hep-th]].
. M S Bianchi, M Leoni, S Penati, arXiv:1112.3649JHEP. 120445hep-thM. S. Bianchi, M. Leoni and S. Penati, JHEP 1204, 045 (2012) [arXiv:1112.3649 [hep-th]].
. J M Drummond, J Henn, G P Korchemsky, E Sokatchev, arXiv:0709.2368Nucl. Phys. B. 79552hep-thJ. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, Nucl. Phys. B 795, 52 (2008) [arXiv:0709.2368 [hep-th]].
. A Brandhuber, P Heslop, G Travaglini, arXiv:0905.4377arXiv:0906.3552JHEP. 090863JHEP. hep-thA. Brandhuber, P. Heslop and G. Travaglini, JHEP 0908, 095 (2009) [arXiv:0905.4377 [hep-th]], JHEP 0910, 063 (2009) [arXiv:0906.3552 [hep-th]].
. L F Alday, J M Henn, J Plefka, T Schuster, arXiv:0908.0684JHEP. 100177hep-thL. F. Alday, J. M. Henn, J. Plefka and T. Schuster, JHEP 1001, 077 (2010) [arXiv:0908.0684 [hep-th]].
. Z Bern, L J Dixon, V A Smirnov, arXiv:hep-th/0505205Phys. Rev. D. 7285001Z. Bern, L. J. Dixon and V. A. Smirnov, Phys. Rev. D 72, 085001 (2005) [arXiv:hep-th/0505205].
. Z Bern, L J Dixon, D C Dunbar, D A Kosower, hep-ph/9403226Nucl. Phys. B. 425217Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, Nucl. Phys. B 425, 217 (1994) [hep-ph/9403226].
. P A M Dirac, Annals Math. 37429P. A. M. Dirac, Annals Math. 37, 429 (1936);
. G Mack, A Salam, Annals Phys. 53174G. Mack and A. Salam, Annals Phys. 53, 174 (1969);
. S L Adler, Phys. Rev. D. 63821Erratum-ibid. DS. L. Adler, Phys. Rev. D 6, 3445 (1972) [Erratum-ibid. D 7, 3821 (1973)];
. R Marnelius, B E W Nilsson, Phys. Rev. D. 22830R. Marnelius and B. E. W. Nilsson, Phys. Rev. D 22, 830 (1980).
. W Siegel, arXiv:1204.5679hep-thW. Siegel, arXiv:1204.5679 [hep-th].
. W L Van Neerven, J A M Vermaseren, Phys. Lett. B. 137241W. L. van Neerven and J. A. M. Vermaseren, Phys. Lett. B 137, 241 (1984).
. R Britto, F Cachazo, B Feng, hep-th/0412308Nucl. Phys. B. 715499R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 715, 499 (2005) [hep-th/0412308].
. Y. -T Huang, S Lee, arXiv:1207.4851hep-thY. -t. Huang and S. Lee, arXiv:1207.4851 [hep-th].
. Z Bern, J S Rozowsky, B Yan, hep-ph/9702424Phys. Lett. B. 401273Z. Bern, J. S. Rozowsky and B. Yan, Phys. Lett. B 401, 273 (1997) [hep-ph/9702424].
. J L Bourjaily, A Dire, A Shaikh, M Spradlin, A Volovich, arXiv:1112.6432JHEP. 120332hep-thJ. L. Bourjaily, A. DiRe, A. Shaikh, M. Spradlin and A. Volovich, JHEP 1203, 032 (2012) [arXiv:1112.6432 [hep-th]];
. J Golden, M Spradlin, arXiv:1203.1915JHEP. 120527hep-thJ. Golden and M. Spradlin, JHEP 1205, 027 (2012) [arXiv:1203.1915 [hep-th]].
. N Craig, H Elvang, M Kiermaier, T Slatyer, arXiv:1104.2050JHEP. 111297hep-thN. Craig, H. Elvang, M. Kiermaier and T. Slatyer, JHEP 1112, 097 (2011) [arXiv:1104.2050 [hep-th]];
. M Kiermaier, arXiv:1105.5385hep-thM. Kiermaier, arXiv:1105.5385 [hep-th].
. S Caron-Huot, D O'connell, arXiv:1010.5487JHEP. 110814hep-thS. Caron-Huot and D. O'Connell, JHEP 1108, 014 (2011) [arXiv:1010.5487 [hep-th]].
. T Dennen, Y. -T Huang, arXiv:1010.5874JHEP. 1101140hep-thT. Dennen and Y. -t. Huang, JHEP 1101, 140 (2011) [arXiv:1010.5874 [hep-th]].
. W Chen, G W Semenoff, Y S Wu, arXiv:hep-th/9209005Phys. Rev. D. 465521W. Chen, G. W. Semenoff and Y. S. Wu, Phys. Rev. D 46, 5521 (1992) [arXiv:hep-th/9209005].
. J M Henn, J Plefka, K Wiegandt, arXiv:1004.0226JHEP. 100832hep-thJ. M. Henn, J. Plefka, K. Wiegandt, JHEP 1008, 032 (2010) [arXiv:1004.0226 [hep-th]].
. K Wiegandt, arXiv:1110.1373Phys. Rev. D. 84126015hep-thK. Wiegandt, Phys. Rev. D 84, 126015 (2011) [arXiv:1110.1373 [hep-th]].
. M F Paulos, M Spradlin, A Volovich, arXiv:1203.6362JHEP. 120872hep-thM. F. Paulos, M. Spradlin and A. Volovich, JHEP 1208, 072 (2012) [arXiv:1203.6362 [hep-th]].
. S Caron-Huot, K J Larsen, K J Larsen, arXiv:1205.0801hep-phS. Caron-Huot, K. J. Larsen and K. J. Larsen, arXiv:1205.0801 [hep-ph].
. L J Mason, D Skinner, arXiv:1009.2225JHEP. 101218hep-thL. J. Mason and D. Skinner, JHEP 1012, 018 (2010) [arXiv:1009.2225 [hep-th]];
. S Caron-Huot, ; B Eden, P Heslop, G P Korchemsky, E Sokatchev, arXiv:1010.1167arXiv:1103.4353JHEP. 110758hep-th. hep-thS. Caron-Huot, JHEP 1107, 058 (2011) [arXiv:1010.1167 [hep-th]]; see also, B. Eden, P. Heslop, G. P. Korchemsky and E. Sokatchev, arXiv:1103.3714 [hep-th] and arXiv:1103.4353 [hep-th].
. L F Alday, D Gaiotto, J Maldacena, A Sever, P Vieira, arXiv:1006.2788JHEP. 110488hep-thL. F. Alday, D. Gaiotto, J. Maldacena, A. Sever and P. Vieira, JHEP 1104, 088 (2011) [arXiv:1006.2788 [hep-th]].
. S Caron-Huot, S He, arXiv:1112.1060JHEP. 1207174hep-thS. Caron-Huot and S. He, JHEP 1207, 174 (2012) [arXiv:1112.1060 [hep-th]].
. J A Minahan, K Zarembo, arXiv:0806.3951JHEP. 080940hep-thJ. A. Minahan and K. Zarembo, JHEP 0809, 040 (2008) [arXiv:0806.3951 [hep-th]].
. S Mukhi, arXiv:1110.3048JHEP. 111283hep-thS. Mukhi, JHEP 1112, 083 (2011) [arXiv:1110.3048 [hep-th]];
. S Mukhi, C Papageorgakis, arXiv:0803.3218JHEP. 080585hep-thS. Mukhi and C. Papageorgakis, JHEP 0805, 085 (2008) [arXiv:0803.3218 [hep-th]].
. M Czakon, hep-ph/0511200Comput. Phys. Commun. 175M. Czakon, Comput. Phys. Commun. 175, 559 (2006) [hep-ph/0511200].
Quest for Universal Integrable Models

Partha Guha † and Mikhail Olshanetsky

Institute of Theoretical and Experimental Physics, 117259 Moscow, Russia
† S N Bose National Centre for Basic Sciences, JD Block, Sector-3, Salt Lake, Calcutta 700091, India
Institut des Hautes Etudes Scientifiques, Le Bois-Marie, 35, Route de Chartres, F-91440 Bures-sur-Yvette, France

Journal of Nonlinear Mathematical Physics, 1999. Received September 02, 1998; Revised December 28, 1998; Accepted January 08, 1999.

In this paper we discuss a universal integrable model, given by a sum of two Wess-Zumino-Witten-Novikov (WZWN) actions, corresponding to two different orbits of the coadjoint action of a loop group on its dual, and the Polyakov-Weigmann cocycle describing their interactions. This is an effective action for free fermions on a torus with nontrivial boundary conditions. It is universal in the sense that all other known integrable models can be derived as reductions of this model. Hence our motivation is to present a unified description of different integrable models. We present a proof of this universal action from the action of the trivial dynamical system on the cotangent bundles of the loop group. We also present some examples of reductions.
Introduction
During the last two decades essential progress has been achieved in the investigation of integrable models [3,5,7,8,19]. Recently one of us [16] proposed a universal action for integrable models. It turns out to be a sum of two Wess-Zumino-Witten-Novikov (WZWN) actions, corresponding to two different orbits of the coadjoint action of a loop group, and the Polyakov-Weigmann cocycle [20] describing their interaction. The WZWN model is a universal object in conformal field theory. It is conjectured that all conformal field theories can be obtained as reductions of the WZWN model in the spirit of Drinfeld-Sokolov [6], or via some appropriate coset construction, and all the symmetries of the conformal model are the symmetries of the WZWN model. In other words, all the algebraic structures (operator algebras) arising in different conformal field theories are considered as reductions of the universal enveloping Kac-Moody algebras. Hence the theory of 2d conformal models is exhausted by the theory of the WZWN model.
Let M be a closed two dimensional manifold and let B denote a three dimensional manifold with boundary M .
The WZWN action is given by
S_0(g) = -\frac{k}{4\pi} \int_M d^2z\, \mathrm{Tr}\big(g^{-1}\partial g\big)^2 + \frac{k}{12\pi} \int_B d^3y\, \epsilon^{ijk}\, \mathrm{Tr}\big(g^{-1}\partial_i g\, g^{-1}\partial_j g\, g^{-1}\partial_k g\big),
where g : M → Ĝ. The trace is the Killing form on the Lie algebra of the loop group Ĝ, and k is the level of the affine algebra. There are two different ways to derive the WZWN action. Firstly, it is the anomalous part of the effective action for fermions on a plane in a gauge field [20], and secondly it is obtained from the Kostant-Kirillov form on an orbit of the coadjoint action of a loop group [3]. In this paper we shall search for a similar type of construction for integrable models. The action considered here is universal in the sense that all known integrable models can be derived from it by reduction. This action can be interpreted as an effective action for free fermions on a torus with nontrivial boundary conditions, where the role of perturbing relevant operators is played by the monodromies of the fermions.
This approach is useful from the point of view of string theory -the set of integrable models may play the role of configuration space in string dynamics [9,13].
In the late seventies M. Adler, B. Kostant and W. Symes [1,4,12,24] proposed a scheme to construct integrable Hamiltonian systems. The AKS scheme originated from their work and was subsequently developed by Reiman and Semenov-Tian-Shansky [21,22]. The scenario of the classical R-matrix was unveiled by Semenov-Tian-Shansky [22]. Recently one of us has proposed a hierarchy of this formalism [10].
Our approach is complementary to the AKS formulation. The Euler-Lagrange equation of motion of our proposed universal action, based on the Lie-Poisson structure, yields a zero curvature equation. This is of course necessary, but not sufficient, for integrability. Only the choice of a special Hamiltonian, prescribed by the Adler-Kostant-Symes scheme, guarantees integrability in the Liouville sense. Also, by choosing different coadjoint orbits and different matrix entries, one can obtain various sets of integrable systems. Thus one can associate different integrable systems to various symmetric spaces (see for example [11,14,17,18]).
We organise this paper in the following way. In section 2 we discuss some background material, such as the Hamiltonian action and the moment map of the action of a loop group [23]. We describe how some canonical dynamical systems are associated to the cotangent bundle of a Lie group [2]. A proof of our proposed universal action is presented in section three. We derive this action from the action of the trivial dynamical system on the cotangent bundle of the loop group. In the final section we give some explicit examples.
Since there are no derivative terms appearing in the kinetic part of the action, it seems that we cannot produce various nontrivial mechanical systems. This is similar to the Hamiltonian system on the cotangent bundle (T*G, ω), where ω is the symplectic form on the cotangent bundle: it does not have an immediate mechanical meaning (in the general sense). However, in both cases one can produce an interesting family of Hamiltonian systems associated to a family of arbitrary Riemannian symmetric spaces. We explain this in one of our examples. We also add an appendix, where we present an explicit connection between the nonlinear Schrödinger equation and the Heisenberg ferromagnetic system, although this connection has been known for some time (see for example [17,18]).
Preliminaries
Hamiltonian action and moment map
Let us start this section with some standard definitions of Hamiltonian mechanics [15].
Let G be any compact semi-simple Lie group, G its Lie algebra, and G * the dual space of G. The left and right translation
L g : h −→ gh, R g : h −→ hg induces a map dL * g (or dR * g ) : T * g G −→ T * e G ∼ = G * .
Thus if (g, κ g ) ∈ T * G, where κ g ∈ T * g G is the coordinate of the fibre, then
(g, κ g ) L −→ (g, l i ), l i = dL * g κ g , (g, κ g ) R −→ (g, r i ), r i = −dR * g κ g ,
where r i and l i are related by
r i = −Ad * g(l i ).
Hence we can identify
T * G L,R −→ G × G * .
LetĜ = C ∞ (S 1 , G) be the loop algebra andĜ the corresponding loop group. By left or right trivialization, induced from T * G, we can identify T * Ĝ ≃Ĝ ×Ĝ * .
Let us consider a Hamiltonian action
G × T * Ĝ −→ T * Ĝ , such that L h (g, l i ) = (hg, l i ), R h (g, l i ) = gh, Ad * h −1 (l i ) .
The Hamiltonian actions are given by
µ L X (g, l i ) = − X, Ad * g(l i ) , µ R X (g, l i ) = X, l i ,
where X ∈Ĝ, g ∈ T * Ĝ and l i ∈Ĝ * .
Then the corresponding moment maps associated to the Hamiltonian action are
µ L (g, l i ) = Ad * g(l i ), µ R (g, l i ) = l i .
Hence by symplectic reduction the reduced phase space is naturally identifiable with the coadjoint orbit.
Let α ∈ Ĝ* be a constant element; then there exists a canonical one-form θ := ⟨α, g^{-1}dg⟩ and the symplectic form Ω := d⟨α, g^{-1}dg⟩ on Ĝ*.
We define a geometrical action S = ∫θ on T*Ĝ as a functional of trajectories on T*Ĝ. The symmetries of this geometrical action are:
α → α, g → h R g, α → h −1 L αh L , g → gh L ,
where h L and h R are constant elements ofĜ.
Principal Bundle Construction: If Ô_ν is the coadjoint orbit in Ĝ* through the point ν ∈ Ĝ*, then there is a natural canonical imbedding
i_ν : Ô_ν → Ĝ*.
The left translation is already identified by T*Ĝ ≡ Ĝ × Ĝ*. Then the pullback map i*_ν(T*Ĝ)|_{Ô_ν}, i.e. the restriction of T*Ĝ to Ô_ν, is the principal bundle over Ô_ν. Let Ĝ × Ô_ν → Ô_ν be the trivial bundle over Ô_ν, and let f : Ĝ × Ô_ν → Ĝ be an equivariant function. If α_g ∈ i*_ν(T*Ĝ) and dR*_g(α_g) ∈ T*_e Ĝ ≡ Ĝ*, then it is easy to prove

Proposition 2.1. f(α_g) = dR*_g(α_g) is the moment map associated to the action of Ĝ on T*Ĝ.
In the next section we describe a canonical integrable system associated with T * G, which can easily be lifted to T * Ĝ .
A Universal Integrable System on T * G
In this section we present a brief description on the construction of a universal integrable system on the cotangent bundle of the Lie group [2].
The moment map µ : T * G ≃ G × G * −→ G * , associated with the Hamiltonian action G × T * G → T * G, is a Poisson map whenever G * is endowed with its natural Poisson structure.
Definition 2.2. A bivector Λ ∈ ∧ 2 G is called a Poisson bivector if it commutes with itself. Two Poisson bivectors Λ 1 , Λ 2 are called compatible if they commute with one another, [Λ 1 , Λ 2 ] = 0. This is equivalent to the fact that any linear combination lΛ 1 + mΛ 2 is a Poisson bivector. Such a family is called a pencil of Poisson bivectors.
A well known example is the rigid body system. In this case the moment map is a Poisson map
µ : T * SO(3) −→ so(3) * ,
with the linear Poisson structure
Λ so(3) * = ǫ ijk p i ∂ j ⊗ ∂ k .
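As a quick check of this structure, the brackets of the coordinate functions read {p i , p j } = Λ(dp i , dp j ) = ǫ kij p k , so that {p 1 , p 2 } = p 3 and cyclically, while the Casimir |p| 2 = Σ i p 2 i commutes with all functions; this is the familiar Lie-Poisson structure of the angular momenta.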
We consider differential 1-form η on G * , which is annihilated by the natural Poisson structure Λ G * on G * associated with the Lie bracket. Such a form is called a Casimir form.
Definition 2.3. We define the vector field by Γ η = Λ(µ * (η)), and the dynamical system by
g −1ġ = η(g, α) = η(α), α̇ = 0,
where Ω = d α, g −1 dg .
This system can be integrated by quadratures on each level set, obtained by fixing α's in G * , so that this particular dynamical system coincides with a one parameter group of the action of G on that particular level set.
Consider, for example, the rigid rotator
η = f dH, where H = Σ i p 2 i /2 is the Hamiltonian and f = f (p) is an arbitrary function. If {X i } is the basis of so(3), then it is not difficult to see that Γ η = f (p)p iXi ,
whereX i are left invariant vector fields on SO(3). Hence, the dynamical system is given by
g −1ġ = f (p)p i X i , ṗ i = 0.
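Since ṗ i = 0, the element ξ = f (p)p i X i is constant along the motion and the flow is simply g(t) = g(0) exp(tξ): on each level set p = const the dynamics coincides with the one-parameter subgroup generated by ξ, in agreement with the integration by quadratures described above.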
In particular, if we restrict ourselves to the abelian Lie group R n , then µ : T * R n −→ (R n ) * , induced by the natural action of R n on itself by translation, is a Poisson map. Let η = ν k dI k be a one-form on (R n ) * in terms of action-angle variables. After pulling it back to T * R n we obtain a vector field Γ η = Λ(µ * (η)), where Λ is the canonical Poisson structure in the cotangent bundle. Then the associated equations of motion on T * R n or T * T n are
İ k = 0, φ̇ k = ν k .
We can recover this from another point of view. Let the Ad * -invariant function H :
T * G −→ R satisfy H = 1 2 ||g −1ġ || 2 G .
This is a free particle Hamiltonian. Now it is easy to see that if (g, g −1ġ ) ∈ T * G, the equation assumes the form d/dt (g −1ġ ) = 0.
If we assume
H(g, g −1ġ ) = 1 2 ||g −1ġ || 2 G − Ad * g α, β
for α ∈ G * , the equation becomes
d/dt (g −1ġ ) = [Ad * g −1 (β), α].
This equation is the nontrivial part of the canonical system of equations for the Hamiltonian system (T * G, Ω, H).
Universal Integrable Model
Recently [16] Olshanetsky proposed an action based on WZWN theory which has the following form
S = S u (A) − H(A), where S u (A) = 2 tr(u∂gg −1 )d 2 z + kS W ZW N .
Here H(A) is a Hamiltonian and A is a current, given by
A = g −1 ug + g −1 ∂g.
It defines a point on the coadjoint orbit through a point (u, k) inĜ * .
Let us assume, for simplicity, that k = 1. The equation of motion, based on the Lie Poisson structure
(∂ − ad * Ā )A = 0, is given by ∂̄A = [Ā, A] + ∂Ā,
whereĀ = grad H is the gradient of the Hamiltonian. This is a zero curvature equation, which is a necessary but not sufficient condition for integrability. Only special Hamiltonians guarantee this distinguished property. We must emphasize here that ∂/∂z arises along with the "time" derivative ∂/∂z̄ when the central extension of the classical algebra is considered.
Conformal models are distinguished by their holomorphicity property: these are theories of massless scalars with the equation of motion
∂A = 0, and A = ∂φ.
We have already seen that in the WZWN model the role of A is played by the Kac-Moody currents. The Adler-Kostant-Symes scheme can be used to choose a particular subset of the zero curvature equation which are integrable in the Liouville sense.
Apparently there is a drawback in this model: no z derivative appears in the kinetic part of the action. This reflects the lack of a central charge in the R-algebra, which is exactly the necessary condition for the description of ordinary integrable systems. But we shall show how to overcome this difficulty by constructing the mechanical systems associated to Riemannian symmetric spaces via the Fordy-Kulish decomposition.
AKS Scheme and Zero Curvature Equations
LetĜ = gl(n, C) × C[λ, λ −1 ]
be the loop algebra of a semi-infinite formal Laurent series in λ with coefficients in gl(n, C). For example, an element X(λ) ∈Ĝ can be expressed as a formal series in the form
X(λ) = m i=−∞ x i λ i ∀ x i ∈ gl(n, C). The Lie bracket, with Y (λ) = l j=−∞ y j λ j , is given by [X(λ), Y (λ)] = m+l k=−∞ i+j=k [x i , y j ]λ k .
We define a nondegenerate bilinear two form onĜ
A(λ), B(λ) := Res λ=0 (λ −1 A(λ)B(λ)) = tr(A(λ)B(λ)) 0 .
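For monomial loops the pairing is easily evaluated: Aλ i , Bλ j = Res λ=0 (λ i+j−1 ) tr(AB) = tr(AB) δ i+j,0 , so for instance Aλ 2 , Bλ −2 = tr(AB) while Aλ 2 , Bλ = 0; together with [xλ i , yλ j ] = [x, y]λ i+j this is all that is needed in the computations below.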
There is a natural splitting in the loop algebraĜ =Ĝ + ⊕Ĝ − , whereĜ + denotes the subalgebra ofĜ, given by the polynomial in λ, andĜ − is the subalgebra of strictly negative series.
The above decomposition ofĜ does not correspond to a global decomposition of the loop groupĜ, but we have a dense open subset
G −Ĝ+ ⊂Ĝ,
consisting of all loops φ that can be factorized in the form
φ = φ − φ + with φ − ∈Ĝ − , φ + ∈Ĝ + .
We refer to this subset ofĜ as the big cell.
Let us consider the Grassmannian like homogeneous spaceĜ/Ĝ + . The image inĜ/Ĝ + of the complement of the big cell inĜ is a divisor inĜ/Ĝ + . It therefore corresponds to a holomorphic line bundle
L −→Ĝ/Ĝ + .
We denote by LG the automorphism group of L. The pullback of L to LG is canonically trivial. Hence LG turns out to be the central extension ofĜ by C × :
1 −→ C × −→ LG −→Ĝ −→ 1.
The loop algebra
LG =Ĝ ⊕ C satisfies the commutation relation [(A(λ), a), (B(λ), b)] = ([A, B](λ), ω(A, B)); LG is called the central extension ofĜ, obtained through ω. In this particular case LG is also called a Kac-Moody algebra on S 1 .
In general, the map
κ :Ĝ → LG
is not a Lie algebra homomorphism, but only its restriction toĜ + is a Lie algebra homomorphism, since the central extension term vanishes identically. The corresponding induced map
κ :Ĝ + −→ LG
yields a canonical holomorphic trivialization of the part of the fibration lying overĜ. Let φ = φ − φ + be an element of the big cell. Then κ(φ) satisfies κ(φ) = κ(φ − )κ(φ + ).
Let R ∈ End G be a linear operator on G. The Kostant-Kirillov-Souriau R-bracket is given by
[X, Y ] R = (1/2)([RX, Y ] + [X, RY ]) ∀ X, Y ∈ G.
This satisfies the Jacobi identity if R satisfies the modified Yang-Baxter equation.
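A standard solution of the modified Yang-Baxter equation is R = P + − P − , the difference of the projections ontoĜ + andĜ − ; writing X = X + + X − one finds [X, Y ] R = (1/2)([RX, Y ] + [X, RY ]) = [X + , Y + ] − [X − , Y − ], so the R-bracket splits the original bracket along the decompositionĜ =Ĝ + ⊕Ĝ − , which is the algebraic content of the Adler-Kostant-Symes scheme.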
Definition 3.1. Let (Ĝ, R) be a double loop algebra on which we define two algebraic structures. Suppose also that ω is a 2-cocycle onĜ. Then
ω R (X, Y ) = ω(RX, Y ) + ω(X, RY )
is a 2-cocycle onĜ R .
The gradient ∇F : G * → G is defined by
d/dt F (U + tV )| t=0 = V, ∇F (U ) .
The natural Poisson structure on C ∞ (Ĝ * ) then takes the form {ξ, χ}(U, c) = ∫ S 1 ( Tr( c (d(∇ξ)/dz) ∇χ ) + [∇ξ, ∇χ], U ) dz ∀ ξ, χ ∈ C ∞ (Ĝ * ).
We observe that the central parameter c is fixed under the coadjoint action of the group. So LG * stratifies into Poisson submanifolds, corresponding to different values of the parameter.
The differential equation appears from the ad-invariant condition, which should be satisfied by the gradients of the local Hamiltonians
Derivation of the Action
Let us derive the generic form of the action for integrable models (IM) from the action of the trivial dynamical system on the cotangent bundle T * Ĝ . Let (g, m; u, n) ∈ T * LG. Then under the left action ofĜ we have
g L → gh, m → m, n → n, ∀ h ∈Ĝ.
Since m, n are invariant under the action of an element h ∈Ĝ, the action foliates T * LG into hyperplanes. Let us confine ourselves to the particular hyperplane m = 0, n = 1. We again consider the two-form ω(g) onĜ
ω(g) = dz Tr dgg −1 , ∂(dgg −1 ) .
Let us replace u by a new field
h = P exp dz ′ u(z ′ ).
Definition 3.5. A symplectic form on T * Ĝ is given by
Ω = ω(g) + ω(h) + 2 dz u, (dgg −1 ) 2 .
The corresponding one form β satisfies dβ = Ω. We define a Hamiltonian
H(v) = 2 d 2 z v, (hg) −1 ∂(hg) .
Hence the action is given by
S = dz ′ (β − H) ≃ S W ZW N (h) + S W ZW N (g) + 2 d 2 z u, dgg −1 − H.
After the gauge fixing condition we arrive at
S = S u (A) − H(v, A), where H(v, A) = d 2 z v, A .
It was demonstrated by Polyakov and Wiegmann [20], that an effective action from the fermionic Lagrangian on a plane
L =ψ L (∂ − A u )ψ L +ψ R (∂ − Av)ψ R + 1 2α 0 A u , Av
, gives rise to a sum of WZWN actions in a gauge invariant form. In this case we have
log det(∂ − A u ) det(∂ − Av) det(∂ − u) det(∂ − v) = S − 2 d 2 z u, v .
Suppose g is an arbitrary element of the loop group. Then there exists a gauge transformation u −→ g −1 ug + g −1 ∂g.
Hence the matrices u, v satisfy the zero curvature equation
∂u = [v, u] + ∂v.
We assume that u and v depend on a spectral parameter λ which lives on a rational curve CP 1
u = u 0 + m 1 j=1 u j λ − a j , v = v 0 + m 2 k=1 v k λ − b k
, such that u and v satisfy the zero curvature condition.
Since the zero curvature condition is preserved under the gauge transformation u −→ u g = g −1 ug + g −1 ∂g = A, the corresponding linear equations ∂ψ = A u ψ,∂ψ = Avψ of the zero curvature equation satisfy Proposition 3.7. If ∂ψ = −uψ, then A also satisfies the same equation for g −1 ψ.
Sketch of the Proof:
∂(g −1 ψ) = −g −1 ∂gg −1 ψ + g −1 ∂ψ = −A(g −1 ψ).
If we consider fermions with monodromies, then due to the zero curvature condition the left and right function can be identified by ψ L = ψ R = ψ.
We may regard ψ as a function on R with values inĜ. Its value at x = 2π is called the monodromy matrix T A . The coadjoint orbits are described by Floquet's theorem. Proof. We know that
A = g −1 ug + g −1 ∂g, ∂A = −g −1∂ gA + g −1 u∂g + g −1 ∂∂g.
Let us substitute our ansatz∂g = gv − vg in the above equation:
∂A = −g −1∂ gA + g −1 ugv − g −1 uvg + g −1 ∂gv − g −1 v∂g = −g −1∂ gA + Av − g −1 uvg − g −1 v∂g = −g −1 (gv − vg)A + Av − g −1 uvg − g −1 v∂g. Since g −1 vgA = g −1 vug + g −1 v∂g, we get back the equation of motion. We also note that [∂,∂]g = 0 and ∂ 2 = 0 =∂ 2 .
Sketch of the Proof: Since
∂g = gv − vg
, it is easy to see that ∂∂g =∂∂g.
Let us consider the factorization of the matrix g(λ) such that
g(λ) = g + (λ)g −1 − (λ)
is the solution to the Riemann problem, where g + is an element of the group of all smooth functions from the unit circle S 1 to G that extend to holomorphic G-valued functions on the disk {λ : |λ| < 1}. Similarly, g −1 − (λ) is the element of the group of all smooth functions S 1 → G that extend holomorphically to the disk {λ : |λ| > 1} and take the value 1 at infinity.
Substituting g = g + g −1 − in∂g = gv − vg and ∂g = gA − ug. We obtain
g −1 − vg − + g −1 −∂ g − = g −1 + vg + + g −1 +∂ g + , g −1 − Ag − + g −1 − ∂g − = g −1 + ug + + g −1 + ∂g + .
Definition 3.12. We define two new currents
A u = g −1 + ug + + g −1 +∂ g − , Av = g −1 − vg − + g −1 −∂ g − .
Consider a contour γ which consists of small circles around the points a j (j = 1, . . . , m 1 ). Let us modify the action S by
S −→ γ dλS.
Originally, Olshanetsky [???] generalized S by introducing the kinetic term ∂ λ in such a way that one obtains, in addition to the zero curvature equation, a new equation of motion in the form of the string equation:
[∂ + A(g), ∂ λ + v ′ ] = .
Earlier, Gerasimov et al [8,9] proposed a number of programs for incorporating integrable models into the general framework of string theory. String theory is understood as a dynamical theory on some configuration space which contains at least all the 2-dimensional field theories as its points. They argued that for the universal description of all conformal models it is necessary to treat the various Kac-Moody algebras on the same footing, through their embedding into theĜL(∞) algebra. This may be described through dependence on some auxiliary variable λ, which explains why λ appears in the equation.
A string equation is a sort of "quantum deformation" of a zero curvature equation. Up to now the holomorphic dependence on the spectral parameter has been quite artificial. Moreover, the geometrical meaning of this "deformed" zero curvature equation is still lacking. We now try to give a plausible explanation of this equation.
So far we have encountered three coordinates (z, z̄, λ), where z̄ appears along with z and plays the role of "time". Apparently the lack of λ̄ dependence may seem to be a drawback from the point of view of integrable systems, but this can be managed in the following way:
Let (z, λ,z,λ) be the coordinates on R 4 , which are independent and real for signature (2,2). The self dual Yang-Mills equations are the compatibility conditions for the pair of operators
L 0 = (D z − ξDλ), L 1 = (D λ + ξDz),
where ξ ∈ C is an auxiliary complex spectral parameter and D z is the covariant derivative of some Yang-Mills connection in the direction ∂/∂z. When we impose one null symmetry along ∂/∂z and another along ∂/∂λ we obtain the Lax pair:
L 0 = ∂ ∂z + A(z, λ), L 1 = ∂ ∂λ + B(z, λ).
Definition 3.13. Let g = g 1 g 2 , then the Polyakov-Wiegmann formula is defined by
S W ZW (g 1 g 2 ) = S W ZW (g 1 ) + S W ZW (g 2 ) + 1 2π d 2 z g −1 1∂ g 1 , ∂g 2 g −1 1 .
Hence from our previous computation we can assert:
Proposition 3.14. For g = g + g −1 − , the action becomes
S = S u (g + ) + Sv(g − ) + d 2 z A u (g + ), Av(g − ) ,
where S u (g + ) = 2 d 2 z u,∂g + g −1 + + S W ZW N (g + ),
Sv(g − ) = 2 d 2 z v, ∂g − g −1 − + S W ZW N (g −1 − ).
It is easy to see that S = S(g + g −1 − ) is a modified Polyakov-Wiegmann formula. This action is gauge invariant under
g + −→ g + h, g − −→ g − h,
where h is independent of λ and the equation of motion is a zero curvature equation with a spectral parameter on an arbitrary Lie algebra G:
∂A u − ∂Av + [A u , Av] = 0.
This equation does not guarantee integrability of the system. In particular, if we choose
A u = A 0 + A 1 λ + A 2 λ 2 + · · · + A n λ n , Av = A u λ −1 ,
then one recovers the Adler-Kostant-Symes equation, where A u is considered to be a Lax operator L, and Av is the gradient of H = 1 2 tr(L 2 λ −1 ). In fact, the hierarchy of AKS systems can be recast into this zero curvature form.
Applications
In this section we present some examples. We have already stated that our Euler-Lagrange equation is a zero curvature equation, and hence does not have an immediate mechanical meaning. We show that, after imposing the Cartan decomposition of the Lie algebra, we obtain a family of nontrivial mechanical systems associated to an arbitrary Riemannian symmetric space.
Periodic Toda Lattice
Let (α 0 , α 1 , . . . , α n ) be a system of simple roots of the affine Lie algebraĜ, where (α 1 , . . . , α n ) are simple roots of the original finite dimensional algebra G, and −α 0 = n j=1 a j α j is the highest root.
Let (s 0 , s 1 , . . . , s n ) be a set of non-negative integers without a common divisor, and let N = Σ n j=0 a j s j (with a 0 = 1) be the order of σ, where σ N = 1.
Definition 4.1.
A grading is a decomposition G = ⊕ j∈Z G j of the Lie algebra G into a direct sum of subspaces G j , such that
[G i , G j ] ⊂ G i+j mod N, σG k = ǫ k G k , ǫ = e 2πi N .
This automorphism is called Coxeter automorphism.
An invariant subalgebra is a direct sum
G 0 = R ⊕ · · · ⊕ R ⊕ G(k),
where G(k) is a semi-simple subalgebra generated by simple roots (α j 1 , . . . , α j k ) for which s j 1 = · · · = s j k = 0.
Definition 4.2. When s 0 = s 1 = · · · = s n = 1, G 0 ∼ = H is a Cartan subalgebra and N = Σ n j=0 a j = h is the Coxeter number.
Let (H j , E j , F j ) ∀ j = 0, . . . , n be the Cartan Weyl basis ofĜ. We define the following action of σ:
σE j = ǫ s j E j , σH j = H j , σF j = ǫ −s j F j .
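For example, for G = sl(2, C) there is a single simple root α 1 with a 1 = 1, so the choice s 0 = s 1 = 1 gives N = a 0 s 0 + a 1 s 1 = 2 = h and ǫ = e iπ = −1; the Coxeter automorphism then acts as σH = H, σE = −E, σF = −F , so that G 0 is the Cartan subalgebra and G 1 is spanned by E and F .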
Let us consider the zero curvature equation again for
A u = A 0 + A 1 λ, Av = A −1 λ −1 .
Then the zero curvature equation decomposes into (1)∂A 1 = 0,
(2) ∂A −1 = [A 0 , A −1 ],(3)∂A 0 + [A 1 , A −1 ] = 0.
One can readily identify
A 0 ∈ G 0 , A 1 ∈ G 1 , A −1 ∈ G N −1 .
Moreover A 1 ∈ G 1 is a constant matrix in G 1 .
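For instance, forĜ built on sl(2, C) with the principal grading the above system is the simplest periodic Toda chain: writing q = e φH for a scalar field φ, the equation of Proposition 4.3 reduces, up to a normalization of the field and of the constant elements η and ξ, to the sinh-Gordon equation ∂∂̄φ ∝ sinh 2φ.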
Hermitian Symmetric Spaces and Integrability
A Riemannian manifold M is called a globally symmetric Riemannian space if every point p ∈ M is a fixed point of an involutive isometry of M which takes any geodesic through p into itself as a curve but reverses its parametrization. Let G be a semi-simple Lie group and G its Lie algebra. Let M be a homogeneous space of G, i.e. a differentiable manifold on which G acts transitively. There is a homeomorphism of the coset space G/H onto M, where H is the isotropy subgroup at a point of M. We can associate to these spaces a canonical connection with curvature and torsion. Curvature and torsion at a fixed point p ∈ G/H are given purely in terms of the Lie bracket:
(R(X, Y )Z) p = −[[X, Y ] H , Z] ∀ X, Y, Z ∈ M, T (X, Y ) p = −[X, Y ] M ∀ X, Y ∈ M.
Definition 4.4. A Hermitian symmetric space is a coset space G/H for Lie groups whose associated Lie algebras are G and H, with the decomposition G = H ⊕ M.
At this stage we project the zero curvature equation onto the real plane and treat ∂ = ∂/∂x and ∂̄ = ∂/∂t. We assume that A u is the orbit L through u = λ 3 A, where A = i diag(1, −1) is a Cartan element of su(2). Let us derive the orbit via the coadjoint action
L = B −1 (λ 3 A)B, where B = 4 i=1 (b i λ −i , e β i λ −i ).
Here the b i 's are central elements and β i ∈ M. After an elaborate computation we obtain
L = λ 3 A + λ 2 Q + λ (P − (i/2)[Q − , Q + ]) + T, S = (i/2)[P + − P − ] + cQ.
If we assume H = −(1/8) tr(L 2 λ −2 ), then
Av (= π + grad H) = −π + (1/4) Lλ −2 = −(1/4)(Aλ + Q).
Setting the various coefficients of λ m equal to zero we obtain:
Q̇ = [A, P ] − (i/2) [A, [Q − , Q + ]].
We now apply the group decomposition properties of Hermitian symmetric spaces. Observe that [Q − , Q + ] ∈ h and that A is a constant matrix. Hence we obtain
P = − i 2 (Q + −Q − ).
Similarly
T = − 1 4Q + 1 4 [Q + , [Q − , Q + ]] − 1 4 [Q − , [Q − , Q + ]], S = 1 4 (Q + +Q − ) + cQ.
Finally, equating the λ 0 coefficient we obtain
Ṫ + [S, Q] t = [Q, T ] + [Q, [S, Q]] + (1/4) Q x .
If we choose
Q = 0 q † −q 0 ,
then we get, from the zero curvature equation,
q ttt + 6q t |q| 2 + q x = 0.
This is a coupled KdV equation. When we consider the orbit L through u = λ 2 A we obtain
L = λ 2 A + λQ + P − i 2 [Q − , Q + ] .
A similar calculation yields the nonlinear Schrödinger equation
q tt + iq x + 2q|q| 2 = 0.
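In both computations the underlying Hermitian symmetric space is SU(2)/U(1) ∼ = CP 1 : taking k = A/2 = (i/2) diag(1, −1) and m ∈ M the off-diagonal matrix with entries q and −q̄, a direct computation gives [k, m] = i times the off-diagonal matrix with entries q and q̄, and hence [k, [k, m]] = −m, which is precisely the condition j 2 = −1 required of a Hermitian symmetric space.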
Geometric Action and Virasoro Group
The geometric action of the Virasoro group has the form of Polyakov's 2d quantum gravity
S grav = ∫ d 2 z (∂̄F/∂F ) [ ∂ 3 F/∂F − 2 (∂ 2 F/∂F ) 2 ],
(F ∈ Diff (S 1 )).
Let S 1 be the circle parametrized by x : 0 ≤ x ≤ 2π and Diff (S 1 ) be the group of all orientation preserving C ∞ diffeomorphisms of S 1 . It is natural to consider the Lie algebra of vector fields on S 1 Vect (S 1 ) as its Lie algebra. The dual of the Diff(S 1 )/S 1 is identified with the space of quadratic differential forms u(x)dx 2 by the following pairing u(x), ξ = 2π 0 u(x)ξ(x)dx, where ξ = ξ(x) d dx ∈ Vect (S 1 ). Let us consider the shift of (u(x), c), induced by S 1 diffeomorphism
x −→ s(x) = x + ǫf (x), (u(x), c) −→ s ′ (x) 3/2 (u(s(x)), c)s ′ (x) 1/2 = (ũ(x), c),
whereũ(x) = s ′ (x) 2 u(s(x)) + (1/2) [ s ′′′ /s ′ − (3/2)(s ′′ /s ′ ) 2 ].
The last term is known as Schwarzian S(s). After redefining, or adjusting, the coefficients we can define the current
A = u(F )(∂F ) 2 − (c/24π) S(F ).
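Two classical properties of the Schwarzian are useful here: S(s) = 0 exactly when s is a Möbius transformation s(x) = (ax + b)/(cx + d), and under composition S(f ∘ g) = (g ′ ) 2 S(f ) ∘ g + S(g); the latter cocycle property is what makes the shift of (u(x), c) written above consistent under successive diffeomorphisms.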
In this case we have the action in the form of S = S u (A) − H(A), where
S u = − d 2 zu(F )∂F∂F + c 48π S grav ,
which is linear with respect to A.
The equation of motion is
∂ Ad * F (u, c) = ad * v Ad * F (u, c),
where v = grad H ∈ Vect (S 1 ). This action can be derived from the canonical action on the cotangent bundle of the group Diff S 1 . The above equation can be transformed to a symmetric form by the Polyakov-Wiegmann formula for the group Diff S 1 . Unfortunately this equation cannot be recast as a zero curvature equation. Nevertheless, some integrable models can be described within this approach.
Appendix
In this appendix we present a connection between the nonlinear Schrödinger equation and the continuous Heisenberg ferromagnetic equation. The continuous Heisenberg ferromagnetic model is an important integrable model associated to some Hermitian symmetric spaces [17,18].
The action of the Heisenberg ferromagnetic model is given by S = d 2 z tr 2u∂gg −1 + ∂(g −1 kg)∂(g −1 kg) ,
where k ∈ H is a constant element. Let us define Q := g −1 kg, which immediately leads to the relations of Lemma 5.1, ∂Q = [Q, g −1 ∂g] and∂Q = [Q, g −1∂ g]. Hence we obtain (∂gg −1 ) H up to some constant, which we can always set to zero:
(∂gg −1 ) H = − 1 2 [∂gg −1 , [∂gg −1 , k]].
We finally derive the nonlinear Schrödinger equation
bivectors Λ 1 , Λ 2 are called compatible if they commute with one another [Λ 1 , Λ 2 ] = 0.
[(A(λ), a), (B(λ), b)] := ([A, B](λ), ω(A, B)), where ω(A, B) is the Maurer-Cartan C-valued two-cocycle, satisfying ω(A, [B, C]) + ω(B, [C, A]) + ω(C, [A, B]) = 0.
the dense open subset of LG that lies over the big cell ofĜ. We also define the bilinear form on LG (A, a), (B, b) = ab + s 1 tr(AB).
Lemma 3.2. Let H be an ad-invariant function onĜ * . Then the gradient of H satisfies ad * (R∇H(α), a)(α, 1) = (ad * (R∇H(α))(α) + R∇H ′ , 0). Sketch of the Proof: By using the identity ad * R (X, a)(β, c), (Y, b) + (β, c), ad R (X, …
Lemma 3.3. There exists a natural Poisson structure on the space C ∞ (Ĝ * , C) of smooth real valued functions onĜ * : {ξ, χ}(U, c) = ∫ S 1 …
(∂ z − ad * α)∇H = 0. It is known that the good substitutes for local Hamiltonians are Casimir functions. Hence we choose the Hamiltonian H = (1/2) tr(α 2 ).
Theorem 3.4. Let α be the orbit. The Hamiltonian equations of motion onĜ * , generated by the gradient of the Hamiltonian H (the ad-invariant function), have the form dα/dz = R d(∇H)/dz + [R(∇H), α].
Proposition 3.6. The equation of motion, corresponding to S = S u (A) − H(v, A) for H(v, A) = d 2 z v, A , is∂A = [v, A] + ∂v.
Theorem 3.8. (Floquet) Two periodic potentials A and A ′ are gauge equivalent if and only if the corresponding monodromy matrices T A , T A ′ are conjugate. Consider some particular cases: Remark 3.9. For the generic integrable models, the following relations hold automaticallȳ ∂u = ∂v = [u, v] = 0.
Proposition 3.10. If an integrable model satisfies∂u = ∂v = [u, v] = 0, then the zero curvature equation reduces to∂g = gv − vg.
we get back the equation of motion∂A = [A, v]. Additionally we have ∂g = gA − ug,which follows from the current.
Proposition 4.3. Let η ∈ G N −1 and A 0 = q −1 ∂q. Then the above system of equations reduces to the periodic Toda lattice equation. Sketch of the Proof: From the first equation we let A 1 = η be a constant matrix in G 1 and, if ξ ∈ G N −1 , then from the second equation we obtain A −1 = q −1 ξq and A 0 = −q −1 ∂q. Finally from the third equation we obtain∂(q −1 ∂q) − [η, q −1 ξq] = 0, which by the substitution q = exp(φ), where φ ∈ H, yields the periodic Toda lattice equation.
G/H onto M for some isotropy subgroup H at a point of M . Let H be the Lie algebra of H and G satisfy G = H ⊕ M and [H, H] ⊂ H, where M is a vector space complement of H. Furthermore, if H and M satisfy [H, M] ⊂ M then G/H is called the reductive homogeneous space.
For
Hermitian symmetric spaces the curvature satisfies (R(X, Y ), Z) p = −[[X, Y ], Z] ∀ X, Y, Z ∈ M, here [X, Y ] ∈ H is satisfied automatically due to [M, M]. Let k be an element in the Cartan subalgebra of G, whose centralizer in G is H = {l ∈ G : [k, l] = 0}. Let j = ad k = [k, * ] be a linear map j : T * (G/H) −→ T * (G/H) satisfying j 2 = −1 or [k, [k, m]] = −m for m ∈ M . Let us consider again the zero curvature equation ∂A u − ∂Av + [A u , Av] = 0.
Proposition 4.5. Let (O u , ω u ) be the symplectic orbit, where ω u is the Killing two-form on the orbit. Then the Hamiltonian equations of motion, corresponding to H(L) = −(1/8) tr(L 2 λ −2 ), generate a system of third order partial differential equations in R n . In this case the zero curvature equation is dL/dt = [Aλ + Q, L] + (1/4)(Aλ + Q) x .
Lemma 5.1. ∂Q = [Q, g −1 ∂g],∂Q = [Q, g −1∂ g]; and ∂(∂gg −1 ) M −∂g(∂gg −1 ) + [∂gg −1 , (∂gg −1 ) H ] = 0 respectively. From the H part of the equation we obtain∂(∂gg −1 ) H − [∂gg −1 , [k, ∂(∂gg −1 )]] = 0, ∂(∂gg −1 ) H + (1/2) ∂[∂gg −1 , [∂gg −1 , k]] = 0,
+ [S, Q], where
Q = [A, β 1 ],
P = [A, β 2 ] + (1/2)[Q, β 1 ],
T = [A, β 3 ] + (1/2)[[A, β 1 ], β 2 ] + (1/2)[[A, β 2 ], β 1 ] + (1/6)[[Q, β 1 ], β 1 ],
∂(∂gg −1 ) + [k, ∂ 2 (∂gg −1 )] + 1 2 [∂gg −1 , [∂gg −1 , [∂gg −1 , k]]] = 0.
AcknowledgementWe thank the Max Planck Institut für Mathematik, Bonn, for their kind hospitality and providing an excellent working condition during the initial stages of this work. One of us (PG) is also grateful to the organisers of the "Non-Perturbative Aspects of Quantum Field Theory" held at Isaac Newton Institute, Cambridge, and I.H.E.S. for their hospitality during the later stages of this work.
Lemma 5.2. When Q = g −1 kg ∈ M, then∂Q + ∂[Q, ∂Q] = 0 is gauge equivalent to ∂(∂gg −1 ) − [k,∂gg −1 ] = 0. Sketch of the Proof: [Q, ∂Q] = [Q, [Q, g −1 ∂g]] = g −1 [k, [k, ∂gg −1 ]]g = −g −1 (∂gg −1 )g = −g −1 ∂g. Hence the above equation reduces to [k, (∂gg −1 ) M ] − ∂(∂gg −1 ) = 0.
= −(∂gg −1 ) M . Sketch of the Proof. We know [k, (∂gg −1 ) M ] = ∂(∂gg −1 ). Lemma 5.3. [k, ∂(∂gg −1. k, [k, (∂gg −1 ) M ]] = [k, ∂(∂gg −1 )Lemma 5.3. [k, ∂(∂gg −1 )] = −(∂gg −1 ) M . Sketch of the Proof. We know [k, (∂gg −1 ) M ] = ∂(∂gg −1 ), [k, [k, (∂gg −1 ) M ]] = [k, ∂(∂gg −1 )].
= −(∂gg −1 ) M . Sketch of the Proof. We know [k, (∂gg −1 ) M ] = ∂(∂gg −1 ). Lemma 5.4. [k, ∂(∂gg −1. k, [k, (∂gg −1 ) M ]] = [k, ∂(∂gg −1 )Lemma 5.4. [k, ∂(∂gg −1 )] = −(∂gg −1 ) M . Sketch of the Proof. We know [k, (∂gg −1 ) M ] = ∂(∂gg −1 ), [k, [k, (∂gg −1 ) M ]] = [k, ∂(∂gg −1 )],
Let us decompose the zero curvature equation into its H and M parts: ∂(∂gg −1 ) H + [∂gg −1 , (∂gg −1 ) M ] = 0, …

References
On a Trace Functional for Formal Pseudodifferential Operators and the Symplectic Structures for Korteweg-de Vries Type Equations. M Adler, Invent. Math. 50Adler M., On a Trace Functional for Formal Pseudodifferential Operators and the Symplectic Struc- tures for Korteweg-de Vries Type Equations, Invent. Math., 1979, V.50, 219-248.
Completely Integrable Systems: A Generalization. D V Alekseevsky, J Grabowski, G Marmo, P Michor, Mod. Phys. Lett. A. 1637Alekseevsky D.V., Grabowski J., Marmo G. and Michor P., Completely Integrable Systems: A Gen- eralization, Mod. Phys. Lett. A, 1997, V.12, 1637.
Path Integral Quantization of the Coadjoint Orbits of the Virasoro Group and 2d Gravity. A Alekseev, S Shatashvili, Nucl. Phys. B. 323719Alekseev A. and Shatashvili S., Path Integral Quantization of the Coadjoint Orbits of the Virasoro Group and 2d Gravity, Nucl. Phys. B, 1989, V.323, 719.
Completely Integrable Systems, Euclidean Lie Algebras and Curves. M Adler, P Van Moerbeke, Adv. Math. 38Adler M. and van Moerbeke P., Completely Integrable Systems, Euclidean Lie Algebras and Curves, Adv. Math., 1980, V.38, 267-317.
M Bershadsky, H Ooguri, Hidden SL(n) Symmetries in Conformal Field Theories. 49Bershadsky M. and Ooguri H., Hidden SL(n) Symmetries in Conformal Field Theories, Comm. Math. Phys., 1989, V.126, 49.
Lie Algebras and Equations of KdV Type. V G Drinfeld, V V Sokolov, J. Sov. Math. Drinfeld V.G. and Sokolov V.V., Lie Algebras and Equations of KdV Type, J. Sov. Math., 1985, V.30, 1975.
Deformations of Conformal Field Theories. T Eguchi, S K Yang, Phys. Lett. B. 373Eguchi T. and Yang S.K., Deformations of Conformal Field Theories, Phys. Lett. B, 1989, V.224, 373.
Wess-Zumino-Witten Model as a Theory of Free Fields. A Gerasimov, A Marshakov, A Morozov, M Olshanetsky, S Shatashvili, Int. J. Mod. Phys. A. 2495Gerasimov A., Marshakov A., Morozov A., Olshanetsky M. and Shatashvili S., Wess-Zumino-Witten Model as a Theory of Free Fields, Int. J. Mod. Phys. A, 1990, V.5, 2495.
Possible Implications of Integrable Systems for String Theory. A Gerasimov, D Lebedev, A Morozov, Int. J. Mod. Phys. A. 6Gerasimov A., Lebedev D. and Morozov A., Possible Implications of Integrable Systems for String Theory, Int. J. Mod. Phys. A, 1991, V.6, 977-988.
On Commuting Flows of AKS Hierarchy and Twistor Correspondence. P Guha, J. Geom. Phys. Guha P., On Commuting Flows of AKS Hierarchy and Twistor Correspondence, J. Geom. Phys., 1996, V.20, 207.
Adler-Kostant-Symes Construction, Bi-hamiltonian Manifolds and KdV Equations. P Guha, J. Math. Phys. 5167Guha P., Adler-Kostant-Symes Construction, Bi-hamiltonian Manifolds and KdV Equations, J. Math. Phys., 1997, V.38, 5167.
Quantization and Unitary Representations, I: Prequantization. B Kostant, Lect. Notes. Maths., V. 170Kostant B., Quantization and Unitary Representations, I: Prequantization, in Lect. Notes. Maths., V.170, 87-208.
Integrable Systems and Double Loop Algebras in String Theory, Mod. A Morozov, Phys. Lett. A. 6Morozov A., Integrable Systems and Double Loop Algebras in String Theory, Mod. Phys. Lett. A, 1991, V.6, 1525-1531.
Two Dimensional Generalized Toda Lattice. A Mikhailov, M Olshanetsky, A Perelomov, Comm. Math. Phys. 473Mikhailov A., Olshanetsky M. and Perelomov A., Two Dimensional Generalized Toda Lattice, Comm. Math. Phys., 1981, V.79, 473.
J E Marsden, T Ratiu, Introduction to Mechanics and Symmetry. Springer-Verlag11Marsden J.E. and Ratiu T., Introduction to Mechanics and Symmetry, Chapter 6 and 11, Springer- Verlag, 1994.
M Olshanetsky, From Conformal Symmetries to Integrable Models, talk given at XIV John Hopkins Conference. unpublishedOlshanetsky M., From Conformal Symmetries to Integrable Models, talk given at XIV John Hopkins Conference, 1990, (unpublished).
More on Generalized Heisenberg Ferromagnet Models. P Oh, Q-Han Park, Phys. Lett. B. 333Oh P. and Q-Han Park, More on Generalized Heisenberg Ferromagnet Models, Phys. Lett. B, 1996, V.383, 333.
Self Dual Chern-Simons Solitons and Generalized Heisenberg Ferromagnetic Models. P Oh, Q-Han Park, Oh P. and Q-Han Park, Self Dual Chern-Simons Solitons and Generalized Heisenberg Ferromagnetic Models, SNUTP 96-112.
KdV Type Equations From Gauged WZW Models and Conformal Like Gauge of W-Gravity. Q-Han Park, Nucl. Phys. B. 267Q-Han Park, KdV Type Equations From Gauged WZW Models and Conformal Like Gauge of W- Gravity, Nucl. Phys. B, 1990, V.333, 267.
Goldstone Fields in 2 Dimensions with Multivalued Actions. A Polyakov, P Wiegmann, Phys. Lett. B. 223Polyakov A. and Wiegmann P., Goldstone Fields in 2 Dimensions with Multivalued Actions, Phys. Lett. B, 1984, V.141, 223.
Current Algebras and Nonlinear Partial Differential Equation. A G Reiman, M A Semenov-Tian-Shansky, Sov. Math. Dokl. 21Reiman A.G. and Semenov-Tian-Shansky M.A., Current Algebras and Nonlinear Partial Differential Equation, Sov. Math. Dokl., 1980, V.21, 630-634.
What is Classical r-matrix?. M A Semenov-Tian-Shansky, Funct. Anal. Appl. 17Semenov-Tian-Shansky M.A., What is Classical r-matrix? Funct. Anal. Appl., 1985, V.17, 259-272.
A Pressley, G Segal, Loop Groups. Oxford UniversityClarendon PressPressley A. and Segal G., Loop Groups, Clarendon Press, Oxford University, 1986.
System of Toda Type, Inverse Spectral Problems, and Representation Theory. W Symes, Invent. Math. 59Symes W., System of Toda Type, Inverse Spectral Problems, and Representation Theory, Invent. Math., 1980, V.59, 13-51.
| []
|
[
"High voltage assisted mechanical stabilization of single-molecule junctions",
"High voltage assisted mechanical stabilization of single-molecule junctions"
]
| [
"David Gelbwaser-Klimovsky \nDepartment of Chemistry and Chemical Biology\nHarvard University\n02138CambridgeMA\n",
"Alán Aspuru-Guzik \nDepartment of Chemistry and Chemical Biology\nHarvard University\n02138CambridgeMA\n",
"Michael Thoss \nInstitut für Theoretische Physik and Interdisziplinäres Zentrum für Molekulare Materialien\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nStaudtstr. 7/B2D-91058ErlangenGermany\n",
"Uri Peskin \nSchulich Faculty of Chemistry\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n"
]
| [
"Department of Chemistry and Chemical Biology\nHarvard University\n02138CambridgeMA",
"Department of Chemistry and Chemical Biology\nHarvard University\n02138CambridgeMA",
"Institut für Theoretische Physik and Interdisziplinäres Zentrum für Molekulare Materialien\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nStaudtstr. 7/B2D-91058ErlangenGermany",
"Schulich Faculty of Chemistry\nTechnion-Israel Institute of Technology\n32000HaifaIsrael"
]
| []
| The realization of molecular-based electronic devices depends to a large extent on the ability to mechanically stabilize the involved molecular bonds, while making use of efficient resonant charge transport through the device. Resonant charge transport can induce vibrational instability of molecular bonds, leading to bond rupture under a bias voltage. In this work, we go beyond the wide-band approximation in order to study the phenomenon of vibrational instability in single molecule junctions and show that the energy-dependence of realistic molecule-leads couplings affects the mechanical stability of the junction. We show that the chemical bonds can be stabilized in the resonant transport regime by increasing the bias voltage on the junction. This research provides guidelines for the design of mechanically stable molecular devices operating in the regime of resonant charge transport.Chemical bond rupture is a major concern when single molecules are being considered as electronic components in nano-scale devices[1][2][3][4]. In single molecule junctions, tunneling electrons temporally dwell on the molecule and therefore induce changes in the molecular charging state. In the deep (or off-resonant) tunneling regime charge fluctuations on the molecule during transport lead to energy exchange between the electronic and the mechanical molecular degrees of freedom [5-8]. These processes have remarkable effect on the molecular junction transport properties, but their influence on the mechanical stability of chemical bonds is considered to be minor. However, resonant tunneling, often associated with relatively high bias voltage, is more relevant for electronics than deep tunneling, since the associated currents are significantly larger. In this regime, changes in the charging state of the molecule are pronounced, and consequently the electronic coupling to molecular vibrations can result in bond rupture either at the molecule or at the molecule-lead contacts. This mechanical instability often limits experiments on single molecule junctions to the off-resonant tunneling regime. In order to combine the desired features of efficient resonant transport at high voltage operation with mechanically stable molecules, one needs to determine which experimentally controlled parameters contribute to the mechanical stability of molecules under non-equilibrium transport conditions.Charge transport induced bond rupture was observed for physisorbed molecules in scanning tunneling microscope experiments [9-11] as well as in atomic chains[12]and single molecule junctions[3,4,13], where the molecules are chemically bonded to the leads. In particular, the occurrence of bond rupture increased with increasing bias voltage, which points to the increased transport induced charging of the molecule. It is worthwhile to mention in the present context that the possibility to control bond rupture by the molecular junction parameters (e.g., voltage, coupling to the leads, etc) is relevant not only for the sake of mechanical stability of nano-scale current carrying devices, but also for nanoscale chemical catalysis. 
It was shown theoretically that (by a proper design) transport induced heating can be directed towards a particular bond[14,15], suggesting the possibility of mode-selective chemistry in single junction architectures.Theoretical works on bond dissociation induced by resonant tunneling through molecular junctions consider the effective (anharmonic) mechanical force on the nuclei when the electronic state is a mixture of different charging states[16][17][18]. This may turn the bound nuclear geometry into a metastable one, leading to bond rupture in the steady state (long-time) limit. Other theoretical approaches restrict the discussion of molecular vibration excitations to the harmonic approximation[14,15,[18][19][20]. While bond dissociation can not be treated explicitly in this case, the occurrence of vibrational instability[18][19][20][21][22]due to the excess of energy flow into vibrations is considered as the indicator for bond rupture in the anharmonic case.In this work we address one of the crucial aspects of the realization of single molecule electronic devices: How can the conditions of operation be tuned in order to benefit from efficient resonant charge transport at high voltage through a single molecule junction, and yet to maintain the mechanical stability of the molecule? For this purpose we consider in detail the generic model of vibrational heating in non-equilibrium transport between two Fermionic reservoirs. The onset of vibrational instability is analyzed in the limit of weak molecule-lead and intra-molecular vibronic couplings, where resonant charge transport kinetics is expressed in terms of vibrational heating and cooling processes. We demonstrate cases where increasing the bias voltage favors cooling processes over heating, thus stabilizing the molecular junction at a higher voltage. This result contrasts with the common intuition for resonant transport, which correlates instability with higher voltage. However, it is read-arXiv:1705.08534v1 [cond-mat.mes-hall] | 10.1021/acs.nanolett.8b01127 | [
"https://arxiv.org/pdf/1705.08534v1.pdf"
]
| 49,317,319 | 1705.08534 | b318cd378295b298728decd66e70b040dcccec72 |
High voltage assisted mechanical stabilization of single-molecule junctions
23 May 2017
David Gelbwaser-Klimovsky
Department of Chemistry and Chemical Biology
Harvard University
02138CambridgeMA
Alán Aspuru-Guzik
Department of Chemistry and Chemical Biology
Harvard University
02138CambridgeMA
Michael Thoss
Institut für Theoretische Physik and Interdisziplinäres Zentrum für Molekulare Materialien
Friedrich-Alexander-Universität Erlangen-Nürnberg
Staudtstr. 7/B2D-91058ErlangenGermany
Uri Peskin
Schulich Faculty of Chemistry
Technion-Israel Institute of Technology
32000HaifaIsrael
The realization of molecular-based electronic devices depends to a large extent on the ability to mechanically stabilize the involved molecular bonds, while making use of efficient resonant charge transport through the device. Resonant charge transport can induce vibrational instability of molecular bonds, leading to bond rupture under a bias voltage. In this work, we go beyond the wide-band approximation in order to study the phenomenon of vibrational instability in single molecule junctions and show that the energy-dependence of realistic molecule-leads couplings affects the mechanical stability of the junction. We show that the chemical bonds can be stabilized in the resonant transport regime by increasing the bias voltage on the junction. This research provides guidelines for the design of mechanically stable molecular devices operating in the regime of resonant charge transport.
Chemical bond rupture is a major concern when single molecules are being considered as electronic components in nano-scale devices [1][2][3][4]. In single molecule junctions, tunneling electrons temporally dwell on the molecule and therefore induce changes in the molecular charging state. In the deep (or off-resonant) tunneling regime charge fluctuations on the molecule during transport lead to energy exchange between the electronic and the mechanical molecular degrees of freedom [5][6][7][8]. These processes have remarkable effect on the molecular junction transport properties, but their influence on the mechanical stability of chemical bonds is considered to be minor. However, resonant tunneling, often associated with relatively high bias voltage, is more relevant for electronics than deep tunneling, since the associated currents are significantly larger. In this regime, changes in the charging state of the molecule are pronounced, and consequently the electronic coupling to molecular vibrations can result in bond rupture either at the molecule or at the molecule-lead contacts. This mechanical instability often limits experiments on single molecule junctions to the off-resonant tunneling regime. In order to combine the desired features of efficient resonant transport at high voltage operation with mechanically stable molecules, one needs to determine which experimentally controlled parameters contribute to the mechanical stability of molecules under non-equilibrium transport conditions.
Charge transport induced bond rupture was observed for physisorbed molecules in scanning tunneling microscope experiments [9][10][11] as well as in atomic chains [12] and single molecule junctions [3,4,13], where the molecules are chemically bonded to the leads. In particular, the occurrence of bond rupture increased with increasing bias voltage, which points to the increased transport induced charging of the molecule. It is worthwhile to mention in the present context that the possibility to control bond rupture by the molecular junction parameters (e.g., voltage, coupling to the leads, etc) is relevant not only for the sake of mechanical stability of nano-scale current carrying devices, but also for nanoscale chemical catalysis. It was shown theoretically that (by a proper design) transport induced heating can be directed towards a particular bond [14,15], suggesting the possibility of mode-selective chemistry in single junction architectures.
Theoretical works on bond dissociation induced by resonant tunneling through molecular junctions consider the effective (anharmonic) mechanical force on the nuclei when the electronic state is a mixture of different charging states [16][17][18]. This may turn the bound nuclear geometry into a metastable one, leading to bond rupture in the steady state (long-time) limit. Other theoretical approaches restrict the discussion of molecular vibration excitations to the harmonic approximation [14,15,[18][19][20]. While bond dissociation can not be treated explicitly in this case, the occurrence of vibrational instability [18][19][20][21][22] due to the excess of energy flow into vibrations is considered as the indicator for bond rupture in the anharmonic case.
In this work we address one of the crucial aspects of the realization of single molecule electronic devices: How can the conditions of operation be tuned in order to benefit from efficient resonant charge transport at high voltage through a single molecule junction, and yet to maintain the mechanical stability of the molecule? For this purpose we consider in detail the generic model of vibrational heating in non-equilibrium transport between two Fermionic reservoirs. The onset of vibrational instability is analyzed in the limit of weak molecule-lead and intra-molecular vibronic couplings, where resonant charge transport kinetics is expressed in terms of vibrational heating and cooling processes. We demonstrate cases where increasing the bias voltage favors cooling processes over heating, thus stabilizing the molecular junction at a higher voltage. This result contrasts with the common intuition for resonant transport, which correlates instability with higher voltage. However, it is read-ily explained by considering realistic, energy-dependent, profiles for the density of states in the leads, beyond the commonly invoked wide-band approximation. Indeed, relative changes in the leads densities of states may favor inelastic transport of low-energy electrons from one lead into high energy states of the other leads, resulting in efficient vibrational cooling. Since the relevant densities of states depend on the bias voltage, this effect can be obtained at relatively high voltages. This analysis provides new guidelines for mechanical stabilization of single molecule junctions under resonant transport conditions. The minimal model for transport induced vibrational excitation considers a single electronic transport channel through the molecule [23], where a single spin orbital is coupled to a single bond, represented as a quantum mechanical oscillator. In realistic systems vibrational excitation energies exceeding a few (∼ 10-100) vibration quanta would be typically associated with highly anharmonic parts of the potential energy surface, where bond rupture is likely to occur. However, our purpose is to capture the onset of vibrational instability at low vibration excitation numbers, consistent with the harmonic approximation. Therefore we shall treat explicitly the harmonic part of the potential energy surface. The model Hamiltonian reads,
H = [ω̃ 0 + (g/2)(ã +ã † )]d †d + Ωã †ã ,
where ω 0 is the charging energy of the single molecular orbital, associated with the Fermionic creation and annihilation operators,d † andd, respectively, andã † (ã) are the creation (annihilation) operators for the vibrational degree of freedom with frequency Ω. The vibronic coupling parameter is g. This system Hamiltonian is assumed to be weakly coupled to right and left reservoirs of non-interacting electrons (the baths),
H leads = Σ J∈{R,L} Σ k∈J ω k c † k c k , via the interaction term H int =d † Σ J∈{R,L} Σ k∈J λ k c k + h.c. The lead coupling densities are G J (ω) = Σ k∈J ∫ +∞ −∞ e i(ω−ω k )t |λ k | 2 c k c † k T J dt = e (ω−µ J )/k B T J G J (−ω)
, where µ J is the lead chemical potential, k B is the Boltzmann constant and T J is the lead temperature.
The analysis of this model is simplified by invoking the small polaron transformation [24] which diagonalizes the system Hamiltonian (see supplementary information), H = ω 0 d † d + Ωa † a, where a, a † and d, d † are transformed system operators, and ω 0 =ω 0 − g 2 4Ω . We study the weak vibronic coupling limit of the above model, i.e., g 2Ω << 1, which is realistic for many molecular systems and was associated in earlier works with vibrational instability [18][19][20]. In this limit, each electron that flows through the junction exchanges either one or zero vibration quanta with the bond oscillator. Our purpose is to capture the onset of vibrational instability at low excitation numbers, as the indicator for bond rupture in realistic anharmonic systems. Therefore, we additionally restrict the following analysis to low excitation numbers,
g 2Ω √ N << 1,(1)
where N = a † a is the average vibrational excitation. In a typical scenario where the coupling between the molecular junction and the leads is weak, a Markovian master equation is adequate for describing the evolution of ρ 0(1) n [25][26][27] which represents the population of the molecular electronic state and the vibrational mode. The superscript 0(1) denotes a neutral (charged) electronic level and n the vibration quantum number. The master equation yields the following equation of motion for the eigenstate populations,
ρ 1 n = G(−ω 0 )ρ 0 n − G(ω 0 )ρ 1 n + g 2 4Ω 2 nG(−ω + )ρ 0 n−1 + (n + 1)G(−ω − )ρ 0 n+1 − (1 + n)G(ω − ) + nG(ω + ) ρ 1 n ; ρ 0 n = G(ω 0 )ρ 1 n − G(−ω 0 )ρ 0 n + g 2 4Ω 2 nG(ω − )ρ 1 n−1 + (n + 1)G(ω + )ρ 1 n+1 − (1 + n)G(−ω + ) + nG(−ω − ) ρ 0 n ,(2)
where ω ± = ω 0 ± Ω, and G(ω) = J∈{R,L} G J (ω). Generally, the coupling density to the J lead depends on the temperature and the transition frequency, i.e.,
G J (ω) = Γ J (ω)(1 − f J (ω)); G J (−ω) = Γ J (ω)f J (ω), where Γ J (ω)
is the rate of decay of the electronic occupation on the molecule due to molecule-lead coupling, and f J (ω) is the Fermi distribution [28].
Notice that the dynamics of the vibrational mode population is affected by the leads through the electronic charging state. The weak electron-vibration coupling, renders this dynamics slow relatively to the electronic evolution. This can be seen from Eqs. (2), which point to two different time scales: The fast dynamics associated with transfer between the electronic charging states, and the slow dynamics of population transfer between vibrational states, which depends on the small factor, g 2 Ω 2 . Accounting only for the fast dynamics, one obtains,
ρ 1 n = −G(ω 0 )ρ 1 n + G(−ω 0 )ρ 0 n ; ρ 0 n = −G(−ω 0 )ρ 0 n + G(ω 0 )ρ 1 n ,(3)
which implies that the electronic populations quickly reach steady state,ρ 0(1) n =ρ n G(±ω0) G(ω0)+G(−ω0) , whereρ n = ρ 1 n +ρ 0 n represents the population of the vibrational state n after the electronic states has reached steady state. Accounting also for the terms proportional to g 2 Ω 2 , we now derive an equation forρ n on the slow time scale, which is the equation of motion for the vibrational state populations, ρ n = r(n + 1)ρ n+1 + snρ n−1 − (s(1 + n) + rn)ρ n , (4) where r(s) are the cooling (heating) rates,
r = (g/2Ω) 2 [G(ω + )G(−ω 0 ) + G(−ω − )G(ω 0 )] / [G(ω 0 ) + G(−ω 0 )]; s = (g/2Ω) 2 [G(ω − )G(−ω 0 ) + G(−ω + )G(ω 0 )] / [G(ω 0 ) + G(−ω 0 )]. (5)
These rates are composed of specific contributions. For example, the product G R (ω + )G L (−ω 0 ) which is included in the first term in r, corresponds to a cooling process in which an electron with energy ω 0 is being absorbed from the left lead, followed by its emission to the right lead at a different energy, (ω 0 + Ω). The net result is a deexcitation of the vibrational mode by one quantum, i.e., Ω. The equation of motion for the average excitation energy, n(t) ≡ ∞ n=1 nρ n (t), can be readily obtained from Eq. 4 , ṅ(t) = −r n(t) + s( n(t) + 1), which yields,
n(t) = s/(r − s) + [ n(0) − s/(r − s) ] e −(r−s)t .
In the scenario r > s, where the overall cooling rate exceeds the overall heating rate, the vibrational mode reaches a stationary state characterized by the asymptotic average excitation,
n(∞) = s/(r − s). (6)
Recalling that in realistic systems large excitation numbers would be associated with highly anharmonic parts of the potential energy surface, where bond rupture is likely to occur, we set a vibrational excitation threshold level, n tr , beyond which the bond is considered unstable. The condition for bond instability thus reads s r−s > n tr . Notice that this bounds from above the value of r − s. Hence, large r − s values imply stable molecules, as suggested by the fact that the overall cooling rates is larger than the overall heating rate. Since the time evolution of n(t) is monotonic, it is enough to consider the steady state in order to find out if the vibrational mode population ever crossed the instability threshold. When the overall heating rate exceeds the overall cooling rate one has, r < s. Rather than approaching a steady state, the vibrational excitation level diverges, implying that the junction will be unstable for any n tr . This regime has been previously related with work extraction in the context of heat machines [27,29].
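Before specializing to limiting cases, it is useful to see how Eqs. (5) and (6) are evaluated in practice. The following minimal numerical sketch is purely illustrative: the wide-band form of the couplings and all parameter values below are assumptions chosen for the example, not results of the present analysis; an energy-dependent Γ_J(ω) can be substituted for the constant Gamma without changing the structure.

import numpy as np

def fermi(e, mu, kT):
    # Fermi-Dirac occupation of a lead level at energy e.
    return 1.0 / (1.0 + np.exp((e - mu) / kT))

def G_emit(w, mu, kT, Gamma):
    # G_J(w): emission density of a single lead (wide-band assumption).
    return Gamma * (1.0 - fermi(w, mu, kT))

def G_abs(w, mu, kT, Gamma):
    # G_J(-w): absorption density of a single lead.
    return Gamma * fermi(w, mu, kT)

def rates(w0, Omega, g, leads):
    # Cooling (r) and heating (s) rates of Eqs. (5); leads = [(mu, kT, Gamma), ...].
    G = lambda w: sum(G_emit(w, *ld) for ld in leads)
    Gm = lambda w: sum(G_abs(w, *ld) for ld in leads)
    wp, wm = w0 + Omega, w0 - Omega
    pref = (g / (2.0 * Omega)) ** 2 / (G(w0) + Gm(w0))
    r = pref * (G(wp) * Gm(w0) + Gm(wm) * G(w0))
    s = pref * (G(wm) * Gm(w0) + Gm(wp) * G(w0))
    return r, s

# Arbitrary illustrative numbers: only the upper sideband w0 + Omega lies
# inside the bias window [mu_R, mu_L], at low temperature.
w0, Omega, g, Gamma, kT = 1.0, 0.2, 0.05, 0.01, 1e-3
mu_L, mu_R = 1.3, 0.9
r, s = rates(w0, Omega, g, [(mu_L, kT, Gamma), (mu_R, kT, Gamma)])
n_inf = s / (r - s) if r > s else float("inf")
print(r, s, n_inf)   # n_inf comes out close to 1/2 for this choice

For this particular choice of chemical potentials the sketch returns n(∞) ≈ 1/2, one of the three zero-temperature values discussed next; replacing G_emit and G_abs by energy-dependent coupling profiles immediately gives the more general situation considered at the end of the paper.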
Let us consider first the wide-band limit for the couplings to the leads. The wide-band approximation implies that the energy dependence of the coupling densities $\{G_J(\omega)\}$ is only due to the thermal electronic population in the lead, i.e., $G_J(\omega) \equiv \Gamma_J\,[1-f_J(\omega)]$ and $G_J(-\omega) \equiv \Gamma_J\,f_J(\omega)$, where $\Gamma_J$ is a frequency-independent decay rate [28].
Figure 1: Voltage-dependent heating (red) and cooling (blue) processes in the wide-band limit for leads at zero temperature: a) low voltage (2 cooling processes, 0 heating processes); b) intermediate voltage (3 cooling processes, 1 heating process); c) high voltage (2 cooling processes, 2 heating processes).
In the zero-temperature limit, $G_J(\omega)$ can take one of two values, either zero or $\Gamma_J$, depending on the chemical potential. Consequently, and for a symmetric junction with $\Gamma_R = \Gamma_L = \Gamma$, the task of calculating the steady-state excitation, $\frac{s}{r-s}$, simplifies to counting the number of non-zero contributions to the heating and cooling rates in Eqs. (5). Fig. 1 depicts schematically the three relevant scenarios, where the chemical potential at the left lead is higher than that at the right lead. A larger bias window, $\mu_L - \mu_R$, leads to an excess of heating over cooling processes, which is reflected in a larger vibrational excitation number, $\langle n(\infty)\rangle = 0, \tfrac{1}{2}, \infty$. The trend in $\langle n(\infty)\rangle$ within the wide-band approximation seems to be in accord with recent experiments at finite (non-zero) temperatures [3,13], in which increasing the bias voltage was found to lead to bond rupture. This trend is indeed observed also at finite temperature within the present model, as demonstrated in Fig. 2.
Figure 3: Coupling spectrum beyond the wide-band approximation and its effect for zero (a), small (b) and high (c) voltages. The voltage increase displaces the left and right spectra in opposite directions. Therefore, the wide-band result applies at low voltages, but breaks down at higher voltages, where cooling processes become favorable with respect to heating.
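The counting argument above can be checked numerically. The sketch below (a rough illustration, not the authors' code) evaluates the wide-band rates of Eq. (5) with Fermi functions at chemical potentials ±eV/2, using ω₀ = 0.1 eV, Ω = 0.05 eV and Γ = 0.01 eV from the caption of Fig. 2; the specific bias values and the very low temperature are assumptions chosen to land in the three regimes of Fig. 1, and the prefactor g²/(2Ω²) is omitted since it cancels in s/(r - s).

```python
# Minimal sketch of Eq. (5) in the wide-band limit: G_J(w) = Gamma*(1 - f_J(w)),
# G_J(-w) = Gamma*f_J(w), summed over the two leads; <n(inf)> = s/(r-s).
import math

def fermi(w, mu, T):
    # overflow-safe Fermi function
    x = (w - mu) / T
    return math.exp(-x) / (1.0 + math.exp(-x)) if x >= 0 else 1.0 / (1.0 + math.exp(x))

def rates(V, T, w0=0.1, Omega=0.05, Gamma=0.01):
    mus = (V / 2.0, -V / 2.0)                                        # mu_L, mu_R
    G = lambda w: sum(Gamma * (1.0 - fermi(w, mu, T)) for mu in mus)  # emission into the leads
    Gm = lambda w: sum(Gamma * fermi(w, mu, T) for mu in mus)         # absorption from the leads
    wp, wm = w0 + Omega, w0 - Omega
    denom = G(w0) + Gm(w0)
    r = (G(wp) * Gm(w0) + Gm(wm) * G(w0)) / denom
    s = (G(wm) * Gm(w0) + Gm(wp) * G(w0)) / denom
    return r, s

for V in (0.17, 0.25, 0.40):          # hypothetical biases in the regimes a), b), c) of Fig. 1
    r, s = rates(V, T=1e-4)
    print(V, "unstable" if r <= s else f"<n(inf)> = {s / (r - s):.3f}")
```

At essentially zero temperature this reproduces the values 0, 1/2 and infinity quoted above.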
The left and right plots in Fig. 2b correspond to the same junction parameters (see figure caption), with n tr = 3, and n tr = 10, respectively. Associating the bond instability with n(∞) > n tr , the uncolored regions reflect the regions of bond instability, where by definition, a smaller n tr corresponds to a larger instability region. Notice in Fig. 2 that, for certain voltages, n(∞) decreases and then increases as a function of temperature. This non-monotonic dependence is due to the fact that the thermal broadening of the Fermi distribution affects differently the heating and the cooling rates. For unstable junctions (see Fig. 2a), a small increase of the temperature permits electron-hole cooling processes and primarily reduces vibrational heating processes. For example, the emission of low energy electrons from the molecule to the right lead is partially blocked in this case, thus reducing the overall vibrational heating rate, and lowering n(∞) . A larger increase of the temperature affects also the cooling rate by, among other things, reducing also the emission of high energy electrons, which contributes to a relative increase of n(∞) . At infinite temperature, the Fermi distribution approaches the value 1/2 for any frequency and therefore all processes are allowed and have the same rate. Since the numbers of allowed heating and cooling processes are the same, the overall heating and cooling rates become equal, and so n(∞) = ∞, destabilizing the junction for any voltage [19].
A much richer voltage dependence of the vibrational instability is expected in realistic systems where the wide-band approximation breaks down. While the wide-band approximation is often adequate for describing the decay rates between molecules and metallic leads, it is an oversimplification in other cases. For example, graphene electrodes show a rich energy dependence of the molecule-lead coupling, which depends on the particular graphene surface edge coupled to the molecule [30,31]. Even in the case of metallic leads, covalently bonded adsorbates acting as linkers between the metal and the conducting molecule may induce a pronounced energy dependence of the molecule-lead coupling. Accounting for the explicit energy dependence of the decay rates, $\Gamma_J \to \Gamma_J(\hbar\omega - \mu_J)$, the corresponding coupling densities acquire a non-trivial energy dependence already at zero temperature. Since the latter determine the rates of transport-induced vibrational heating and cooling processes on the molecule, the lead chemical potentials in fact control the balance between heating and cooling and thus determine the bond stability in a non-trivial way.
Remarkably, in some realistic cases, an increase in the bias voltage can actually stabilize the bond, in contrast to the intuitive result based on the wide-band approximation. Without loss of generality, let us consider a junction at zero temperature, where one of the leads (the left one) has a flat electronic decay profile, $\Gamma_L(\hbar\omega-\mu_L) = \Gamma$, and the other (right) lead is also flat, except for an additional Gaussian peak centered at $\mu_R + \hbar\omega^*$, i.e.,
$\Gamma_R(\hbar\omega-\mu_R) = \Gamma\left[1 + e^{-\frac{(\hbar\omega-\mu_R-\hbar\omega^*)^2}{2\sigma^2}}\right]$
, as illustrated in Fig. 3a, where the leads' Fermi energy is set to zero and the molecular charging energy is $\hbar\omega_0$. The presence of an external bias voltage ($V$) on the junction is modeled here in terms of shifts to the single-particle energy levels in the non-interacting leads, resulting in shifts of the molecule-lead coupling spectra (marked as vertical arrows in Figs. 3b, 3c). The left and right chemical potentials become voltage-dependent ($\mu_L = eV/2$ and $\mu_R = -eV/2$), and so do the electronic decay rates. As the voltage increases, vibrational heating and cooling processes are activated. If the vibration frequency is in the range $\hbar\Omega \lesssim \hbar\omega^* - \hbar\omega_0 - \sigma$ (see Fig. 3b), all the heating and cooling processes involving exchange of a single vibrational quantum become accessible at some voltage, while the peak in $\Gamma_R(\hbar\omega-\mu_R)$ is still outside the Fermi conductance window. In this range, the model is in accord with the wide-band approximation, which predicts that a stable bond is destabilized by an increase of the voltage (see Fig. 2a). This result is indeed confirmed also in the lower part of Fig. 4 (V < 0.4 eV). However, as the voltage keeps increasing (see Fig. 3c), the peak in $\Gamma_R(\hbar\omega-\mu_R)$ enters the Fermi conductance window and the wide-band assumption no longer holds. Consequently, the rate of vibrational cooling by electron emission at energy $\hbar\omega = \hbar\omega_0 + \hbar\Omega$ to the right lead is favored over other processes. This breaking of the balance between heating and cooling leads to high-voltage-induced stabilization of the junction.
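A minimal sketch of this mechanism is given below (illustrative only, not the authors' implementation): the right-lead decay profile carries a Gaussian peak as in the text, the rates of Eq. (5) are evaluated at zero temperature as a function of the bias, and the ratio s/(r - s) is compared with n_tr = 10. The parameters ω₀, Ω, Γ and ω* follow the captions of Figs. 2 and 4, while the peak width σ and the sampled voltages are hypothetical choices; for this width the cooling-enhanced window happens to sit around V ≈ 1 eV.

```python
# Minimal sketch: voltage-induced stabilization beyond the wide-band approximation.
import math

w0, Omega, Gamma, wstar, sigma, n_tr = 0.10, 0.05, 0.01, 0.65, 0.04, 10

def step(x):                          # zero-temperature occupation factor
    return 1.0 if x > 0 else 0.0

def Gamma_L(E):                       # flat (wide-band) left lead
    return Gamma

def Gamma_R(E):                       # flat profile plus a Gaussian peak at E = wstar
    return Gamma * (1.0 + math.exp(-(E - wstar) ** 2 / (2.0 * sigma ** 2)))

def rates(V):
    muL, muR = V / 2.0, -V / 2.0
    def G(w):    # emission of an electron at energy w into either lead
        return Gamma_L(w - muL) * step(w - muL) + Gamma_R(w - muR) * step(w - muR)
    def Gm(w):   # absorption of an electron at energy w from either lead
        return Gamma_L(w - muL) * step(muL - w) + Gamma_R(w - muR) * step(muR - w)
    wp, wm = w0 + Omega, w0 - Omega
    denom = G(w0) + Gm(w0)
    r = (G(wp) * Gm(w0) + Gm(wm) * G(w0)) / denom
    s = (G(wm) * Gm(w0) + Gm(wp) * G(w0)) / denom
    return r, s

for V in (0.26, 0.5, 1.0, 1.6):       # hypothetical bias values
    r, s = rates(V)
    stable = (r > s) and (s / (r - s) <= n_tr)
    print(f"V = {V:4.2f} eV: {'stable' if stable else 'unstable'}"
          + (f", <n(inf)> = {s / (r - s):.2f}" if stable else ""))
```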
A more quantitative condition for the junction stability reads $\frac{s}{r-s} < n_{tr}$. Using Eq. (5) for the overall heating and cooling rates, this condition translates into the following one,
$\overline{\left(\frac{d\Gamma_R}{d\omega}\right)} > \frac{\overline{\Gamma_R}}{\Omega\, n_{tr}}\,, \qquad (7)$
where $\overline{\left(\frac{d\Gamma_R}{d\omega}\right)} \equiv \frac{1}{2\Omega}\int_{\omega_0-\Omega}^{\omega_0+\Omega} \frac{d\Gamma_R}{d\omega}\, d\omega$ is the average derivative and $\overline{\Gamma_R} = \frac{\Gamma_R(\hbar\omega_0-\mu_R)+\Gamma_R(\hbar\omega_0-\hbar\Omega-\mu_R)}{2}$ is the average electronic decay rate to the right lead. As Eq. (7) shows, as long as the peak is steep enough, the junction will be stabilized by increasing the bias voltage. This can also be seen in Fig. 4, where the colored region shows regimes in which the junction is stable and the white area corresponds to unstable junctions. Below a certain threshold, $\frac{2\sigma^2}{(\hbar\Omega)^2} \sim 30$, the bond is stabilized by increasing the voltage.
The above example demonstrates a general scenario of high-voltage-induced mechanical stability, which is facilitated by a non-uniform energy dependence of the electron transfer rate between the molecule and the leads. Peaks (and dips) in the transfer-rate profiles, which characterize realistic lead structures and/or chemical compositions, facilitate such a scenario. For example, employing graphene electrodes or specifically designed molecule-lead linker groups may be used to design mechanically stable single-molecule devices operating at high voltage in the resonant transport regime.
The relevant properties of the baths are encoded in the Fourier transforms of the autocorrelation functions (J = L, R), G J
Figure 2: a) Heating and cooling processes at high voltage for low, intermediate and high temperatures (from left to right). b) Steady-state vibrational excitation, ⟨n(∞)⟩, as a function of the temperature and the voltage in the wide-band limit. Colored areas represent regions of stability, in contrast to white areas corresponding to bond instability. The stability regions depend on n_tr; the threshold is set to n_tr = 3 on the left and to n_tr = 10 on the right. Notice the different color scale between the two figures. The junction model parameters are: ω₀ = 0.1 eV, Ω = 0.05 eV, g = 0.1Ω and Γ = 0.01 eV.
Figure 4: Steady-state vibrational excitation, ⟨n(∞)⟩, beyond the wide-band approximation for a molecular junction at zero temperature. Colored (uncolored) areas correspond to vibrational stability (instability), where the threshold was set to n_tr = 10. The model parameters are as in Fig. 2. The peak in the right-lead decay rate above the Fermi energy is centered around ω* = 0.65 eV.
Acknowledgments
This research was supported by the German Israeli Foundation grant 1154-119.5/1. We acknowledge the support from the Center for Excitonics, an Energy Frontier Research Center funded by the U.S. Department of Energy under award de-sc0001088 (Energy transfer). UP acknowledges the great sabbatical hospitality by the Harvard group. MT thanks Rainer Härtle for helpful discussions.
Supplementary information
In this supplementary material we derive the dynamic equations for the single molecular orbital, E, and the vibrational mode. We start by defining new operators that diagonalize their Hamiltonian. With these new variables the Hamiltonian is diagonal, where $\omega_0 = \tilde{\omega}_0 - \frac{g^2}{4\Omega}$. We assume that E is weakly coupled to the leads, therefore the reduced dynamics is governed by a Markovian master equation [25,26,29]. Furthermore, we analyze the regime where the vibrational mode is weakly coupled to E, $g/\Omega \ll 1$. Under these conditions the transformed number operator $\tilde{a}^\dagger\tilde{a} \approx a^\dagger a$ provides a reliable measure of the excitation level of the vibrational mode in the original frame. We continue the derivation of the master equation by transforming the operators of E in the interaction Hamiltonian, $\tilde{d}$ and $\tilde{d}^\dagger$, to the interaction picture. This approximation is strictly valid only for $\frac{g}{2\Omega}\,\langle \tilde{a}^\dagger\tilde{a}\rangle \ll 1$. Finally, in the interaction picture the dynamic equations for ρ, the density matrix of E plus the vibrational mode, contain the dissipators
$\mathcal{L}_{q,J}\,\rho = \tfrac{1}{2}\, G_J(\omega_0+q\Omega)\left(\left[S_q\rho, S_q^\dagger\right] + \left[S_q, \rho S_q^\dagger\right]\right) + \tfrac{1}{2}\, G_J\!\left(-(\omega_0+q\Omega)\right)\left(\left[S_q^\dagger\rho, S_q\right] + \left[S_q^\dagger, \rho S_q\right]\right).$
References
[1] M. S. Hybertsen, J. Chem. Phys. 146, 092323 (2017).
[2] C. Bruot, L. Xiang, J. L. Palma, Y. Li, and N. Tao, J. Am. Chem. Soc. 137, 13933 (2015).
[3] H. Li, N. T. Kim, T. A. Su, M. L. Steigerwald, C. Nuckolls, P. Darancet, J. L. Leighton, and L. Venkataraman, J. Am. Chem. Soc. 138, 16159 (2016).
[4] H. Li, T. A. Su, V. Zhang, M. L. Steigerwald, C. Nuckolls, and L. Venkataraman, J. Am. Chem. Soc. 137, 5028 (2015).
[5] L. Yu, Z. K. Keane, J. W. Ciszek, L. Cheng, M. Stewart, J. Tour, and D. Natelson, Phys. Rev. Lett. 93, 266802 (2004).
[6] M. Galperin, M. A. Ratner, and A. Nitzan, J. Chem. Phys. 121, 11965 (2004).
[7] M. Caspary Toroker and U. Peskin, J. Chem. Phys. 127, 154706 (2007).
[8] R. Smit, Y. Noat, C. Untiedt, N. Lang, M. v. van Hemert, and J. van Ruitenbeek, Nature 419, 906 (2002).
[9] K. Huang, L. Leung, T. Lim, Z. Ning, and J. C. Polanyi, J. Am. Chem. Soc. 135, 6220 (2013).
[10] K. Huang, L. Leung, T. Lim, Z. Ning, and J. C. Polanyi, ACS Nano 8, 12468 (2014).
[11] B. Stipe, M. Rezaei, W. Ho, S. Gao, M. Persson, and B. Lundqvist, Phys. Rev. Lett. 78, 4410 (1997).
[12] C. Sabater, C. Untiedt, and J. M. van Ruitenbeek, Beilstein J. Nanotechnol. 6, 2338 (2015).
[13] B. Capozzi, J. Z. Low, J. Xia, Z.-F. Liu, J. B. Neaton, L. M. Campos, and L. Venkataraman, Nano Lett. 16, 3949 (2016).
[14] R. Härtle, R. Volkovich, M. Thoss, and U. Peskin (2010).
[15] R. Volkovich, R. Härtle, M. Thoss, and U. Peskin, Phys. Chem. Chem. Phys. 13, 14333 (2011).
[16] A. A. Dzhioev, D. S. Kosov, and F. von Oppen, J. Chem. Phys. 138, 134103 (2013).
[17] A. A. Dzhioev and D. Kosov, J. Chem. Phys. 135, 074701 (2011).
[18] J. Koch, M. Semmelhack, F. von Oppen, and A. Nitzan, Phys. Rev. B 73, 155306 (2006).
[19] R. Härtle and M. Thoss, Phys. Rev. B 83, 125419 (2011).
[20] R. Härtle and M. Kulkarni, Phys. Rev. B 91, 245429 (2015).
[21] D. Kast, L. Kecke, and J. Ankerhold, Beilstein J. Nanotechnol. 2, 416 (2011).
[22] R. Avriller and A. L. Yeyati, Phys. Rev. B 80, 041309 (2009).
[23] A. Mitra, I. Aleiner, and A. Millis, Phys. Rev. B 69, 245302 (2004).
[24] G. D. Mahan, Many-Particle Physics (Springer Science & Business Media, 2013).
[25] R. Alicki, D. Gelbwaser-Klimovsky, and G. Kurizki, arXiv preprint arXiv:1205.4552 (2012).
[26] E. B. Davies, Comm. Math. Phys. 39, 91 (1974).
[27] D. Gelbwaser-Klimovsky, R. Alicki, and G. Kurizki, EPL 103, 60005 (2013).
[28] U. Peskin, J. Phys. B: At. Mol. Opt. Phys. 43, 153001 (2010).
[29] D. Gelbwaser-Klimovsky, W. Niedenzu, and G. Kurizki, Adv. At. Mol. Opt. Phys. 64, 329 (2015).
[30] D. A. Ryndyk, J. Bundesmann, M.-H. Liu, and K. Richter, Phys. Rev. B 86, 195425 (2012).
[31] K. Ullmann, P. B. Coto, S. Leitherer, A. Molina-Ontoria, N. Martín, M. Thoss, and H. B. Weber, Nano Lett. 15, 3512 (2015).
Surface critical behavior of the three-dimensional O(3) model

F. Parisen Toldin

Institut für Theoretische Physik und Astrophysik, Universität Würzburg, Am Hubland, D-97074 Würzburg, Germany

arXiv:2111.11762
We report results of high-precision Monte Carlo simulations of a three-dimensional lattice model in the O(3) universality class, in the presence of a surface. By a finite-size scaling analysis we have proven the existence of a special surface transition, computed the associated critical exponents, and shown the presence of an extraordinary phase with logarithmically decaying correlations.
Introduction
A system in the vicinity of a critical point exhibits a variety of interesting features, such as power-law singularities and scaling behavior in many observables. One of the most fascinating aspects is the emergence of universality: critical exponents associated with the aforementioned singularities, and other quantities, are independent of the local details of interactions. They are rather determined by the global features of the system, such as the symmetry group, the pattern of symmetry breaking, dimensionality and range of interactions. The theory of the Renormalization Group (RG) provides a framework for understanding and predicting the emergence of universality through a suitably defined flow of Hamiltonians, the fixed points of which control the critical behavior and define the so-called Universality Classes (UCs) [1].
While the singular behavior associated to the onset of a phase transition occurs, in principle, in the thermodynamic limit only, real physical systems naturally have boundaries. Their presence is the source of rich phase diagrams and critical behavior, that has been the target of many experimental [2] and theoretical [3][4][5] investigations. According to RG theory, a given bulk fixed point controlling the critical behavior in the thermodynamic limit potentially splits into several fixed points associated to the critical behavior on the boundary [1,4], thereby defining surface or, more generally, boundary UCs. This implies that critical exponents and other universal quantities on the boundary differ from bulk ones. Furthermore, for a given system at a critical point the boundary may exhibit diverse critical behavior, depending on the strength of boundary interactions. Surface UCs also determine the critical Casimir force [6][7][8][9][10][11].
The simplest setup where this physics is realized is the case of a semi-infinite d−dimensional system bounded by a (d−1)−dimensional surface. In this context, due to its physical relevance, the critical behavior of the classical 3D O(N ) model represents one of the most significant UC [12]. In Fig. 1 we sketch the bulk-surface phase diagram, as a function of the bulk and surface coupling constants. For N = 1, 2 a surface transition line, in the presence of a disordered bulk, separates a disordered surface from an ordered one, for N = 1, or from a surface possessing quasi-long range order (QLRO), for N = 2. For a critical bulk, and as a function of the surface coupling, we distinguish an ordinary and extraordinary transition lines. Surface, ordinary and extraordinary lines meet at a multicritical point, the so-called special UC [3,4]. For N = 3, no surface transition exists [12], hence the phase diagram topology does not necessarily mandate a special point. While early Monte Carlo (MC) studies supported the absence of a special point [13], a later MC investigation reported a possible Berezinskii-Kosterlitz-Thouless-like surface transition [14]. More recently, the problem has received a renewed attention in the context of conformal field-theory approaches [15][16][17][18][19][20][21][22][23][24][25][26][27], classical [28,29] and quantum critical behavior [30][31][32][33][34][35][36][37][38]. In particular, quantum MC investigations have focused on dimerized spin-1/2 [30][31][32][33] and spin-1 [34] systems in d = 2, which exhibit a second-order quantum phase transition in the classical 3D O(3) UC.
In the presence of an edge, these models have been shown to display ordinary, as well as nonordinary boundary exponents, depending on the geometrical setup. A recent field-theoretical study has predicted the existence of a so-called "extraordinary-log" phase at the surface of a critical 3D O(N ) model. This phase exists for N < N c , with N c > 2 1 , and is characterized by surface correlations decaying as a power of a logarithm. The associated exponent is universal, and it is determined by some amplitudes of the normal UC [25]. This is realized by applying a symmetry-breaking field on the boundary [3,4,39,40].
Motivated by these advancements, in Ref. [28] we have investigated the classical surface O(3) UC by means of MC simulations of an improved lattice model, where leading scaling corrections are suppressed. A finite-size scaling (FSS) analysis has shown the existence of a special transition and of an extraordinary phase consistent with the extraordinary-log scenario of Ref. [25]. In the following we summarize the results of Ref. [28].
Model
We have simulated the φ 4 model on a three-dimensional lattice of size L in all directions, applying periodic boundary conditions (BCs) along two directions, and open BCs on the remaining one. The reduced Hamiltonian H, such that the Gibbs weight is exp(−H), is
$H = -\beta \sum_{\langle ij\rangle} \vec{\phi}_i\cdot\vec{\phi}_j \;-\; \beta_{s,\downarrow} \sum_{\langle ij\rangle_{s\downarrow}} \vec{\phi}_i\cdot\vec{\phi}_j \;-\; \beta_{s,\uparrow} \sum_{\langle ij\rangle_{s\uparrow}} \vec{\phi}_i\cdot\vec{\phi}_j \;+\; \sum_{i}\left[\vec{\phi}_i^{\,2} + \lambda\left(\vec{\phi}_i^{\,2}-1\right)^2\right], \qquad (1)$
where $\vec{\phi}_x$ is a three-component real field on the lattice site x = (x₁, x₂, x₃), indicated as a 3D vector. In Eq. (1) the first sum extends over nearest-neighbor pairs of sites where at least one belongs to the inner bulk, the second and third sums extend over lattice sites on the lower and upper surface, and the last sum is over all lattice sites. The coupling constants β and λ determine the bulk critical behavior. In the (β, λ) plane the bulk displays a line of continuous phase transitions in the O(3) UC [12,41], and in the limit λ → ∞ the model reduces to the standard O(3) hard spin model. At λ = 5.17(11) the model is improved [42], i.e., leading bulk scaling corrections ∝ L^{-ω}, ω = 0.759(2), are suppressed. Next-to-leading scaling corrections due to a non-rotationally invariant irrelevant operator have an exponent ω_nr ≈ 2 [42], hence they decay very fast. Improved lattice models are particularly useful in high-precision numerical studies of critical phenomena [12], in particular for boundary critical phenomena [28,43-52], because they allow a better control of scaling corrections. In our MC simulations we have fixed λ = 5.2 and β = 0.68798521, for which the model is critical [42]. The coupling constants β_{s,↓}, β_{s,↑} in Eq. (1) control the strength of boundary interactions. To study the surface critical behavior we have set β_{s,↓} = β_{s,↑} = β_s and examined various surface observables as a function of β_s. MC simulations have been performed by combining Metropolis, overrelaxation, and Wolff single-cluster updates [28,53].
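For concreteness, a minimal sketch of how the reduced Hamiltonian of Eq. (1) can be evaluated for a small lattice is given below (not the code used in the paper; the field configuration is an arbitrary placeholder, while the couplings are the values quoted in the text).

```python
# Minimal sketch of Eq. (1): three-component field on an L x L x L lattice,
# periodic BCs along x and y, open BCs along z; in-plane bonds lying entirely
# in the z = 0 (z = L-1) layer carry beta_s_lo (beta_s_up), all other bonds beta.
import numpy as np

def reduced_hamiltonian(phi, beta, beta_s_lo, beta_s_up, lam):
    L = phi.shape[0]
    H = 0.0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                neighbors = [((x + 1) % L, y, z), (x, (y + 1) % L, z)]
                if z + 1 < L:                      # open boundary along z: no wrap-around bond
                    neighbors.append((x, y, z + 1))
                for (xn, yn, zn) in neighbors:     # each bond is counted exactly once
                    if z == zn == 0:
                        coupling = beta_s_lo
                    elif z == zn == L - 1:
                        coupling = beta_s_up
                    else:
                        coupling = beta            # bond touching the inner bulk
                    H -= coupling * np.dot(phi[x, y, z], phi[xn, yn, zn])
                phi2 = np.dot(phi[x, y, z], phi[x, y, z])
                H += phi2 + lam * (phi2 - 1.0) ** 2
    return H

rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 4, 4, 3))                # arbitrary placeholder configuration
print(reduced_hamiltonian(phi, beta=0.68798521, beta_s_lo=1.1678, beta_s_up=1.1678, lam=5.2))
```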
Results
Special transition
A standard method to locate the onset of a continuous phase transition consists in a FSS analysis of RG-invariant observables [12,54]. To study the special UC we have proceeded in two steps. First, we have studied an RG-invariant quantity, determining its critical-point value. Subsequently, we have employed this value in a FSS analysis of other observables, to compute the critical exponents at the special transition. According to RG, close to a surface phase transition at β s = β s,c , an RG-invariant observable R behaves as
$R = f\!\left((\beta_s - \beta_{s,c})\,L^{y_{sp}}\right), \qquad (2)$
where y sp is the scaling dimension of the relevant scaling field associated with the transition and we have for the moment neglected scaling corrections. We have analyzed the surface Binder ratio U 4 , defined as
$U_4 \equiv \frac{\langle (\vec{M}_s^{\,2})^2\rangle}{\langle \vec{M}_s^{\,2}\rangle^2}\,, \qquad \vec{M}_s \equiv \sum_{i\in\mathrm{surface}} \vec{\phi}_i\,. \qquad (3)$
In Fig. 2 we show U 4 as a function of β s . A scan over a wide range in β s reveals a crossing, indicative of a surface transition. Close to the putative transition, we have sampled U 4 for lattice sizes up to L = 128; the data are shown in the inset of Fig. 2. We observe a rather slow increase of the slope of U 4 with L, such that a precision of ≈ 10 −5 is needed in order to resolve the crossing. A fit of U 4 to a suitable Taylor expansion of Eq. (2), including also scaling corrections, allowed us to estimate the critical-point value U * 4 ≡ U 4 (β s,c ) = 1.0652 (4). We have used this value to analyze MC data, supplemented by additional simulations at L = 192, using FSS analysis at fixed RG-invariant [55][56][57]. In this method, one fixes a chosen RG-invariant R (here, R = U 4 ), thereby trading the statistical fluctuations of R with fluctuations of a coupling constant driving the transition (here, β s ). A discussion of the method can be found in Ref. [57]. To estimate the exponent y sp , we have computed derivatives with respect to β s of RG-invariants R = U 4 and R = Z a /Z p , the ratio of the partition functions with antiperiodic and periodic BCs on a direction parallel to the surfaces; this ratio can be conveniently sampled with the boundary-flip algorithm [58,59]. At fixed U 4 , dR/dβ s behaves as dR/dβ s ∝ L ysp . Suitable fits of dU 4 /dβ s and d(Z a /Z p )/dβ s at fixed U 4 , and as a function of L, including also scaling corrections, have delivered the estimate y sp = 0.36 (1), ν sp ≡ 1/y sp = 2.78 (8).
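A minimal sketch of how the surface Binder ratio of Eq. (3) is estimated from sampled configurations is shown below (hypothetical data, not the paper's MC samples): for uncorrelated Gaussian configurations the estimator approaches (N + 2)/N = 5/3 for N = 3 components, to be compared with the critical-point value U₄* = 1.0652(4) quoted above.

```python
# Minimal sketch of the estimator U4 = <(M_s^2)^2> / <M_s^2>^2 of Eq. (3).
import numpy as np

def binder_ratio(surface_samples):
    """surface_samples: array of shape (n_samples, L, L, 3), the surface layer of phi."""
    Ms = surface_samples.sum(axis=(1, 2))       # total surface magnetization per sample
    Ms2 = np.sum(Ms * Ms, axis=1)               # M_s^2 for each sample
    return np.mean(Ms2 ** 2) / np.mean(Ms2) ** 2

rng = np.random.default_rng(1)
samples = rng.normal(size=(2000, 16, 16, 3))    # placeholder, uncorrelated "configurations"
print(binder_ratio(samples))                    # close to 5/3 for Gaussian data
```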
Next, we have sampled the surface susceptibility χ s . At fixed U 4 its leading scaling behavior is χ s ∝ L 2−η . Fits of χ s resulted in the estimate
η = −0.473(2).(5)
FSS analysis at fixed U 4 has also allowed us to estimate the critical surface coupling at the onset of the special transition, β s,c = 1.1678(2).
Extraordinary phase
The existence of a special transition implies that for β s > β s,c the surface displays an extraordinary phase. To study it, we have simulated the model at β s = 1.5, for lattice sizes 8 ≤ L ≤ 384. We have computed the helicity modulus Υ which measures the response of the model to a torsion in the lateral BCs. To compute it, one replaces the nearest-neighbor interaction along one lateral boundary in the Hamiltonian (1) as
$\vec{\phi}_x\cdot\vec{\phi}_{x+\hat{e}_l} \;\longrightarrow\; \vec{\phi}_x\, R^{\alpha,\beta}(\theta)\, \vec{\phi}_{x+\hat{e}_l}\,, \qquad (6)$
with $\hat{e}_l$ the unit vector along one of the lateral directions, where periodic BCs are applied. In Eq. (6), $R^{\alpha,\beta}(\theta)$ is a rotation matrix that rotates the α and β components of $\vec{\phi}$ by an angle θ. For the present geometry, the helicity modulus is then defined as [60]
$\Upsilon \equiv \frac{1}{L}\left.\frac{\partial^2 F(\theta)}{\partial\theta^2}\right|_{\theta=0}\,, \qquad (7)$
where F is the total free energy. In Fig. 3 we show the product ΥL, which exhibits a remarkable logarithmic growth, neither compatible with a standard critical phase, where ΥL ∼ const, nor with an ordered phase, where Υ ∼ const. A logarithmic violation of FSS is also found in the ratio ξ/L of the finite-size second-moment correlation length ξ 2 over the size L. Moreover, the two-point function on the surface exhibits a rather slow, but visible, decay. These findings are indicative of the extraordinary-log scenario put forward in Ref. [25]. In such a phase, the surface correlations decay as $C(x\to\infty) \propto [\ln(x)]^{-(N-1)/(2\pi\alpha)}$, where N = 3 for the present case, and α is a universal RG parameter, determined by some amplitudes at the normal UC. Furthermore, in the extraordinary-log phase a logarithmic violation of FSS is predicted, such that $\Upsilon L \simeq 2\alpha\ln(L)$ and $(\xi/L)^2 \simeq (\alpha/2)\ln L$ [62]. To further check this scenario, we have performed fits of various observables. Fits of the surface two-point function provided an estimate α = 0.15(2). The quoted error bar has been estimated by comparing various fit results and should be taken with some caution, because in fitting the data we did not consider subleading corrections; these are potentially important, as found, e.g., in other critical models with marginal operators [63]. Fits of ξ/L to the expected logarithmic growth delivered α ≈ 0.14, consistent with the estimate coming from the correlations, although we observed some drift in the fitted value of α as a function of the minimum lattice size L used in the fits. Fits of ΥL gave less stable results, with α ≈ 0.11. All in all, despite the intrinsic difficulty in estimating a logarithmic exponent, we found a rough quantitative consistency of the scaling behavior with the scenario of an extraordinary-log phase.
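The logarithmic fit mentioned above can be illustrated with a short sketch (synthetic data with hypothetical amplitudes, not the actual MC results): assuming ΥL ≃ 2α ln(L/L₀), a linear fit of ΥL against ln L returns α as half the slope.

```python
# Minimal sketch of the extraordinary-log FSS check: fit Upsilon*L vs ln(L).
import numpy as np

Ls = np.array([16, 32, 64, 128, 256, 384])
alpha_true, L0 = 0.15, 2.0                                   # placeholder amplitudes
noise = 0.01 * np.random.default_rng(2).normal(size=Ls.size)
UpsL = 2 * alpha_true * np.log(Ls / L0) + noise              # synthetic "data"

slope, intercept = np.polyfit(np.log(Ls), UpsL, 1)
print(f"fitted alpha = {slope / 2:.3f}")                     # close to 0.15
```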
Summary
In Ref. [28] we have elucidated the boundary critical behavior of the three-dimensional O(3) UC, in the presence of a 2D surface. A FSS analysis of high-precision MC data allowed us to conclude that there is a special transition on the surface, in the presence of a critical bulk, and to compute the associated critical exponents. The exponent η that we found is remarkably close to the nonordinary η exponent found in quantum MC studies of dimerized spin models [30-34, 36, 37]. This suggests that, for the geometrical settings where such a nonordinary exponent is found, these models are "accidentally" close to the special transition. Unlike the ordinary and extraordinary surface phases, the special UC has a relevant non symmetry-breaking surface scaling field with dimension y sp [see Eq. (2)]. Therefore, one generically needs a fine tuning of boundary interactions in order to realize the special UC. Nevertheless, we notice that the value of y sp [Eq. (4)] is unusually small: this implies a slow crossover from the special fixed point when surface interactions are tuned away from the special transition. Therefore, it may be possible to observe critical exponents similar to those of the special UC even if a model is not very close to the special transition. This can provide an explanation to the observed nonordinary edge exponent η found in the aforementioned quantum spin models, without the need of a fine-tuning. To further substantiate this hypothesis, it would be desirable to study in more detail the quantum-to-classical mapping [64] of these spin models.
In the extraordinary phase, our MC data display slowly-decaying correlations and a remarkable logarithmic violation of FSS, which is indicative of the extraordinary-log phase scenario put forward in Ref. [25]. Recently, this scenario has been put on a firmer ground in Ref. [65], where we have computed the universal amplitudes of the normal UC that determine the onset and the value of the logarithmic exponent of the extraordinary-log phase. We found a good agreement with MC simulations of the extraordinary phase presented in Refs. [28], [29], thus providing a nontrivial check of the connection between the normal and the extraordinary-log phases outlined in Ref. [25]. A concurrent conformal bootstrap study [27] found results in agreement with Ref. [65].
Figure 1: Bulk-surface phase diagram of the three-dimensional O(N) model, bounded by a 2D surface, for N = 1, 2 (a) and N = 3 (b).
Figure 2: Surface Binder ratio U_4 at the special transition, as a function of β_s. Inset: MC data close to the special transition. Data are taken from Ref. [28].
Figure 3: Helicity modulus at β_s = 1.5, in the extraordinary phase, on a semilogarithmic scale. From Ref. [28].
We remark that Nc does not need to be an integer.
See Appendix A of[61] for a discussion on the definition of a finite-size correlation length.
Acknowledgments
FPT is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project No. 414456783. The author gratefully acknowledges the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC) [66].
References
[1] Cardy J 1996 Scaling and Renormalization in Statistical Physics (Cambridge: Cambridge University Press)
[2] Dosch H 2006 Critical Phenomena at Surfaces and Interfaces: Evanescent X-Ray and Neutron Scattering (Berlin: Springer Berlin Heidelberg)
[3] Binder K 1983 Critical behavior at surfaces Phase Transitions and Critical Phenomena vol 8 ed Domb C and Lebowitz J L (London: Academic Press) p 1
[4] Diehl H W 1986 Field-theoretical approach to critical behaviour at surfaces Phase Transitions and Critical Phenomena vol 10 ed Domb C and Lebowitz J L (London: Academic Press) p 75
[5] Pleimling M 2004 J. Phys. A: Math. Gen. 37 R79
[6] Fisher M E and de Gennes P G 1978 C. R. Acad. Sci. Paris Ser. B 287 207
[7] Krech M 1994 The Casimir Effect in Critical Systems (London: World Scientific)
[8] Krech M 1999 J. Phys.: Condens. Matter 11 R391
[9] Gambassi A 2009 J. Phys.: Conf. Ser. 161 012037
[10] Gambassi A and Dietrich S 2011 Soft Matter 7 1247
[11] Maciołek A and Dietrich S 2018 Rev. Mod. Phys. 90 045001
[12] Pelissetto A and Vicari E 2002 Phys. Rep. 368 549-727
[13] Krech M 2000 Phys. Rev. B 62 6360-6371
[14] Deng Y, Blöte H W J and Nightingale M P 2005 Phys. Rev. E 72 016128
[15] McAvity D M and Osborn H 1995 Nucl. Phys. B 455 522-576
[16] Liendo P, Rastelli L and van Rees B C 2013 J. High Energy Phys. JHEP07(2013)113
[17] Gliozzi F, Liendo P, Meineri M and Rago A 2015 J. High Energy Phys. JHEP05(2015)036
[18] Billò M, Gonçalves V, Lauria E and Meineri M 2016 J. High Energy Phys. JHEP04(2016)091
[19] Liendo P and Meneghelli C 2017 J. High Energy Phys. JHEP01(2017)122
[20] Lauria E, Meineri M and Trevisani E 2018 J. High Energy Phys. JHEP11(2018)148
[21] Mazáč D, Rastelli L and Zhou X 2019 J. High Energy Phys. JHEP12(2019)004
[22] Kaviraj A and Paulos M F 2020 J. High Energy Phys. JHEP04(2020)135
[23] Dey P, Hansen T and Shpot M 2020 J. High Energy Phys. JHEP12(2020)051
[24] Behan C, Di Pietro L, Lauria E and van Rees B C 2020 J. High Energy Phys. JHEP12(2020)182
[25] Metlitski M A 2020 Preprint arXiv:2009.05119
[26] Gimenez-Grau A, Liendo P and van Vliet P 2021 J. High Energy Phys. JHEP04(2021)167
[27] Padayasi J, Krishnan A, Metlitski M A, Gruzberg I A and Meineri M 2021 Preprint arXiv:2111.03071
[28] Parisen Toldin F 2021 Phys. Rev. Lett. 126 135701
[29] Hu M, Deng Y and Lv J P 2021 Phys. Rev. Lett. 127 120603
[30] Suzuki T and Sato M 2012 Phys. Rev. B 86 224411
[31] Zhang L and Wang F 2017 Phys. Rev. Lett. 118 087201
[32] Ding C, Zhang L and Guo W 2018 Phys. Rev. Lett. 120 235701
[33] Weber L, Parisen Toldin F and Wessel S 2018 Phys. Rev. B 98 140403(R)
[34] Weber L and Wessel S 2019 Phys. Rev. B 100 054437
[35] Jian C M, Xu Y, Wu X C and Xu C 2021 SciPost Phys. 10 033
[36] Zhu W, Ding C, Zhang L and Guo W 2021 Phys. Rev. B 103 024412
[37] Weber L and Wessel S 2021 Phys. Rev. B 103 L020406
[38] Ding C, Zhu W, Guo W and Zhang L Preprint arXiv:2110.04762
[39] Bray A J and Moore M A 1977 J. Phys. A: Math. Gen. 10 1927-1962
[40] Burkhardt T W and Cardy J L 1987 J. Phys. A: Math. Gen. 20 L233-L238
[41] Campostrini M, Hasenbusch M, Pelissetto A, Rossi P and Vicari E 2002 Phys. Rev. B 65 144520
[42] Hasenbusch M 2020 Phys. Rev. B 102 024406
[43] Hasenbusch M 2009 J. Stat. Mech. (2009) P07031
[44] Hasenbusch M 2010 Phys. Rev. B 82 104425
[45] Parisen Toldin F and Dietrich S 2010 J. Stat. Mech. (2010) P11003
[46] Hasenbusch M 2011 Phys. Rev. B 83 134425
[47] Hasenbusch M 2011 Phys. Rev. B 84 134405
[48] Hasenbusch M 2012 Phys. Rev. B 85 174421
[49] Parisen Toldin F, Tröndle M and Dietrich S 2013 Phys. Rev. E 88 052110
[50] Parisen Toldin F 2015 Phys. Rev. E 91 032105
[51] Parisen Toldin F, Tröndle M and Dietrich S 2015 J. Phys.: Condens. Matter 27 214010
[52] Parisen Toldin F, Assaad F F and Wessel S 2017 Phys. Rev. B 95 014401
[53] Wolff U 1989 Phys. Rev. Lett. 62 361-364
[54] Privman V 1990 Finite-Size Scaling Theory Finite Size Scaling and Numerical Simulation of Statistical Systems ed Privman V (Singapore: World Scientific) p 1
[55] Hasenbusch M 1999 J. Phys. A: Math. Gen. 32 4851-4865
[56] Hasenbusch M, Parisen Toldin F, Pelissetto A and Vicari E 2007 J. Stat. Mech. (2007) P02016
[57] Parisen Toldin F 2011 Phys. Rev. E 84 025703(R)
[58] Hasenbusch M 1993 Physica A 197 423-435
[59] Campostrini M, Hasenbusch M, Pelissetto A, Rossi P and Vicari E 2001 Phys. Rev. B 63 214503
[60] Fisher M E, Barber M N and Jasnow D 1973 Phys. Rev. A 8 1111-1124
[61] Parisen Toldin F, Hohenadler M, Assaad F F and Herbut I F 2015 Phys. Rev. B 91 165108
[62] Metlitski M, private communication
[63] Hasenbusch M, Parisen Toldin F, Pelissetto A and Vicari E 2008 Phys. Rev. E 78 011110
[64] Sachdev S 2011 Quantum Phase Transitions (Cambridge: Cambridge University Press)
[65] Parisen Toldin F and Metlitski M A 2021 Preprint arXiv:2111.03613
[66] Jülich Supercomputing Centre 2019 Journal of large-scale research facilities 5 A135
The stable homotopy classification of (n − 1)-connected (n + 4)-dimensional polyhedra with 2 torsion free homology

Jian Zhong Pan, Zhong Jian Zhu

26 Sep 2015, arXiv:1509.07932v1 [math.AT]

Keywords: homotopy; indecomposable; matrix problem

In this paper, we study the stable homotopy types of F 4 n(2)-polyhedra, i.e., (n − 1)-connected, at most (n + 4)-dimensional polyhedra with 2-torsion free homologies. We are able to classify the indecomposable F 4 n(2)-polyhedra. The proof relies on the matrix problem technique, which was developed in the classification of representations of algebras and applied to homotopy theory by Baues and Drozd.
Introduction
Let the A k n (n ≥ k + 1) be the subcategories of the stable homotopy category consisting of (n−1)-connected polyhedra with dimension at most n+k. It is a fully additive category if we consider the wedge of two polyhedra as the coproduct of two objects in the category A k n . The classification problem of A k n (n ≥ k+1) is to find a complete list of indecomposable isomorphic classes, i.e. the indecomposable homotopy types in A k n (n ≥ k + 1). For k ≤ 3, all indecomposable stable homotopy types have been described in [3]. For k ≥ 4, Drozd shows the classification problem is wild (in the sense similar to that in representation of finite dimensional algebras) in [8] by finding a wild subcategory of A 4 n (n ≥ 5) whose objects are polyhedra with 2-torsion homologies.
In another direction, Baues and Drozd also consider full subcategory F k n of A k n (n ≥ k+1) consisting of polyhedra with torsion free homology groups. For k ≤ 5, such polyhedra have been classified to have finite indecomposable homotopy types in [1], [2] or [6], [8]. For k = 6, Drozd got tame type classification of congruence classes of homotopy types, and proved that, for k > 6, this problem is wild in [7]. This is the second of a series of papers devoted to the homotopy theory of A k npolyhedron. In our previous paper [10], we noticed that, for (n − 1)-connected and at most (n + k)-dimensional (k < 7) spaces with 2 and 3-torsion free homologies, the classification of indecomposable stable homotopy types essentially reduces to that of spaces with torsion free homologies. When homologies of the spaces involved have 3-torsion, the reduction process doesn't lead to the matrix problem for spaces with torsion free homologies but to a matrix problem which can be solved. By this we are able to classify homotopy types of the full subcategory F 4 n(2) of A 4 n (n ≥ 5) consisting of polyhedra with 2-torsion free homology groups. We will discuss the splitting of smash product of A k n -polyhedra in a latter publication. Section 2 contains some basic notations and facts about stable homotopy category and classification problem. Our main theorem is given at the end of this section. In Section 3, Theorem 3.2 and Corollary 3.3 establish a connection between bimodule categories and stable homotopy categories. In Section 4, we use the known results of indecomposable homotopy types of F 4 n in [1] to classify the indecomposable isomorphic classes of another matrix problem (A 0 , G 0 ) corresponding to F 4 n . In Section 5, the matrix problem (A ′ , G ′ ) used to classify the indecomposable homotopy types of F 4 n(2) is given. In Section 6, we solve the matrix problem (A ′ , G ′ ) by the results of indecomposable isomorphic classes of matrix problem (A 0 , G 0 ).
Preliminaries
In this paper "Polyhedron" is used as "finite CW-complex" and "Space" means a based space; we denote by * X (or by * if there is no ambiguity) the based point of the space X. Denote by Hot(X, Y ) the set of homotopy classes of continuous maps X → Y and by CW the homotopy category of polyhedra. The suspension functor Σ : X → X [1] [5], which is a fully additive category, and we denote it by CWS too.
We will say a polyhedron X p-torsion free if all homology groups of X are p-torsion free, where p is a prime. Denote by A k n the full subcategory of CW consisting of (n − 1)connected and at most (n + k)-dimensional polyhedra, and denote by F k n (resp. F k n(2) ) the full subcategory of A k n consisting of torsion free (resp. 2-torsion free) polyhedra. The suspension gives a functor Σ : F k n → F k n+1 (resp. Σ : F k n(2) → F k n+1 (2) ). By the Freudenthal Theorem( [12] Theorem.6.26), it follows that Note. If an additive functor F : C → D is a full representation equivalence, denoted by C F ≃rep − −− −→ D, then it induces an 1-1 correspondence of indecomposable isomorphic classes of objects of these two additive categories. Corollary 2.3. Functors Σ : F k n → F k n+1 and Σ : F k n(2) → F k n+1 (2) are equivalences of categories for n ≥ k + 2 and full representation equivalences for n = k + 1. Therefore F k := F k n and F k (2) := F k n(2) with n ≥ k + 2 dose not depend on n. Let C be an additive category with zero object * and biproducts A ⊕ B for any objects A, B ∈ C, where X ∈ C means that X is an object of C. X ∈ C is decomposable if there is an isomorphism X ∼ = A ⊕ B where A and B are not isomorphic to * , otherwise X is indecomposable. For example, X ∈ CW (resp. CWS) is indecomposable if X is homotopy equivalent (resp. stable homotopy equivalent ) to X 1 ∨ X 2 implies one of X 1 and X 2 is contractible. A decomposition of X ∈ C is an isomorphism
Proposition 2.1. If dimX ≤ d and Y is (n − 1)-connected,where d < 2n − 1, then the map Hot(X, Y ) → Hot(X[1], Y [1]) is bijective. If d = 2n − 1,X ∼ = A 1 ⊕ · · · ⊕ A n , n < ∞,
where A i is indecomposable for i ∈ {1, 2, · · · , n}. The classification problem of category C is to find a complete list of indecomposable isomorphism types in C and describe the possible decompositions of objects in C.
Theorem 2.4. (Main theorem)
The complete list of indecomposable (stable) homotopy types in F 4 n(2) (n ≥ 5) is given by the polyhedra in Theorem 6.3; the Moore spaces M (Z/p r , n), M (Z/p r , n + 1), M (Z/p r , n + 2), M (Z/p r , n + 3), where p ≠ 2 is a prime and r ∈ N + ; and S n , S n+1 , S n+2 , S n+3 , S n+4 , C η = S n ∪ η e n+2 , where η is the Hopf map.
Techniques
Definition 3.1. Let A and B be additive categories. U is an A-B-bimodule, i.e. a biadditive functor A op × B → Ab, the category of abelian groups. We define the bimodule category El(U ) as follows:
• the set of objects is the disjoint union A∈A,B∈B U (A, B).
• A morphism α → β, where α ∈ U (A, B), β ∈ U (A ′ , B ′ ) is a pair of morphisms f : A → A ′ , g : B → B ′ such that gα = βf ∈ U (A, B ′ ) (We write gα instead of U (1, g)α and βf instead of U (f, 1)β).
Obviously El(U ) is an (full) additive category if so are A and B.
Suppose A and B are two full subcategories of CW (or CWS), then we denote by A †B the full subcategory of CW (or CWS) consisting of cofibers of maps f : A → B, where A ∈ A, B ∈ B. We also denote by A † m B the full subcategory of A †B consisting of cofibers of f : A → B such that H m (f ) = 0 and denote by Γ(A, B) the subgroup of Hos(A, B) consisting of maps f : (2) moreover I 2 = 0, hence the projection A †B → A †B/I is a representation equivalence.
A → B such that H m (f ) = 0, where A ∈ A, B ∈ B.
(3) In particular, let n < m ≤ n + k and denote by B the full subcategory of F k n(2) (n ≥ k + 1) consisting of all (n − 1)-connected polyhedra of dimension at most m and by A the full subcategory of F k n(2) (n ≥ k + 1) consisting of all (m − 1)-connected polyhedra of dimension at most n + k − 1., then
El(H)/J C ≃ − − −→ A † B/I P ≃rep ←− −− − A † B.
gives a natural one-to-one correspondence between isomorphic classes of objects of El(H)/J and A † B. F k n(2) is the full subcategory of A † B consisting of 2-torsion free polyhedra.
Proof. (1) and (2) of Theorem 3.2 follow directly from Theorem 1.1. of [8]. It remains to show that F k n(2) is a full subcategory of A † B. For any X ∈ F k n(2) , let B = X n+2 be the (n + 2)-skeleton of X. We get a cofiber sequence B → X → X/B. Since X/B ≃ A [1] for some A by Proposition 2.2, there is a cofiber sequence A f − → B → X → X/B, i.e. X ≃ C f . By the homology exact sequence of cofiber sequence, it is easy to know that A ∈ A, B ∈ B.
The following corollary follows from the Corollary 1.2 of [8]. (2) C :
El(H 0 ) P ≃rep − −− −→ El(H 0 )/J H 0 C ≃ − − −→ A † H 0 B/I H 0 P ≃rep ←− −− − A † H 0 B. If H 0 = Γ : A op × B → Ab, then A † H 0 B = A † m B.
Matrix problem Let A be a set of matrices which is closed under finite direct sums of matrices and let G denote the set of admissible transformations on A . We say A ∼ = B in A if A can be transformed to B by admissible transformations, and we say A is
decomposable if A ∼ = A 1 A 2 for nontrivial A 1 , A 2 ∈ A .
The block matrices A 1 0 and A 1 0 are also thought to be decomposable. The matrix problem (A , G), or simply A , means to classify the indecomposable isomorphic classes of A (denoted by indA ) under admissible transformations G. Matrix problem (A , G) is said to be equivalent to matrix
problem (A ′ , G ′ ) if there is a bijective map ϕ : A → A ′ such that A ∼ = A ′ in A if and only if ϕ(A) ∼ = ϕ(A ′ ) in A ′ and ϕ(A 1 A 2 ) = ϕ(A 1 ) ϕ(A 2 )
. It is clear that if two matrix problems are equivalent, then there is a one-to-one correspondence between their indecomposable isomorphic classes.
Definition 3.4. Let A be a set of some matrices, "·" is a "product" of two matrices defined in A ( "·" may be not the usual matrix product), we say that M ∈ A is invertible in A if there is a matrix N ∈ A such that M · N = N · M = I ∈ A , where I is the identity matrix.
In the following context, for a matrix problem (A , G), saying a matrix M ∈ A invertible always means that M is invertible in A .

4 The solution of a new matrix problem of the category F 4 n (n ≥ 5)
In the following context, the tabulations * * * * * * represent the matrices or block matrices. For any category C, denote by indC the set of indecomposable isomorphic classes of C. indF 4 n is known in [1] and Drozd got a matrix problem corresponding to F 4 n in [8]. Here we need a new matrix problem (A 0 , G 0 ) for the classification problem of F 4 n . When n ≥ 5, denote by B 0 the full subcategory of F 4 n (n ≥ 5) consisting of all (n − 1)-connected polyhedra of dimension at most n + 2 and by A 0 the full subcategory of F 4 n (n ≥ 5) consisting of all (n + 1)-connected polyhedra of dimension at most n + 2. Then
F 4 n = A 0 † n+2 B 0 . From [4] we know indA 0 = {S n+2 , S n+3 }; indB 0 = {S n , S n+1 , S n+2 , C η = S n ∪ η e n+2 }.
Now take m = n + 2, we obtain the A 0 -B 0 subbimodule Γ of H :
Γ : A op 0 × B 0 → Ab (A, B) → Γ(A, B), where Γ(A, B) is the subgroup of Hos(A, B) defined on section 3. Take H 0 = Γ in Corollary 3.3, then f 1 af 2 = 0 whenever f i ∈ Hos(B i , A i ) (i = 1, 2), a ∈ Γ(A 2 , B 1 ), A i ∈ A 0 and B i ∈ B 0 . Hence by Corollary 3.3, we have C : El(Γ) P ≃rep − −− −→ El(Γ)/J Γ C ≃ − − −→ A 0 † n+2 B 0 /I Γ P ≃rep ←− −− − A 0 † n+2 B 0 .
Objects of El(Γ) can be represented by 5 × 2 block matrices (γ ij ), where each block γ ij has entries from the (ij)-th cell of Table 1. Morphisms γ → γ ′ are given by block matrices α = (α ij ) 2×2 , β = (β ij ) 5×5 , where α ij has entries from the (ij)-th cell of Table 2 and β ij has entries from the (ij)-th cell of Table 3. Their sizes are compatible with those of γ ij and γ ′ ij and βγ = γ ′ α. Such a morphism is invertible if and only if α and β are invertible in Hos(A 0 , A 0 ) and Hos(B 0 , B 0 ) respectively. Equivalently, all diagonal blocks of α and β are square, and both det(α) and det(β) equal ±1. Since only entries from Z and 2Z give nonzero input to the determinants, they belong indeed to Z. We get the corresponding matrix problem of El(Γ), which is denoted by (A 0 , G 0 ).
Table 1: Γ(A₀, B₀)
                 S^{n+2}   S^{n+3}
  S^n             Z/2       Z/24
  S^{n+1}         Z/2       Z/2
  S^{n+2}         0         Z/2
  C_η : n         0         Z/12
  C_η : n+2       0         0

Table 2: Hos(A₀, A₀)
                 S^{n+2}   S^{n+3}
  S^{n+2}         Z         Z/2
  S^{n+3}         0         Z

Table 3: Hos(B₀, B₀)
                 S^n   S^{n+1}   S^{n+2}   C_η : n   C_η : n+2
  S^n             Z     Z/2       Z/2       2Z        0
  S^{n+1}         0     Z         Z/2       0         0
  S^{n+2}         0     0         Z         0         Z
  C_η : n         Z     0         Z/2 =     Z =       0
  C_η : n+2       0     0         2Z =      0         Z =
In Table 3, Hos(C η , C η ) is identified with the ring
$\left\{ \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \;\middle|\; a \equiv b \ (\mathrm{mod}\ 2) \right\}$; Hos(S^{n+2}, C_η) is identified with the subgroup of columns $\begin{pmatrix} \varepsilon \\ 2a \end{pmatrix}$ with ε ∈ Z/2, a ∈ Z,
which is the image of the following injective map
$\mathrm{Hos}(S^{n+2}, C_\eta) \xrightarrow{\;F\;} \begin{pmatrix} \mathbb{Z}/2 \\ 2\mathbb{Z} \end{pmatrix}, \qquad f \mapsto \begin{pmatrix} \varepsilon \\ 2a \end{pmatrix}.$
For any f ∈ Hos(S n+2 , C η ), let S n+1 → S n → C η → S n+2 (with maps η, i, q respectively) be the cofiber sequence and S n+3 → S n+2 (with map η n+2 ) be the suspension of η. Then qf = 2a ι n+2 ∈ Hos(S n+2 , S n+2 ) ≅ Z for some a ∈ Z, where ι n+2 : S n+2 → S n+2 is the identity map. Let ε = 1 if f η n+2 ≠ 0 and ε = 0 if f η n+2 = 0; F is defined by mapping f to (ε, 2a).
In order to make the product of matrices in Table 3 compatible with the composition of the corresponding maps, special rules for the matrix product in Table 3 are needed:
(1) For 2a 0 and 1 2b respectively in
C η :n,n+2 S n 2Z 0 and S n+2 C η :n Z/2 = n+2 2Z = , 2a 0 1 2b = a in S n+2 S n Z/2
, where a is the image of a under the quotient map Z → Z/2.
(2) For a 0 and ε respectively in
S n C η :n Z n+2 0 and S n+2 S n Z/2 , a 0 ε = 0 0 in S n+2 C η :n Z/2 = n+2 2Z = .
(3) Keep elements in zero blocks being zero. For example, for any a and 0 b respectively in
S n+2 S n Z/2 and C η :n,n+2 S n+2 0 Z , a 0 b = 0 0 in C η :n,n+2 S n+2 Z 0
, the second element is not ab but 0.
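The product rules (1)-(3) above can be transcribed literally; the following sketch (illustrative only, with ad-hoc tuple encodings of the elements) records them as small functions.

```python
# Minimal sketch of the special product rules (1)-(3) stated above.
# Encodings (chosen here for illustration): a map C_eta -> S^n is (2a, 0),
# a map S^{n+2} -> C_eta is (eps, 2b) with eps in {0, 1}, a map S^n -> C_eta
# is (a, 0), and a map S^{n+2} -> S^n is a residue in Z/2.
def rule1(row_2a_0, col_eps_2b):
    """(2a, 0) composed with (eps, 2b) lands in Hos(S^{n+2}, S^n) = Z/2 and
    equals the class of a mod 2, not the naive product 2a*eps + 0*2b."""
    two_a, _ = row_2a_0
    return (two_a // 2) % 2

def rule2(col_a_0, eps):
    """(a, 0) composed with eps in Z/2 gives the zero element (0, 0) of Hos(S^{n+2}, C_eta)."""
    return (0, 0)

def rule3(a, zero_block_entry):
    """Entries coming from zero blocks stay zero, whatever they are multiplied with."""
    return 0

print(rule1((6, 0), (1, 4)))   # -> 1, the class of a = 3 in Z/2
print(rule2((5, 0), 1))        # -> (0, 0)
```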
Denote by W x (respectively W y ) the x-horizontal (respectively y-vertical) stripe, where x ∈ {S n , S n+1 , S n+2 , C η : n, C η : n+2}, y ∈ {S n+2
, S n+3 }, and denote by W y x the block corresponding to the x-horizontal stripe and the y-vertical stripe. Let dim W x be the number of rows in W x and dim W y the number of columns in W y . Table 1 represents the matrix set A 0 . By right multiplication with invertible matrices in Table 2 and left multiplication with invertible matrices in Table 3, Tables 2 and 3 provide admissible transformations G 0 (see [6]) for matrices in Table 1, i.e.
(a) "elementary-row transformations" of W x consisting of following three types:
(j+ai)-type : The replacement of the j-th row α j of W x by α j + aα i , where α i is the i-th row of W x , a ∈ Z.
(ai)-type : The multiplication of the i-th row α i of W x by a ∈ {±1}.
(i,j)-type : The transposition of the i-th and j-th row.
(b) "elementary-column transformations" of W y which also have three types as for elementary-row transformations;
(Restriction on (a) and (b)) If one performs a (j+ai)-type (respectively (ai)-type and (i,j)-type) elementary-row transformation of W Cη:n , then one has to perform the (j+a ′ i)-type (respectively (a ′ i)-type and (i,j)-type) elementary-row transformation of W Cη:n+2 simultaneously where a ≡ a ′ (mod2) and vice versa;
(c) Adding k times of a column of W S n+2 to a column of W S n+3 ;
(d) Adding k times of a row of W S n+1 or W S n+2 to a row of W S n ;
(e) Adding k times of a row of W S n+2 to a row of W S n+1 ;
(f) (1) Adding k times of a row of W S n to a row of W Cη:n ;
(2) Adding 2k times of a row of W Cη:n to a row of W S n ;
(g) Adding 6k times of a row of W S n+2 to a row of W Cη:n ;
where k is an integer.
(2) Adding 1 ∈ Z/2 to an element a ∈ Z/24 gives a + 12 in Z/24, since η³ is 12 in Z/24 = Hos(S n+3 , S n ).
(3) The reason for (g) is as follow: in the definition of the injective map F above, for any f ∈ Hos(S n+2 , C η ), f η = ix for some x ∈ Hos(S n+3 , S n ) = Z/24. If qf = 2ι n+2 ∈ Hos(S n+2 , S n+2 ) then x = 6 (Proposition 6 (iii) of [13]).
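As an illustration of the restriction coupling the two C_η stripes, the following sketch (hypothetical matrices; entries are understood modulo the order of the corresponding group) performs a (j + ai)-type row transformation on W_{C_η:n} together with the required congruent transformation on W_{C_η:n+2}.

```python
# Minimal sketch of the coupled row operations in G_0: a (j + a*i)-type operation
# on W_{C_eta:n} must be accompanied by a (j + a'*i)-type operation on W_{C_eta:n+2}
# with a' congruent to a mod 2, as stated in the restriction on (a) and (b) above.
import numpy as np

def coupled_row_add(W_n, W_n2, i, j, a, a_prime):
    if (a - a_prime) % 2 != 0:
        raise ValueError("a and a' must be congruent mod 2")
    W_n, W_n2 = W_n.copy(), W_n2.copy()
    W_n[j] += a * W_n[i]          # row operation on the C_eta:n stripe
    W_n2[j] += a_prime * W_n2[i]  # simultaneous row operation on the C_eta:n+2 stripe
    return W_n, W_n2

W_n = np.array([[0, 5], [0, 1]])   # placeholder entries of the C_eta:n stripe
W_n2 = np.array([[0, 0], [0, 0]])  # the C_eta:n+2 stripe is identically zero here
print(coupled_row_add(W_n, W_n2, i=1, j=0, a=-5, a_prime=1))
```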
From the known fact that
ind(A 0 ) ∼ = indEl(Γ) ∼ = ind(A 0 † n+2 B 0 ) = indF 4 n .
we have List (*) :
(I) X(ηvη) = S n ∨ S n+2 ∪ i 1 η e n+2 ∪ i 1 v+i 2 η e n+4 corresponds to S n+3 S n+2 1 C η : n v n+2 0 where v ∈ {1, 2, 3} ⊂ Z/12. (II) (1) X(ηηvηη) = S n ∨ S n+1 ∪ i 1 ηη e n+3 ∪ i 1 v+i 2 ηη e n+4 corresponds to S n+2 S n+3 S n 1 v S n+1 0 1 (2) X(ηηvη) = S n ∨ S n+2 ∪ i 1 ηη e n+3 ∪ i 1 v+i 2 η e n+4 corresponds to S n+2 S n+3 S n 1 v S n+2 0 1 (3) X(ηvηη) = S n ∨ S n+1 ∪ i 1 η e n+2 ∪ i 1 v+i 2 η e n+4 corresponds to S n+3 S n+1 1 C η : n v n+2 0 (4) X(ηηv) = S n ∪ ηη e n+3 ∪ v e n+4 corresponds to S n+2 S n+3 S n 1 v (5) X(vηη) = S n ∨ S n+1 ∪ i 1 v+i 2 ηη e n+4 corresponds to S n+3 S n v S n+1 1 (6) X(ηv) = S n ∪ η e n+2 ∪ v e n+4 corresponds to S n+3 C η : n v n+2 0 (7) X(vη) = S n ∨ S n+2 ∪ i 1 v+i 2 η e n+4 corresponds to S n+3 S n v S n+2 1
where v ∈ {1, 2, 3, 4, 5, 6} ⊂ Z/24 in the cases (1),(2),(4),(5),(7) of (II), and v ∈ {1, 2, 3, 4, 5, 6}⊂Z/12 in the case (3),(6) of (II).
(III) X(v) = S n ∪ v e n+4 corresponds to S n+3 S n v where v ∈ {1, 2, · · · , 12} ⊂ Z/24. (IV) (1) X(η 1 ) = S n+1 ∪ η e n+3 corresponds to S n+2 S n+1 1 ; (2) X(η 2 ) = S n+2 ∪ η e n+4 corresponds to S n+3 S n+2 1 ; (3) X(ηη) 0 = S n ∪ ηη e n+3 corresponds to S n+2 S n 1 ; (4) X(ηη) 1 = S n+1 ∪ ηη e n+4 corresponds to S n+3 S n+1 1 ,
For a wedge of spaces X ∨ Y , i 1 : X ֒→ X ∨ Y and i 2 : X ֒→ X ∨ Y above are the canonical inclusions .
(resp. * → B) in A 0 † n+2 B 0 which corresponds to 0 × 1 matrix (resp. 1 × 0 matrix) in A 0 .
For a general matrix problem (A, G), these 0 × 1 and 1 × 0 matrices are regarded as elements in indA, but will not be listed, to simplify notation.

5 The reduction of the classification problem of F^4_n(2) (n ≥ 5)
Let M^k_t be the Moore space M(Z/t, k), t, k ∈ N_+ = {1, 2, · · · }. Take m = n + 2 and two full subcategories A and B of F^4_n(2) as in Theorem 3.2 (3). By the results on the indecomposable homotopy types of A^2_n (n ≥ 3) in [4], we have
ind A = {S^{n+2}, S^{n+3}, M^{n+2}_{p^r} | prime p ≠ 2, r ∈ N_+};
ind B = {S^n, S^{n+1}, S^{n+2}, C_η = S^n ∪_η e^{n+2}, M^n_{p^s}, M^{n+1}_{p^s} | prime p ≠ 2, s ∈ N_+}.
Lemma 5.1.
Hos(M^{n+2}_{p^r}, B) = 0 for any B ∈ ind B, where prime p ≠ 2, 3; r ∈ N_+ ;
Hos(A, M^n_{p^s}) = 0 for any A ∈ ind A, where prime p ≠ 2, 3; s ∈ N_+ ;
Hos(A, M^{n+1}_{p^s}) = 0 for any A ∈ ind A, where prime p ≠ 2; s ∈ N_+ .
Proof. It follows from the triviality of p-primary component of relevant homotopy groups of spheres and the universal coefficients theorem for homotopy groups with coefficients.
For C_f ∈ A†B, where f : A → B, A = ∨A_i, A_i ∈ ind A, B = ∨B_j, B_j ∈ ind B: if A_i = M^{n+2}_{p^r} (p ≠ 2, 3) for some i, then A_i[1] splits off C_f . Similarly, if B_j = M^n_{p^s} (p ≠ 2, 3) or M^{n+1}_{p^s} (p ≠ 2)
for some j, then this B_j also splits off C_f . So we get the following
{X ∈ ind(A†B) | X is 2-torsion free} = {M^{n+2}_{p^r} | prime p ≠ 2, r ∈ N_+} ∪ { C(f) is indecomposable | f ∈ El(Γ) }.
Proof. For integers k, l, t, u, v, w ≥ 0, let
A(k, l) := k S n+2 ∨ l S n+3 ∈ A 0 ; B(t, u, v, w) := t S n+2 ∨ u C η ∨ v S n ∨ w S n+1 ∈ B 0 .
For any 2-torsion free polyhedra
X = C_f ∈ A†B, f ∈ Hos(A, B) where A ∈ A, B ∈ B, suppose that A = A(k, l) ∨ M_A, B = B(t, u, v, w) ∨ M_B, where M_A (resp. M_B) is a wedge of Moore spaces {M^{n+2}_{3^r} | r ∈ N_+} (resp. {M^n_{3^s} | s ∈ N_+}). Let
j_A : A(k, l) ↪ A,    p_B : B ↠ B(t, u, v, w)
be the canonical inclusion and projection of the summands respectively. For
h := p B f j A ∈ Hos( A(k, l), B(t, u, v, w) ),
by the proof of Theorem 5.5 of [10], we have the commutative top square in the following Diagram 1, where k_1 + k′ = k, k_1 + t′ = t, α and β are self-homotopy equivalences of A(k, l) and B(t, u, v, w) respectively, and the maps
h_1 : ∨^{k_1} S^{n+2} → ∨^{k_1} S^{n+2},    h′ : A(k′, l) → B(t′, u, v, w)
satisfy that (i) the mapping cone C_{h_1} = ⋁_i M^{n+2}_{α_i}, where α_i ∈ N_+ is odd for each i;
(ii) the composition of maps
∨^{k′} S^{n+2} --j--> A(k′, l) --h′--> B(t′, u, v, w) --p--> ∨^{t′} S^{n+2} ∨ (∨^u C_η)
is zero, where j and p are the canonical inclusion and projection of the summands respectively. This is equivalent to the statement that H_{n+2}(h′) = 0.
Note that Hos(S n+2 , M B ) = 0 and Hos(M A , S n+2 ) = 0. Hence
(β ∨ 1_{M_B}) f (α ∨ 1_{M_A}) = h_1 ∨ f′ such that f′ : A(k′, l) ∨ M_A → B(t′, u, v, w) ∨ M_B satisfies H_{n+2}(f′) = 0. It implies that X = C_f ≃ C_{h_1} ∨ C_{f′} = (⋁_i M^{n+2}_{α_i}) ∨ C_{f′}, f′ ∈ El(Γ).
Since for any f ∈ El(Γ), C(f ) = C f is 2-torsion free, by the above analysis, we complete the proof of Lemma 5.4.
[Diagram 1: a commutative ladder with rows (∨^{k_1}S^{n+2}) ∨ A(k′,l) --h_1∨h′--> (∨^{k_1}S^{n+2}) ∨ B(t′,u,v,w);  A(k,l) --h--> B(t,u,v,w) compared through the self-equivalences α ≃, β ≃;  A --f--> B compared through α ∨ 1_{M_A} ≃, β ∨ 1_{M_B} ≃ and j_A, p_B;  and (∨^{k_1}S^{n+2}) ∨ (A(k′,l) ∨ M_A) --h_1∨f′--> (∨^{k_1}S^{n+2}) ∨ (B(t′,u,v,w) ∨ M_B).]

Take H_0 = Γ in Corollary 3.3; then f_1 a f_2 = 0 whenever f_i ∈ Hos(B_i, A_i) (i = 1, 2), a ∈ Γ(A_2, B_1), A_i ∈ A and B_i ∈ B. Hence by Corollary 3.3, we have the chain
El(Γ) --P (≃rep)--> El(Γ)/J_Γ --C (≃)--> A †_{n+2} B / I_Γ <--P (≃rep)-- A †_{n+2} B,
which implies the following Corollary 5.5. In Lemma 5.4,
{ C(f ) is indecomposable | f ∈ El(Γ) } = ind(A † n+2 B) ∼ = indEl(Γ).
In the remainder of this section, we will find the matrix problem corresponding to El(Γ).
Computing Γ(A, B) for A ∈ indA, B ∈ indB; Hos(A, A ′ ) for A, A ′ ∈ indA and Hos(B, B ′ ) for B, B ′ ∈ indB as in [7]. For example, Γ(S n+2 , S n+2 ) = Γ(S n+2 , C η ) = 0;
Hos(M n+2 3 r , M n 3 s ) = M n+2 3 r :n+2 n+3 M n 3 s :n 0 Z/3 n+1 0 0 ; Hos(M n+2 3 r , C η ) = M n+2 3 r :n+2 n+3 C η :n 0 Z/3 n+1 0 0 ; Hos(M n+2 3 r , M n+2 3 s ) = M n+2 3 r :n+2 n+3 M n+2 3 s :n+2 Z/3 s= 0 n+3 0 Z/3 r= , where Z/3 s= 0 0 Z/3 r= = ā 0 0b
ā ∈ Z/3 s ,b ∈ Z/3 r and 3 r a = 3 s b in Z.
= ā 0 0 3 r−s a ā ∈ Z/3 s , 3 r−s a ∈ Z/3 r r > s 3 s−r b 0 0b 3 s−r b ∈ Z/3 s ,b ∈ Z/3 r r ≤ s.
Now we get the matrix problem ( A , G) corresponding to El(Γ) as follows. The objects of El(Γ) can be represented by block matrices γ = (γ ij ) with finite order in T able 4 which provides the matrix set A , where block γ ij has entries from the (ij)-th cell of T able 4. Morphisms γ → γ ′ are given by block matrices α = (α ij ) and β = (β ij ) from T able 5 and T able 6 respectively with proper order, which provide the admissible transformations G.
Γ(A, B) B A S n+2 S n+3 M n+2 3 :n+2 n+3 M n+2 3 2 :n+2 n+3 M n+2 3 3 :n+2 n+3 · · · S n Z/2 Z/24 0 Z/3 0 Z/3 0 Z/3 · · · S n+1 Z/2 Z/2 0 0 0 0 0 0 · · · S n+2 0 Z/2 0 0 0 0 0 0 · · · C η :n 0 Z/12 0 Z/3 0 Z/3 0 Z/3 · · · n+2 0 0 0 0 0 0 0 0 · · · M n 3 :n 0 Z/3 0 Z/3 0 Z/3 0 Z/3 · · · n+1 0 0 0 0 0 0 0 0 · · · M n 3 2 :n 0 Z/3 0 Z/3 0 Z/3 0 Z/3 · · · n+1 0 0 0 0 0 0 0 0 · · · M n 3 3 :n 0 Z/3 0 Z/3 0 Z/3 0 Z/3 · · · n+1 0 0 0 0 0 0 0 0 · · · · · ·
· · ·    · · ·    · · ·    · · ·    · · ·    · · ·    Table 4
Hos(A, A)
A A S n+2 S n+3 M n+2 3 :n+2 n+3 M n+2 3 2 :n+2 n+3 · · · M n+2 3 r :n+2 n+3 · · · S n+2 Z Z/2 0 0 0 0 · · · 0 0 · · · S n+3 0 Z 0 Z/3 0 Z/3 2 · · · 0 Z/3 r · · · M n+2 3 :n+2 Z/3 0 Z/3 = 0 Z/3 = 0 · · · Z/3 = 0 · · · n+3 0 0 0 Z/3 = 0 Z/3 2= · · · 0 Z/3 r= · · · M n+2 3 2 :n+2 Z/3 2 0 Z/3 2= 0 Z/3 2= 0 · · · Z/3 2= 0 · · · n+3 0 0 0 Z/3 = 0 Z/3 2= · · · 0
Z/3 r= · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · M n+2
3 r :n+2 Z/3 r 0 Z/3 r= 0 Z/3 r= 0 · · · Z/3 r= 0 · · · n+3 0 0 0 Z/3 = 0 Z/3 2= · · · 0
Z/3 r= · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · · T able 5
Hos(B, B) B B S n S n+1 S n+2 Cη :n n+2 M n 3 :n n+1 M n 3 2 :n n+1 ··· M n 3 r :n n+1 ··· S n Z Z/2 Z/2 2Z 0 0 0 0 0 ··· 0 0 ··· S n+1 0 Z Z/2 0 0 0 0 0 0 ··· 0 0 ··· S n+2 0 0 Z 0 Z 0 0 0 0 ··· 0 0 ··· Cη:n Z 0 Z/2 = Z = 0 0 0 0 0 ··· 0 0 ··· n+2 0 0 2Z = 0 Z = 0 0 0 0 ··· 0 0 ··· M n 3 :n Z/3 0 0 Z/3 0 Z/3 = 0 Z/3 = 0 ··· Z/3 = 0 ··· n+1 0 0 0 0 0 0 Z/3 = 0 Z/3 2= ··· 0 Z/3 r= ··· M n 3 2 :n Z/3 2 0 0 Z/3 2 0 Z/3 2= 0 Z/3 2= 0 ··· Z/3 2= 0 ··· n+1 0 0 0 0 0 0 Z/3 = 0 Z/3 2= ··· 0 Z/3 r= ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· M n 3 r :n Z/3 r 0 0 Z/3 r 0 Z/3 r= 0 Z/3 r= 0 ··· Z/3 r= 0 ··· n+1 0 0 0 0 0 0 Z/3 = 0 Z/3 2= ··· 0 Z/3 r= ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· T able 6
It is well known that ind A ≅ indEl(Γ).
We eliminate the zero stripes M n+2 3 r :n+2 and M n 3 s :n+1 of matrices in A to simplify the matrix problem ( A , G) to the following equivalent matrix problem (A ′ , G ′ ).
Γ ′ (A, B) B A S n+2 S n+3 M n+2 3 M n+2 3 2 M n+2 3 3 · · · S n Z/2 Z/24 Z/3 Z/3 Z/3 · · · S n+1 Z/2 Z/2 0 0 0 · · · S n+2 0 Z/2 0 0 0 · · · C η :n 0 Z/12 Z/3 Z/3 Z/3 · · · n+2 0 0 0 0 0 · · · M n 3 0 Z/3 Z/3 Z/3 Z/3 · · · M n 3 2 0 Z/3 Z/3 Z/3 Z/3 · · · M n 3 3 0 Z/3 Z/3 Z/3 Z/3
· · ·    · · ·    · · ·    · · ·    · · ·    · · ·    Table 7
Hos ′ (A, A) A A S n+2 S n+3 M n+2 3 M n+2 3 2 · · · M n+2 3 r · · · S n+2 Z Z/2 0 0 · · · 0 · · · S n+3 0 Z Z/3 Z/3 2 · · · Z/3 r · · · M n+2 3 0 0 Z/3 0 · · · 0 · · · M n+2 3 2 0 0 Z/3 Z/3 2 · · · 0
· · · · · · · · · · · · · · · · · · · · · · · · · · · M n+2 3 r 0 0 Z/3 Z/3 2 · · · Z/3 r · · · · · · · · · · · · · · · · · · · · · · · · · · · T able 8
Hos ′ (B, B) B B S n S n+1 S n+2 C η :n n+2 M n 3 M n 3 2 ··· M n 3 r ··· S n Z Z/2 Z/2 2Z 0 0 0 ··· 0 ··· S n+1 0 Z Z/2 0 0 0 0 ··· 0 ··· S n+2 0 0 Z 0 Z 0 0 ··· 0 ··· C η :n Z 0 Z/2 = Z = 0 0 0 ··· 0 ··· n+2 0 0 2Z = 0 Z = 0 0 ··· 0 ··· M n 3 Z/3 0 0 Z/3 0 Z/3 Z/3 ··· Z/3 ··· M n 3 2 Z/3 2 0 0 Z/3 2 0 0 Z/3 2 ··· Z/3 2 ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· ··· M n 3 r Z/3 r 0 0 Z/3 r 0 0 0 ··· Z/3 r ··· T able 9 .
In the above tables, M n+2
G ′ co : (i) W S n+2 < W S n+3 (ii) W S n+3 < W M n+2 3 r ; W M n+2 3 r+1 < W M n+2
3 r for any r ∈ N + G ′ ro :
(i) W S n+2 < W S n+1 < W S n (ii) W S n < W Cη:n < W M n 3 s ; W M n 3 s+1 < W M n 3 s ; 2W Cη :n < W S n for any s ∈ N + (iii) 6W S n+2 < W Cη:n Remark 5.6.
(1) W x < W y means that adding k times of a row of W x to a row of W y is admissible and aW x < W y (a ∈ N + ) means adding ak times of a row of W x to a row of W y is admissible where k is an any nonzero integer. W x < W y has the similar meaning for corresponding vertical stripes.
(2) Similarly, zero blocks in T able 7 should keep being zero after admissible transformations. Adding 1 ∈ Z/2 to an element a ∈ Z/24 gives a + 12 in Z/24.
(3) Special rules of matrix product in T able 3 are also needed for matrix product in T able 9.
Γ ′ (A, B) (2) B A S n+2 S n+3 M n+2 3 M n+2 3 2 · · · S n Z/2 Z/8 0 0 · · · S n+1 Z/2 Z/2 0 0 · · · S n+2 0 Z/2 0 0 · · · C η :n 0 Z/4 0 0 · · · n+2 0 0 0 0 · · · M n 3 0 0 0 0 · · · M n 3 2 0 0 0 0 · · · · · ·
· · ·    · · ·    · · ·    · · ·    · · ·    Table 10
The list of non-trivial admissible transformations on Γ ′ (A, B) (2) is :
el : "elementary-row (column)" transformations of each horizontal (vertical) stripe ;
co : W S n+2 < W S n+3 ;
ro : W S n < W Cη:n ; 2W Cη :n < W S n ; 2W S n+2 < W Cη:n .
Γ ′ (A, B) (3) B A S n+2 S n+3 M n+2 3 M n+2 3 2 M n+2 3 3 · · · S n 0 Z/3 Z/3 Z/3 Z/3 · · · S n+1 0 0 0 0 0 · · · S n+2 0 0 0 0 0 · · · C η :n 0 Z/3 Z/3 Z/3 Z/3 · · · n+2 0 0 0 0 0 · · · M n 3 0 Z/3 Z/3 Z/3 Z/3 · · · M n 3 2 0 Z/3 Z/3 Z/3 Z/3 · · · M n 3 3 0 Z/3 Z/3 Z/3 Z/3 · · · · · ·
· · ·    · · ·    · · ·    · · ·    · · ·    · · ·    Table 11
The list of non-trivial admissible transformations on Γ ′ (A, B) (3) is :
el : "elementary-row (column)" transformations of each horizontal (vertical) stripe.
co : W S n+3 < W M n+2 3 r ; W M n+2 3 r+1 < W M n+2
3 r for any r ∈ N + ro : W S n < W Cη:n < W M n 3 s ; W M n 3 s+1 < W M n 3 s ; 2W Cη:n < W S n for any s ∈ N + .
Let ∆_{(2)(3)} be the subset of Γ′(A, B)_(2) × Γ′(A, B)_(3) which consists of the pairs (M_2, M_3) such that, for every x ∈ indA, y ∈ indB, the two blocks W^y_x in the matrices M_2 and M_3 have the same order. Define the map
Γ ′ (A, B) L=(L 2 ,L 3 ) − −−−−− −→ ∆ (2)(3) ⊂ Γ ′ (A, B) (2) × Γ ′ (A, B) (3) .
L is given by the following ring isomorphisms
Z/24 L 24 − −→ Z/8 × Z/3 , 1 → (1, 1); Z/12 L 12 − −→ Z/4 × Z/3 , 1 → (1, 1). The inverse map T of L ∆ (2)(3) T − −→ Γ ′ (A, B)
is given by the following two ring isomorphisms
Z/8 × Z/3 T 8 − −→ Z/24 , (a, b) → 9a + 16b; Z/4 × Z/3 T 4 − −→ Z/12 , (a, b) → 9a + 4b.
which are the inverse of L 24 and L 12 respectively.
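Since this claim is purely arithmetic, it can be machine-checked. The following minimal Python sketch (ours, only for verification) confirms that T_8 and T_4 are two-sided inverses of the splittings L_24 and L_12 above:

    # Check that 9a + 16b reduces to a (mod 8) and b (mod 3), and 9a + 4b to a (mod 4) and b (mod 3).
    def T8(a, b):
        return (9 * a + 16 * b) % 24

    def T4(a, b):
        return (9 * a + 4 * b) % 12

    assert sorted(T8(a, b) for a in range(8) for b in range(3)) == list(range(24))
    assert all(T8(a, b) % 8 == a and T8(a, b) % 3 == b for a in range(8) for b in range(3))
    assert sorted(T4(a, b) for a in range(4) for b in range(3)) == list(range(12))
    assert all(T4(a, b) % 4 == a and T4(a, b) % 3 == b for a in range(4) for b in range(3))
    print("T_8 and T_4 are two-sided inverses of L_24 and L_12")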
It is easy to see that if M ≅ N in the matrix problem (A′, G′) then L_2(M) ≅ L_2(N) and L_3(M) ≅ L_3(N) in the matrix problems (A′_(2), G′) and (A′_(3), G′) respectively. We do not know whether the converse is true. However, in the following we will show that the converse does hold if we impose some restrictions on the admissible transformations on A′_(2) and A′_(3).
Some notations :
(1) Let Hos ′ (A, A)(y 1 , y 2 , · · · , y n ) (resp. Hos ′ (B, B)(x 1 , x 2 , · · · , x m )) be the set of all square matrices in the Hos ′ (A, A) (resp. Hos ′ (B, B)) with only y 1 , y 2 , · · · , y n -stripes (resp. x 1 , x 2 , · · · , x m -stripes ).
Especially, we denote V = Hos ′ (B, B)(S n , S n+1 , S n+2 , C η : n) (note that there is no C η :n+2-stripe). And we call the sub-matrix which contains entries in S n , S n+1 , S n+2 , C η :n-stripes of M ∈ Hos ′ (B, B) "V-part of M ".
(2) Let I be the identity matrix and E ij be the matrix whose unique non-zero entry has index (i, j) and equals 1. Let B + be the subset consists of invertible matrices β =
V 1 0 V 3 V 4 in Hos ′ (B, B), where V 1 = V 11 V 12 V 13 2V 14 0 0 V 22 V 23 0 0 0 0 V 33 0 0 V 41 0 V 43 V 44 0 0 0 0 0 V 55
which is an element in (2) , G ′+ ) is the same as that of matrix problem (A ′
Hos ′ (B, B)(S n , S n+1 , S n+2 , C η : n, C η : n+2) = Z Z/2 Z/2 2Z 0 0 Z Z/2 0 0 0 0 Z 0 Z Z 0 Z/2 = Z = 0 0 0 2Z = 0 Z = such that the V-part W 1 = V 11 V 12 V 13 2V 14 0 V 22 V 23 0 0 0 V 33 0 V 41 0 V 43 V 44 of V 1
(2), G′) except that (-1i)-type elementary transformations are not allowed, and (i,j)-type should be replaced by (i,-j)-type or (-i,j)-type, which means that when we transpose two rows (columns) of a stripe, one row (column) α of them is replaced by −α. , G′) except that (-1i)-type elementary transformations on W_{S^{n+3}}, W_{S^n} and W_{C_η:n} are not allowed; (i,j)-type elementary transformations on W_{S^{n+3}}, W_{S^n} and W_{C_η:n} should be replaced by (i,-j)-type or (-i,j)-type.
Theorem 6.1. If M (2) ∼ = N (2) in matrix problem (A ′ (2) , G ′+ ) and M (3) ∼ = N (3) in matrix problem (A ′ (3) , G ′+ ) , then T (M (2) , M (3) ) ∼ = T (N (2) , N (3) ) in matrix problem (A ′ , G ′ ).
Proof. By the condition of the Theorem, we get that
β 2 M (2) α 2 = N (2) , β 3 M (3) α 3 = N (3)
where α 2 , α 3 ∈ A + and β 2 , β 3 ∈ B + . Let
α 2 = U 1 U 2 0 U 4 , α 3 = U ′ 1 U ′ 2 0 U ′ 4 ,
where U 1 , U ′ 1 ∈ Hos ′ (A, A)(S n+2 , S n+3 ) .
β 2 = V 1 0 V 3 V 4 , β 3 = V ′ 1 0 V ′ 3 V ′ 4
where V 1 , V ′ 1 ∈ Hos ′ (B, B)(S n , S n+1 , S n+2 , C η : n, C η : n+2). V-part of V 1 and V ′ 1 are denoted by W 1 and W ′ 1 respectively. Lemma 6.2. For any
U 1 , U ′ 1 ∈ Hos ′ (A, A)(S n+2 , S n+3 ) = Z Z/2 0 Z and W 1 , W ′ 1 ∈ V = Z Z/2 Z/2 2Z 0 Z Z/2 0 0 0 Z 0 Z 0 Z/2 Z
where U 1 , U ′ 1 , W 1 and W ′ 1 are products of elementary matrices I + aE ij (i = j, a ∈ Z), orders of U 1 , U ′ 1 (respectively orders of W 1 , W ′ 1 ) are the same, there exist invertible matrices
U ∈ Hos ′ (A, A)(S n+2 , S n+3 ) , W ∈ V such that U ≡ U 1 (mod 8) U ≡ U ′ 1 (mod 3)
and
W ≡ W 1 (mod 8) W ≡ W ′ 1 (mod 3)
.
Note. For any abelian group A, a, b ∈ A, and positive integer k, a ≡ b (mod k) means that the images of a and b are equal under the quotient homomorphism A → A/kA.
We give some remarks before the proof of this lemma. Using W and U in Lemma 6.2, let
V = C η :n+2 W 0 C η :n+2 0 V 55
where V 55 is an invertible matrix that makes V be an element in Hos ′ (B, B)(S n , S n+1 , S n+2 , C η : n, C η : n+2). And let
α = U U ′ 2 0 U ′ 4 and β = V 0 V ′ 3 V ′ 4 .
Then α and β are invertible and βM (2) (2) , N (3) ). Thus :
α = β 2 M (2) α 2 = N (2) , βM (3) α = β 3 M (3) α 3 = N (3) . Since βT (M (2) , M (3) )α = T (βM (2) α, βM (3) α) = T (NT (M (2) , M (3) ) ∼ = T (N (2) , N (3) ) in matrix problem (A ′ , G ′ ).
The proof of Lemma 6.2.
Statement (1). For any A, B ∈ SL n (Z), there is a C ∈ SL n (Z), such that C ≡ A (mod 8) and C ≡ B (mod 3).
Statement (1) follows from the following two conclusions in [11]:
SL_n(Z) --q--> SL_n(Z/24) is surjective;    SL_n(Z/24) --(q_1, q_2)--> SL_n(Z/8) × SL_n(Z/3) is an isomorphism,
where q, q 1 , q 2 are quotient maps.
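The second conclusion can be verified directly for n = 2 by brute force. The following short Python sketch (ours, purely a sanity check of the quoted statement, not of the proof in [11]) enumerates SL_2 over Z/24, Z/8 and Z/3 and checks that reduction gives a bijection:

    # Enumerate SL_2(Z/m) as 4-tuples (a, b, c, d) with ad - bc = 1 (mod m).
    from itertools import product

    def sl2(m):
        return [(a, b, c, d) for a, b, c, d in product(range(m), repeat=4)
                if (a * d - b * c) % m == 1]

    G24, G8, G3 = sl2(24), sl2(8), sl2(3)
    image = {(tuple(x % 8 for x in A), tuple(x % 3 for x in A)) for A in G24}
    # Injectivity plus matching cardinalities show the reduction map is bijective for n = 2.
    print(len(G24) == len(image) == len(G8) * len(G3))   # True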
Statement (2). Suppose that
A = I + aE ij = 1 a ij . . . 1 , B = I + bE st = 1 b st . . . 1 i = j, a ∈ Z s = t, b ∈ Z
are any two elementary matrices in V (resp. Hos ′ (A, A)(S n+2 , S n+3 ) ) of the same order, then there is an invertible block matrix C in V (resp. Hos ′ (A, A)(S n+2 , S n+3 )) such that C ≡ A (mod 8), C ≡ B (mod 3).
The proof of statement (2). We only prove the case A, B ∈ V, since the remaining case is much easier.
Note if a ij (resp. b st ) is from Z or 2Z block, then a ij = a (resp. b st = b); if a ij (resp. b st ) is from Z/2 block, then a ij (resp. b st ) is the image of a (resp. b) under the quotient map Z → Z/2.
• If b st is from Z/2 block, then b st ≡ 0 (mod 3). For a ij from Z/2 block, take C = A. For a ij from Z or 2Z block, there is a c ∈ Z, such that c ≡ a (mod 8) and c ≡ 0 (mod 3). Take C = I + cE ij .
• If b st is from Z or 2Z block,
(i) If i ≠ t or j ≠ s, there is a d ∈ Z such that d ≡ 0 (mod 8) and d ≡ b (mod 3).
If a ij is from Z/2 block, take integer c such that c ≡ a (mod 2); If a ij is from Z or 2Z block, take integer c such that c ≡ a (mod 8) and c ≡ 0 (mod 3). Then take
C = I + cE ij + dE st = (I + cE ij )(I + dE st ) which is invertible in V.
(ii) If i = t and j = s: in this case a_ij must come from a Z or 2Z block. Suppose i > j. By statement (1), there is a matrix
X = ( x_11 x_12 ; x_21 x_22 ) ∈ SL_2(Z) such that X ≡ ( 1 0 ; a 1 ) (mod 8) and X ≡ ( 1 b ; 0 1 ) (mod 3).
Take
C = I − (1 − x 11 )E jj − (1 − x 22 )E ii + x 12 E ji + x 21 E ij .
Note that x 12 ∈ 2Z and if a ∈ 2Z then x 21 ∈ 2Z, so C is an element in V. It is easy to check that C is invertible in V and C ≡ A (mod 8), C ≡ B (mod 3). The proof of the case i < j is similar.
Now the proof of the Lemma 6.2 is easily obtained by statement (2).
6.2 The indecomposable isomorphic classes of (A′_(2), G′+) and (A′_(3), G′+)
Note that the matrix problem (A ′ (2) , G ′ ) is essentially the same as the 2-primary component of the matrix problem (A 0 , G 0 ). Thus we can get the list (denoted by List(**)) of the indecomposable isomorphic classes of (A ′ (2) , G ′ ) from the List(*) by taking v to its image of the quotient map Z/24 → Z/8 or Z/12 → Z/4. It means List(**) are just the same as the List(*) except that the ranges of v are different . That is
• v ∈ {1} ⊂ Z/4 for the case (I);
• v ∈ {1, 2} ⊂ Z/8 for the case (1),(2),(4),(5),(7) of (II) ;
• v ∈ {1, 2} ⊂ Z/4 for the case (3),(6) of (II) ;
• v ∈ {1, 2, 3, 4} ⊂ Z/8 for the case (III) .
From the differences between (A′_(2), G′+) and (A′_(2), G′), we know that M ∈ Γ′(A, B)_(2) is indecomposable in (A′_(2), G′+) if and only if it is indecomposable in (A′_(2), G′). But non-isomorphic matrices of (A′_(2), G′+) may be isomorphic in (A′_(2), G′).
For example, matrices that differ only in the sign of an entry (with 1, −1 ∈ Z/8) are isomorphic under G′ but not under G′+. Here is the list of the indecomposable isomorphic classes of (A′_(2), G′+):
List (2) :
(I) S n+3 S n+2 1 Cη:n 1 n+2 0 ; (II) (1) S n+2 S n+3 S n 1 v S n+1 0 1 ; (2) S n+2 S n+3 S n 1 v S n+2 0 1 ; (4) S n+2 S n+3 S n 1 v (5) S n+3 S n v S n+1 1 ; (7) S n+3 S n v S n+2 1 ,
where v ∈ {1, 2, 3} ⊂ Z/8 for the cases (1),(2),(4),(5),(7).
(3) where v ∈ {1, 2, 3} ⊂ Z/4 for the cases (3) (6).
(III) S n+3 S n v
where v ∈ {1, 2, · · · , 7} ⊂ Z/8. Since N_3 is a matrix in which every row and every column has at most one nonzero entry, we can select from T(N_2, N_3) a set of indecomposable matrices, as follows, which covers all the indecomposable isomorphic classes of the matrix problem (A′, G′).
(IV) (1) S n+2 S n+1 1 ; (2) S n+3 S n+2 1 ; (3) S n+2 S n 1 ; (4) S n+3 S n+1 1 .(I) S n+3 S n+2 1 C η :n T 4 (a, b) n+2 0 ; S n+3 M n+2 3 r S n+2 1 0 C η :n T 4 (1, 0) 1 n+2 0 0 ; S n+3 S n+2 1 C η :n T 4 (1, 0) n+2 0 M n 3 s 1 ; S n+3 M n+2 3 r S n+2 1 0 C η :n T 4 (1, 0) 1 n+2 0 0 M n 3 s 1 0 , 1 0 ; where (a, b) ∈ Z/8 × Z/3 such that (a, b) = (0, 0). s, r ∈ N + . (IV) (1) S n+2 S n+1 1 ; (2) S n+3 S n+2 1 ; S n+3 S n+2 1 M n 3 s 1 ; (3) S n+2 S n 1 ; S n+2 M n+2 3 r S n 1 1 ; (4) S n+3 S n+1 1 ; S n+3 S n+1 1 M n 3 s 1 .
where r, s ∈ N_+. Through a detailed check using the admissible transformations of the matrix problem (A′, G′), we obtain the following.
Theorem 6.3. All indecomposable isomorphic classes of (A′, G′) are given by the following list:
(I) S n+3 S n+2 1 C η :n v n+2 0 X(ηvη) ; S n+3 M n+2 3 r S n+2 1 0 C η :n 3 1 n+2 0 0 X(η3η) r ; S n+3 S n+2 1 C η :n 3 n+2 0 M n 3 s 1 X(η3η) s ; S n+3 M n+2 3 r S n+2 1 0 C η :n 3 1 n+2 0 0 M n 3 s 1 0 X(η3η) r s ,
where v ∈ {1, 2, 3} ⊂ Z/12.
(II) (1) S n+2 S n+3 S n 1 v S n+1 0 1 X(ηηvηη) ; S n+2 S n+3 M n+2 3 r S n 1 v 1 1 S n+1 0 1 0 X(ηηv 1 ηη) r ; S n+2 S n+3 S n 1 v 1 S n+1 0 1 M n 3 s 0 1 X(ηηv 1 ηη) s ; S n+2 S n+3 M n+2 3 r S n 1 v 1 1 S n+1 0 1 0 M n 3 s 0 1 0 X(ηηv 1 ηη) r s ; (2) S n+2 S n+3 S n 1 v S n+2 0 1 X(ηηvη) ; S n+2 S n+3 M n+2 3 r S n 1 v 1 1 S n+2 0 1 0 X(ηηv 1 η) r ; S n+2 S n+3 S n 1 v 1 S n+2 0 1 M n 3 s 0 1 X(ηηv 1 η) s ; S n+2 S n+3 M n+2 3 r S n 1 v 1 1 S n+2 0 1 0 M n 3 sS n+3 S n v S n+1 1 X(vηη) ; S n+3 M n+2 3 r S n v 1 1 S n+1 1 0 X(v 1 ηη) r ; S n+3 S n v S n+1 1 M n 3 s 1 X(v 1 ηη) s ; S n+3 M n+2 3 r S n v 1 1 S n+1 1 0 M n 3 s 1 0 X(v 1 ηη) r s ;(5)
S n+3 S n v S n+2 1
X(vη)
;
S n+3 M n+2 3 r S n v 1 1 S n+2 1 0 X(v 1 η) r ; S n+3 S n v 1 S n+2 1 M n 3 s 1 X(v 1 η) s ; S n+3 M n+2 3 r S n v 1 1 S n+2 1 0 M n 3 s 1 0 X(v 1 η) r s ,
where v ∈ {1, 2, 3, 4, 5, 6} ⊂ Z/24 or Z/12 , v 1 ∈ {3, 6} ⊂ Z/24 or Z/12 and r, s ∈ N + .
(III) S n+3 S n v X(v) ; S n+3 M n+2 3 r S n v 1 1 X(v 1 ) r ; S n+3 S n v 1 M n 3 s 1 X(v 1 ) s ; S n+3 M n+2 3 r S n v 1 1 M n 3 s 1 0 X(v 1 ) r s ;
where v ∈ {1, 2, · · · , 12} ⊂ Z/24 and v 1 ∈ {3, 6, 9} ⊂ Z/24. r, s ∈ N + .
(IV ) (1)
S n+2 S n+1 1 X(η 1 ) ; (2) S n+3 S n+2 1 X(η 2 ) ; S n+3 S n+2 1 M n 3 s 1 X(η 2 ) s ; (3) S n+2 S n 1 X(ηη) 0 ; S n+2 M n+2 3 r S n 1 1 X(ηη) r 0 ; (4) S n+3 S n+1 1 X(ηη) 1 ; S n+3 S n+1 1 M n 3 s 1 X(ηη) 1s ,
where r, s ∈ N + .
It is easy to recover the polyhedra from the matrices listed in Theorem 6.3. For example, since S^{n+2} ∨ C_η ∨ M^n_{3^s} = (S^{n+2} ∨ S^n ∨ S^n) ∪_{i_2 η} e^{n+2} ∪_{i_3 3^s} e^{n+1}, the polyhedron corresponding to this matrix is (S^{n+2} ∨ S^n ∨ S^n ∨ S^{n+3}) ∪_{i_2 η} e^{n+2} ∪_{i_3 3^r} e^{n+1} ∪_{i_1 ηη + i_2 3 + i_3 1} e^{n+4} ∪_{i_3 1 + i_4 3^r} e^{n+4}, where i_t : X_t ↪ ⋁_j X_j is the canonical inclusion of the summand. Finally, from Corollary 5.3, Lemma 5.4 and Corollary 5.5, we obtain all the 2-torsion-free indecomposable homotopy types of A†B, which completes the proof of Theorem 2.4 (Main theorem).
S n+3 M n+2
Concluding remarks
In this paper, using the well-known results about homotopy classes of maps between Moore spaces and the suspended complex projective space and their compositions, we succeed in classifying the indecomposable F^4_n(2) polyhedra. However, the corresponding classification problems for the cases F^5_n(2) and F^6_n(2) are still open. We hope to return to this issue in a future publication. On the other hand, as the previous remark shows, it is crucial to understand globally a collection of spaces as a subcategory of the homotopy category of spaces. We will focus on this point in future work.
(X[n] = Σ n X) defines a natural map Hot(X, Y ) → Hot(X[n], Y [n]). Set Hos(X, Y ) = lim n→∞ Hot(X[n], Y [n]). If α ∈ Hot(X[n], Y [n]), β ∈ Hot(Y [m], Z[m]), the class β[n]·α[m] ∈ Hot(X[m + n], Z[m + n]) after stabilization is, by definition, the product βα of the classes of α and β in Hos(X, Z). Thus we obtain the stable homotopy category of polyhedra CWS. Extending CWS by adding formal negative shifts X[−n](n ∈ N) of polyhedra and setting Hos(X[−n], Y [−m]) := Hos(X[m], Y [n]), one gets the category S of
Theorem 3 . 2 .
32Let A and B be two full subcategories of CWS, suppose that Hos (B, A[1]) = 0 for all A ∈ A, B ∈ B. Consider H : A op × B → Ab, i.e. (A, B) → Hos(A, B), as an A-B-bimodule. Denote by I the ideal of category A †B consisting of morphisms which factor both through B and A[1], and by J the ideal of the category El(H) consisting of morphisms (α, β) : f → f ′ such that β factors through f ′ and α factors through f . Then (1) the functor C : El(H) → A †B (f → C f ) induces an equivalence El(H)/J ≃ A †B/I.
Corollary 3 . 3 .
33Under conditions of Theorem 3.2, let H 0 be anA-B-subbimodule of H such that f 1 af 2 = 0 whenever a ∈ El(H 0 ), f i ∈ Hos(B i , A i )(i = 1.2). Denote by A † H 0 B the full subcategory of A †B consisting of cofibers of a ∈ El(H 0 ). I H 0 = M or(A † H 0 B) ∩ I and J H 0 = M or(El(H 0 )) ∩ J . Then we have (1) J 2 H 0 = I 2 H 0 = 0;
Remark 4. 2 .
2Indecomposable homotopy types in {A[1] | A ∈ indA 0 } and indB 0 of F 4 n are not contained in List (*). An element A[1] of {A[1] | A ∈ indA 0 } (resp. B of indB 0 ) can be considered as a mapping cone of map A → *
Lemma 5. 2 .
2Let A and B be the full subcategories of A and B respectively, such thatindA = {S n+2 , S n+3 , M n+2 3 r | r ∈ N + }; indB = {S n , S n+1 , S n+2 , C η = S n ∪ η e n+2 , M n 3 s | s ∈ N + }. then ind( A † B) = ind(A †B) ∪ {M n+3 p r , M n p r , M n+1 q r | primes p = 2, 3, q = 2; r ∈ N + }.By Theorem 3.2 (3),Corollary 5.3. indF 4 n(2) = {X ∈ ind(A †B) | X is 2-torsion free } ∪ { M n+3 p r , M n p r , M n+1q r | prime p = 2, 3, prime q = 2 and r ∈ N + }.In order to get indF 4 n(2) , it suffices to compute {X ∈ ind(A †B) | X is 2-torsion free }. Let Γ : A op × B → Ab, Γ(A, B) = {g ∈ Hos(A, B) | H n+2 (g) = 0}, defined in section 3, be a sub-bimodule of A-B-bimodule H : A op × B → Ab, H(A, B) = Hos(A, B).
-vertical stripe and M n 3 s represents the M n 3 s :n-horizontal stripe. T able 7 provides the matrix set A ′ ; T able 8 and T able 9 provide the (non-trivial) admissible transformations G ′ : G ′ el : "elementary-row (column)" transformations of each horizontal (vertical) stripe .
6
Computation of indA ′ for matrix problem (A ′ , G ′ ) In this section we solve the matrix problem (A ′ , G ′ ) to get indA ′ , then we get the indF 4 n(2) by indA ′ . 6.1 p-primary component of matrix problem (A ′ , G ′ ) p = 2, 3 Let Γ ′ (A, B) (2) be the 2-primary component of Γ ′ (A, B), that means we replace Z/24 by Z/8, Z/12 by Z/4 and Z/3 by 0 in T able 7. Similarly, Γ ′ (A, B) (3) is the 3-primary component of Γ ′ (A, B), that means we replace Z/24 by Z/3, Z/12 by Z/3 and Z/2 by 0 in T able 7. Then we get the following two matrix problems (A ′ (2) , G ′ ) and (A ′ (3) , G ′ ) with admissible transformations also provided by T able 8 and T able 9.
Then the (i+aj)-type of elementary row (column) transformations corresponds to left (right) multiplication by an elementary matrix I + aE ij (I + aE ji ), and (−1i)-type of elementary row (column) transformations corresponds to left (right) multiplication by an elementary matrix I − 2E ii . Note that the (i,j)-type of transformations can be obtained by composition of (i+aj)-type and (−1i)-type of elementary transformations. Let A + be the subset consists of invertible matrices α = U 1 U 2 0 U 4 in Hos ′ (A, A),where U 1 ∈ Hos ′ (A, A)(S n+2 , S n+3 ) is a product of elementary matrices I + aE ij , i = j.
is a product of elementary matrices I + aE ij , i = j. Denoting by G ′+ the admissible transformations provided by A + and B + on Γ ′ (A, B) (2) and Γ ′ (A, B) (3) , we get two new matrix problems (A ′ (2) , G ′+ ) and (A ′ (3) , G ′+ ). The differences between (A ′ (2) , G ′+ ) and (A ′ (2) , G ′ ) : The list of non-trivial admissible transformations of matrix problem (A ′
The differences between (A ′(3) , G ′+ ) and (A ′ (3) , G ′ ) : The list of non-trivial admissible transformations of matrix problem (A ′ (3) , G ′+ ) is the same as that of matrix problem (A ′ (3)
(
where 1, -1 ∈ Z/8 ), which are isomorphic under G ′ , are not isomorphic under G ′+ .
,
For the matrix problem (A ′(3) , G ′+ ), the indecomposable isomorphic classes are −1 ∈ Z/3 and s, n ∈ N + .6.3The indecomposable isomorphic classes of (A ′ , G ′ ) and F 4 n(2) (n ≥ 5)By theorem 6.1, for any M ∈ Γ ′ (A, B), we have M ∼ = T (N 2 , N 3 ) in matrix problem (A ′ , G ′ ) for some N 2 ∈ Γ ′ (A, B)(2) and N 3 ∈ Γ ′ (A, B) an indecomposable matrix listed in the List(2) for every i. an indecomposable matrix listed in the List(3) for every j.
O 2 and O 3
3are direct products of some zero matrices.
cone of the map S n+3 ∨ M n+2 3 r → S n+2 ∨ C η ∨ M n3 s corresponding to the matrix. Since
this map is surjective. In particular, the map Hot(X[m], Y [m]) → Hos(X, Y ) is bijective if m > d − 2n + 1 and surjective if m = d − 2n + 1. From Proposition 2.1, we get the following Proposition 2.2. The suspension functor induces equivalences A k n ∼ − −→ A k n+1 for all n > k + 1. Moreover, if n = k + 1, the suspension functor A k n − −→ A k n+1 is a full representation equivalence, i.e. it is full, dense and reflects isomorphisms.
Remark 4.1. When admissible transformations above are performed on block matrix γ = (γ ij ), where block γ ij has entries from (ij)-cell of table 1, we should note that (1) If (ij)-cell of table 1 is zero, then γ ij keeps being zero after admissible transformations;
Acknowledgements: This work has been accepted for publication in SCIENCE CHINA Mathematics.

where (a, b) ∈ Z/4 × Z/3 such that a ∈ {0, 1} and (a, b) ≠ (0, 0); s, r ∈ N_+.
S^{n+3}
C_η:n   T_4(a, b)
[1] Baues H J, Drozd Y A. The homotopy classification of (n-1)-connected (n+4)-dimensional polyhedra with torsion free homology, n ≥ 5. Expositiones Mathematicae, 1999, 17: 161-180
[2] Baues H J, Drozd Y A. Classification of stable homotopy types with torsion-free homology. Topology, 2011, 40: 789-821
[3] Baues H J, Hennes M. The homotopy classification of (n-1)-connected (n+3)-dimensional polyhedra, n ≥ 4. Topology, 1991, 30: 373-408
[4] Chang S C. Homology invariants and continuous mappings. Proc. Roy. Soc. London Ser. A, 1950, 202: 253-263
[5] Cohen J M. Stable homotopy. Lect. Notes Math., vol. 165. Berlin Heidelberg New York: Springer-Verlag, 1970
[6] Drozd Y A. Matrix problems and stable homotopy types of polyhedra. Central European J. Math., 2004, 2: 420-447
[7] Drozd Y A. On classification of torsion free polyhedra. Preprint series, Max-Planck-Institut für Mathematik (Bonn), 2005, 92
[8] Drozd Y A. Matrix problems, triangulated categories and stable homotopy types. Sao Paulo Journal of Mathematical Sciences, 2010, 4: 209-249
[9] Gelfand I M, Ponomarev V A. Remarks on the classification of a pair of commuting linear transformations in a finite dimensional space. Functional Analysis and Its Applications, 1969, 3: 81-82
[10] Pan J Z, Zhu Z J. The classification of 2 and 3 torsion free polyhedra. Accepted by Acta Mathematica Sinica, English Series, 2015
[11] Shimura G. Introduction to the arithmetic theory of automorphic functions [M].
[12] Switzer R M. Algebraic Topology - Homology and Homotopy. Berlin: Springer-Verlag, 1975
[13] Unsöld H M. A^4_n-polyhedra with free homology. Manuscripta Mathematica, 1989, 65: 123-146
Abstract. We study the regularity and large-time behavior of a crowd of species driven by chemo-tactic interactions. What distinguishes the different species is the way they interact with the rest of the crowd: the collective motion is driven by different chemical reactions which end up in a coupled system of parabolic Patlak-Keller-Segel equations. We show that the densities of the different species diffuse to zero provided the chemical interactions between the different species satisfy a certain sub-critical condition; the latter is intimately related to a log-Hardy-Littlewood-Sobolev inequality for systems due to Shafrir & Wolansky. Thus, for example, when two species interact, one of which has mass less than 4π, then the 2-system stays smooth for all time independent of the total mass of the system, in sharp contrast with the well-known breakdown of one species with initial mass > 8π.
DOI: 10.1512/iumj.2021.70.8527 · arXiv:1903.02673
"https://arxiv.org/pdf/1903.02673v3.pdf"
]
| 119,607,038 | 1903.02673 | 90106a08450c14323347501bf49d829231eb558c |
MULTI-SPECIES PATLAK-KELLER-SEGEL SYSTEM
23 Aug 2021
Eitan Tadmor
1. Introduction
In this paper, we consider the multi-species parabolic-elliptic Patlak-Keller-Segel (PKS) system which models chemotaxis phenomena involving multiple bacteria species ∂ t n α +∇ · (∇c α n α ) = ∆n α , α ∈ I, −∆c α = β∈I b αβ n β , n α (x, t = 0) = n α0 (x), x ∈ R 2 .
(1.1)
Here n α , c α denote the bacteria and the chemical densities respectively. The parameters α, β ∈ I indicate different species of bacteria/chemicals. The total number of species, which is denoted |I| throughout the paper, is assumed to be finite. The first equation in the system (1.1) describes the time evolution of the bacteria density n α subject to chemical density distribution c α and diffusion. The second equation governs the evolution of the chemical density c α , which is determined by the collective effect of different species of bacteria n β . The chemical generation coefficients b αβ represent the relative impact of the bacteria distribution n β on the generation of the chemical c α .
Remark that system (1.1) covers the more general setup, in which each species has its own sensitivity to the chemo-attractant, quantified by the positive constant parameters {χ α }, ∂ t n α +χ α ∇ · (∇c α n α ) = ∆n α , α ∈ I, −∆c α = β∈I b αβ n β , n α (x, t = 0) = n α0 (x), x ∈ R 2 .
(1.1) ′ Indeed, if we let η α > 0 be scaling parameters at our disposal, we set n ′ α := η α n α and c ′ α := χ α c α , then (1.1) ′ is reduced to (1.1) for the 'tagged' variables, (n ′ α , c ′ α ), with re-scaled generation array, b ′ αβ = χ α b αβ η −1 β . In particular, choosing η β = 1/χ β shows that if B = {b αβ } is symmetric, then so is B ′ .
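As a quick check of this reduction, here is a minimal sketch of the substitution, using only the definitions n′_α = η_α n_α and c′_α = χ_α c_α stated above:
\[
\partial_t n'_\alpha + \nabla\cdot(\nabla c'_\alpha\, n'_\alpha)
  = \eta_\alpha\big(\partial_t n_\alpha + \chi_\alpha \nabla\cdot(\nabla c_\alpha\, n_\alpha)\big)
  = \eta_\alpha \Delta n_\alpha = \Delta n'_\alpha,
\qquad
-\Delta c'_\alpha = \chi_\alpha \sum_{\beta\in I} b_{\alpha\beta}\, n_\beta
  = \sum_{\beta\in I} \big(\chi_\alpha b_{\alpha\beta}\eta_\beta^{-1}\big)\, n'_\beta ,
\]
so that b′_{αβ} = χ_α b_{αβ} η_β^{-1}; with η_β = 1/χ_β this gives b′_{αβ} = χ_α b_{αβ} χ_β, which is symmetric whenever b_{αβ} is.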
In the last few years, social interaction within biofilms -a special form of bacteria colonies -has aroused increasing interest among the biology and biophysics community, [12]. In a biofilm, billions of bacteria of different species live together and create hard-to-remove infections. Different cells in the biofilm specialize in various tasks, acquiring food, defending colony and preserving genetic information included. Chemical signals and ion signals are generated to communicate information within these bacteria colonies. The multi-species PKS model (1.1) serves as an attempt to understand the biofilm. Moreover, in the Chemotaxis experiment, the bacteria involved have large genetic variation. For example, E.coli only share 30% of their genes. Equation (1.1) also serves as a more accurate model than single species dynamics, taking into account the possible genetic variation appeared in the experiments.
We recall the large literature on the single-species PKS model (1.1) (|I| = 1), referring the interested reader to the review [18] and the following works [3]-[6], [10]-[11], [19], [17], [24], [23], [26], [20]. We summarize the essential results here. The preserved total mass of the solution M := |n(t)|_{L^1} = |n_0|_{L^1} determines the long-time behavior. If the initial data n_0 has subcritical mass M < 8π and finite second moment, unique global smooth solutions exist for all time, [5], [7], [13]. If M is strictly greater than 8π and the second moment is finite, the solution blows up in finite time, [19], [22], [5]. If M = 8π, the solution aggregates to a Dirac mass as time tends to infinity, [4].
The multi-species PKS equation (1.1) has attracted increasing interest in the last decade. Its study originates in Wolansky's work [27]. Since then, a lot of research were carried out in the specific case of two interacting species, [9], [2], [21], [1], [15], [14]. Even in the two-species case, the PKS systems (1.1) behave differently from the single-species ones. Consider the PKS equation (1.1) subject to symmetric chemical generation coefficients
B := b 11 b 12 b 21 b 22 = 0 1 1 0 , (1.2)
which models two species with cross-attractions. We will prove that if one species has mass strictly less than 4π, the solutions to (1.1) exist globally regardless of the mass of the other species. However, if some critical mass constraint is violated, the solutions undergo finite time blow-up. On the other hand, for some special non-symmetric chemical generation matrices, e.g.,
B = 0 1 −2 0 ,
the solutions n := {n α } α∈I to (1.1) decay to zero unconditionally. In this paper, we quantify a global well-posedness condition for the multi-species PKS model (1.1) subject to symmetric chemical generation coefficients, and we characterize its long time behavior (for both -symmetric and non-symmetric cases), along the lines of our results announced in [16].
Before stating the main theorems, we list the basic assumptions and terminologies. The following initial conditions are always assumed (1.3) α∈I n α0 (1 + |x| 2 ) ∈ L 1 (R 2 ); n α0 log n α0 ∈ L 1 (R 2 ), ∀α ∈ I.
We store the chemical generation coefficients b_αβ and the masses M_α = |n_α(·, t)|_1 ≡ |n_{α0}|_1 in compact matrix/vector form,
(1.4)    B := (b_αβ)_{α,β∈I},    M := (M_α)_{α∈I},
and we quantify their interaction through the functional
(1.5)    Q_{B,M}[J] := Σ_{α,β∈J} b_αβ M_α M_β / Σ_{β∈J} M_β,    ∅ ≠ J ⊆ I,
so that, for J = I, Q_{B,M}[I] = ⟨BM, M⟩/|M|_1, where ⟨·, ·⟩, |·|_1 denote the Euclidean inner product and the ℓ¹-vector norm.
We first study the multi-species PKS system (1.1) subject to symmetric arrays
(1.6)    b_αβ = b_βα,    ∀α, β ∈ I.
Same as in the single species case, there exists natural dissipated free energy for the system (1.1) (1.7) E[n] = α∈I n α log n α dx + α,β∈I b αβ 4π n α (x) log |x − y|n β (y)dxdy, n := (n α ) α∈I .
The proof of the dissipation of (1.7) is postponed to the next section. We solve the equation (1.1) in the distribution sense with free energy dissipation constraint.
(1.8) E[n(t)] + α∈I t 0 R 2 n α |∇ log n α − ∇c α | 2 dxds E[n 0 ], ∀t ∈ [0, T ⋆ ).
If the equality in (1.8) is satisfied, we call it free energy dissipation equality.
The existence and blow-up theorems for (1.1) are stated as follows.
Theorem 1.1 (Global existence). Consider (1.1) subject to initial conditions (1.3) and symmetric interactions (1.6), and assume the sub-critical mass conditions
Q_{B_+,M}[I] < 8π,    (1.9a)
Q_{B_+,M}[J] < Q_{B_+,M}[I]  for all  ∅ ≠ J ⊊ I.    (1.9b)
Then the free energy solutions to (1.1) exist for all finite time.
The multi-species mass condition (1.9) recovers the threshold for global regularity of a single species (after re-scaling), χM < 8π, which is known to be sharp [19,22,5,7,13]. It also provides a sharp characterization for global regularity of two-species dynamics.
Here are three prototypical examples. Example 1.1 (Competition of two species). We consider the 2-species dynamics (1.2) with general sensitivity coefficients χ 1 , χ 2 > 0, ∂ t n 1 + χ 1 ∇·(n 1 ∇c 1 ) = ∆n 1 ,
∂ t n 2 + χ 2 ∇·(n 2 ∇c 2 ) = ∆n 2 , −∆c 1 = n 2 , −∆c 2 = n 1 .
Theorem 1.1 applies to the re-scaled variables n ′ α = n α /χ α with re-scaled masses M ′ α = M α /χ α and the corresponding re-scaled chemical generation array B = 0
χ 1 χ 2 χ 1 χ 2 0 . The sub-critical condition (1.9a) now reads ((χ 2 M 1 ) −1 + (χ 1 M 2 ) −1 ) −1 < 4π, while (1.9b) is void since Q B,M ′ [J ] = 0 for J = {1}, {2}.
In particular, if the mass of one species -either χ 2 M 1 or χ 1 M 2 is strictly less than 4π, then (1.9) holds: global regularity follows independently of the mass of the other species.
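To make the mass conditions easy to experiment with, here is a minimal numerical sketch; the helper names Q and subcritical are ours (not from the paper), and it assumes the definition of Q_{B,M}[J] recalled in (1.5). It tests (1.9a)-(1.9b) on the rescaled two-species data of this example:

    from itertools import combinations
    from math import pi

    def Q(B, M, J):
        num = sum(B[a][b] * M[a] * M[b] for a in J for b in J)
        den = sum(M[b] for b in J)
        return num / den

    def subcritical(B, M):
        """Check (1.9a)-(1.9b) with B_+ = entrywise positive part of B."""
        Bp = [[max(x, 0.0) for x in row] for row in B]
        I = range(len(M))
        QI = Q(Bp, M, I)
        if QI >= 8 * pi:                                   # (1.9a)
            return False
        return all(Q(Bp, M, J) < QI                        # (1.9b)
                   for r in range(1, len(M))
                   for J in combinations(I, r))

    # Example 1.1 in rescaled variables: one species of mass < 4*pi, the other very large.
    chi1 = chi2 = 1.0
    M1, M2 = 3.9 * pi, 1000.0
    B = [[0.0, chi1 * chi2], [chi1 * chi2, 0.0]]
    print(subcritical(B, [M1 / chi1, M2 / chi2]))          # True

With M_1 = 3.9π fixed, the test returns True for arbitrarily large M_2, in line with the conclusion above.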
Example 1.2 (Competition of three-and many-species). We consider the 3-species dynamics (1.2) with positive sensitivity coefficients χ 1 = χ 3 := χ and χ 2 ,
∂ t n α + χ α ∇ · (n α ∇c α ) = ∆n α , α ∈ {1, 2, 3} −∆ c 1 c 2 c 3 = 0 1 0 1 0 1 0 1 0 n 1 n 2 n 3 .
Theorem 1.1 applies to the re-scaled variables n ′ α = n α /χ α with re-scaled masses M ′ α = M α /χ α and the corresponding re-scaled chemical generation array B =
0 χ 1 χ 2 0 χ 1 χ 2 0 χ 2 χ 3 0 χ 2 χ 3 0 . The sub-critical condition (1.9b) with J = {1, 2} ⊂ {1, 2, 3} requires 2 M 1 M 2 M 1 /χ 1 + M 2 /χ 2 < 2 M 1 M 2 + M 2 M 3 M 1 /χ 1 + M 2 /χ 2 + M 3 /χ 3 ,
which is satisfied for all M α 's (recalling that χ 3 = χ 1 ). Similarly, the sub-critical condition
(1.9b) with J = {2, 3} ⊂ {1, 2, 3} requires 2 M 2 M 3 M 2 /χ 2 + M 3 /χ 3 < 2 M 1 M 2 + M 2 M 3 M 1 /χ 1 + M 2 /χ 2 + M 3 /χ 3 ,
holds for all M α 's; finally, (1.9b) with J = {1, 3} is void, and hence it remains to verify that (1.9a) holds
2 M 1 M 2 + M 2 M 3 M 1 /χ 1 + M 2 /χ 2 + M 3 /χ 3 < 8π;
This inequality is satisfied if
1 1/χ 2 M 1 + 1/χ 1 M 2 + 1 1/χ 3 M 2 + 1/χ 2 M 3 < 4π
For example, if χM 2 < 2π, then (1.9) holds: global regularity follows independently of the mass of the other species, M 1 and M 3 .
This can be extended to a general many species array
0 1 0 . . . . . . 1 0 1 0 . . . 0 1 . . . . . . . . . 0 . . . . . . 0 1 0 . . . . . . 1 0 .
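The same sketch (reusing the hypothetical helpers Q and subcritical from the snippet above) confirms the three-species claim numerically: with χ_1 = χ_3 = χ and χM_2 < 2π, condition (1.9) holds regardless of M_1, M_3. For instance:

    from math import pi
    chi = 1.0
    B3 = [[0.0, chi, 0.0], [chi, 0.0, chi], [0.0, chi, 0.0]]   # rescaled chain interaction array
    for M1, M3 in [(50.0, 80.0), (1e3, 1e4)]:
        M2 = 1.9 * pi / chi
        print(subcritical(B3, [M1 / chi, M2 / chi, M3 / chi])) # True in both cases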
Example 1.3 (Cooperation of two species).
Consider the 2-species dynamics [14,8] ∂ t n 1 + χ 1 ∇ · (n 1 ∇c) = ∆n 1 , ∂ t n 2 + χ 2 ∇ · (n 2 ∇c) = ∆n 2 , ∆c + n 1 + n 2 − c = 0.
Theorem 1.1 applies to the re-scaled variables n ′ α = n α /χ α with re-scaled masses M ′ α = M α /χ α and the corresponding re-scaled concentrations c ′ 1 := χ 1 c and c ′ 2 := χ 2 c, coupled through the chemical generation array B = χ 2 1 χ 1 χ 2 χ 1 χ 2 χ 2 2 . The sub-critical condition(1.9) now reads
max{χ 2 1 M ′ 1 , χ 2 2 M ′ 2 } < (χ 1 M ′ 1 + χ 2 M ′ 2 ) 2 M ′ 1 + M ′ 2 < 8π,
or -after scaling back,
(1.10) max{χ 1 M 1 , χ 2 M 2 } < (M 1 + M 2 ) 2 M 1 /χ 1 + M 2 /χ 2 < 8π.
The inequality on the right of (1.10) coincides with the first part of characterization for global existence in [14,Theorem 1]. The inequality on the left of (1.10) holds whenever
1/2 < χ_1/χ_2 < 2 (independent of the M_i's).
A simple sufficient condition for (1.9a) can be expressed in terms of the spectral radius of B_+:
(1.11)    ρ(B_+) max_α M_α < 8π,    where ρ(X)|_{X∈Symm_{I×I}} := max_α λ_α(X).
Thus, (1.11) implies that the first inequality (1.9a) is satisfied. As an example, we revisit the two-species example (1.2) (with χ 1 = χ 2 = 1). In this case, Q B,M [J ] = 0 for J I, so the second inequalities in (1.9b) are void: it is only the first part, (1.9a), that needs to be verified. Here ρ(B + ) = 1 and the sufficient condition (1.11) amounts to max α∈{1,2} M α < 8π, which suffices (yet stronger than the sharp (M −1 1 + M −1 2 ) −1 < 4π encountered before) for (1.9a) and hence the global existence of (1.2).
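A short numerical illustration of the spectral shortcut (1.11); this is only a sketch, and it assumes the bound Q_{B,M}[I] ≤ ρ(B_+) max_α M_α (cf. Proposition 3.2 below):

    import numpy as np
    from math import pi

    Bp = np.array([[0.0, 1.0], [1.0, 0.0]])            # positive part of the array in (1.2)
    rho = max(abs(np.linalg.eigvalsh(Bp)))             # spectral radius, equal to 1 here
    M = np.array([7.9 * pi, 7.5 * pi])
    print(rho * M.max() < 8 * pi)                      # True -> (1.9a) holds
    print(M @ Bp @ M / M.sum() <= rho * M.max())       # Q_{B,M}[I] <= rho(B_+) * max M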
To formulate the smoothness and uniqueness theorems, we need further physical restriction on the free energy solutions. First, the physical solutions to equation (1.1) should satisfy the conservation of mass:
|n α (t)| 1 ≡|n α (0)| 1 = M α , ∀α ∈ I, ∀t ∈ [0, T ⋆ ). (1.12a)
Moreover, by formal computation, which is postponed to the next section, we have that the total second moment of the physically relevant solutions should grow linearly
V [n] := α∈I V α (t) = α∈I n α (x, t)|x| 2 dx = α 4M α 1 − Q B,M [I] 8π t + α∈I V α (0), ∀t ∈ [0, T ⋆ ). (1.12b)
Finally, since it is well-known that the boundedness of the entropy S[n α ] := n α log n α is closely related to existence of smooth solutions, we consider free energy solutions subject to bounded entropy and free energy dissipation,
A t [n] := sup s∈[0,t] α∈I n α (x, s) log + n α (x, s)dx + α∈I t 0 n α (x, s)|∇ log n α (x, s) − ∇c α (x, s)| 2 dxds < ∞, ∀t < T ⋆ , (1.12c)
where T_⋆ denotes the maximal existence time and log_+ denotes the positive part of the function log. A similar quantity is defined in [13]. We say that a free energy solution is physically relevant if it satisfies the physical constraints (1.12a), (1.12b) and (1.12c). Now we state the theorems concerning the smoothness, uniqueness and long-time behavior of the physically relevant free energy solutions. Consider the solutions to (1.1) subject to initial conditions n_α ∈ H^s, ∀α ∈ I, s ≥ 2, and symmetric chemical generation matrices (1.6). There exists a constant C, which only depends on the initial data, such that the following estimate is satisfied:
(1.13)    Σ_{α∈I} |n_α(t)|²_2 ≤ C/(1 + t),    ∀t ∈ [0, ∞).
If the chemical generation matrix B is non-symmetric, the free energy (1.7) defined above is no longer dissipated. As a result, we cannot use the machinery developed in [5] to prove a global well-posedness theorem. However, we can still prove global existence and uniform-in-time boundedness results for the multi-species PKS systems (1.1) subject to a special class of chemical generation matrices which we call essentially dissipative matrices. The definition is as follows:
The theorem corresponding to the multi-species PKS model (1.1) subject to essentially dissipative B is as follows. Theorem 1.6 (Non-symmetric interactions). Consider the multi-species PKS system (1.1) subject to initial conditions (n_α)_0 ∈ H^s, ∀α ∈ I, s ≥ 2. Assume that the chemical generation matrix B is essentially dissipative. Then there exists a uniformly bounded H^s solution to the equation (1.1) for all time, i.e., there exists a constant
C_{H^s} = C_{H^s}({n_{α0}}_{α∈I}) such that Σ_{α∈I} |n_α(t)|_{H^s} ≤ C_{H^s} < ∞, ∀t ∈ [0, ∞).
Furthermore, there exists a constant C, which depends only on the initial data and B, such that the following estimate is satisfied,
(1.14)    Σ_{α∈I} |n_α(t)|²_2 ≤ C/(1 + t),    ∀t ≥ 0.
The paper is organized as follows: in section 2, we give preliminaries and the proof of Theorem 1.2; in section 3, we prove the existence of global free energy solutions with subcritical mass; in section 4, we prove the smoothness of the free energy solutions; in section 5, we prove the uniqueness of the free energy solutions; in section 6, we explore the long-time behavior of the free energy solutions; in the last section, we discuss the non-symmetric case.
1.1. Notations. In the paper, we use the notation A ≲ B (A, B ≥ 0) if there exists a constant C such that A ≤ CB. We will also use Σ_α to represent Σ_{α∈I} unless otherwise stated. The constants C_S, C_{HLS}, C_{lHLS}, C_{GNS} and C_N represent universal constants coming from various differential (integral) inequalities; their exact values might change from line to line. Given a vector w we let |w|_p denote its ℓ^p norm; given a vector function w(·) we let |w(·)|_X denote its norm in the vector space X. In particular, |w(·)|_p denotes the usual L^p norm, and the distinction between ℓ^p and L^p spaces is clear from the context.
2. Preliminaries
Two quantities are crucial in the analysis of the multi-species PKS dynamics (1.1) -the free energy E[n] (1.7) and the second moment α V α (1.12b). In this section, we calculate the time evolution of these two quantities formally and give the proof of Theorem 1.2.
Same as in the single species case, the free energy E[n] (1.7) is formally dissipated under the equation (1.1).
Lemma 2.1. Consider smooth solutions n to the equation (1.1) subject to initial data n_0 and symmetric B. The free energy E[n] (1.7) is decreasing and satisfies the following free energy dissipation equality:
(2.1) E[n(t)] = E[n 0 ] − α∈I t 0 n α |∇ log n α − ∇c α | 2 dxds =: E[n 0 ] − t 0 D[n(s)]ds.
Proof. We apply the equation (1.1) and the symmetric condition (1.6) to calculate the time evolution of the free energy E[n]
d dt E[n] = α (n α ) t log n α − α c α (n α ) t 2 dx − α (c α ) t n α 2 dx = α (n α ) t log n α − α c α (n α ) t 2 dx + α,β b αβ 4π (n β ) t (y) log |x − y|n α (x)dxdy = α (n α ) t log n α − α c α (n α ) t 2 dx + α,β b αβ 4π (n α ) t (x) log |x − y|n β (y)dxdy = α (n α ) t (log n α − c α )dx. (2.2)
Since the equation (1.1) can be rewritten as
∂ t n α = ∇ · (n α (∇ log n α − ∇c α )),
applying integration by parts on the time evolution of E[n] (2.2) yields
d dt E[n] = − α n α |∇ log n α − ∇c α | 2 dx 0.
Now by integration in time, we obtain (2.1).
Next we give the time evolution of the second moment.
Lemma 2.2. Consider the smooth solutions n to the equation (1.1) subject to smooth initial data n 0 ∈ H s , s 2 and symmetric chemical generation matrix B. The time evolution of the total second moment α∈I V α (1.12b) satisfies the following equality
d dt V [n] = d dt α∈I V α = α∈I 4M α 1 − Q B,M [I] 8π , (2.3) where Q B,M is defined in (1.5).
Proof. Applying the equation (1.1), the definition of Q B,M (1.5) and the symmetry condition (1.6), we calculate the time evolution of the total second moment as follows
d dt α V α = α 4M α + α 2x · (∇c α n α )dx = α 4M α − α,β b αβ 1 2π 2x · (x − y) |x − y| 2 n β (y)n α (x)dxdy = α 4M α − α,β b αβ 1 4π 2(x − y) · (x − y) |x − y| 2 n β (y)n α (x)dxdy = α 4M α − α,β b αβ M α M β 2π = α 4M α 1 − Q B,M [I] 8π .
This completes the proof of the lemma.
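For later reference, integrating (2.3) in time (its right-hand side is constant) gives the linear law quoted in (1.12b):
\[
V[n](t) = \sum_{\alpha\in I} V_\alpha(0)
        + \Big(\sum_{\alpha\in I} 4M_\alpha\Big)\Big(1-\frac{Q_{B,M}[I]}{8\pi}\Big)\,t ,
\]
which is negative-sloped precisely when Q_{B,M}[I] > 8π; this is the mechanism behind Theorem 1.2 below.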
Remark 2.1. Note that in the proofs of these two lemmas, the symmetry of the matrix B is always assumed. In the non-symmetric case, i.e., b_αβ ≠ b_βα, neither of these lemmas can be applied. This is the main difficulty we face when applying the free energy machinery in the non-symmetric case.
Proof of Theorem 1.2. Suppose that the solution n is smooth for all time. By the assumption Q B,M [I] > 8π, we have that the time evolution (2.3) is a strictly negative constant. As a result, the total second moment will decrease to zero at a finite time T ⋆ while the L 1 norm of the solution α∈I |n α | 1 is preserved. At time T ⋆ , the smoothness assumption of the solution will be contradicted. Hence the solution must lose H s regularity before T ⋆ .
3. Global existence for subcritical data
3.1. A priori estimate on entropy. In the case of a single species, the analysis of the PKS equation proceeds by combining an a priori estimate of the free energy (1.8) together with a logarithmic Hardy-Littlewood-Sobolev inequality to recover a uniform-in-time a priori bound on the entropy, which in turn yields existence of the free energy solution for all time.
In the present context of a coupled system of PKS equations, one seeks the corresponding log-Hardy-Littlewood-Sobolev inequality for systems which guarantees a finite lower bound of the multi-species functional Ψ[n], n := {n α } α∈I ,
(3.1) Ψ[n] := α∈I R 2 n α log n α dx + 1 4π α,β∈I a αβ R 2 ×R 2 n α (x) log |x − y|n β (y)dxdy,
overall n α 's in the function space
Γ M (R 2 ) = (n α ) α∈I n α 0, R 2 n α | log n α |dx < ∞, R 2 n α dx = M α , R 2 n α log(1 + |x| 2 )dx < ∞, ∀α ∈ I . (3.2)
To this end we follow [25]. For an arbitrary subset of our index set, J ⊂ I, one defines the quantity, [25, p. 414], if the condition Λ J 0 is violated for some ∅ = J I, then a scaling argument yields that the functional Ψ[n] on the sphere S 2 has no lower bound. One might be able to use this property to construct blow-up solutions on the plane, when the following strict monotonicity fails (recalling the functional Q B + ,M in (1.5)
(3.3) Λ J (M) := 8π α∈J M α − α,β∈J a αβ M α M β , M := (M α ) α∈I , |I| < ∞.Q B + ,M (J ) < Q B + ,M (I) for all J I.
The above theorem yields the following.
Q B + ,M [J ] < Q B + ,M [I] < 8π, ∅ = J I,
then the total entropy α n α log n α dx is bounded for all finite time.
Remark 3.2. We will not lose generality if we assume that B + is not a zero matrix. If all the entries in B is negative, classical techniques are sufficient to analyze the system.
Proof. First we rewrite the free energy dissipation relation (2.1) as follows
E[n 0 ] E[n] α∈I n α log n α dx + α,β∈I (b αβ ) + 4π n α (x) log |x − y|n β (y)dxdy − α,β∈I (b αβ ) − 4π |x−y| 1 n α (x) log |x − y|n β (y)dxdy =(1 − θ) α∈I n α log n α dx + θ α∈I n α log n α dx + 1 4π α,β∈I (b αβ ) + θ n α (x) log |x − y|n β (y)dxdy − α,β∈I (b αβ ) − 4π (M α V β + M β V α ). (3.6) Define a αβ := (b αβ ) + /θ 0, 0 < θ < 1.
In order to apply Theorem 3.1, we need to check the condition (3.4). By choosing θ properly, we make sure that the first condition Λ I (M) = 0 in (3.4) is satisfied. Direct calculation yields that
Λ I (M) = 0 ⇔θ = α,β∈I (b αβ ) + M α M β 8π β∈I M β = Q B + ,M [I] 8π .
Note that the assumption Q B + ,M (I) < 8π guarantees that θ < 1. Next we check the remaining conditions in (3.4). Recalling the definition of θ and Q B + ,M [J ], the following condition guarantees the existence of the minimizer of Ψ in Γ M (R 2 )
Q_{B_+,M}[I] > Q_{B_+,M}[J],    ∀ ∅ ≠ J ⊊ I,
⇔ Λ_J(M) = 8π Σ_{β∈J} M_β − (8π Σ_{β∈I} M_β) · (Σ_{α,β∈J} (b_αβ)_+ M_α M_β) / (Σ_{α,β∈I} (b_αβ)_+ M_α M_β) > 0,    ∀ ∅ ≠ J ⊊ I,
⇔ Λ_J(M) > 0,    ∀ ∅ ≠ J ⊊ I.
E[n 0 ] E[n] (1 − θ) α∈I n α log n α − θC lHLS − 1 4π α,β∈I (b αβ ) − (M α V β + M β V α ), ⇒ α∈I n α log n α dx E[n 0 ] + θC lHLS + 1 2π α,β (b αβ ) − M α V β 1 − θ < ∞.
This completes the proof.
The proof above shows that the log-HLS argument does not apply if supp(B) ⊊ I, for then we can choose J = supp(B) ⊊ I for which
Λ_J(M) = 8π Σ_{β∈J} M_β − (8π Σ_{β∈I} M_β) · (Σ_{α,β∈J} (a_αβ)_+ M_α M_β) / (Σ_{α,β∈I} (a_αβ)_+ M_α M_β) < 0.
Proposition 3.2. Let A = (a_αβ)_{α,β∈I} be a symmetric matrix with positive entries a_αβ ≥ 0; then Q_{A,M}[I] ≤ ρ(A) max_α M_α.
To verify (3.4), we express A in terms of its spectral decomposition
A = Σ_α λ_α w_α w_α^*, where {(λ_α, w_α)} is the orthonormal eigensystem of A. We compute
⟨AM, M⟩ = Σ_α λ_α |⟨M, w_α⟩|² ≤ (max_α λ_α) |M|²_2 ≤ (max_α λ_α) |M|_1 max_α M_α,
and the result follows: Q_{A,M}[I] ≤ ρ(A) max_α M_α.
3.2. Local existence and extension theorems. Before introducing the local existence theorems of the free energy solutions, we regularize the system (1.1) by appropriately truncating the singularity in the convolution kernel ∇K = ∇(−∆) −1 :
K^ε(z) := K^1(|z|/ε) − (1/2π) log ε;    K^1(|z|) := −(1/2π) log|z| for |z| ≥ 4,    K^1(|z|) := 0 for |z| ≤ 1,
to get the following regularized multi-species PKS system
∂ t n ǫ α + ∇ · (∇c ǫ α n ǫ α ) = ∆n ǫ α , (3.7a) c ǫ α = K ǫ * β∈I b αβ n β , (3.7b) n ǫ α (t = 0) = min{n α0 , ǫ −1 }, ∀α ∈ I, x ∈ R 2 . (3.7c)
Note that the masses of the solutions M α = |n α | 1 are preserved in time.
Since |∇K ǫ | ∞ is bounded for any fixed positive ǫ, applying the Young's convolution inequality yields that the vector field ∇c α is bounded in L ∞ , i.e.,
Σ_α |∇c_α|_∞ ≤ Σ_{α,β} |∇K^ε|_∞ |b_αβ| M_β.
Now standard convection-diffusion PDE theory can be applied to show that the regularized system (3.7) admits global solutions in
L 2 ((0, T ]; H 1 ) ∩ C((0, T ]; L 2 ).
The following two propositions are the main local existence theorems.
Proposition 3.3.
(Criterion for Local Existence) Let (n ǫ α ) α∈I be the solutions to the regularized multi-species PKS system (3.7) on the time interval [0, T ) subject to initial constrain (1.3). If the total entropy α S[n ǫ α ] is bounded from above uniformly in ǫ, i.e.,
Σ_{α∈I} S[n^ε_α(t)] = Σ_{α∈I} ∫ n^ε_α(x, t) log n^ε_α(x, t) dx ≤ C_{L log L} < ∞,    ∀t ∈ [0, T],    (3.8)
then there exists a subsequence of {(n ǫ α ) α∈I } ǫ>0 converging in the L 2 t L 2
x strong topology to a non-negative free-energy solution to the multi-species PKS system (1.1) subject to initial data (n α ) 0 on the time interval [0, T ]. Proposition 3.4. (Blow-up Criterion of Free-energy Solutions) Consider the multi-species PKS system (1.1) subject to initial condition (1.3). There exists a maximal existence time T * > 0 for the free-energy solution to the system (1.1). Moreover, if T * < ∞, then there exists an α ∈ I such that
lim t→T * R 2 n α (t) log n α (t)dx = ∞.
Proof of proposition 3.3. The proof is divided into three main steps.
• STEP #1. Here we prove A priori estimates on mass distribution n ǫ and chemical distribution c ǫ α to prepare for the latter steps. For the readers' convenience, we summarize the uniform in ǫ estimates we obtained in this step:
α |(1 + |x| 2 )n ǫ α | L ∞ t (0,T ;L 1 x ) C V ({(V α ) 0 } α∈I , M) < ∞; (3.9a) α |n ǫ α log ǫ n ǫ α | L ∞ t (0,T ;L 1 x ) C(C L log L , C V ) < ∞; (3.9b) α |∇ √ n α | 2 L 2 t (0,T ;L 2 x ) C(C L log L , C V ) < ∞; (3.9c) α | √ n α ∇c α | 2 L 2 t (0,T ;L 2 x ) C(C L log L , C V ) < ∞. (3.9d)
Before proving these estimates, we recall the following Gagliardo-Nirenberg-Sobolev inequality, which is applied several times in the sequel:
|u| 2 L p C GN S |∇u| 2−4/p L 2 |u| 4/p L 2 , ∀u ∈ H 1 , ∀p ∈ [2, ∞). (3.10)
We start by proving the second moment control of the solutions (3.9a). Similar to the calculation in the proof of Lemma 3.11, we have the following:
d dt α n α |x| 2 dx 4 α M α + α,β (b αβ ) − M α M β 2π , (3.11)
from which the estimate (3.9a) follows directly.
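For completeness, we recall the standard computation behind (3.11) (our own summary; it uses the symmetry of B and, for the unregularized kernel, the identity (x − y) · ∇K(x − y) = −1/(2π), the regularized case being analogous):
\[
\frac{d}{dt}\sum_\alpha \int n_\alpha |x|^2\,dx
= 4\sum_\alpha M_\alpha
+ \sum_{\alpha,\beta} b_{\alpha\beta}\iint (x-y)\cdot\nabla K(x-y)\, n_\alpha(x)\, n_\beta(y)\,dx\,dy
\le 4\sum_\alpha M_\alpha + \frac{1}{2\pi}\sum_{\alpha,\beta} (b_{\alpha\beta})_-\, M_\alpha M_\beta .
\]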
To prove the L 1 control of n ǫ α log n ǫ α (3.9b), we recall the following lemma.
Lemma 3.1. For any g such that (1 + |x| 2 )g ∈ L 1 + (R 2 ), we have g log − g ∈ L 1 (R 2 ) and R 2 g log − gdx 1 2 R 2 g(x)|x| 2 dx + log(2π) R 2 g(x)dx + 1 e . (3.12)
Proof. The proof of the lemma can be found in the paper [5] and [4]. We refer the interested readers to these papers for further details.
The estimate (3.12) yields that
|n ǫ α log n ǫ α |dx n ǫ α (log n ǫ α + |x| 2 )dx + 2 log(2π)M α + 2 e C L log L + C V + 2 log(2π)M α + 2 e .
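The first inequality in the display above uses the elementary identity (a remark of our own)
\[
\int |n^\epsilon_\alpha \log n^\epsilon_\alpha|\,dx
= \int n^\epsilon_\alpha \log n^\epsilon_\alpha\,dx + 2\int n^\epsilon_\alpha \log_- n^\epsilon_\alpha\,dx ,
\]
after which (3.12) controls the negative-part term and the entropy bound (3.8) controls the remainder.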
As a result, we prove (3.9b).
Next we show the bound (3.9c) on |∇ √ n α | 2 L 2 t (0,T ;L 2 x ) . This term naturally arises when we calculate the time evolution of the entropy α S[n α ]:
d dt α∈I S[n α ] = − 4 α∈I |∇ √ n α | 2 dx + α,β∈I b αβ n α n β dx. (3.13)
If we integrate (3.13), the quantity α |∇ √ n α | 2 L 2 t (0,T ;L 2 x ) will appear on the right hand side. Therefore, we need to estimate the other terms in (3.13). Before going into the detailed estimates of the second term on the right hand side of (3.13), we recall that the total mass in the superlevel set can be estimated in terms of the entropy bound C L log L :
α∈I nα K n α dx 1 log(K) α∈I |n α log n α |dx C L log L log(K) =: η(K). (3.14)
If K is chosen large compared to the bound C L log L , the constant η(K) will be small. It is classical to use this fact to control the nonlinearity in the PKS equation. Now the second term on the right hand side of (3.13) can be estimated using Hölder's inequality, Gagliardo-Nirenberg-Sobolev inequality and Young's inequality as follows:
α,β b αβ n α n β dx max α,β |b αβ | α |n α | 2 β |n β | 2 max α,β |b αβ | α |n α 1 nα K | 2 + α M 1/2 α K 1/2 2 (3.15) 2 max α,β |b αβ | α |n α 1 nα K | 1/4 1 |n α | 3/4 3 2 + 2 max α,β |b αβ |I|K α M α η(K) 1/2 C GN S max α,β |b αβ | α M 1/2 α α |∇ √ n α | 2 2 + 2 max α,β |b αβ |I|K α M α .
Combining (3.13) and (3.15), we have the following estimate on the time evolution of α S[n α ]:
d dt α S[n α ] − α 4 − η(K) 1/2 C GN S max α,β |b αβ | α M 1/2 α |∇ √ n α | 2 2 + 2 max α,β |b αβ | · |I|K α M α .
The coefficient −(4 − η(K) 1/2 C GN S max α,β |b αβ |( α M 1/2 α )) is negative for K large enough. Therefore, for large enough K, we have the following estimate:
α T 0 |∇ √ n α | 2 dxdt S[n(0)] − S[n(T )] + 2 max α,β |b αβ | · |I|K α M α T 4 − η(K) 1/2 C GN S max α,β |b αβ |( α M 1/2 α ) < ∞. (3.16)
Since the entropy S[n(T )] is bounded, the right hand side is bounded. This completes the proof of (3.9c).
Finally, we prove the estimate (3.9d). The term | √ n ǫ α ∇c ǫ α | 2 2 naturally arises when we calculate the time evolution of α n ǫ α c ǫ α dx
1 2 d dt α n ǫ α c ǫ α dx = α n ǫ α ∆c ǫ α + α n ǫ α |∇c ǫ α | 2 dx.
Integration in time yields that
α T 0 n ǫ α |∇c ǫ α | 2 dxdt = 1 2 n ǫ α (T )c ǫ α (T ) − 1 2 n ǫ α (0)c ǫ α (0)dx − α T 0 n ǫ α ∆c ǫ α dxdt.
(3.17)
We first estimate the first term on the right hand side of (3.17). Applying the estimate (3.9b) of |n ǫ α log n ǫ α | L ∞ t (0,T ;L 1 x ) , the relation c ǫ α = β b αβ K ǫ * n ǫ β and Young's inequality ab ≤ e a−1 + b ln b, ∀a, b ≥ 1, we deduce that
|c ǫ α (x)| 1 2π β∈I |b αβ | |x−y| 1 |K ǫ (|x − y|)n β (y)|dy + 1 2π β∈I |b αβ | |x−y| 1 |K ǫ (|x − y|)n β (y)|dy β∈I |b αβ | |x−y| 1
(1 + n β (y)) log(1 + n β (y)) + 1 |x − y| dy + β∈I |b αβ | (log(1 + |x|) + log(1 + |y|))n β (y)dy
β∈I |b αβ |(C L log L + M β + 1 + V β + M β log(1 + |x|)).
Combining it with the second moment control (3.9a), we have that n α c α (t) is bounded independent of ǫ on time interval [0, T ]:
n α c α dx β∈I |b αβ |(C L log L + M β + 1 + V β )M α + β∈I |b αβ |M β V α < ∞. (3.18)
The last term on the right hand side of (3.17) can be estimated using the L 2 ([0, T ] × R 2 ) estimate of ∇ √ n ǫ α (3.9c) and the relation
d dt α S[n ǫ α (t)] = −4 α |∇ n ǫ α | 2 dx + α∈I n ǫ α (−∆c ǫ α )dx.
Time integration of this relation yields that
α∈I T 0 n ǫ α (−∆c ǫ α )dxdt = α S[n α ǫ (T )] − α S[n α ǫ (0)] + 4 α T 0 |∇ n ǫ α | 2 dxdt C(C L log L ) < ∞.
Combining this estimate, (3.17) and (3.18), we completed the proof of (3.9d). In this way, we obtained estimates on the two terms appearing in the dissipation of the free energy.
• STEP #2. Passing to the limit in L 2 t (δ, T ; L 2 ) for any δ > 0. Here we would like to use the Aubin-Lions compactness lemma:

Lemma 3.2 (Aubin-Lions lemma, [4]). Take T > 0 and 1 < p < ∞. Assume that (f n ) n∈N is a bounded sequence of functions in L p ([0, T ]; H) where H is a Banach space. If (f n ) n∈N is also bounded in L p ([0, T ]; V ) where V is compactly embedded in H and (∂f n /∂ t ) n∈N ⊂ L p ([0, T ]; W ) uniformly with respect to n ∈ N where H is imbedded in W , then (f n ) n∈N is relatively compact in L p ([0, T ]; H).

Our goal is to find the appropriate spaces V, H, W for (n ǫ α ) ǫ>0 . We subdivide the proof into steps; each step determines one space in the lemma. We will show that the following estimates are satisfied by the regularized solutions with the constant C L 2 t H 1 x independent of the regularization parameter ǫ:
|n ǫ α | L 2 t ([δ,T ],L 2 x ) ≤ C L 2 t H 1 x < ∞, |∇n ǫ α | L 2 t ([δ,T ],L 2 x ) ≤ C L 2 t H 1 x < ∞, ∀α ∈ I.
We begin with the H = L 2 -estimate of α |n ǫ α | 2 L 2 t ([δ,T ];L 2 x ) . Here we prove that the solutions n ǫ α (t), ∀α ∈ I, are L 2 integrable in space for all t ∈ [δ, T ]. If the initial data n α0 is L 2 integrable for all α, the solutions to the regularized equation (3.7) stay in L 2 for all time. This is the content of Lemma 3.3. However, the initial constraint (1.3) does not guarantee L p boundedness, so we prove the hypercontractivity property of the equation (1.1), which yields that the solutions become L 2 integrable after an arbitrarily small amount of time δ > 0. This is the content of Lemma 3.4.

Lemma 3.3. Consider the regularized multi-species PKS system (3.7) subject to initial condition n α0 ∈ L p , ∀α ∈ I, ∀p ∈ [1, ∞). If the assumptions in Proposition 3.3 are satisfied, then the solutions to the system (3.7) are bounded in L p for all t ∈ [0, T ].
Proof. The p = 1 case is equivalent to the fact that the regularized equations preserve mass.
We do the L p energy estimate formally, i.e., we assume −∆c α = β b αβ n β , and refer the interested readers to the paper [5] for detailed justifications. During the calculation, we will use the following natural implication of the GNS inequality
(f − K) p+1 + dx C GN S (f − K) + dx |∇(f − K) p/2 + | 2 dx C GN S |f log f | 1 log K |∇(f − K) p/2 + | 2 dx =: C GN S η(K) |∇(f − K) p/2 + | 2 dx. (3.19)
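The middle inequality in (3.19) uses the following elementary bound on the mass above level K (our own remark; it is the same Chebyshev-type bound as in (3.14)):
\[
\int (f-K)_+\,dx \;\le\; \int_{\{f\ge K\}} f\,dx
\;\le\; \frac{1}{\log K}\int_{\{f\ge K\}} f\log f\,dx
\;\le\; \frac{|f\log f|_1}{\log K}, \qquad K>1 .
\]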
Note that if |f log f | 1 is bounded, η(K) is small if one choose K large. Now we estimate the time evolution of α |(n α − K) + | p p with (3.19) as follows
1 p α d dt (n α − K) p + dx = − 4 p − 1 p 2 α |∇(n α − K) p/2 + | 2 dx − α 1 p ∇c α · ∇(n α − K) p + dx − α ∆c α n α (n α − K) p−1 + dx − 4 p − 1 p 2 α |∇(n α − K) p/2 + | 2 dx + p + 1 p α,β |b αβ | (n α − K) p + (n β − K) + dx + p + 1 p K α,β |b αβ | (n α − K) p + dx + K α,β |b αβ | (n β − K) + (n α − K) p−1 + dx + K 2 α,β |b αβ |(n α − K) p−1 + dx
and hence we find
1 p α d dt (n α − K) p + dx − 4 p − 1 p 2 α |∇(n α − K) p/2 + | 2 dx + max α β |b αβ | C GN S α |(n α − K) + | 1 |∇(n α − K) p/2 + | 2 2 + C p (K, B, M)|(n α − K) + | p p + C p (K, B, M) − 4(p − 1) p 2 + η(K) max α β |b αβ | C GN S α |∇(n α − K) p/2 + | 2 dx + C p (K, B, M) α |(n α − K) + | p p + C p (K, B, M).
Due to the estimates (3.9b) and (3.14), the constant η(K) can be made small enough such that the leading order term is negative, and the estimate can be further simplified as follows:
(3.20) d dt α |(n α − K) + | p p C p (K, B, M) α |(n α − K) + | p p + C p (K, B, M).
Now we see that for any finite time interval [0, T ], the L p norm is bounded uniformly independent of ǫ.
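For completeness, we record how (3.20) yields the claimed L p bound (our own recap of the standard Gronwall step; C p denotes the constant from (3.20)):
\[
\sum_\alpha |(n_\alpha-K)_+(t)|_p^p \;\le\; e^{C_p t}\Big(\sum_\alpha |(n_{\alpha 0}-K)_+|_p^p + 1\Big),
\qquad
|n_\alpha(t)|_p^p \;\le\; 2^{p-1}\Big(|(n_\alpha-K)_+(t)|_p^p + K^{p-1} M_\alpha\Big),
\]
where the second inequality follows from n_α ≤ (n_α − K)_+ + K together with ∫ min(n_α, K)^p dx ≤ K^{p−1} M_α.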
Lemma 3.4. Consider the regularized multi-species PKS system (3.7) subject to initial data n 0 satisfying (1.3). If the assumptions in Proposition 3.3 is satisfied, then there exists a continuous function h p ∈ C(R + ) such that for almost any t > 0, |n(·, t)| p h p (t).
Proof. The proof is similar to the corresponding proof in [5] with some modifications. For the sake of completeness, we sketch the proof. First, we fix t > 0 and 1 < p < ∞, and define
(3.21) q(s) := 1 + (p − 1) s t , q ∈ [1, p] for s ∈ [0, t].
Next, we define the following quantities:
F α (s) = R 2 (n α (x, s) − K) q(s) + dx 1/q(s) , (3.22)
F(s) = α F q(s) α (s) 1/q(s) . (3.23)
By taking the s derivative of the function F q(s) (s), we obtain the following relation
d ds α (n α (x, s) − K) q(s) + dx = q(s)F q(s)−1 d ds F + dq(s)/ds q(s) F q(s) log F q(s) .
Combining it with the log-Sobolev inequality
f 2 log f 2 f 2 dx dx 2σ |∇f | 2 dx − (2 + log(2πσ)) f 2 dx, ∀σ > 0,
and the same argument to prove (3.20), we end up with the following estimate, inside which the notation (·) ′ is used to represent d ds ,
F q−1 d dt F = q ′ q 2 α (n α − K) q + log (n α − K) q + F q dx + α (n α − K) q−1 + ∂ s n α dx q ′ q 2 α (n α − K) q + log (n α − K) q + F q α dx + α (n α − K) q−1 + ∂ s n α dx α 2σq ′ q 2 − 4 q − 1 q 2 + C(B)η(K) |∇(n α − K) q/2 + | 2 2 + α (−2 − log(2πσ)) q ′ q 2 + C(q, B, M, K) (n α − K) q + dx + C(q, B, M, K). (3.24)
Here the constants C(q, B, M, K) depends on the parameter q. However, since q is lying in a compact set [0, p] on the time interval [0, t], it can be chosen such that it only depends on the fixed parameter p. Now by taking σ small enough, we end up with the following differential inequality
F q−1 d ds F (−2 − log(2πσ)) q ′ q 2 + C(p, B, M, K) F q + C(p, B, M, K).
Combining the fact that F(0) is finite and the coefficient (−2 − log(2πσ)) q ′ q 2 + C(p, B, M, K) is time integrable on [0, t] and applying standard ODE estimates, we obtain that F h p (t). This finishes the proof of the lemma.
We now turn to the V -space estimates, where V := H 1 ∩ {f | f |x| 2 dx < ∞}, i.e., the control of α |∇n ǫ α | 2 L 2 t ([δ,T ];L 2 x ) . In order to get the L 2 t ([δ, T ]; L 2 x ) control of ∇n ǫ α , we first calculate the time evolution of |n ǫ α | 2 2 :
d dt α |n ǫ α | 2 dx = − 2 α |∇n ǫ α | 2 dx + 2 α ∇n ǫ α · ∇c ǫ α n ǫ α dx.
Integration in time yields that
α |n ǫ α (T )| 2 2 − α |n ǫ α (δ)| 2 2 + α |∇n ǫ α | 2 L 2 t ([δ,T ];L 2 x ) ≤ α |n ǫ α ∇c ǫ α | 2 L 2 t ([δ,T ];L 2 x ) . (3.25)
We see that since |n ǫ α | L ∞ t (δ,T ;L 2 x ) is bounded independent of ǫ, if the right hand side α |n ǫ α ∇c ǫ α | L 2 t ([δ,T ];L 2 x ) is bounded, then |∇n ǫ α | L 2 t (δ,T ;L 2 x ) will be bounded independent of ǫ. By the HLS inequality, we have that
|∇c ǫ α | 4 C HLS β∈I |b αβ | · |n ǫ β | 4/3 .
As a result, we have that
|n ǫ α ∇c ǫ α | 2 |n ǫ α | 4 |∇c ǫ α | 4 β C HLS |b αβ | · |n ǫ α | 4 |n ǫ β | 4/3 .
Since n ǫ α is bounded independent of ǫ in the space L ∞ t (δ, T ; L p x ), ∀α ∈ I, ∀p ∈ (1, ∞), the product n ǫ ∇c ǫ is bounded in L ∞ t (δ, T ; L 2 x ). Combining this fact and the estimate (3.25), we have that α |∇n ǫ α | 2 L 2 t (δ,T ;L 2 x ) is bounded independent of ǫ. Define the space V as H 1 ∩ {f | f |x| 2 dx < ∞}. A bounded set in the space V is precompact in L 2 . Combining the second moment bound (3.11) and the H 1 bound of (n ǫ α ) α∈I , we have that the set (n ǫ α ) ǫ>0 , ∀α ∈ I, lies in a compact subspace of L 2 for almost every t ∈ [δ, T ]. Finally, the W -estimate, where W := H −1 , i.e., the bound on α |∂ t n ǫ α | 2 L 2 t (δ,T ;H −1 x ) , is relatively straightforward thanks to the equation (1.1).
• STEP #3. Proof of the free energy dissipation inequality (1.8). Since the solution to the regularized multi-species PKS system has a decreasing free energy E[n ǫ ], we have that
(3.26) E[n ǫ (δ)] E[n ǫ (t)] + α t δ n ǫ α |∇ log n ǫ α − ∇c ǫ α | 2 dxdt, ∀t ∈ [δ, T ].
In order to show (1.8), we need to show proper convergence for each single term in (3.26). We first decompose the free energy dissipation term as follows:
α T δ R 2 n ǫ α |∇ log n ǫ α − ∇c ǫ α | 2 dxdt =4 α T δ R 2 |∇ n ǫ α | 2 dxdt + α T δ R 2 n ǫ α |∇c ǫ α | 2 dxdt (3.27) − 2 α,β T δ R 2 b αβ n ǫ α n ǫ β dxdt.
By the convexity of f → R 2 |∇ √ f | 2 dx, weak semi-continuity and the strong convergence of n ǫ α in L 2 t ([δ, T ]; L 2 x ), we have that the first two terms in (3.27) satisfy the following inequalities:
T δ R 2 |∇ √ n α | 2 dxdt ≤ lim inf ǫ→0 + T δ R 2 |∇ n ǫ α | 2 dxdt, (3.28)
T δ R 2 n α |∇c α | 2 dxdt = lim ǫ→0 + T δ R 2 n ǫ α |∇c ǫ α | 2 dxdt.

Proof of proposition 3.4. We prove by contradiction. Assume that at time T ⋆ < ∞, the entropy α S[n ǫ α (T ⋆ )] is uniformly bounded with respect to ǫ. First, from the equation (3.7), we directly calculate the time evolution of the entropy:
d dt α n ǫ α log n ǫ α dx = − α 4 |∇ √ n ǫ α | 2 dx − α,β b αβ n ǫ α ≤K n ǫ α ∆(K ǫ * n ǫ β )dx − α,β b αβ n ǫ α >K n ǫ α ∆(K ǫ * n ǫ β )dx =: − α 4 |∇ n ǫ α | 2 dx + I + II. (3.30)
The term I in (3.30) can be estimated as follows:
I α,β K|b αβ |∆K ǫ | 1 M β . (3.31)
Recall that |∆K ǫ | 1 is bounded independent of ǫ, so term I is bounded independent of ǫ. For the term II in (3.30), we estimate it using the Hölder's inequality, Gagliardo-Nirenberg-Sobolev inequality and Young's inequality as follows:
II α,β |b αβ | n ǫ α K (n ǫ α ) 2 dx + |∆K ǫ | 2 1 * |n ǫ β | 2 2 α,β |b αβ | n ǫ α K n ǫ α dx 1/2 |n ǫ α | 3/2 3 + |∆K ǫ | 2 1 M β K + n ǫ β K (n ǫ β ) 2 dx α,β |b αβ | S 1/2 + [n α ] (log K) 1/2 C GN S |n ǫ α | 1/2 1 |∇ n ǫ α | 2 2 (3.32) +C GN S |∆K ǫ | 2 1 S 1/2 + [n β ] (log K) 1/2 M 1/2 β |∇ n ǫ β | 2 2 + |∆K ǫ | 2 1 M β K α,β |b αβ |C GN S (1 + |∆K ǫ | 2 1 ) S 1/2 + [n α ] (log K) 1/2 M 1/2 α |∇ n ǫ α | 2 2 + α,β |b αβ | · |∆K ǫ | 2 1 M α K.
Here S + denote the positive part of the entropy, i.e., S + [f ] = f log + f dx. Combining the estimates (3.30), (3.31) with (3.32), we end up with
d dt α S[n ǫ α ] α −4 + β |b αβ |C GN S (1 + |∆K ǫ | 2 1 ) S 1/2 + [n ǫ α ] (log K) 1/2 M 1/2 α =:A(t) |∇ n ǫ α | 2 2 + α,β |b αβ |(1 + |∆K ǫ | 2 1 )M α K.
Since the negative part of the entropy and the second moment are bounded (3.12), (3.11), we have that A(t) can be estimated as follows:
A(t) − 4 + C GN S (log K) 1/2 β |b αβ |(1 + |∆K ǫ | 2 1 )M 1/2 α S[n ǫ α (t)] + 1 2 V (T ⋆ ) + 1 2 4 α M α + α,β (b αβ ) − M α M β 2π (t − T ⋆ ) + log(2π)M α + e −1 1/2 (3.33)
Since the entropy α S[n ǫ α ] is uniformly bounded independent of ǫ at time T ⋆ , we could take the K large such that A(t) −2 at time T ⋆ . By continuity, there is a small time τ ǫ such that for ∀t ∈ [T ⋆ , T ⋆ + τ ǫ ),
(3.34) α S[n ǫ α (t)] α S[n ǫ α (T ⋆ )]+(t−T ⋆ ) α,β |b αβ |(1+|∆K ǫ | 2 1 )M α K, ∀t ∈ [T ⋆ , T ⋆ +τ ǫ ].
But then we can pick τ independent of ǫ such that
A(t) − 4 + C(B, M) (log K) 1/2 α S[n ǫ α (T ⋆ )] + Kτ + 1 0.
The solution τ to the above inequality is independent of the choice of ǫ, and [T ⋆ , T ⋆ + τ ) ⊂ [T ⋆ , T ⋆ + τ ǫ ) for any ǫ. Therefore, by Proposition 3.3, we can extend the free energy solution pass the T ⋆ , contradicting the maximality of T ⋆ . As a result, we have completed the proof of the proposition.
Smoothness of the free energy solutions
In this section, we prove Theorem 1.3. The proof is similar to the arguments in [13]. For the sake of brevity, we skip some details and emphasize the main differences. The proof is decomposed into several lemmas. We first introduce the concept of Fisher information and renormalized solutions, then prove the L p integrability of the physically relevant free energy solutions and use standard parabolic equation technique to improve it to C ∞ regularity, and conclude with the proof of the free energy equality.
First note from the physical restrictions (1.12b) and (1.12c) that we have bounded entropy and free energy dissipation, i.e., A t [n] < ∞ and bounded second moment V [n(t)] for all t ∈ [0, T ⋆ ), where T ⋆ is the maximal existence time.
Next we present the following time-integral bound for the Fisher information.

Lemma 4.1. If the conditions in Theorem 1.3 are satisfied, then for any physically relevant free energy solution to (1.1) and any time T ∈ [0, T ⋆ ), there exists a constant C F such that the Fisher information of the solution, F [n α ] := |∇n α | 2 / n α dx, is time integrable, i.e.,
(4.2) α∈I T 0 F [n α (t)]dt C F M, T, A T [n], sup t∈[0,T ) α V α (t) , T ∈ [0, T ⋆ ).
Proof. The proof is essentially the same as the corresponding one in the single species case. For the sake of brevity, we skip the proof here and refer the interested readers to the proof of Lemma 2.2 and the remark after in the paper [13] for further details.
Remark 4.1. For the supercritical mass case, one can use the relative entropy method to derive the boundness of the entropy and entropy dissipation A T [n] before the blow-up time T ⋆ . We refer the interested reader to the papers [4] and [13] for further details.
The next lemma enables us to take advantage of choosing different renormalizing functions in the later proof.

Lemma 4.2. Any physically relevant free energy solution n to (1.1) satisfies the following estimate for any times 0 ≤ t 0 ≤ t 1 < T ⋆ :
R 2 Γ(n α (x, t 1 ))dx + t 1 t 0 R 2 Γ ′′ (n α (x, s))|∇n α (x, s)| 2 dxds ≤ R 2 Γ(n α (x, t 0 ))dx + t 1 t 0 R 2 (Γ ′ (n α (x, s))n α (x, s) − Γ(n α (x, s))) β∈I b αβ n β (s) + dxds (4.3) ≤ R 2 Γ(n α (x, t 0 ))dx + β∈I |b αβ | t 1 t 0 R 2 |Γ ′ (n α (x, s))n α (x, s) − Γ(n α (x, s))| n β (s)dxds,
where Γ : R → R is an arbitrary convex piecewise C 1 function satisfying the following estimates with some constant C Γ :
(4.4) |Γ(u)| C Γ (1 + u(log u) + ), |Γ ′ (u)u − Γ(u)| C Γ (1 + |u|), ∀u ∈ R.
Remark 4.2. Here in order to analyse the PKS equation (1.1) with general chemical generation coefficients, we introduce a stronger restriction on the growth of the normalizing function Γ comparing to the paper [13]. Here we assume that the absolute value of the expression Γ ′ (u)u − Γ(u) grows at most linearly at infinity, whereas in the paper [13], it is only assumed that the positive part (Γ ′ (u)u − Γ(u)) + grows at most linearly.
Proof. The proof is essentially the same as the proof of Lemma 2.5 in the paper [13]. For the sake of simplicity, we do a formal computation and refer the interested readers to [13] for further justifications. By applying the chain rule, we obtain (4.5) ∂ t Γ(n α ) = ∆Γ(n α ) − Γ ′′ (n α )|∇n α | 2 − ∇c α · ∇Γ(n α ) − Γ ′ (n α )∆c α n α , ∀α ∈ I. Now test it against an arbitrary smooth function χ ∈ D(R 2 ) and use the relation −∆c α = β b αβ n β , we have the following relation:
R 2 Γ(n α (t 1 ))χdx+ t 1 t 0 R 2 Γ ′′ (n α )|∇n α (s)| 2 χdxds = R 2 Γ(n α (t 0 ))χdx + t 1 t 0 R 2 Γ ′ (n α ) β b αβ n β n α χ + Γ(n α )∆χ + Γ(n α )∇ · (∇c α χ) dxds.
Rewrite the above relation using the integration by parts and the fact that ∆c α = − β b αβ n β ,
R 2 Γ(n α (t 1 ))χdx + t 1 t 0 R 2 Γ ′′ (n α )|∇n α (s)| 2 χdxds = R 2 Γ(n α (t 0 ))χdx + t 1 t 0 R 2 [Γ ′ (n α )n α − Γ(n α )]
β b αβ n β χ + [Γ(n α )∆χ + Γ(n α )∇c α · ∇χ] dxds. Now taking χ → 1, we end up with the relation (4.3). In order to prove the lemma rigorously, one first proves (4.3) for renormalizing functions Γ i , i ∈ N, which grow at most linearly at infinity. Next one proves the estimate (4.3) for renormalizing functions with superlinear growth (4.4) by taking the limit of the inequalities (4.3) for the approximating linear renormalizing functions (Γ i ) i∈N . One uses the Lebesgue dominated convergence theorem to guarantee the convergence of the term
lim i→∞ t 1 t 0 [Γ ′ i (n α )n α − Γ i (n α )] β b αβ n β + dxds.
However, if the function β b αβ n β can be either positive or negative, we have to assume that |Γ ′ (u)u − Γ(u)| grows at most linearly near infinity.
Now we prove the L p estimate of the solution Lemma 4.3. Consider physically relevant free energy solutions (n α ) α∈I to equation (1.1)
subject to initial data (1.3). Let t 0 ∈ [0, T ⋆ ) be the time such that α∈I |n α (t 0 )| p < ∞, for some p ∈ [2, ∞). Then for all time t 1 ∈ [t 0 , T ] ⊂ [t 0 , T ⋆ ), there exists a constant C p := C p (M, T, α∈I |n α (t 0 )| p , V [n(t 0 )], A T ) such that α∈I |n α (t 1 )| p p + p − 1 2p α∈I t 1 t 0 |∇(n p/2 α )| 2 2 ds C p , p ∈ [2, ∞). (4.6)
Proof. The proof is similar to the corresponding one in [13]. We decompose the proof into two steps.
Step 1: We prove a logarithmic improvement to the L log L integrability. The goal is to show that there exists a constant C S 2 := C S 2 (M, T, A T , sup [t 0 ,T ] V [n(t)]) such that the following estimate is satisfied for any t 1 ∈ [t 0 , T ],
α S 2 [n α (t 1 )] α S 2 [n α (t 0 )] + C S 2 , S 2 [f ] := f ( logf ) 2 dx, (4.7)
where the log function is the logarithmic function truncated from below: logu := 1 u e + (log u)1 u>e . (4.8)
For the sake of notational simplicity, we further introduce the bounded truncated logarithmic function log K as follows: (4.9) log K (u) := 1 u e + 1 e<u K log u + 1 u>K log K.
Since (·) log 2 (·) does not satisfy the growth constraint (4.4), we approximate it by the function Γ K (u), K e 2 , Γ K (u) := u( logu) 2 , u K; (2 + log K)u log u − 2K log K, u > K. Now we estimate the time evolution of α Γ K (n α )dx using the renormalization relation (4.3), the positivity of b αβ , (4.11), (4.12) and the definition of log, log K as follows
α Γ K (n α (t 1 ))dx + α t 1 t 0 log K (n α ) n α 1 nα e |∇(n α )| 2 dxds α Γ K (n α (t 0 ))dx (4.13) + α,β |b αβ | t 1 t 0 2n α logn α 1 nα K + 4 log Kn α 1 nα>K n β dxds α Γ K (n α (t 0 ))dx + 4 α,β |b αβ | t 1 t 0 n α log K n α n β dxds.
Now picking a constant A ∈ [e, K], we estimate the last term on the right hand side of (4.13) using GNS inequality as follows:
α,β |b αβ | n α log K n α n β dx = α,β |b αβ | n α log K n α n β 1 n β A dx + n α log K n α n β 1 n β A dx α,β |b αβ | (n α log K n α )(n β log K n β ) log A dx + A n α log K n α dx 2 max α β |b αβ | α 1 log A n α log K n α 4 dx + A(M α + S + [n α ]) (4.14) 2C 2 GN S max α β |b αβ | × α A(M α + S + [n α ]) + 1 log A n α log K n α dx · ∇ n α log K n α 2 dx 2C 2 GN S max α β |b αβ | × α A(M α + S + [n α ]) + 1 log A M α + S + [n α ] · |∇(n α )| 2 n α log K n α 1 nα e dx + F [n α ] .
Now combining (4.13) and (4.14) and taking K then A large, we have the estimate
α Γ K (n α (t 1 ))dx α Γ K (n α (t 0 ))dx + 2T C GN S max α β |b αβ | α A(M α + S + [n α ]) + 4 α t 1 t 0 F [n α ]ds.
For the last term T 2 on the right hand side of (4.16), applying the symmetry of the matrix B (1.6), Hölder inequality and the Young's inequality, we can estimate it as follows
T 2 =2K p−1 α,β t 1 t 0 n α 1 nα>K |b αβ |n β (1 n β >K + 1 n β K )dxds 4K p−1 max α β |b αβ | α t 1 t 0 n 2 α 1 nα>K dxds. (4.20)
Now they are similar to the T 12 term in (4.17) and we skip the treatment for the sake of brevity.
Combining the estimates (4.17), (4.18) and (4.20), we have from (4.16) that
α γ K (n α (t 1 ))dx + α 2(p − 1) p 2 t 1 t 0 |∇(n p/2 α )| 2 1 nα K dxds α γ K (n α (t 0 ))dx + 2 p A p max α β |b αβ | α M α T.
Now we can take A fixed and K to infinity to complete the proof of the lemma.
Next, arguing along the lines of [13], we end up with the conclusion that free energy solutions are classical solution for all positive time. We quote Lemma 4.4 ([13]). Any physically relevant free energy solutions (n α ) α∈I to (1.1) are smooth for any strictly positive time, i.e., (4.21) n α ∈ C ∞ ((δ, T ⋆ ) × R 2 ), ∀δ > 0.
Moreover, we have the following lower semicontinuity of the free energy functional. Lemma 4.5 ([13]). Consider any bounded sequences (n α,k ) α∈I of nonnegative functions in L 1 + (R 2 ) with finite second moment α n α,k |x| 2 dx < ∞. Assume that {n α,k } ∞ k=1 has the same subcritical masses as n α , i.e., |n α,k | 1 = M α , ∀α ∈ I, ∀k ∈ N. If there exists a constant C such that the free energy E[(n α,k ) α∈I ] is uniformly bounded in k, i.e., sup k E[(n α,k ) α∈I ] C < ∞, and {n α,k } ∞ k=1 converges to n α in D ′ (R 2 ) for all α ∈ I, there holds (4.22)
n α ∈ L 1 + (R 2 ), n α |x| 2 dx < ∞, ∀α ∈ I and E[(n α ) α∈I ] lim inf k→∞ E[(n α,k ) α∈I ].
Equipped with lemma 4.4 and 4.5 we turn to the following.
Proof of Theorem 1.3. The smoothness of the solutions is proved in Lemma 4.4. The proof of the equality in (2.1) is similar to the one in [13]. For the sake of completeness, we detailed the proof as follows.
Since the solution n α , α ∈ I, is smooth for all positive time, the following equality holds for all t ≥ t n > 0, where t n → 0 + :
(4.23) E[n(t)] = E[n(t n )] + α t tn n α |∇ log n α − ∇c α | 2 dxds.
Combining this with the Lebesgue dominated convergence theorem, the lower semi-continuity of the functional E proven in the last lemma and the fact that n(t n ) converges to n 0 weakly in D ′ (R 2 ), we have that
E[n 0 ] lim inf n→0 E[n(t n )] lim E[n(t)] + α t tn n α |∇ log n α − ∇c α | 2 dxds =E[n(t)] + α t 0 n α |∇ log n α − ∇c α | 2 dxds. (4.24)
Recalling the definition of the free energy solution, the proof of the free energy dissipation equality is completed.
Uniqueness of the free energy solutions
After proving the smoothness theorem for the system (1.1), we are ready to prove the uniqueness of the physically relevant free energy solutions (n α ) α∈I . To estimate the deviation between two solutions on a small time interval, some smallness estimates are needed. The following lemma provides the functional space in which we can seek smallness.

Lemma 5.1. Consider the physically relevant free energy solution n to the system (1.1) subject to initial condition (1.3). Then, for every α ∈ I,
(5.1) lim t→0 + t 1/4 |n α (t)| 4/3 = 0.

Proof. The proof is similar to the one in the paper [13]. Before estimating the norm t 1/4 |n α | 4/3 , we collect some estimates which we are going to use. It is enough to consider a short interval [0, T ] ⊂ [0, T ⋆ ). From the assumptions (1.12b), (1.12c) we have that the positive part of the entropy is bounded
α S + [n α (t)] C L log L < ∞, ∀t ∈ [0, T ].
Next we prove the estimate
(5.2) α |n α (t)| 2 2 t C L 2 (B, M, |I|, C L log L ) < ∞, ∀t ∈ [0, T ].
Standard L 2 energy estimate yields
(5.3) d dt α |n α | 2 2 + 2 α |∇n α | 2 2 = α,β∈I b αβ n 2 α n β dx.
Applying the Nash inequality, Gagliardo-Nirenberg-Sobolev inequality and the vertical truncation technique applied in the proof of Lemma 3.3, we estimate the right hand side as follows
d dt α |n α | 2 2 − α |∇n α | 2 2 + α,β |b αβ | · |n β | 3 3 − α |∇n α | 2 2 + α,β |b αβ | |n β 1 n β K | 3 3 + |n β 1 n β K | 1/3 1 |n β 1 n β K | 8/3 4 − α |∇n α | 2 2 + α,β |b αβ | K 2 M β + C GN S sup t∈[0,T ] S + [n(t)] 1/3 (log K) 1/3 |n β | 2/3 1 |∇n β | 2 2 − α 1 − β |b αβ | C GN S C 1/3 L log L (log K) 1/3 M 2/3 α |∇n α | 2 2 + α,β |b αβ |K 2 M β − ( α |n α | 2 2 ) 2 2C N max α M 2 α |I| + α,β |b αβ |K 2 M β , (5.4)
where K is a large number chosen such that the coefficient of |∇n α | 2 2 is less than −1/2. Now by comparing |n α | 2 with the solution to the super equation
d dt f = − f 2 2C N max α M 2 α |I| + K 2 α,β |b αβ |M β , f (0) = ∞,
we obtain (5.2). Now we estimate the quantity t 1/4 |n α (t)| 4/3 . By the Hölder's inequality and the boundedness of the entropy, we have that t 1/4 |n α | 4/3 4/3 =t 1/3 n 4/3 α dx n α (log + n α + 2)dx 2/3 t n 2 α (2 + log + n α ) −2 dx
1/3 C(C L log L , M) t n 2 α (2 + log + n α ) −2 dx 1/3 . (5.5)
To estimate the term in the parenthesis, we separate the integral into two parts and use the increasing property of the function s/(2 + log + s) 2 , the conservation of mass and (5.2) to estimate each piece
t n 2 α (2 + log + n α ) −2 dx t nα R n 2 α (2 + log + n α ) −2 dx + t nα>R n 2 α (2 + log + n α ) −2 dx t R (2 + log + R) 2 nα R n α dx + t (2 + log + R) 2 nα R n 2 α dx t MR (2 + log + R) 2 + C L 2 (2 + log + R) 2 .
Now set R := 1/t, we have (5.6) t n 2 α (2 + log + n α ) −2 dx M + C L 2 (2 + log + 1/t) 2 → 0, t → 0 + .
Combining this with (5.5) yields the result.

Now we prove Theorem 1.4. Consider the equation (1.1) in the mild form. Since we have smoothness of the free energy solutions, the two formulations are equivalent. Suppose that (n α,1 ) α∈I , (n α,2 ) α∈I are two solutions subject to the same initial data n α0 , α ∈ I; their difference satisfies:
n α,2 (t) − n α,1 (t) = − t 0
e (t−s)∆ ∇ · ((∇c α,2 (s) − ∇c α,1 (s))n α,2 (s)) ds − t 0 e (t−s)∆ ∇ · (∇c α,1 (s)(n α,2 (s) − n α,1 (s))) ds, ∀α ∈ I.
Define the following quantities for ℓ ∈ {1, 2}:
Z α,ℓ (t) := sup 0<s t s 1/4 |n α,ℓ (s)| 4/3 , ∆ α (t) := sup 0<s t s 1/4 |n α,2 (s) − n α,1 (s)| 4/3 .
The estimate (5.1) yields that lim t→0 + Z α,ℓ (t) = 0. The quantity ∆ α (t) can be further decomposed as follows:
∆ α (T ) sup 0 t T t 1/4
t 0 e (t−s)∆ ∇ · ((∇c α,2 (s) − ∇c α,1 (s))n α,2 (s))ds
4/3 + sup 0 t T t 1/4
t 0 e (t−s)∆ ∇ · (∇c α,1 (s)(n α,2 (s) − n α,1 (s)))ds
4/3 =: sup 0 t T J α,1 (t) + sup 0 t T J α,2 (t). (5.9)
Now we estimate the J α,2 term in (5.9) using the Hölder inequality, Hardy-Littlewood-Sobolev inequality, Minkowski integral inequality and heat semigroup estimate as follows
J α,2 (t) t 1/4 t 0 C (t − s) 3/4 |∇c α,1 | 4 |n α,2 − n α,1 | 4/3 ds t 0 C t 1/4 s 1/2 (t − s) 3/4 ds β∈I |b αβ |Z β,1 (t)∆ α (t) C β∈I |b αβ |Z β,1 (t)∆ α (t). (5.10)
Similarly, we can estimate the J α,1 term as follows:
J α,1 (t) C β |b αβ |∆ β (t)Z α,2 (t). (5.11)
Combining (5.9), (5.11), (5.10) and symmetry of B (1.6), we have that
α ∆ α (T ) α,β |b αβ | sup 0 t T Z β,1 (t)∆ α (t) + α,β |b αβ | sup 0 t T ∆ β (t)Z α,2 (t) α,β |b αβ | sup 0 t T ∆ α (t)(Z β,1 (t) + Z β,2 (t)) max α,β |b αβ | α ∆ α (T ) β 2 ℓ=1 Z β,ℓ (T ) .
Now since Z β,ℓ (t) approaches zero as time approaches 0 + (5.1), there exists a small time T ′ such that
(5.12) α ∆ α (T ′ ) 1 2 α ∆ α (T ′ ), T ′ ∈ [0, T ].
So we have α ∆ α ≡ 0, ∀t ∈ [0, T ′ ]. Now the uniqueness follows if we iterate this argument.
Long time behavior of the free energy solutions
In this section, we study the long time behavior of the multi-species PKS system (1.1). Since the solution becomes instantly smooth, we may assume that the initial data n α0 is C ∞ ∩ L 1 for all α ∈ I. We rewrite the equation (1.1) in the self-similar variables
X := x / R(t), τ := log R(t), R(t) := √(1 + 2t).
We define the solutions N α , C α in the self-similar variables:
n α (x, t) = 1 R 2 (t) N α (X, τ ), c α (x, t) = C α (X, τ ). (6.1)
Rewriting the equation (1.1) in the self-similar variables, we obtain that N α , C α satisfy the following equations subject to initial data N α (X, τ = 0) = n α0 (X), ∀α ∈ I:
∂ τ N α =∆N α + ∇ · (XN α ) − ∇ · (∇C α N α ), −∆C α = β∈I b αβ N β . (6.2)
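For the reader's convenience we record the chain-rule computation behind (6.2) (our own recap, not part of the original derivation as written). With R′(t) = 1/R and dτ/dt = 1/R²,
\[
\partial_t n_\alpha = \frac{1}{R^4}\Big(\partial_\tau N_\alpha - \nabla_X\cdot(X N_\alpha)\Big),
\qquad
\Delta_x n_\alpha = \frac{1}{R^4}\Delta_X N_\alpha,
\qquad
\nabla_x\cdot\big(n_\alpha \nabla_x c_\alpha\big) = \frac{1}{R^4}\nabla_X\cdot\big(N_\alpha \nabla_X C_\alpha\big),
\]
while −Δ X C α = β b αβ N β ; multiplying (1.1) by R⁴ then gives (6.2).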
In order to prove Theorem 1.5, we show that the solution N α to the equation (6.2) is uniformly bounded in time. This is due to the fact that the L 2 (dx) norm of solutions n α to the original problem and the L 2 (dX) norm of the solutions N α to the equation (6.2) have the following relation:
(6.3) |n α | 2 L 2 (dx) = |N α | 2 L 2 (dX) R 2 (t) = |N α | 2 L 2 (dX) 1 + 2t .
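Indeed, (6.3) is a one-line change of variables (added here for clarity): since dx = R²(t) dX,
\[
|n_\alpha(t)|_{L^2(dx)}^2
= \int \frac{1}{R^4(t)}\,N_\alpha^2\Big(\frac{x}{R(t)},\tau\Big)\,dx
= \frac{1}{R^2(t)}\int N_\alpha^2(X,\tau)\,dX
= \frac{|N_\alpha(\tau)|_{L^2(dX)}^2}{1+2t}.
\]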
Therefore any uniform in time bound of |N α | L 2 (dX) can be translated to decay of |n α | L 2 (dx) . We decompose our proof into several lemmas. First we show that the second moment of the solutions are uniformly bounded in time.
Lemma 6.1. Consider the solutions N α , α ∈ I to the equation (6.2). The total second moment is uniformly bounded in time, i.e.,
(6.4) α∈I N α (X, τ )|X| 2 dX C V,R < ∞, ∀τ ∈ [0, ∞).
Proof. Similar to the proof of (2.3), we calculate the time evolution of the second moment
d dτ α N α |X| 2 dX = − 2 α N α |X| 2 dX + α 4M α 1 − Q B,M [I] 8π .
Now we see that the total second moment is bounded
α N α |X| 2 dX max 1 2 α 4M α 1 − Q B,M [I] 8π , α (N α ) 0 |X| 2 dX .
Similar to the proof of the estimate (2.1), we can show that the equation (6.2) has the following decreasing free energy for ∀τ 0:
E R [N(τ )] = α∈I N α log N α dX+ α,β∈I b αβ 4π log |X − Y |N α (X)N β (Y )dXdY + 1 2 α∈I N α |X| 2 dX E R [N 0 ].
Now we apply the log-HLS inequality (3.5) to get a bound for the entropy,
S R [N] = α N α log N α dX, obtaining E R [N 0 ] E R [N] α∈I N α log N α dX + α,β∈I (b αβ ) + 4π N α (X) log |X − Y |N β (Y )dXdY − α,β (b αβ ) − 4π |X−Y | 1 N α (X) log |X − Y |N β (Y )dXdY + 1 2 N α |X| 2 dX =(1 − θ) α∈I N α log N α dX + θ α∈I N α log N α dx + 1 4π α,β∈I (b αβ ) + θ N α (X) log |X − Y |N β (Y )dXdY − α,β (b αβ ) − 4π (M α V β + M β V α ) + 1 2 N α |X| 2 dX (1 − θ) α∈I N α log + N α dX − (1 − θ) N α log − N α dX − θC lHLS (B, M) − α,β (b αβ ) − 4π (M α V β + M β V α ) + 1 2 N α |X| 2 dX.
Here the θ ∈ (0, 1) is chosen as in the proof of Proposition 3.1. Now since the second moment is bounded for all time (6.4), we have that C lHLS < ∞ and the negative part of the entropy is uniformly bounded in time, i.e.,
N α (X, τ ) log − N α (X, τ )dX < C < ∞ for ∀τ ∈ [0, ∞),
which in turn yields that
(6.5) α∈I N α (X, τ ) log + N α (X, τ )dX < C L log L,R < ∞, ∀τ ∈ [0, ∞).
Once the positive part of the entropy is bounded, we estimate the time evolution α |(N α − K) + | 2 2 as in the proof Lemma 3.3
1 2 d dt α |(N α − K) + | 2 2 −3 + η(K) max α β |b αβ | C GN S α |∇(N α − K) + | 2 dX + C(K, B, M)|(N α − K) + | 2 2 + C(K, B, M),
where η(K) ≤ C L log L,R / log K is made small enough. Now we choose K large enough and apply the Nash inequality to get
d dt α |(N α − K) + | 2 2 ≤ − ( α |(N α − K) + | 2 2 ) 2 / (C N α |(N α − K) + | 2 1 |I|) + C(K, B, M) α |(N α − K) + | 2 2 + C(K, B, M).
Since |(N α − K) + | 1 ≤ |N α | 1 = M α < ∞, we have that α |(N α − K) + | 2 ≤ C L 2 ,R < ∞ for all τ ∈ [0, ∞). This completes the proof of Theorem 1.5.
7. Multi-species PKS subject to non-symmetric coupling arrays
7.1. Symmetrizable case. In general, the chemical generation coefficient matrix B is nonsymmetric. This introduces new challenges in the analysis. We will not cover the general situation in this paper. However, in certain cases, one can symmetrize the system. First recall the sign function:
(7.1) sign(f ) = 1, f > 0; 0, f = 0; −1, f < 0.
If sign(b αβ ) = sign(b βα ) and the matrix B is tridiagonal, i.e., b αβ = 0 only if |α − β| ≤ 1, the system can always be symmetrized. Specifically, all the two-species models with sign(b 12 ) = sign(b 21 ) are symmetrizable. To show the method, we consider system (1.1) subject to a general 3-by-3 matrix of this type with b 13 = b 31 = 0. Multiplying the equation for n 2 by b 12 /b 21 and redefining ñ 2 := (b 12 /b 21 ) n 2 , and then multiplying the equation for n 3 by b 12 b 23 /(b 32 b 21 ) and redefining ñ 3 := (b 12 b 23 /(b 32 b 21 )) n 3 , we obtain an equivalent system for (n 1 , ñ 2 , ñ 3 ) whose new coefficient matrix is symmetric. For a general tridiagonal matrix with sign(b αβ ) = sign(b βα ), the symmetrization is similar.
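In terms of the rescaled unknowns (n 1 , ñ 2 , ñ 3 ) the resulting coefficient matrix can be spelled out explicitly (our own display, obtained directly from the substitutions above; note that the rescaling factors are positive precisely because sign(b αβ ) = sign(b βα )):
\[
\widetilde B \;=\;
\begin{pmatrix}
b_{11} & b_{21} & 0\\[2pt]
b_{21} & \dfrac{b_{21} b_{22}}{b_{12}} & \dfrac{b_{21} b_{32}}{b_{12}}\\[2pt]
0 & \dfrac{b_{21} b_{32}}{b_{12}} & \dfrac{b_{21} b_{32} b_{33}}{b_{12} b_{23}}
\end{pmatrix},
\]
which is manifestly symmetric.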
Remark 7.1. These tridiagonal chemical generation matrices B correspond to a hierarchical structure in the community, in which each species communicates only with its direct neighbors.
7.2. Essentially dissipative case. In this section, we prove Theorem 1.6.
Proof of Theorem 1.6. First note that if I (|I|) = I, then I (0) is not an empty set; otherwise one obtains that I (|I|) is an empty set, which is a contradiction. We prove that α |n α (t)| L ∞ t (0,∞;H s x ) ≤ C H s < ∞. First we prove the L ∞ bound of the n α 's. We pick all the α 0 ∈ I (0) and calculate the time evolution of |n α 0 | 2p 2p , ∀p ∈ [1, ∞), utilising the fact that b α 0 β ≤ 0 for all β ∈ I:
1 2p d dt |n α 0 | 2p 2p = − 2p − 1 p 2 |∇(n α 0 ) p | 2 2 − 2p − 1 2p n 2p α 0 ∆c α 0 dx = − 2p − 1 p 2 |∇(n α 0 ) p | 2 2 + 2p − 1 2p β∈I b α 0 β n 2p α 0 n β dx ≤ 0. (7.2)
As a result, for any p ∈ [1, ∞), |n α 0 | 2p |(n α 0 ) 0 | 2p . Since the initial data is in L 1 ∩ L ∞ , we have that max α 0 ∈I (0) |n α 0 | Lt∞(0,∞;L ∞ x ) C I (0) < ∞. Next we look at all the α 1 's in the set I (1) . Calculating the time evolution of the L 2p norm using the Nash inequality , we have that 1 2p d dt |(n α 1 ) p | 2 2 − 2p − 1 p 2 |∇(n α 1 ) p | 2 2 + 2p − 1 2p
β∈I (0) b α 1 β n β n 2p α 1 − 2p − 1 p 2 |(n α 1 ) p | 4 2 C N |(n α 1 ) p | 2 1 + 2p − 1 2p β∈I (0) b α 1 β |n β | ∞ |(n α 1 ) p | 2 2 .
Since |n β | ∞ < C I (0) < ∞, ∀β ∈ I (0) , we obtain the bound (7.3) on sup t |n α 1 (t)| 2p 2p . Since |n α 1 | L 1 = M α 1 < ∞ and |(n α 1 ) 0 | L ∞ < ∞, by the Moser-Alikakos iteration, we have that |n α 1 | ∞ ≤ C I (1) < ∞. By the same argument, we have that
sup t∈[0,∞) |n α (t)| ∞ ≤ C ∞ < ∞, ∀α ∈ I (|I|) . (7.4)
Since B is essentially dissipative, I (|I|) = I, we have that |n α | L ∞ t (0,∞;L ∞ x ) ≤ C ∞ for all α ∈ I. Next we estimate the H s (2 ≤ s ∈ N) norms of the solutions. Assume that we have already obtained the H s−1 estimate, i.e.,
(7.5) |n α (t)| H s−1 ≤ C H s−1 < ∞, ∀t ∈ [0, ∞).
We estimate the time evolution of α |∇ s n α | 2 2 using the GNS inequality and HLS inequality as follows:
Since α |n α | L ∞ t (0,∞;L 2 x ) C ∞ + α M α , we have that α |∇ s n α (t)| 2 C H s (C ∞ , α |∇ s n α0 | 2 , M, B) < ∞
for all t ∈ [0, ∞). This completes the proof of the theorem.
We conclude with a remark concerning the long time behavior of the solutions. We can rewrite the equation (1.1) in the self-similar variables as in Section 6 (6.2). Applying similar techniques from the proof of Theorem 1.6 yields that the solutions n decay in L 2 , i.e., (7.6) α |n α (t)| 2 2 C 1 + t , t ∈ R + .
Here C is a constant which only depends on the initial data. We sketch the proof as follows. As in Section 6, the goal is to show that α |N α | 2 L 2 (dX) is uniformly bounded in time τ ∈ [0, ∞). For the sake of simplicity, we use | · | p to denote | · | L p (dX) . First we estimate the L p norms of the solutions n α 0 , α 0 ∈ I (0) . Combining standard L p energy estimates, Nash inequality and the fact that b α 0 β 0 for all β ∈ I yields that 1 2p
d dτ |(N α 0 ) p | 2 2 = − 2p − 1 p 2 |∇(N α 0 ) p | 2 2 + 2(2p − 1) 2p |(N α 0 ) p | 2 2 + 2p − 1 2p β b α 0 β N 2p α 0 N β dX − 2p − 1 p 2 |(N α 0 ) p | 4 2 C N |(N α 0 ) p | 2 1 + 2(2p − 1) 2p |(N α 0 ) p | 2 2 .
This estimates yields that sup τ ∈[0,∞)
|N α 0 (τ )| 2p 2p max{pC N sup τ ∈[0,∞) |(N α 0 )(τ )| 2p p , |N α 0 (0)| 2p 2p }.
Since |N α 0 | 1 = M α 0 < ∞ and |N α 0 (0)| L 1 ∩L ∞ < ∞, we can apply the Moser-Alikakos iteration to obtain that sup τ ∈[0,∞) |N α 0 (τ )| L 1 ∩L ∞ C I (0) < ∞. (7.7)
Now applying the same iteration technique as the one in the proof of Theorem 1.6 yields the result.
Remark 7.2. Direct application of the free energy method yields following general result: Assume that the matrix B only has positive entries, i.e., B = B + case. Define the support of a symmetric matrix C m×m to be the indices of the rows such that there exists non-zero entries in this row, i.e., supp(C) = {i ∈ {1, 2, ..., m}|C ij = 0 for some j ∈ {1, 2, ..., m}}. If there exists a sequence of positive symmetric matrices {B ℓ } ℓ∈L such that ℓ∈L B ℓ = B and Q B ℓ ,M [J ∩ suppB ℓ ] < Q B ℓ ,M [I ∩ suppB ℓ ] < C ℓ < 8π, for all ∅ = J I and ∀ℓ ∈ L, and ℓ∈L C ℓ 1 α∈suppB ℓ < 8π, for ∀α ∈ I, then there exists a global solution. A conjecture is that if this condition involving the strict inequalities fails, namely, if some of the strict inequalities <'s are replaced by >'s, then there must be a finite time blow-up.
B := {b αβ } α,β∈I , B + := {(b αβ ) + } α,β∈I , M := {M α } α∈I , M α = |n α0 | 1 , (1.4)
where (·) + denotes the positive part of the function. We introduce the function Q B,M acting on subsets J of the index set I,
Q B,M [J ] = α,β∈J b αβ M α M β / α∈J M α , J ⊂ I. (1.5)
In particular, if J = I, then Q B,M [J ] has a simple matrix representation: Q B,M [I] = BM, M / |M| 1 .
Definition 1.1 (Free energy solutions). For any distributional solutions n to the equation (1.1) subject to initial data n 0 , they are the free energy solutions to (1.1) if the following free energy dissipation inequality holds on some maximal time interval [0, T ⋆ ).
Theorem 1.3 (Smoothness of the free energy solutions). Consider the equations (1.1) subject to initial condition (1.3) and symmetric chemical generation matrices B. The physically relevant free energy solutions (n α ) α∈I are smooth, i.e., n α ∈ C ∞ ((0, T ⋆ ) × R 2 ), ∀α ∈ I, where T ⋆ is the maximal existence time. Moreover, the equality holds in (1.8).
Theorem 1.4 (Uniqueness of the free energy solutions). Consider the equation (1.1) subject to initial condition (1.3) and symmetric chemical generation matrix B . There exists at most one physically relevant free energy solution.
Theorem 1.5 (Long time behavior of the free energy solutions).
Definition 1.2. Define the sequences of subsets I (0) ⊂ I (1) ⊂ ... ⊂ I (|I|) of I as follows: I (0) := {α ∈ I | b αβ ≤ 0, ∀β ∈ I}; I (k) := {α ∈ I | b αβ ≤ 0, ∀β ∈ I\I (k−1) }, k ∈ {1, 2, ..., |I|}. If I (|I|) = I, we call the matrix B essentially dissipative.
Remark 1.2. Essentially dissipative matrices naturally arise when there are chasing-escaping phenomena in the multi-species PKS system (1.1). For example, the system (1.1) subject to the chemical generation relation b 12 = −b 21 = 1, b 11 = b 22 = 0 describes the situation that bacteria of species 1 are escaping from bacteria of species 2, whereas bacteria of species 2 are chasing bacteria of species 1.
Theorem 3.1 ([25, Theorem 4]). Let A = (a αβ ) α,β∈I be a symmetric matrix with positive entries a αβ ≥ 0.
a) The condition
Λ I (M) = 0, Λ J (M) ≥ 0, ∀∅ ≠ J ⊂ I; if Λ J (M) = 0 for some J , then a αα + Λ J \{α} (M) > 0, ∀α ∈ J , (3.4)
is a necessary and sufficient condition for the lower-bound of the PKS functional min n∈Γ M (R 2 ) Ψ[n];
b) Moreover, the functional Ψ[n] admits a minimizer over Γ M (R 2 ) if and only if Λ I (M) = 0 and Λ J (M) > 0 for any ∅ ≠ J ⊊ I. In this case, there exists a constant C = C lHLS depending on M and B = {b αβ } such that the following holds: Ψ[n] ≥ −C lHLS (M, B).
ǫ>0 converges strongly in the L 2 ([δ, T ] × R 2 ) space. The last term on the right hand side of (3.27) converges. Moreover, it can be checked that S[n ǫ α (t)] → S[n α (t)] for almost every t ∈ [δ, T ]. The argument is similar to the one used in [5] Lemma 4.6. As a result, combining these facts and (3.26), |∇ log n α − ∇c α | 2 dxds. Now by the monotone convergence theorem and a Cantor diagonal argument, we have proven (1.8).
Lemma 4.1. If the conditions in Theorem 1.3 are satisfied, then for any physically relevant free energy solution to (1.1) and any time T ∈ [0, T ⋆ ), there exists a constant C F such that the Fisher information of the solution
Lemma 4.2. Any physically relevant free energy solution n to (1.1) satisfies the following estimate for any times 0 ≤ t 0 ≤ t 1 < T ⋆ .
check that the function Γ K is convex and satisfies the properties (4.4)Γ ′′ K (u) 2log u u 1 e u K + (2 + log K) 1 u 1 u>K log K u u 1 u e 0, (4.11) |Γ ′ K (u)u − Γ K (u)| 2u logu1 u K + 4 log Ku1 u>K C K (1 + u).
(4.23) E[n(t)] = E[n(t n )] + α t tn n α |∇ log n α − ∇c α | 2 dxds.
Lemma 5.1. Consider the physically relevant free energy solution n to the system (1.1).
(b αβ (−∇∆ −1 )n β n α ) = ∆n α , α ∈ {1,11 , b 12 , b 13 b 21 , b 22 , b 23 b 31 , b 32 , b 33 , sign(b αβ ) = sign(b βα ), b 13 = b 31 = 0.First we can multiply the equation of n 2 by b 12 /b 21 and redefineñ 2 := b 12 b 21 n 2 to obtain ∂ t n 1 +∇ · (b 11 (−∇∆ −1 )n 1 n 1 + b 21 (−∇∆ −1 )ñ 2 n 1 ) = ∆n 1 ; ∂ tñ2 +∇ · b 21 (−∇∆ −1 )n 1ñ2 + b 21 b 22 b 12 (−∇∆ −1 )ñ 2ñ2 + b 23 (−∇∆ −1 )n 3ñ2 = ∆ñ 2 .Now we can do the same trick on the third equation by multiplying it by b 12 b 23 b 32 b 21 and redefinẽ n 3 := b 12 b 23 n 3 b 32 b 21 , we obtain that∂ tñ2 +∇ · b 21 (−∇∆ −1 )n 1ñ2 + b 21 b 22 b 12 (−∇∆ −1 )ñ 2ñ2 + b 32 b 21 b 12 (−∇∆ −1 )ñ 3ñ2 = ∆ñ 2 , ∂ tñ3 +∇ · b 32 b 21 b 12 (−∇∆ −1 )ñ 2ñ3 + b 32 b 21 b 33 b 12 b 23 (−∇∆ −1 )ñ 3ñ3 = ∆ñ 3 .
1 β |C I (0) , |(n α 1 ) 0 | 2p 2p }. (7.3)
Theorem 1.1 (Global existence: subcritical mass). Consider the equation (1.1) subject to initial conditions (1.3). If the symmetric chemical generation matrix B (B + = 0) and the mass vector M satisfy the following subcritical mass constraint
Observe that (1.10) implies - and is therefore more restrictive than - the second part of the general characterization for global existence in [14, Theorem 1], which requires max{χ 1 M 1 , χ 2 M 2 } < 8π. While the last two examples show that the sub-critical mass condition (1.9b) may or may not be sharp for general |I| ≥ 2 species, the necessity of the upper-bound in (1.9a) is stated in the following.
Theorem 1.2 (Blow-up: supercritical mass). Consider the equations (1.1) subject to smooth initial data n α ∈ H s , ∀α ∈ I, s ≥ 2, with finite second moment, and governed by a symmetric chemical generation matrix (1.6). If Q B,M [I] > 8π, then the solution blows up at a finite time.
Remark 1.1. Theorem 1.2 tells us that the bound Q B,M [I] ≤ 8π is necessary for existence of global-in-time free energy solutions. A sufficient condition for this (strict) bound to hold is given in Proposition 3.2 below.
The precise characterization of B's such that both conditions (1.9) hold remains open; consult our conjecture in Remark 7.2 below. We prove below a sufficient condition, claimed in (1.11), for the upper-bound (1.9a) to hold.
d dt α |∇ s n α | 2 2 ≤ − α |∇ s+1 n α | 2 2 + α |∇c α | 2 ∞ |∇ s n α | 2 2 + α s+1 ℓ=2 4 |∇ ℓ c α | 2 4 |∇ s+1−ℓ n α | 2 4 ≤ − α |∇ s+1 n α | 2 2 + α,β |b αβ |(M 2 β + C 2 ∞ )|∇ s n α | 2 2 + α,β s+1 ℓ=2 4 |b αβ | · |∇ ℓ−1 n β | 2 4/3 |∇ s+1−ℓ n α | 2 4 ≤ − α |∇ s n α | 2+2/s 2 / (C GN S |n α | 2/s 2 ) + α |∇ s n α | 2 2 + α |n α | 2 2 .
By Lemma 4.2, the estimate (4.7) holds with the constant C S 2 depending on T, A T and sup 0 t T V [n(t)].
Step 2: As in [13], we define a renormalization function γ K , K ≥ e, approximating (·) p , estimate |γ ′ K (u)u − γ K (u)|, and apply this estimate in (4.3). For the second term T 1 on the right hand side of (4.16), we decompose it into T 11 + T 12 . The treatment of the T 11 term is similar to the corresponding one in the proof of Lemma 3.3: it can be estimated using the Gagliardo-Nirenberg-Sobolev inequality, the Chebyshev inequality and a classical vertical truncation technique with truncation level A ∈ (0, K). Here we can see that if we choose K and then A large enough, the second term can be absorbed by the dissipative term on the left hand side of (4.16). The second term T 12 in (4.17) has a different flavor: the improved integrability of the solution (4.7) is applied to gain extra smallness on this nonlinear term. Similar to the paper [13], we apply the bound (4.7), the Sobolev inequality and the Cauchy-Schwarz inequality to estimate the T 12 term in (4.17) as follows:
α,β |b αβ | 32(p − 1)C S sup t 0 t t 1 S 2 [n β (t)] p(log K) 2 p+1 p 2 t 1 t 0 |∇(n p/2 β )| 2 1 K/2 n β K dxds + K p−1 t 1 t 0 |∇n β | 2 n β (1 K/2 n β K + 1 n β >K ) dxds . (4.19)
Since S 2 is bounded on the time interval [t 0 , t 1 ] (4.7), if K is large enough, these terms can be absorbed by the left hand side of (4.16).

Acknowledgment. Research was supported in part by NSF grants DMS16-13911, RNMS11-07444 (KI-Net) and ONR grants N00014-1812465 and N00014-21-12773.
References
[1] E. E. E. Arenas, A. Stevens, and J. J. L. Velázquez. Simultaneous finite time blow-up in a two-species model for chemotaxis. Analysis (Munich), 29, 2009.
[2] X. Bai and M. Winkler. Equilibration in a fully parabolic two-species chemotaxis system with competitive kinetics. Indiana University Mathematics Journal, 65, 2016.
[3] J. Bedrossian. Intermediate asymptotics for critical and supercritical aggregation equations and Patlak-Keller-Segel models. Comm. Math. Sci., 9:1143-1161, 2011.
[4] A. Blanchet, J. Carrillo, and N. Masmoudi. Infinite time aggregation for the critical Patlak-Keller-Segel model in R 2 . Comm. Pure Appl. Math., 61:1449-1481, 2008.
[5] A. Blanchet, J. Dolbeault, and B. Perthame. Two-dimensional Keller-Segel model: Optimal critical mass and qualitative properties of the solutions. E. J. Diff. Eqn, 2006(44):1-32, 2006.
[6] V. Calvez, L. Corrias, and M. A. Ebde. Blow-up, concentration phenomenon and global existence for the Keller-Segel model in high dimension. Communications in Partial Differential Equations, 37(4):561-584, 2012.
[7] J. Carrillo and J. Rosado. Uniqueness of bounded solutions to aggregation equations by optimal transport methods. Proc. 5th Euro. Congress of Math., Amsterdam, 2008.
[8] A. Chertock, Y. Epshteyn, H. Hu, and A. Kurganov. High-order positivity-preserving hybrid finite-volume-finite-difference methods for chemotaxis systems. Advances in Computational Mathematics, 44(1):327-350, 2018.
[9] C. Conca, E. Espejo, and K. Vilches. Remarks on the blowup and global existence for a two species chemotactic Keller-Segel system in R 2 . European J. Appl. Math., 22, 2011.
[10] L. Corrias, M. Escobedo, and J. Matos. Existence, uniqueness and asymptotic behavior of the solutions to the fully parabolic Keller-Segel system in the plane. Journal of Differential Equations, 257(6):1840-1878, 2014.
[11] L. Corrias and B. Perthame. Asymptotic decay for the solutions of the parabolic-parabolic Keller-Segel chemotaxis system in critical spaces. Mathematical and Computer Modelling, 47(7):755-764, 2008.
[12] K. Z. Coytea, H. Tabuteaue, E. A. Gaffneyb, K. R. Fostera, and W. M. Durhama. Microbial competition in porous environments can select against rapid biofilm growth. Proc. Natl. Acad. Sci. USA, 114(2):E161-E170, doi: 10.1073/pnas.1525228113, 2017.
[13] G. Egana and S. Mischler. Uniqueness and long time asymptotic for the Keller-Segel equation: the parabolic-elliptic case. Arch. Ration. Mech. Anal., 220(3):1159-1194, 2016.
[14] E. E. Espejo, K. Vilches, and C. Conca. Sharp condition for blow-up and global existence in a two species chemotactic Keller-Segel system in R 2 . European J. Appl. Math., 24, 2013.
[15] A. Fasano, A. Mancini, and M. Primicerio. Equilibrium of two populations subject to chemotaxis. Math. Models Methods Appl. Sci., 14, 2004.
[16] S. He. Mixing, flocking and cooperation - analytical studies of transport phenomena in biology. Ph.D. Thesis, University of Maryland, College Park, June 2018.
[17] T. Hillen and K. Painter. A user's guide to PDE models for chemotaxis. Journal of Mathematical Biology, 58(1-2):183-217, 2009.
[18] D. Horstmann. From 1970 until present: the Keller-Segel model in chemotaxis and its consequences. I. Jahresber. Deutsch. Math.-Verein, 105(3):103-165, 2003.
[19] W. Jäger and S. Luckhaus. On explosions of solutions to a system of partial differential equations modelling chemotaxis. Trans. Amer. Math. Soc., 329(2):819-824, 1992.
[20] H. Kozono and Y. Sugiyama. Global strong solution to the semi-linear Keller-Segel system of parabolic-parabolic type with small data in scale invariant spaces. Journal of Differential Equations, 247(1):1-32, 2009.
[21] A. Kurganov and M. Lukacova-Medvidova. Numerical study of two-species chemotaxis models. Discrete and Continuous Dynamical Systems, Series B, 19, 2014.
[22] T. Nagai. Blow-up of radially symmetric solutions to a chemotaxis system. Adv. Math. Sci. Appl., 5(2):581-601, 1995.
[23] T. Nagai, T. Senba, and K. Yoshida. Application of the Trudinger-Moser inequality to a parabolic system of chemotaxis. Funkcialaj Ekvacioj, 40:411-433, 1997.
[24] Y. Naito. Asymptotically self-similar solutions for the parabolic system modelling chemotaxis. In Self-similar solutions of nonlinear PDE, Banach Center Publications, 74:149-160, Institute of Mathematics, Polish Academy of Sciences, Warszawa.
[25] I. Shafrir and G. Wolansky. Moser-Trudinger and logarithmic HLS inequalities for systems. Journal of the European Mathematical Society, 7, 2005.
[26] Y. Tao and M. Winkler. Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity. Journal of Differential Equations, 252(1):692-715, 2012.
[27] G. Wolansky. Multi-components chemotactic system in the absence of conflicts. European Journal of Applied Mathematics, 13(6), 2002.
| []
|
[
"Comparative Climates of TRAPPIST-1 planetary system: results from a simple climate-vegetation model",
"Comparative Climates of TRAPPIST-1 planetary system: results from a simple climate-vegetation model"
]
| [
"Tommaso Alberti [email protected] ",
"Vincenzo Carbone ",
"Fabio Lepreti ",
"Antonio Vecchio ",
"\nDipartimento di Fisica\nDipartimento di Fisica\nDipartimento di Fisica\nUniversità della Calabria\nPonte P. Bucci, Cubo 31C, Università della Calabria, Ponte P. Bucci, Cubo 31C, Università della Calabria, Ponte P. Bucci, Cubo 31C87036, 87036, 87036Rende, Rende, RendeCS), (CS), (CSItaly, Italy, Italy\n",
"\nLESIA -Observatoire de Paris\nPSL Research University\n5 place Jules Janssen92190MeudonFrance\n"
]
| [
"Dipartimento di Fisica\nDipartimento di Fisica\nDipartimento di Fisica\nUniversità della Calabria\nPonte P. Bucci, Cubo 31C, Università della Calabria, Ponte P. Bucci, Cubo 31C, Università della Calabria, Ponte P. Bucci, Cubo 31C87036, 87036, 87036Rende, Rende, RendeCS), (CS), (CSItaly, Italy, Italy",
"LESIA -Observatoire de Paris\nPSL Research University\n5 place Jules Janssen92190MeudonFrance"
]
| []
| The recent discovery of the planetary system hosted by the ultracool dwarf star TRAPPIST-1 could open new perspectives into the investigation of planetary climates of Earth-sized exoplanets, their atmospheres and their possible habitability. In this paper, we use a simple climate-vegetation energy-balance model to study the climate of the seven TRAPPIST-1 planets and the climate dependence on the global albedo, on the fraction of vegetation that could cover their surfaces and on the different greenhouse conditions. The model allows us to investigate whether liquid water could be maintained on the planetary surfaces (i.e., by defining a "surface water zone") in different planetary conditions, with or without the presence of greenhouse effect.It is shown that planet TRAPPIST-1d seems to be the most stable from an Earth-like perspective, since it resides in the surface water zone for a wide range of reasonable values of the model parameters. Moreover, according to the model outer planets (f, g and h) cannot host liquid water on their surfaces, even for Earth-like conditions, entering a snowball state. Although very simple, the model allows to extract the main features of the TRAPPIST-1 planetary climates. | 10.3847/1538-4357/aa78a2 | [
"https://arxiv.org/pdf/1706.06005v1.pdf"
]
| 118,972,556 | 1706.06005 | ed1ebb046455833e5497ae54b6570a5b632e06d1 |
Comparative Climates of TRAPPIST-1 planetary system: results from a simple climate-vegetation model
Tommaso Alberti [email protected]
Vincenzo Carbone
Fabio Lepreti
Antonio Vecchio
Dipartimento di Fisica, Università della Calabria, Ponte P. Bucci, Cubo 31C, 87036 Rende (CS), Italy
LESIA - Observatoire de Paris, PSL Research University, 5 place Jules Janssen, 92190 Meudon, France
Comparative Climates of TRAPPIST-1 planetary system: results from a simple climate-vegetation model
Received ; accepted. arXiv:1706.06005v1 [astro-ph.EP] 19 Jun 2017. Subject headings: planets and satellites: atmospheres, planets and satellites: terrestrial planets
The recent discovery of the planetary system hosted by the ultracool dwarf star TRAPPIST-1 could open new perspectives into the investigation of planetary climates of Earth-sized exoplanets, their atmospheres and their possible habitability. In this paper, we use a simple climate-vegetation energy-balance model to study the climate of the seven TRAPPIST-1 planets and the climate dependence on the global albedo, on the fraction of vegetation that could cover their surfaces and on the different greenhouse conditions. The model allows us to investigate whether liquid water could be maintained on the planetary surfaces (i.e., by defining a "surface water zone") in different planetary conditions, with or without the presence of greenhouse effect.It is shown that planet TRAPPIST-1d seems to be the most stable from an Earth-like perspective, since it resides in the surface water zone for a wide range of reasonable values of the model parameters. Moreover, according to the model outer planets (f, g and h) cannot host liquid water on their surfaces, even for Earth-like conditions, entering a snowball state. Although very simple, the model allows to extract the main features of the TRAPPIST-1 planetary climates.
Introduction
The sharp acceleration of exoplanet discovery in recent years (NASA Exoplanet Archive 2017; Mayor & Queloz 1995; Marcy & Butler 1996; Petigura et al. 2013; Gillon et al. 2016, 2017) and the presumed habitability of some of them (Kasting et al. 1993; Scharf 2009; Spiegel et al. 2008; Kopparapu et al. 2013; Gillon et al. 2017) are changing our point of view on planetary science.
According to the usual definition (Kasting et al. 1993; Kopparapu et al. 2013), a planet resides in the so-called circumstellar habitable zone (HZ) if, being a terrestrial-mass planet with a CO 2 -H 2 O-N 2 atmosphere, it can sustain liquid water on its surface (Kasting et al. 1993; Kopparapu et al. 2013). The above requirements, coupled with the assumption of an Earth-like geology for the resulting greenhouse effect and carbon-silicate weathering cycle, imply that the surface temperature must be in the range 0-100 °C. Typically, apart from orbital features (e.g., eccentricity, period, transit time, inclination) and rough estimates of mass and radius, little information is directly known about exoplanets. For instance, the planetary surface temperature can be roughly estimated by using equilibrium conditions from energy-balance climate models depending on the distance of the planet from the hosting star and planetary outgoing energy. However, in such cases these estimates could be incorrect, since no information about the planetary atmosphere is included in these models (as for Venus, which has an estimated temperature of ∼ 300 K while the true surface temperature is about 737 K). Nevertheless, more complex energy-balance climate models, including the greenhouse effect and/or heat diffusion, provide insight into the climate on a planet (Alberti et al. 2015, and references therein).
Despite some recent comments on the metrics used to define the habitable zone in relation to public interest in scientific results (Tasker et al. 2017; Moore et al. 2017), the recently discovered TRAPPIST-1 system (Gillon et al. 2016, 2017), formed by seven temperate (with equilibrium temperatures below 400 K) Earth-sized planets orbiting around a nearby ultracool dwarf, increased the attention to studying climate conditions of terrestrial exoplanets (Bolmont et al. 2017; O'Malley-James & Kaltenegger 2017; Bourrier et al. 2017; Wolf 2017). Since, as pointed out above, the estimates of equilibrium temperatures which do not take into account the greenhouse effect and the albedo feedback (so-called null Bond albedo hypothesis) cannot be sufficiently reliable, improved and advanced climate models, considering the planetary albedo and the atmospheric composition, are required (Wolf 2017). By using both a 1-D radiative-convective climate model and a more sophisticated 3-D model, Gillon et al. (2017) found that inner planets, T-b, T-c, and T-d (in the following we indicate as T-x the x-th TRAPPIST-1 planet), show a runaway greenhouse scenario, while outer planets, T-e, T-f, and T-g, could host water oceans on their surfaces, assuming an Earth-like atmosphere. Concerning the seventh planet T-h, due to the low stellar irradiance received, it cannot sustain surface liquid water oceans. However, since only little is known about the planetary system, several approaches and hypotheses can be helpful in investigating both planetary climates and atmospheric composition (De Wit et al. 2016; Bolmont et al. 2017; O'Malley-James & Kaltenegger 2017; Wolf 2017). One of the drawbacks of the more detailed climate models is the necessary large pool of assumptions of atmospheric and surface conditions. In this paper we investigate the possible climates of the TRAPPIST-1 planetary system by using a simple zero-dimensional energy-balance model (Rombouts & Ghil 2015; Alberti et al. 2015), which allows the extraction of global information on the climate evolution by using the actual knowledge about the planetary system. This model has the advantage of transparency through minimal assumptions, allowing a comparative set of models to be studied. We study several situations, from completely rocky planets to Earth-like conditions, both neglecting and considering the greenhouse effect, to explore different possible climates and make a comparative study of TRAPPIST-1 planetary system climates.
The climate model
The main features of the climate dynamics of Earth-like planets can be recovered through a zero-dimensional model based on two equations describing the time evolution of the global average temperature T and of the fraction of land A covered by vegetation:
$C_T\,\dfrac{dT}{dt} = \left[1 - \alpha(T, A)\right] S(a, L_\star) - R(T)\,,$ (1)
$\dfrac{dA}{dt} = A\left[\beta(T)(1 - A) - \gamma\right].$ (2)
Here C T is the planet heat capacity, α(T, A) is the planetary albedo, S(a, L ) = L /(4πa 2 )
is the mean incoming radiation which depends on the star-planet distance a (in au) and on the star luminosity L , R(T ) is the outgoing energy from the planet, β(T ) and γ are the vegetation growth and death rates, respectively (Watson & Lovelock 1983;Rombouts & Ghil 2015;Alberti et al. 2015). The albedo of the planet depends on the fraction of land
p, namely α(T, A) = (1 − p)α o (T ) + p[α v A + α g (1 − A)],
where α o , α v and α g represent the albedos of ocean, vegetation and bare-ground, respectively. The albedo of the ocean is assumed to be linearly dependent on temperature as
$\alpha_o(T) = \alpha_{\max} + (\alpha_{\min} - \alpha_{\max})\,\dfrac{T - T_{\mathrm{low}}}{T_{\mathrm{up}} - T_{\mathrm{low}}}$
in a range of temperatures T ∈ [T low , T up ], resulting in α o (T ) = α max for an ocean completely covered by ice (T ≤ T low ) and α o (T ) = α min for an ice-free ocean (T ≥ T up ).
The outgoing energy is described by a black-body radiation process, modulated by a grayness function, in order to take into account the greenhouse effect
$R(T) = \left[1 - m \tanh\!\left(\dfrac{T}{T_0}\right)^{6}\right]\sigma T^{4}\,,$ (3)
where σ = 5.67 × 10 −8 W m −2 K −4 is the Stefan-Boltzmann constant, m ∈ [0, 1] is a grayness parameter (m = 0.5 − 0.6 for an Earth-like planet (Sellers 1969;Alberti et al. 2015)), and T 0 represents the mean global planetary temperature. The growth-rate β(T ) of vegetation is a quadratic function of temperature, β(T ) = max [0; 1 − k(T − T opt ) 2 ] (being k a parameter for the growth curve width and T opt an optimal temperature), while the death-rate γ is assumed to be constant (Watson & Lovelock 1983;Alberti et al. 2015).
The free parameters k, T opt , γ and α v are related to vegetation (as in Rombouts & Ghil 2015;Alberti et al. 2015). In the following, since we assume an Earth-like vegetation, these parameters are set to Earth's conditions. A complete list of the used parameters and their corresponding values is shown in Table 1.
As shown in previous studies (Sellers 1969; Watson & Lovelock 1983; Rombouts & Ghil 2015; Alberti et al. 2015, and references therein), this set of parameters produces results in agreement with the observed Earth's surface temperature. In particular, the model also shows oscillatory solutions which can reproduce the observed sawtooth-like behavior of paleoclimate changes (see Rombouts & Ghil 2015, for more details). By using the above set of parameters, we perform a parametric study of the solutions as functions of the initial fraction A 0 of land covered by vegetation and of the bare-ground albedo α g . Moreover, we use different values of p and m in order to investigate the effect of land/ocean distribution and the role of the greenhouse effect on planetary climates. We define a "surface water zone (SWZ)" as the circumstellar region where the planetary surface temperature ranges between 273 K and 373 K. It depends on the set {θ} of the variable parameters of the model and can be expressed as a step-wise function
$\mathrm{SWZ}(\{\theta\}) = \begin{cases} 1 & \text{if } 273\,\mathrm{K} \le T \le 373\,\mathrm{K}\,, \\ 0 & \text{otherwise}\,. \end{cases}$ (4)
Note that SWZ({θ}), in the parameter space {θ}, generally defines a range where equilibrium temperatures calculated from the model are compatible with the presence of liquid water on the planetary surface, independently from the atmospheric composition.
Table 1. Values of the model parameters.
Symbol | Value | Units
C T | 500 | W yr K^-1 m^-2
α v | 0.1 | -
α g | 0.4 | -
α max | 0.85 | -
α min | 0.25 | -
T low | 263 | K
T up | 300 | K
T opt | 283 | K
k | 0.004 | yr^-1 K^-2
γ | 0.1 | yr^-1
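For readers who wish to reproduce the qualitative behavior of the model, the following minimal Python sketch implements Eqs. (1)-(4) with the parameter values of Table 1. It is an illustrative reconstruction written for this edition, not the authors' code; the integrator settings (time step, integration length) and all function names are choices made here for clarity.

```python
import numpy as np

# Model parameters from Table 1 (Earth-like vegetation constants)
C_T = 500.0                          # planetary heat capacity [W yr K^-1 m^-2]
ALPHA_V, ALPHA_G = 0.1, 0.4          # vegetation / bare-ground albedo
ALPHA_MAX, ALPHA_MIN = 0.85, 0.25    # ocean albedo limits
T_LOW, T_UP, T_OPT = 263.0, 300.0, 283.0   # [K]
K_GROWTH, GAMMA = 0.004, 0.1         # [yr^-1 K^-2], [yr^-1]
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant [W m^-2 K^-4]

def ocean_albedo(T):
    """Linear ramp between ice-covered (T <= T_low) and ice-free (T >= T_up) ocean."""
    a = ALPHA_MAX + (ALPHA_MIN - ALPHA_MAX) * (T - T_LOW) / (T_UP - T_LOW)
    return np.clip(a, ALPHA_MIN, ALPHA_MAX)

def albedo(T, A, p):
    """Planetary albedo: ocean fraction (1-p) plus land fraction p split into
    vegetation (A) and bare ground (1-A)."""
    return (1.0 - p) * ocean_albedo(T) + p * (ALPHA_V * A + ALPHA_G * (1.0 - A))

def outgoing(T, m, T0):
    """Black-body emission modulated by the grayness (greenhouse) factor, Eq. (3)."""
    return (1.0 - m * np.tanh((T / T0) ** 6)) * SIGMA * T ** 4

def rhs(T, A, S, p, m, T0):
    """Right-hand sides of Eqs. (1) and (2)."""
    dT = ((1.0 - albedo(T, A, p)) * S - outgoing(T, m, T0)) / C_T
    beta = max(0.0, 1.0 - K_GROWTH * (T - T_OPT) ** 2)
    dA = A * (beta * (1.0 - A) - GAMMA)
    return dT, dA

def integrate(T_init, A0, S, p, m, T0_ref, dt=0.1, n_years=5000):
    """Second-order Runge-Kutta (midpoint) integration towards a stationary state."""
    T, A = T_init, A0
    for _ in range(int(n_years / dt)):
        k1T, k1A = rhs(T, A, S, p, m, T0_ref)
        k2T, k2A = rhs(T + 0.5 * dt * k1T, A + 0.5 * dt * k1A, S, p, m, T0_ref)
        T, A = T + dt * k2T, A + dt * k2A
    return T, A

def in_surface_water_zone(T):
    """Step-wise SWZ indicator of Eq. (4)."""
    return 273.0 <= T <= 373.0
```

The integrate routine mirrors the second-order Runge-Kutta scheme used in the next section and returns the stationary temperature and vegetation fraction that are then tested against the SWZ indicator.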
TRAPPIST-1 planetary climates
The possible climates of the TRAPPIST-1 planetary system are investigated by numerically solving Eq.s (1)-(2) through a second order Runge-Kutta scheme for time integration and looking at the stationary equilibrium solutions (Alberti et al. 2015). The luminosity of the star is set to
$S(a, L_\star) = \dfrac{0.0005}{a^{2}}\, S_\odot\,,$ (5)
where, based on the stellar properties of TRAPPIST-1 (Gillon et al. 2016), we assumed that L⋆ = 0.0005 L⊙, d_p = a d (being d = 1 au the Sun-Earth distance), and S⊙ = L⊙/(4πd²) = 342.5 W m⁻² is the mean solar radiation observed at the top of the Earth's atmosphere. The initial temperatures are set equal to the equilibrium temperatures obtained by assuming a null Bond albedo (see Table 1 in Gillon et al. 2017) and the scale parameter a is chosen as the mean distance of each T-x planet to the TRAPPIST-1 star.
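As a usage illustration, the short driver below (continuing the sketch given after Table 1) evaluates the stellar forcing of Eq. (5), the corresponding null-Bond-albedo equilibrium temperatures, and the stationary state for the rocky case (p = 1, A = 0, m = 0). The semi-major axes are approximate literature values inserted purely for illustration; they are not taken from this paper and should be replaced by the values actually adopted by the authors.

```python
# Illustrative driver for the TRAPPIST-1 planets (uses the functions defined above).
S_SUN = 342.5          # mean solar radiation at the top of Earth's atmosphere [W m^-2]
# Approximate semi-major axes in au (placeholder values, not from this paper)
planets = {"b": 0.0115, "c": 0.0158, "d": 0.0222, "e": 0.0292,
           "f": 0.0385, "g": 0.0468, "h": 0.0619}

for name, a in planets.items():
    S = 0.0005 * S_SUN / a ** 2                 # stellar forcing, Eq. (5)
    T_eq = (S / SIGMA) ** 0.25                  # null-Bond-albedo equilibrium temperature
    # Stationary solution for a rocky planet (p = 1) without vegetation or greenhouse effect
    T_fin, A_fin = integrate(T_eq, 0.0, S, p=1.0, m=0.0, T0_ref=T_eq)
    print(f"T-{name}: S = {S:7.1f} W/m^2, T_eq = {T_eq:5.1f} K, "
          f"T_stat = {T_fin:5.1f} K, SWZ = {in_surface_water_zone(T_fin)}")
```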
First of all we consider the case of rocky planets (p = 1) with no vegetation (A = 0) and no greenhouse effect (m = 0). In Figure 1 we show the stationary solutions for the temperature of the planets, obtained from Eq.s (1)-(2), as functions of the star-planet distance, for different values of the bare-ground albedo α g .
When α g is set to zero (i.e., for black dots in Figure 1), the solutions reported in Table 1 of Gillon et al. (2017) are obtained, since they are equilibrium solutions of an energy-balance climate model with null Bond albedo hypothesis. Moreover, in this case only T-c and T-d reside in the surface water zone (SWZ). However, as the bare-ground albedo α g is changed, different conditions can be observed, that is, as α g increases some T-x planets can enter or exit the surface water zone. For example, T-b could host liquid water on its surface only for a range of α g close to α g ≈ 0.5. This suggests that in the simple case when planets are mainly rocky and without atmospheres, their residence in the surface water zone is dependent on their surface albedo. This consequently implies that the vegetation coverage is a main feedback acting as a thermal regulator for planetary temperature.
For the above reasons, in the following we investigate the climate properties of the planetary system when planetary conditions, related to different surface vegetation coverage and bare-ground albedo, are considered. We will show, in detail, T-x temperatures for three different situations: i) rocky planets without oceans or ice (p = 1) and without greenhouse effect (m = 0); ii) Earth-like land distribution (p = 0.3) without greenhouse effect (m = 0); iii) Earth-like land distribution (p = 0.3) with a greenhouse effect similar to that observed on Earth (m = 0.6). This gradual approach is useful to investigate planetary climates starting from different conditions, in order to make a comparative study on the possible climates of TRAPPIST-1 planets by considering several possible situations.
The stationary solutions for temperatures in the plane (α g , A 0 ) are shown in Figure 2 for the case of rocky planets (p = 1). As said before, only the first three planets can be in the surface water zone when no atmosphere is considered (m = 0). Moving from low to high values of A 0 , planetary surface temperature changes accordingly, increasing as A 0 increases. More specifically, for A 0 ≥ 0.5 T-c displays only a weak temperature variation with α g . This indicates that vegetation acts as a feedback to maintain conditions for which SWZ({θ}) = 1. A similar behavior is recovered for T-d but for higher values of A 0 (A 0 ≳ 0.8), suggesting that this exoplanet should be almost completely covered by vegetation to reside in the surface water zone. Conversely, due to its lower distance from the star, T-b shows an opposite behavior, entering the surface water zone for lower values of A 0 , namely A 0 ≲ 0.5. For planets T-e, T-f, T-g and T-h, although the equilibrium temperature increases with A 0 , the stellar irradiance is not enough to have global temperatures compatible with a surface water zone. We remark that the presence of liquid water on a planet without atmosphere does not make intuitive sense. The atmospheric envelope is indeed a fundamental component of a climate system to develop and maintain conditions for life on a planet. Therefore the case just discussed is presented mainly for a comparison of surface temperature conditions with the more Earth-like cases which are reported below.
We turn now to a situation where planets have an Earth-like land distribution. This is done by setting p = 0.3, corresponding to a planet covered by land, ocean and ice. We also assume that the greenhouse effect is negligible (m = 0). Figure 3 shows the stationary solutions for temperatures when p = 0.3 and m = 0. The main difference with the previous case concerns planet T-d. Indeed, when an Earth-like land distribution is considered, the global surface temperature is always lower than 273 K, that is, T-d is not in the surface water zone. For both T-b and T-c there are wide ranges of the parameters for which SWZ({θ}) = 1, while the other planets cannot host liquid water on their surfaces. The observed changes in equilibrium temperatures suggest that oceans play a primary role in setting the thermal equilibrium conditions for planetary surface temperature, at least when the greenhouse effect is not considered.

Let us now consider the same land distribution (p = 0.3), but with an Earth-like greenhouse (m = 0.6). The stationary solutions for temperatures are shown in Figure 4. The planetary climates change with respect to the previous case, since temperatures increase. In particular, T-d again resides in the surface water zone for a wide range of A 0 and α g , although for A 0 < 0.5 the planet leaves that zone when α g > 0.5. As for the previous cases, even using the Earth's value of m, outer exoplanets cannot reach global surface temperatures in the range [273 K, 373 K]. This implies that these planets need a different greenhouse effect with respect to the Earth, and consequently a different atmospheric composition, to enter the surface water zone. These results are quite different from those by Gillon et al. (2017), who showed that outer planets, with Earth-like atmospheres, could host water oceans on their surfaces. This could be related to the main difference between our model and that used in Gillon et al. (2017). While in our model the greenhouse effect is included by using a parametric approach (Sellers 1969; Alberti et al. 2015), Gillon et al. (2017) utilized a 1-D radiative-convective cloud-free model in which the greenhouse effect is taken into account by considering the contribution of several types of greenhouse gases (e.g., CO 2 , H 2 O, N 2 ) with different partial pressures (Wordsworth et al. 2010). However, our results are quite in agreement with those reported by Wolf (2017), who showed, by using a 3-D climate model, that outer planets (i.e., T-f, T-g, and T-h) are not warmed enough, falling beyond the habitable zone and entering a snowball state. On the other hand, inner planets T-b and T-c show higher surface temperatures, so that to reside in the surface water zone their greenhouse effect should be similar to or less efficient than on the Earth. In particular, T-b cannot be in the surface water zone for Earth-like greenhouse effect, while, when low values of A 0 and high values of α g are considered, T-c is in the surface water zone for a narrow range of the model parameters. These results on inner planets are also in agreement with Wolf (2017), who reported that the inner three planets (T-b, T-c, and T-d) could reside in the traditional liquid water habitable zone but only with runaway greenhouse conditions.
In particular, it is interesting to see whether T-e, which is at the center of the system, could enter the surface water zone, since previous studies by Wolf (2017) have suggested this planet has the best chance to have water oceans on its surface. For this reason, we investigate changes in the surface water zone keeping fixed α g = 0.4 and varying both p and m.
In Figure 5 we show the stationary solutions for temperatures in the plane (p, m). As values of m, suggesting that its atmosphere must have higher levels of greenhouse gases, such as CO 2 or N 2 , with respect to those observed on Earth. This result is quite different from that obtained by Wolf (2017) according to which T-e has the best chance to be a habitable ocean-covered planet. This discrepancy could be related to the fact that our model considers the greenhouse effect in a parametric way, while the 3-D model used in Wolf (2017) directly uses the contribution of several types of greenhouse gases, as N 2 and CO 2 (similarly to Wordsworth et al. 2010). Indeed, a zero-dimensional climate model can determine the effective planetary emissivity of long wave radiation emitted to space, while a radiative-convective model considers different processes of energy transport, from radiative transfer through atmospheric layers to heat transport by convection. This allows to directly investigate the effects of varying greenhouse gas concentrations on thermal energy balance.
However, although the results are different, the number of unknowns is such that it is not possible to know which climate model is more likely. Finally, outer planets (T-f, T-g and T-h) seem to be not in the surface water zone, entering a snowball state (Wolf 2017), even if the greenhouse effect increases to higher levels than those observed on Earth.
Conclusions
In this paper we investigated the climate of the TRAPPIST-1 planetary system by using a zero-th order energy-balance model which allows us to outline the main features of the different planets. We found that the surface water zone, defined as the circumstellar region where a planet can host liquid water on its surface, is strongly dependent on the different parameters of the model and, in particular, on the initial fraction of vegetation coverage, the bare-ground albedo and the presence of oceans. More specifically, the "inner" three planets T-b, T-c, and T-d seem to be located in the surface water zone for several values of the parameters, as described before, while planet T-e, at variance with what has been reported in Gillon et al. (2017), can present water oceans only for greenhouse effect conditions different from the Earth. The climate of planet T-d seems to be the most stable from an Earth-like perspective, because this planet resides in the range of SWZ({θ}) = 1 for a wide interval of reasonable values of the different parameters. This result is not in agreement with that reported by Wolf (2017), for which the best candidate for a habitable ocean-covered surface is the planet T-e. This difference could be related to the different models employed, as in our energy-balance model a parametric description of the greenhouse effect is used, while in the 3D climate model by Wolf (2017) the contribution of several types of greenhouse gases is taken into account. However, since the number of unknowns makes it difficult to choose one model with respect to another, different approaches, based either on simple or more complex climate models, can be useful. In this framework, our model has the advantage of transparency through minimal assumptions, allowing a comparative set of cases to be studied.
Here we showed that the TRAPPIST-1 system can have different climates and that equilibrium temperatures depend on the global albedo, that is, on the mean physical conditions of the planetary surface. However, this parameter is strongly variable, since the vegetation could cover only a fraction of the surface, as is, for example, the case for the Earth.
Moreover, also the greenhouse effect needs to be properly considered, since it is one of the main feedbacks in regulating the thermal energy balance. Investigating these features requires more sophisticated models, extended to space variables, at least with a description of the atmospheric heat diffusion. Such a model is currently under investigation and results will be reported in a forthcoming paper.
We acknowledge S. Savaglio for useful discussions and for her interest in our work. We thank the anonymous reviewer for fruitful and helpful suggestions.
Fig. 1. - Equilibrium solutions for temperatures as functions of the star-planet distance for four values of α g . Values refer to rocky planets without vegetation and greenhouse effect (p = 1, A = 0, m = 0). The red dashed lines represent the temperature range for which planets are in the SWZ.
Fig. 2. - Equilibrium solutions for temperatures in the plane (α g , A 0 ), for rocky planets without vegetation and greenhouse effect (p = 1, m = 0). The surface water zone, when present, is shown by dashed lines.
Fig. 3. - Equilibrium solutions for temperatures in the plane (α g , A 0 ), for planets with a fraction of land and oceans similar to the Earth (p = 0.3). Greenhouse effect is not included (m = 0). The surface water zone, when present, is shown through dashed lines.
Fig. 4. - Equilibrium solutions for temperatures in the plane (α g , A 0 ), for planets with an Earth-like fraction of land and oceans and greenhouse effect (p = 0.3, m = 0.6). The surface water zone, when present, is shown through dashed lines.
Fig. 5. - Equilibrium solutions for temperatures in the plane (p, m) for Earth-like bare-ground conditions (α g = 0.4). The surface water zone, when present, is shown by dashed lines.
Alberti, T., Primavera, L., Vecchio, A., Lepreti, F. & Carbone, V. 2015, Phys. Rev. E, 92
Bolmont, E., Selsis, F., Owen, J. E., Ribas, I., Raymond, S. N., Leconte, J. & Gillon, M. 2017, MNRAS, 464, 3728
Bourrier, V., Ehrenreich, D., Wheatley, P. J., et al. 2017, Astron. Astrophys., 599, L3
De Wit, J., et al. 2016, Nature, 537, 69
Gillon, M., et al. 2016, Nature, 533, 221
Gillon, M., et al. 2017, Nature, 542, 456
Kasting, J. F., Whitmire, D. P. & Reynolds, R. T. 1993, Icarus, 101, 108
Kopparapu, R. K., et al. 2013, ApJ, 765, 131
Mayor, M. & Queloz, D. 1995, Nature, 378, 355
Marcy, G. W. & Butler, R. P. 1996, ApJ, 464, L147
Moore, W. B., Lenardic, A., Jellinek, A. M., Johnson, C. L., Goldblatt, C. & Lorenz, R. D. 2017, Nature Astronomy, 1, 0043
NASA Exoplanet Archive (California Institute of Technology, 2017), http://go.nature.com/2jqeO98
O'Malley-James, J. T. & Kaltenegger, L. 2017, MNRAS Lett., 469, L26
Petigura, E. A., Howard, A. W. & Marcy, G. W. 2013, PNAS, 110 (48), 19273
Rombouts, J. & Ghil, M. 2015, Nonlin. Proc. Geophys., 22, 275-288
Scharf, C. A. 2009, Extrasolar Planets and Astrobiology (University Science Books)
Sellers, W. D. 1969, J. Appl. Meteorol., 8, 392-400
Spiegel, D. S., Menou, K. & Scharf, C. A. 2008, ApJ, 681, 1609
Tasker, E., et al. 2017, Nature Astronomy, 1, 0042
Watson, A. J. & Lovelock, J. E. 1983, Tellus B, 35, 284-289
Wolf, E. T. 2017, ApJ, 839, L1
Wordsworth, R. D., Forget, F., Selsis, F., et al. 2010, Astron. Astrophys., 522, A22
This manuscript was prepared with the AAS LaTeX macros v5.2.
| []
|
[
"Creep properties and deformation mechanisms of single- crystalline γ′-strengthened superalloys in dependence of the Co/Ni ratio",
"Creep properties and deformation mechanisms of single- crystalline γ′-strengthened superalloys in dependence of the Co/Ni ratio"
]
| [
"N Volz \nDepartment of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany\n",
"C H Zenk \nDepartment of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany\n",
"N Karpstein \nDepartment of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany\n",
"M Lenz \nDepartment of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany\n",
"E Spiecker \nDepartment of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany\n",
"M Göken \nDepartment of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany\n",
"S Neumeier \nDepartment of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany\n"
]
| [
"Department of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany",
"Department of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany",
"Department of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany",
"Department of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany",
"Department of Materials Science & Engineering\nCenter for Nanoanalysis and Electron Microscopy (CENEM)\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute of Micro-and Nanostructure Research\nFriedrich Alexander University of Erlangen-Nuremberg\n91058IZNF, ErlangenGermany",
"Department of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany",
"Department of Materials Science & Engineering\nFriedrich-Alexander-Universität Erlangen-Nürnberg\nInstitute I: General Materials Properties\n91058ErlangenGermany"
]
| []
| Co-base superalloys are considered as promising high temperature materials besides the wellestablished Ni-base superalloys. However, Ni appears to be an indispensable alloying element also in Co-base superalloys. To address the influence of the base elements on the deformation behavior, high-temperature compressive creep experiments were performed on a single crystal alloy series that was designed to exhibit a varying Co/Ni ratio and a constant Al, W and Cr content. Creep tests were performed at 900 °C and 250 MPa and the resulting microstructures and defect configurations were characterized via electron microscopy. The minimum creep rates differ by more than one order of magnitude with changing Co/Ni ratio. An intermediate CoNi-base alloy exhibits the overall highest creep strength. Several strengthening contributions like solid solution strengthening of the γ phase, effective diffusion coefficients or stacking fault energies were quantified. Precipitate shearing mechanisms differ significantly when the base element content is varied. While the Ni-rich superalloys exhibit SISF and SESF shearing, the Co-rich alloys develop extended APBs when the γ′ phase is cut. This is mainly attributed to a 2 difference in planar fault energies, caused by a changing segregation behavior. As result, it is assumed that the shearing resistivity and the occurring deformation mechanisms in the γ′ phase are crucial for the creep properties of the investigated alloy series. | 10.1080/14786435.2021.2017051 | [
"https://arxiv.org/pdf/2109.06767v1.pdf"
]
| 237,503,131 | 2109.06767 | b872156620dfef4e9a435a0418ee557b165b2fee |
Creep properties and deformation mechanisms of single- crystalline γ′-strengthened superalloys in dependence of the Co/Ni ratio
N. Volz (1), C. H. Zenk (1), N. Karpstein (2), M. Lenz (2), E. Spiecker (2), M. Göken (1), S. Neumeier (1)
(1) Institute I: General Materials Properties, Department of Materials Science & Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
(2) Institute of Micro- and Nanostructure Research and Center for Nanoanalysis and Electron Microscopy (CENEM), Department of Materials Science & Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, IZNF, 91058 Erlangen, Germany
Creep properties and deformation mechanisms of single- crystalline γ′-strengthened superalloys in dependence of the Co/Ni ratio
Co-base superalloys are considered as promising high temperature materials besides the well-established Ni-base superalloys. However, Ni appears to be an indispensable alloying element also in Co-base superalloys. To address the influence of the base elements on the deformation behavior, high-temperature compressive creep experiments were performed on a single crystal alloy series that was designed to exhibit a varying Co/Ni ratio and a constant Al, W and Cr content. Creep tests were performed at 900 °C and 250 MPa and the resulting microstructures and defect configurations were characterized via electron microscopy. The minimum creep rates differ by more than one order of magnitude with changing Co/Ni ratio. An intermediate CoNi-base alloy exhibits the overall highest creep strength. Several strengthening contributions like solid solution strengthening of the γ phase, effective diffusion coefficients or stacking fault energies were quantified. Precipitate shearing mechanisms differ significantly when the base element content is varied. While the Ni-rich superalloys exhibit SISF and SESF shearing, the Co-rich alloys develop extended APBs when the γ′ phase is cut. This is mainly attributed to a difference in planar fault energies, caused by a changing segregation behavior. As a result, it is assumed that the shearing resistivity and the occurring deformation mechanisms in the γ′ phase are crucial for the creep properties of the investigated alloy series.
Introduction
Plenty of studies on creep deformation mechanisms have been carried out on various Co-and CoNi-based alloys under different test conditions to analyze their potential compared to conventional Ni-base superalloys [1][2][3][4][5][6][7][8][9][10][11][12]. Depending on alloy composition, loading direction, temperature and applied stress, the deformation mechanisms differ significantly. For example, Suzuki and Pollock [1] reported that tensile creep deformation in the temperature range from 600 °C to 900 °C and at stresses between 240 MPa and 530 MPa is based on matrix deformation by 〈110〉 slip and γ′ deformation on 〈110〉{111} and 〈112〉{111} slip systems. Titus et al. [2] showed that the γ′ precipitates in a Co-base superalloy, tested at 900 °C, are sheared by a/3〈112〉 partial dislocations, which create superlattice intrinsic (SISF) and extrinsic (SESF) stacking faults, whereas in a CoNi-based alloy, γ′ is sheared by a/2〈110〉 dislocations, leaving behind anti-phase boundaries (APBs). Additionally, Eggeler et al. [3] found a fault configuration during tensile creep at 900 °C and 310 MPa where an SISF is embedded in an APB (ASAconfiguration). These configurations differ significantly from the ones observed in Ni-base superalloys, which are known to deform mainly by matrix dislocation movement in the high temperature creep regime [13][14][15][16]. Of course, it has to be considered that the so-called high temperature creep regime is defined to start at lower temperatures for Co-base superalloys (e.g. 900 °C) than for Ni-base superalloys (e.g. 1000 °C). However, deformation by γ′ shearing under the formation of superlattice stacking faults was found for Ni-base superalloys as well, although at lower temperatures like 750 °C and high stresses [17]. At intermediate temperatures of 850 °C it was found that the deformation of Ni-base superalloys is dominated by matrix deformation in the early stages of creep and the resistance of γ′ against shearing is the strengthlimiting factor [18,19]. When γ′ cutting occurred at these temperatures, APB coupled a/2〈110〉 dislocation pairs were observed [18,19].
Another important difference between Co-and Ni-base superalloys, leading to different creep properties, is the sign of the γ/γ′ lattice misfit. While most cast Ni-base superalloys exhibit a negative misfit [20][21][22][23], it is typically positive for Co-base superalloys [1,6,24,25]. The lattice misfit between the γ and γ′ phase -no matter if positive or negative -leads to a directional coarsening (rafting) of the γ′ phase during creep at high temperatures. However, the preferred direction of rafting changes with the sign of the lattice misfit. The precipitates align parallel to the external stress axis in alloys with negative misfit in compression and perpendicular to it under tensile loading. It was found for Co-and Ni-base superalloys that the rafting behavior can have beneficial effects for the creep properties, depending on the test parameters [6,7,26,27].
It was already found that the Ni content in Co-Al-W-based superalloys significantly influences the partitioning behavior of other alloying elements. The changing compositions of the γ and γ′ phases result in a different amount of solid solution strengthening, γ/γ′ lattice misfits [28][29][30] and planar defect energies [2,11,31,32]. As a result, the overall creep properties vary significantly between Co-and Ni-base superalloys. To address the influence of the base elements Co and Ni systematically, a model alloy series has been developed by Zenk et al. [29,30]. They investigated polycrystalline (PX) specimens of alloys with varying Co/Ni ratios.
It was found that the alloying elements distribute more evenly, the lattice misfit switches from negative to positive values and the creep properties deteriorate with increasing Co content [29,30]. The aim of the present study is to analyze how the variation in microstructure and thermophysical properties, which is induced by a change of the Co/Ni ratio, influences the creep properties and deformation behavior of single-crystalline (SX) sample material. Therefore, creep tests at 900 °C and 250 MPa were performed and the resulting defect structures were characterized by scanning (SEM) and transmission electron microscopy (TEM).
2. Experimental procedures
Materials and processing
The nominal compositions of the investigated alloys are given in Table 1. Polycrystalline samples of these alloys were investigated in earlier studies in terms of thermophysical and mechanical properties and the interested reader is referred to [29,30]. It is worth noting that the alloys in those studies additionally contained boron to overcome grain boundary embrittlement.
The number X in the alloy denotations represents the Co-fraction with respect to the overall content of the base elements Co and Ni, i.e. in NCX, X = c(Co) / (c(Co)+c(Ni)) * 100.
Therefore, NC0 is a pure Ni-base alloy and NC100 a pure Co-base superalloy, whereas NC25, NC50 and NC75 are intermediate alloys with increasing Co-content. To study the deformation mechanisms in more detail, single-crystalline rods of these compositions were produced using the Bridgman process at withdrawal rates of 3 mm/min. EBSD measurements were used to determine the misorientation of the cast material. Based on these measurements, 〈001〉-oriented segments were extracted from the rods with a deviation less than 5°. Creep specimens and samples for microstructure analysis were prepared from these segments after heat-treatments. All alloys were solution annealed at 1250 °C for 24 h and aged at 900 °C for 100 h in vacuum to provide a homogeneous two-phase γ/γ′ microstructure. The cooling rate between the two heat-treatment temperatures and from 900 °C to room temperature was approximately 300 °C/h. For SEM microstructure analysis, the samples were ground to 4000 grit and mechanically polished using diamond suspensions, followed by a chemomechanical fine polishing (Struers, OPS). The microstructure was investigated using a Zeiss
Crossbeam 1450 EsB and backscattered electron imaging (BSD). The γ′ area fraction was measured by ImageJ. From that the γ′ volume fraction was calculated according to the shape factor of the precipitates, as described in [33].
Cylindrical samples with a diameter of about 5 mm and a height of about 7.5 mm were used to perform compression creep tests at 900 °C and 250 MPa. TEM specimens were produced by precision-cutting of disks of about 200 µm thickness, followed by grinding to 2500 grit. The final thinning was done using a twin-jet polishing machine with a 60 % perchloric acid in methanol and 2-butoxyethanol electrolyte (Struers, A3 electrolyte). A Philips CM200 at 200 kV and a FEI Titan Themis 3 at 300 kV were used to analyze the deformation structures in TEM bright-field (BF) and dark-field (DF) imaging and scanning TEM (STEM) mode using a high angle annular dark-field (HAADF) detector.
Quantification of strengthening contributions
Thermo-Calc (TC) was used in an attempt to calculate the stacking fault energies of the matrix phase of NC-alloys. The γ compositions were experimentally determined (APT) and are given in Table 3. The model originally developed by Olson and Cohen [34], which is primarily based on the Gibbs energy difference of the fcc-γ and hcp-ε phases, was used for that:
$\Gamma = 2\rho\left(\Delta G^{\gamma \rightarrow \varepsilon} + E_{\mathrm{str}}\right) + 2\sigma^{\gamma/\varepsilon}$ Equation 1
Here Γ is the stacking fault energy, ρ the molar surface density, ΔG^(γ→ε) the molar Gibbs energy difference between the hcp-ε and the fcc-γ phases of the same composition, E_str a molar strain energy term associated with the lattice distortion around the partial dislocations and the stacking fault, and σ^(γ/ε) is the interfacial energy between γ and ε on the {111} stacking fault habit plane.
The molar surface density ρ was calculated from the molar volume of the fcc phase at 900 °C using the TCNI10 database. As the hcp phase in TCNI10 is not sufficiently well described for this purpose, ΔG^(γ→ε) was calculated using the TTNI8 database.
The molar strain energy term was also calculated according to [34] from the molar volumes of the individual phases (TCNI10), the strain ε 33 along the ε-phase's c-axis associated with the γ→ε phase transformation, the Poisson ratio ν and the shear modulus μ:
$E_{\mathrm{str}} \approx \left[\dfrac{2(1-\nu)}{9(1+\nu)}\left(\dfrac{V_{\varepsilon}-V_{\gamma}}{V_{\gamma}}\right)^{2} + \dfrac{7-5\nu}{15(1-\nu)}\,\dfrac{2}{3}\,\varepsilon_{33}^{2}\right]\mu\, V_{m}$
Equation 2
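A schematic implementation of this estimate is sketched below. It only evaluates Equations 1 and 2 once the thermodynamic quantities have been obtained externally (e.g., exported from Thermo-Calc); no Thermo-Calc API calls are reproduced, the way ρ is derived from the molar volume is one common convention rather than necessarily the exact procedure of this work, and the numbers in the example are placeholders.

```python
# Minimal sketch of the Olson-Cohen-type stacking fault energy estimate
# (Equations 1 and 2). Thermodynamic inputs are assumed to be precomputed.

def molar_surface_density(v_molar_fcc):
    """Molar surface density rho of a {111} plane of the fcc phase [mol m^-2].

    One common convention: rho = 4 / (sqrt(3) * a^2 * N_A),
    with the fcc lattice parameter a = (4 V_m / N_A)**(1/3).
    """
    N_A = 6.02214076e23
    a = (4.0 * v_molar_fcc / N_A) ** (1.0 / 3.0)
    return 4.0 / (3.0 ** 0.5 * a ** 2 * N_A)

def strain_energy(v_gamma, v_eps, eps33, nu, mu, v_molar):
    """Molar strain energy term of Equation 2 [J mol^-1] (reconstructed form)."""
    dilatational = 2.0 * (1.0 - nu) / (9.0 * (1.0 + nu)) * ((v_eps - v_gamma) / v_gamma) ** 2
    shear = (7.0 - 5.0 * nu) / (15.0 * (1.0 - nu)) * (2.0 / 3.0) * eps33 ** 2
    return (dilatational + shear) * mu * v_molar

def stacking_fault_energy(rho, dG_gamma_to_eps, E_str, sigma_interface):
    """Equation 1: Gamma = 2*rho*(dG + E_str) + 2*sigma_gamma_eps  [J m^-2]."""
    return 2.0 * rho * (dG_gamma_to_eps + E_str) + 2.0 * sigma_interface

# Placeholder example: V_m ~ 7e-6 m^3/mol, dG ~ 500 J/mol, sigma ~ 0.01 J/m^2
rho = molar_surface_density(7.0e-6)
E_str = strain_energy(7.0e-6, 6.95e-6, -0.0067, 0.33, 61e9, 7.0e-6)
print(stacking_fault_energy(rho, 500.0, E_str, 0.01))   # stacking fault energy in J/m^2
```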
As there is no data on this specific alloy system available, we estimated 33 to be -0.67 % based on various works investigating the martensitic transformation of Co and Co-alloys [35][36][37][38][39][40]. The values for Co range from -0.3 % to -0.8 % in these studies, however, varying 33 in this range does not significantly alter the findings of this study. The value ν was assumed to be 0.33. Since the shear modulus µ of the matrix composition is not known, the value for Haynes188 [41] as given by the official data sheet (61 GPa) at 900 °C was used for all alloys, since the composition is comparable to the NCX alloys and the elastic stiffness is not expected to vary much throughout the system. The strain energy determined in this way was found to be about two orders of magnitude smaller than the Gibbs energy. All values used for the variables in the calculation of the matrix stacking fault energy as well as intermediate and final results are summarized in Table 2. The contribution of the solid solution hardening σss of the γ phase was estimated experimentally using the Labusch theory [43]. This theory was then modified by Gypen and Deruyttere for various alloying elements in multicomponent alloy systems [44,45]. According to this approach and the addition of Varvenne et al. [46] and Galindo-Nava et al. [47] for two-phase alloys, the strengthening contribution of the matrix phase by solid solution hardening can be calculated as:
$\sigma_{ss} = \left(1 - f_{\gamma'}\right)\left[\sum_i \beta_i^{3/2}\, x_i\right]^{2/3}$
Equation 3
The value fγ′ gives the γ′ volume fraction and (1-fγ′) limits the calculation of the solid solution strengthening to the γ phase in the two phase system, since most of the dislocation activity is located in this phase, as shown later. xi is the atomic fraction of the element i in the γ phase of the alloy and taken from APT measurements on PX sample material of the NCX alloy series derived from [30]. Later in the manuscript, the strengthening contribution assuming a single phase fcc alloy with the composition of the γ phase and also the strengthening contribution with respect to the two phase microstructure by taking the different γ′ volume fractions into account will be shown. The constants β of alloying elements were calculated according to Fleischer [48] and describe the lattice and shear modulus misfit between the solute and Ni. It can be calculated according to equation 4.
$\beta_i = \dfrac{3}{2}\,\mu\left(\eta_i' + 16\,\delta_i\right)^{3/2}$
Equation 4
As before, the shear modulus μ of Haynes188 at 900 °C was used for all alloys. The constant η′ describes the difference in shear moduli and can be calculated as η′ i = where r i is the atomic radius of solute and the atomic radius of Nickel. Since the atomic radius and shear modulus of Ni and Co do not differ significantly, Ni was used as reference element for all alloys investigated. Due to this, Co was also not considered as a solid solution strengthening element, since its effect can be neglected in the reference system Ni, according to the applied model. We know that the models described above are considered to be valid only for small solute additions and the composition especially of the base element is changing significantly in our alloy series. Additionally, the solid solution strengthening effect by changing the stacking fault energy is not covered by these models. However, since the effect of Co is negligible due to the small differences regarding atomic size and shear modulus to Ni, application of the models is assumed to be reasonable for our alloys. To calculate the solid solution strengthening at the creep test temperature of 900 °C, also temperature dependent shear moduli and atomic radii were used. Shear moduli were linearly extrapolated to 900 °C from temperature-dependent measurements taken from refs. [49][50][51][52][53] if they were not available directly. The atomic radii of the solutes at 900 °C were taken from thermodynamic calculations using Thermo-Calc with the SGTE unary database version 5.1. All values which were used for the calculations are listed in Table 3.
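To make the bookkeeping of Equations 3 and 4 concrete, the following short sketch evaluates the Gypen-Deruyttere-type summation for a given matrix composition. The strengthening constants β_i are treated as precomputed inputs; the composition and β values in the example are hypothetical placeholders and are not the APT data or Table 3 values of this work.

```python
# Illustrative evaluation of the addition rule of Equation 3 for the gamma matrix.

def sigma_ss(x_gamma, beta, f_gamma_prime):
    """Solid solution strengthening of the two-phase alloy, Equation 3.

    x_gamma       : dict of atomic fractions of solutes i in the gamma phase
    beta          : dict of strengthening constants beta_i (Equation 4 / Table 3)
    f_gamma_prime : gamma-prime volume fraction (weights the matrix contribution)
    """
    inner = sum(beta[el] ** 1.5 * x for el, x in x_gamma.items())
    return (1.0 - f_gamma_prime) * inner ** (2.0 / 3.0)

# Hypothetical example: W, Cr and Al dissolved in the matrix of a Ni-rich alloy
x_example = {"W": 0.06, "Cr": 0.10, "Al": 0.05}
beta_example = {"W": 2000.0, "Cr": 700.0, "Al": 300.0}   # placeholder values [MPa]
print(f"sigma_ss ~ {sigma_ss(x_example, beta_example, f_gamma_prime=0.55):.0f} MPa")
```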
$D_{\mathrm{eff}} = D_0^{\mathrm{eff}} \exp\!\left(-\dfrac{Q_{\mathrm{eff}}}{R\,T}\right)$ Equation 5
where R is the universal gas constant and D_0^eff and Q_eff are the effective frequency factor and the effective activation energy of the diffusing alloying elements, respectively, equivalent to the diffusion of a single solute. They can be calculated as:
$D_0^{\mathrm{eff}} = \left(\sum_i \dfrac{x_i}{D_{0,i}}\right)^{-1}$ Equation 6
$Q_{\mathrm{eff}} = Q_{\mathrm{self}} + \sum_i x_i\, Q_i\,,$
Equation 7
Thus D_0^eff is the harmonic mean of the frequency factors D_0,i of the solutes i in the base element of an alloy. The effective activation energy Q_eff is calculated from the activation energy for self-diffusion in the base element and the activation energies for diffusion of the solutes in the base element, weighted according to the elemental content x_i.
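The effective diffusion coefficient of Equations 5-7 can be evaluated with the short helper below. The frequency factors and activation energies must be supplied from literature diffusion data; the numbers in the example are placeholders, and the form of the Q_eff summation follows the reconstruction of Equation 7 given above.

```python
# Sketch of the effective diffusion coefficient of Equations 5-7.
from math import exp

R_GAS = 8.314  # J mol^-1 K^-1

def effective_diffusivity(x, d0, q, q_self, temperature):
    """D_eff = D_0^eff * exp(-Q_eff / (R T)), using Equations 6 and 7."""
    d0_eff = 1.0 / sum(x_i / d0[el] for el, x_i in x.items())      # harmonic mean, Eq. 6
    q_eff = q_self + sum(x_i * q[el] for el, x_i in x.items())     # Eq. 7 (as reconstructed)
    return d0_eff * exp(-q_eff / (R_GAS * temperature))

# Placeholder example at the creep temperature of 900 °C (1173 K)
x = {"W": 0.05, "Cr": 0.08}                 # atomic fractions of the solutes
d0 = {"W": 8.0e-6, "Cr": 5.2e-5}            # frequency factors [m^2 s^-1], hypothetical
q = {"W": 2.7e5, "Cr": 2.9e5}               # activation energies [J mol^-1], hypothetical
print(effective_diffusivity(x, d0, q, q_self=2.8e5, temperature=1173.15))
```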
Results
Initial state
The two-phase γ/γ′ microstructures of the five alloys NC0, 25, 50, 75 and 100 after solution and aging heat-treatment are shown in Figure 1. While NC0 shows mainly cubic precipitates, the γ′ phase approaches a more globular morphology with increasing Co content, until a Ni/Co ratio of 1:1 (NC50) is reached. When the Co content is further increased, the precipitates start getting cubic again. This indicates a change of the γ/γ′ lattice misfit from likely negative in the Ni-rich alloys to nearly zero in NC50 (see Figure 5c), and to positive values in the Co-rich alloys NC75
and NC100, which confirms the findings of Zenk et al. who investigated the polycrystalline variants of these alloys [30]. Additionally, the contrast inversion between precipitates and matrix indicates a change in the elemental segregation between the two phases γ and γ′ (compare Figure 1 a and e). This is consistent with the findings in [30] where the heavy element W, showing strong electron-backscattering, changes its preferred partitioning from the matrix on the Ni-rich side to the precipitate phase on the Co-rich side. All other alloying elements do not change their partitioning preference, however, they distribute more equally with increasing Co content [30].
Creep properties
The compression creep properties of the five investigated alloys at 900 °C and 250 MPa are shown in Figure 2a. The minimum creep rates, evaluated from these data, are shown in Figure 2b. NC25, a Ni-rich alloy, exhibits the best creep properties. A minimum creep rate of 1.
Directional coarsening during creep
The microstructures after compressive creep tests to plastic strain values of 5-8 % are shown in force for this process due to the near-zero lattice misfit [30]. The rafting in the Ni-rich alloys NC0 and NC25 aligns parallel to the external compressive load axis (see Figure 3 a and b), which is expected for negative γ/γ′ lattice misfit alloys. In NC0, the horizontal channels have not yet closed entirely and the directional coarsening in NC25 seems to have advanced further, despite the fact that the lattice misfit in NC0 is larger and a more pronounced rafting would be expected. However, this can be explained by the test duration: while NC25 was exposed to the test conditions for about 430 h, NC0 was only tested for 240 h. Since rafting is a diffusioncontrolled process, the longer creep test results in a more pronounced directional coarsening [55]. The rafted γ′ microstructure in the alloys with positive lattice misfit, NC75 and NC100, is aligned perpendicular to the external stress (see Figure 3 d and e). Of these two, NC100 exhibits a considerably more evolved raft-microstructure due to the higher lattice misfit.
Deformation mechanisms
All creep tests at 900 °C and 250 MPa were repeated and interrupted at a plastic strain of about 0.2 % to 0.5 % (see Figure 2) to study the active deformation mechanisms in the early stages of creep. The corresponding TEM micrographs from [001] cross-sections extracted perpendicular to the stress axis are shown in Figure 4.
NC0, the Ni-base alloy, predominantly shows matrix deformation at a plastic strain of about 0.5 % (Figure 4 a,f), which is typical for Ni-base superalloys in this creep regime [18,[56][57][58].
Most of the dislocations form networks around the γ′ precipitates as their propagation is effectively hindered by the precipitates. Similar observations of dense dislocation networks at the γ/γ′ interface have also been reported for other Ni-base superalloys [13,[59][60][61][62]. Only occasionally, the γ′ phase is cut by partial dislocations, resulting in the formation of SISFs. The character of the SFs was characterized by analyzing fringe contrasts in dark-field micrographs.
This mechanism also holds true for later stages of creep. The sample crept to about 6 % plastic strain shows predominantly matrix dislocations, which are surrounding the γ′ precipitates Shearing of the γ′ precipitates under the formation of SISFs can also be observed in NC25 after a deformation of only 0.2 %, however, deformation in the matrix via channel dislocation glide seems to be the dominant mechanism, too (Figure 4 b,g). It is worth recognizing that whenever cutting occurs in NC25, the stacking faults extend over several precipitates (but are interrupted in the matrix phase between them), which is in contrast to the mechanism observed in NC0.
This indicates a slightly reduced precipitate stacking fault energy in this alloy compared to NC0, leading to a larger dissociation distance of partial dislocations. Additionally, a mechanism recently described by Eggeler et al. [3] in tensile crept specimens of a Ni-containing Co-base alloy at 900 °C could be observed: an SISF embedded in an APB (labelled as ASA configuration, Figure 4 b). When deformation proceeds to higher strains, the mechanism does not change significantly (Figure 4 l). Besides a higher amount of matrix and interfacial dislocations, also the frequency of cutting events is increasing, however, the resulting planar defects remain SISFs and the ASA-configurations.
NC50 predominantly shows matrix deformation as well, however, sometimes γ′ is sheared and stacking faults extending over several precipitates can be observed (Figure 4 c,h). In contrast to The Co-rich alloy NC75 shows another deformation mechanism in the early creep stage. Again, the highest dislocation activity is observed in the γ matrix, however, when dislocations shear into the precipitates, extended APBs are formed (Figure 4 d, i). These sometimes extend over several precipitates. This was also found in CoNi-based superalloys during tensile creep at 900 °C, where cutting of a/2〈011〉 dislocations were observed creating the APBs [10]. It is also possible that these APBs originally formed as ASA-configurations, however, the SISFs get fully transformed into APBs. The sample crept to higher strains also shows extensive cutting under the formation of APBs accompanied by matrix deformation (Figure 4 n).
In the sample with lower creep strain, alloy NC100 exhibits matrix dislocations moving in pairs with a significant splitting distance (Figure 4 e,j). When shearing of γ′ occurs, even the individual superpartials dissociate to a 4-fold splitting. One of these events can be observed in
Discussion
The creep properties and deformation mechanisms of the NCX alloys differ significantly with the variation of the Co/Ni ratio. According to the TEM investigations, most of the deformation is located in the γ phase, which seems to allow a discussion of the creep properties in terms of solid solution strengthening and directional coarsening, however, all microstructural features like γ′ volume fraction, defect energies or diffusion properties have to be considered as well.
Furthermore, interesting differences with the changing Co and Ni content also occur in early stages of creep, whenever the γ′ precipitates are sheared by dislocations.
Strengthening contributions
In general, the Ni-rich alloys show better creep properties compared to the Co-rich ones (see Figure 2 a and b). This might be explained by the more even distribution of alloying elements on the Co-rich side, which is known from Zenk et al. [30]. Especially W and Cr are strongly enriched in the γ matrix in NC0 and NC25, while the segregation tendency towards γ decreases with increasing Co content. This could lead to an enhanced solid solution strengthening effect in γ for these two Ni-rich alloys compared to the Co-rich ones. To prove that, the solid solution strengthening of the γ compound at 900 °C was calculated using a thermodynamic approach and the combined models of Fleischer [48], Gypen and Deruyttere [44,45] and Galindo-Nava et al. [47], as shown in Figure 5a.
The calculated strengthening contribution, weighted by the γ′ fraction in each alloy (which is also discussed below), decreases systematically with increasing Co content. Among the two Nirich alloys, NC25 outmatches the pure Ni-base superalloy NC0 in creep resistance. It is likely that the addition of Co in NC25 also acts as a solid-solution strengthener, since it is enriched in the γ phase. It is, however, not segregating as strongly as Cr and also the beneficial effects cannot be seen in the calculations, since this was not taken into account by the model presented in chapter 2.2. However, the calculations of solid solution strengthening fit to the results of the creep tests in the way that the creep strength decreases with increasing Co-content.
Interestingly, the trend of the solid solution strengthening is reversed when only the γ phase is considered and it is not weighted by the increasing γ′ volume fraction in the Co-rich alloys. These findings imply that the solid solution strengthening is mainly influenced by the fraction of the matrix phase. While the strength of the pure γ phase slightly increases with increasing Co-content, the effect for the two-phase alloy vanishes since the γ′ fraction increases and the γ fraction decreases accordingly. However, neither calculation can fully explain the creep properties of the investigated alloy series.
The solid solution constants β we calculated using the model by Gypen and Deruyttere [44,45] and used for the estimation of the solid solution strengthening are listed in Table 3 and amount to 525, 506 and 125 MPa/at.%^(2/3) for Al, W and Cr, respectively. These values indicate that Al and W act as nearly equally strong solid solution strengtheners at 900 °C and that both have a higher impact than Cr in the Ni reference system. Recently, Wang et al. [63] also reported W to be a good solid solution strengthener in Ni binary systems while this is not the case for Cr, which is in good agreement with our findings. The high solid solution strengthening character of Al at 900 °C calculated in our study is mainly caused by the stronger temperature dependence of its atomic radius compared to Ni and the other solutes, which was derived by Thermo-Calc. Consequently, the increasing Al content in the γ phase with increasing Co-content dominates the solid solution strengthening in our calculations, since the W content is even slightly reduced.

However, the β values differ significantly from the ones calculated by Galindo-Nava et al. [47] for identical solutes. While the value for Al we find is significantly higher compared to ref. [47], β of W and Cr are much smaller. Two reasons may cause this effect. First, we wanted to calculate the solid solution strengthening at 900 °C and thus linearly extrapolated the shear moduli of the individual elements to high temperatures and computed theoretical atomic radii at 900 °C using Thermo-Calc. As a result, the differences with Galindo-Nava et al. [47] are reasonable, since they calculated solid solution strengthening assuming shear moduli and atomic radii of the solutes not at test temperature. Second, the atomic radii used for the calculations are different, especially the one of Al, which was assumed to be 0.143 nm at room temperature (0.147 nm at 900 °C) compared to 0.124 nm in [47] (0.128 nm at 900 °C, assuming identical thermal behavior). A detailed discussion of the chosen atomic radii and the resulting differences can be found in part A of the supplementary material. In addition to the calculations presented here, the solid solution strengthening was also calculated using Thermo-Calc, where a different model is implemented. The qualitative results are similar to our findings, showing an increase in solid solution strengthening with increasing Co-content. This is described and discussed in detail in part B of the supplementary material.
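To make the superposition rule behind these β values concrete, a minimal sketch is given below. It assumes a Labusch-type c^(2/3) concentration dependence and the common Gypen-Deruyttere combination σss = (Σi βi^(3/2) xi)^(2/3); the exact implementation, the concentration convention (at.% vs. atomic fraction) and the weighting by the γ fraction used for Figure 5a are not reproduced in this excerpt and are therefore assumptions.

```python
# Hedged sketch of a Labusch/Gypen-Deruyttere-type superposition for multi-component
# solid solution strengthening: sigma_ss = (sum_i beta_i**(3/2) * x_i)**(2/3).
# The beta_i values and gamma-phase compositions are taken from Table 3; whether x_i
# enters in at.% or atomic fraction must match the convention used when fitting beta_i
# (assumption), and the weighting by the gamma fraction mirrors Figure 5a only schematically.

BETA = {"Co": 5.2, "Al": 524.5, "W": 505.8, "Cr": 125.0}   # MPa / at.%^(2/3), Table 3

GAMMA_COMPOSITION = {   # APT gamma-phase solute contents in at.% (Table 3); Ni is the reference
    "NC0":   {"Al": 3.9, "W": 8.1, "Cr": 12.0},
    "NC25":  {"Co": 25.1, "Al": 3.8, "W": 7.8, "Cr": 13.4},
    "NC50":  {"Co": 45.6, "Al": 4.9, "W": 6.8, "Cr": 12.2},
    "NC75":  {"Co": 59.7, "Al": 7.2, "W": 6.1, "Cr": 10.9},
    "NC100": {"Co": 75.8, "Al": 8.7, "W": 6.1, "Cr": 9.4},
}

def sigma_ss(composition):
    """Superposition rule sigma_ss = (sum_i beta_i^(3/2) * x_i)^(2/3)."""
    return sum(BETA[el] ** 1.5 * x for el, x in composition.items()) ** (2.0 / 3.0)

def sigma_ss_weighted(alloy, gamma_prime_fraction):
    """Matrix contribution weighted by the remaining gamma fraction (1 - f_gamma_prime)."""
    return (1.0 - gamma_prime_fraction) * sigma_ss(GAMMA_COMPOSITION[alloy])
```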
Although the TEM investigations reveal pronounced deformation in the matrix phase, its solid solution strengthening estimates cannot fully explain the creep behavior and other factors must be at play. Some of the microstructural and thermophysical properties of the alloy series that could help explain the experimental findings are shown in Figure 5 and discussed in the following. Figure 5b illustrates the γ′ volume fraction of the NCX alloys as a function of the Co-content.
The precipitate fraction increases steadily with increasing Co-content. This is different from the findings in [30], where a maximum of the γ′ volume fraction in polycrystalline material was reported for NC75. However, the difference between NC75 and NC100 is small, as is also the case in our study. From NC0 to NC25 the volume fraction increases by about 15 %, which might explain the better creep properties of this alloy [64], even if the solid solution strengthening is less pronounced according to the calculations. However, the creep properties are not improved further with increasing γ′ volume fractions, even though this would be expected for Ni-base [64] and Co-base superalloys [5]. Therefore, further properties have to be considered.
It was reported in literature, that fully developed γ′ rafts lead to a strengthening effect [6,7,26,27,55]. Although a slight directional coarsening was observed in NC0 and NC25, no direct effect can be attributed to the orientation of the rafts with respect to the external load.
According to the micrographs in Figure 3a and b, the horizontal γ channels are not completely closed and presumably the forming rafts do not act as effective obstacles. Furthermore, the γ′ precipitates of the Co-base alloys reported in [7,55], where a strengthening by rafting was found during compressive creep, form plate-like morphologies during directional coarsening. For the Ni-base alloys with negative lattice misfit (NC0 and NC25), however, rod-like shapes were found in the samples crept under compression. This difference in morphology and a less pronounced rafting possibly also explains the absence of a positive effect of the rafting in NC0 and NC25. The Co-rich alloys, NC75 and NC100, exhibit a double minimum in strain rate during the creep test. This behavior is attributed to the pronounced N-type rafting, as previously described in the literature for Co-Al-W-Ta alloys [6,7]. When the vertical γ channels close and no extensive γ′ shearing occurs, the dislocations have to bypass the precipitates by glide and climb on longer paths, which leads to a measurable strengthening effect. This effect vanishes at later stages of creep when the γ′ phase coarsens and γ′ shearing becomes more pronounced. The γ/γ′ lattice misfit, which also determines the morphology of the precipitates of the NCX alloys, is given in Figure 5c, as measured on polycrystalline samples by Zenk et al. [30]. It was already shown for example by Grose and Ansell [65] that higher coherency stresses can improve the mechanical properties. Assuming this, NC100 should obtain the highest strengthening contribution due to the highest lattice misfit whereas NC50, which exhibits almost globular precipitates, might then exhibit the lowest contribution. Since the Co-rich alloys show significantly lower creep strength at the tested conditions compared to the Ni-rich alloys, the strengthening by coherency stresses is also not the dominant mechanism in this alloy series.
However, the lattice misfit also determines the morphology of the γ′ precipitates. The near-zero misfit of NC50 results in globular precipitates, which might be unfavorable [66] and possibly explains why the creep properties of this alloy are worse compared to NC25 with a higher misfit. However, the more cuboidal precipitate shape of the Co-base alloys does not result in better creep properties either.
It is known that diffusion is an additional key parameter during high temperature deformation.
For example, the directional coarsening or the dislocation motion are strongly affected by diffusion [67-70]. Therefore, we calculated the effective diffusion coefficients for the γ matrix compositions of the NCX alloys using a model derived by Zhu et al. [54], as described in the appendix. The results are shown in Figure 5d. It can be seen that the effective diffusion coefficient is significantly lowered with increasing Co-content. The decrease from NC0 to NC25, followed by a slight increase to NC50, fits very well to the minimum strain rates observed during creep. This local minimum in the effective diffusion coefficient is most certainly caused by the enrichment of Cr in the γ phase, which has a positive effect on activation energy and frequency factor. When the Co-content is further increased, the model predicts an ongoing decrease of the effective diffusion coefficient, although the concentration of all solutes in γ, except for Al, is decreasing. Thus, it can be stated that the diffusivity of the base elements Co and Ni is the dominating mechanism, since their high content outmatches the effect of the minor solutes.
According to the model, a high Co content is beneficial, since the frequency factor D0 of Co is smaller and its activation energy Q is larger compared to the equivalent values of Ni, independent of the assumptions made for the intermediate compositions. Additionally, the diffusivity of the solutes in Co was found to be slightly lower than in Ni [71]. However, the experimentally observed creep resistance decreases when the Co-content in NC25 is further increased. Therefore, the diffusional properties of the matrix phase alone cannot fully explain the creep behavior either.
As a further strengthening contribution, we considered the stacking fault energy of the γ matrix, suggesting that a high stacking fault energy, and therefore a small dissociation distance of the partial dislocations, promotes recombination and cross-slip. As a consequence, a high stacking fault energy is assumed to be disadvantageous for the creep properties compared to a low one.
The stacking fault energies of the matrix compositions (given in Table 3) of the NCX alloys were calculated using Thermo-Calc. The results are illustrated in Figure 5e. The graph shows a steady decrease of the stacking fault energy with increasing Co-content. This would suggest easier cross-slip of dislocations on the Ni-rich side due to smaller splitting distances of the partial dislocations, and enhanced cross-slip would result in lower creep strength. However, the trend in the investigated alloy series is exactly the opposite: the creep properties are actually better in the Ni-rich alloys, although the TEM investigations revealed dominant deformation in the matrix phase and Thermo-Calc predicts their stacking fault energy to be higher. Consequently, it is evident that none of the strengthening contributions described above can explain the variation of creep behavior among the NCX alloys alone. Even though the matrix properties would promise better creep performance of the Co-rich alloys, the opposite trend is observed. A conclusion from this might be that the matrix properties are less important on the Co-rich side of the system: if a partial dislocation is not forced to recombine to cross-slip or climb to bypass a precipitate, because the planar fault energies are low, neither the SFE nor the diffusivity in the matrix will be the key factor for the creep behavior. At the same time, the solid solution strengthening, as the remaining contribution we considered, is not increasing strongly enough to counterbalance the negative impact of the hypothesized decreased shear resistance of γ′.
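As a rough orientation for this argument (a textbook isotropic-elasticity estimate, not a relation taken from the manuscript), the equilibrium spacing $d$ of the two Shockley partials bounding a stacking fault scales as

$d \;\propto\; \dfrac{G\,b_{\mathrm{p}}^{2}}{\gamma_{\mathrm{SFE}}}$

with $G$ the shear modulus and $b_{\mathrm{p}}$ the Burgers vector of the partials, so the more than twofold drop of the calculated matrix stacking fault energy from NC0 to NC100 (Table 2, Figure 5e) would translate into a correspondingly wider dissociation, and thus more difficult recombination and cross-slip, in the Co-rich matrices.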
Deformation mechanisms
As described in section 3.4 and shown in Figure 4, the dislocation activity in γ′ is relatively low, indicating that the main influences on the creep properties are the matrix strength and directional coarsening. However, the different properties of the matrix phase in the investigated alloys discussed above could not properly explain the creep properties and would even predict the opposite trend. We therefore assume that the planar defect energies in γ and γ′ and the associated deformation mechanisms play a key role for the overall creep properties of the alloys. Since these have not been quantified yet, the formation of the different defect configurations is discussed only qualitatively in the following.
In NC0, γ′ shearing and the formation of SISFs could be observed only occasionally, with increasing frequency at later creep stages. Similar behavior was observed for NC25 and is shown in Figure 4 b, g and l. Interestingly, the formation of an ASA-configuration was found, which was up to now only reported for Co-and CoNi-based alloys in tensile creep [3,4,9,72].
This configuration was now also confirmed to occur in a negative-misfit Ni-based alloy creep-deformed under compression. According to Eggeler et al. [3], this configuration is formed as follows: a leading a/3〈112〉 superpartial dislocation (formed by reaction of two matrix dislocations with dissimilar Burgers vectors) shears through γ′ and creates a SISF extending across the whole precipitate. The trailing a/6〈112〉 Shockley partial dislocation follows and enters the γ′ precipitate from all sides, partially transforming the SISF into an APB. As a result, the trailing partial forms a loop separating the SISF (inside the loop) from the surrounding APB, both located on the {111} slip plane. Since the APB energy is lower on the {001} planes, the APB migrates from {111} to {001} [3]. It is assumed that the ASA-configuration in NC25
forms in a similar way. As marked in Figure 4 b, in some precipitates extended APBs were found in the early creep stages. It is assumed that these are former ASA-configurations where the whole SISF is transformed to an APB which subsequently migrates onto the {001} planes.
The alloy with a Co/Ni ratio of 1:1, NC50, exhibits extended stacking faults over several γ′ precipitates and intermediate channels. Both SISFs and SESFs were found in this alloy,
indicating that the addition of Co affects the defect energies significantly. The formation of SESFs involves glide of two identical Shockley partial dislocations on adjacent glide planes and a successive reordering process, which is most likely also happening in NC50 [67,[73][74][75][76].
However, the often described growth of SESFs into microtwins in the later creep stages was not observed in NC50.
In the Co-rich alloy NC75, APBs were observed (see Figure 4 d, i, n), indicating again a change in the planar defect energies and other dislocation reactions. Similar mechanisms were reported by Eggeler et al. [10] for Co-and CoNi-base alloys in tensile creep tests also at 900 °C. They attribute the formation of APBs to the shearing of γ′ by a/2〈011〉 dislocations. Shearing via APB coupled dislocation pairs was also shown for Ni-base superalloys [18,19]. However, the splitting distance is significantly smaller in those studies and does not span entire precipitates, as it is the case in NC75. It was already determined by Okamoto et al. [77] that the APB energy on the {111} planes of single-phase Co3(Al,W) is nearly 40 % lower compared to Ni3Al, which fits well to our observations that APB formation is preferred in the Co-rich alloys.
The pure Co-base alloy NC100 exhibits a 4-fold dislocation dissociation when γ′ shearing occurs (see Figure 4 e, j). The middle part was also identified as an APB, indicating that the two superpartial dislocations of type a/2〈011〉 further dissociate into individual Shockley partial dislocations of type a/6〈112〉. Additionally, dislocations moving in pairs were observed in the γ channels. This could not be found in any of the other alloys.
It was already found earlier that Co and Cr additions to fcc Ni reduce the stacking fault energy of binary alloys [78].
This is even more pronounced with increasing solute content. Our calculations are in very good agreement with these findings, since the SFE is calculated to be significantly reduced by adding more Co. Furthermore, we know that the segregation of Co and Cr to the γ phase is less pronounced in the Co-rich alloys [30]. Consequently, the content of these elements in the γ′ phase is higher compared to the Ni-rich alloys, which might then affect the planar fault energy of the precipitate phase as well. From the TEM investigations we know that shearing of γ′ is more pronounced in the Co-rich alloys; however, it cannot be clarified whether this is caused by the general difference in stoichiometry of Ni3Al compared to Co3(Al,W), by the addition of Cr, or by any other reason. In any case, enhanced shearing of γ′ by dislocations in the Co-rich alloys deteriorates the overall creep properties, since the obstacle effect of the γ′ precipitates is diminished.
Summary and Conclusion
The properties discussed above imply that the changing creep behavior of the NCX alloys cannot be attributed directly to the changing Co/Ni ratio. Rather, it is necessary to uncover which properties change when the base element content is varied. It was already found that the Co/Ni ratio influences the partitioning behavior of the other alloying elements, which results in changing γ/γ′ lattice misfits [30]. In our study, we also calculated that the solid solution strengthening contribution of the γ matrix decreases with increasing Co-content, since it is dominated by the decreasing γ fraction. A quantification of the precipitation strengthening contribution to compare the NCX alloys could not be performed: since the deformation mechanisms are not fully understood and the required parameters, such as the planar fault energies, could not be assumed with confidence, commonly used models could not be applied. Additionally, we found that the different solid solution strengthening contribution, accompanied by the inverse rafting behavior due to the opposite lattice misfit, results in a significantly altered compressive creep behavior. The deformation structures were characterized and interesting differences in the defect configurations were reported. The results imply that the variation of the γ′ planar fault energies with the changing Co/Ni ratio is the primary reason for the observed trend in creep properties. However, further work is needed to analyze the defects in more detail and to quantify their influence.
In brief summary, the creep properties and deformation behavior of the single crystalline alloy series 75(Co/Ni)-9Al-8W-8Cr, designed to map the transition from γ′-strengthened Ni-base to Co-base superalloys, were investigated at 900 °C and 250 MPa. To explain the creep properties, different strengthening contributions were quantified using existing models and Thermo-Calc.
We conclude that just changing the base element influences several material properties, which makes this a rather complex alloy series. Nevertheless, we could evaluate different characteristics and the following conclusions can be stated:
• The Ni-rich but Co-containing alloy NC25 exhibits the best creep properties and the creep strength significantly decreases with increasing Co-content.
• The Co-rich alloys NC75 and NC100 show a double minimum creep behavior due to a temporary strengthening by directional coarsening of the γ′ phase perpendicular to the external load.
• The partitioning behavior of all alloying elements is crucial for the mechanical properties since especially W and Al are considered as strong solid-solution strengtheners at 900 °C in this alloy series. The partitioning behavior is mainly influenced by the Co/Ni ratio.
• None of the common strengthening contributions like solid solution strengthening, γ′ volume fraction, γ/γ′ lattice misfit, diffusion coefficients or stacking fault energies alone could explain the changing creep behavior of the investigated alloy series. It is assumed that the shearing resistance of the precipitates and the deformation mechanisms play the key role in the overall creep properties.
• The deformation mechanisms change significantly with a variation of the Co/Ni ratio, especially when γ′ deformation is considered. With increasing Co-content, the γ′ cutting mechanisms change from SISF-shearing over ASA-shearing to SESF-shearing and finally APB-shearing. These changes are attributed to dramatic variations in the energies of the various types of possible planar faults in these alloys.
Acknowledgment

The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG) through projects A7 and B3 of the collaborative research center SFB/TR 103 "From Atoms to Turbine Blades - a Scientific Approach for Developing the Next Generation of Single Crystal Superalloys".
Appendix -Calculation of effective diffusion coefficients
To calculate the effective diffusion coefficients of the γ compositions, a model derived by Zhu et al. [54] was used, as described in the experimental part of the manuscript. According to their model, the frequency factors D0 and activation energies Q for self-diffusion of the base elements and of the solutes in the base elements have to be known. This can easily be done for the alloys NC0 and NC100, which are pure Ni-base or Co-base alloys, respectively, since these parameters were already determined by other groups. However, the scope of our manuscript was to present changes induced by a change of the base element content, and therefore the base elements Co and Ni are mixed in the alloys NC25, NC50 and NC75. Therefore, we propose a method that uses mean values weighted according to the Co-content (xCo) to determine the effective diffusion coefficients of these alloys. For D0 we took harmonic mean values of the frequency factors of the solutes in Co and Ni, respectively, which were calculated as
$\bar{D}_{0,i}^{\mathrm{Co/Ni}} = \left[ \dfrac{1 - x_{\mathrm{Co}}}{D_{0,i,\mathrm{Ni}}} + \dfrac{x_{\mathrm{Co}}}{D_{0,i,\mathrm{Co}}} \right]^{-1}$
Equation A1
For the activation energies of the alloying elements we used the weighted arithmetic mean of the values of the individual solutes in Ni and Co, respectively:
$\bar{Q}_{i}^{\mathrm{Co/Ni}} = (1 - x_{\mathrm{Co}})\, Q_{i,\mathrm{Ni}} + x_{\mathrm{Co}}\, Q_{i,\mathrm{Co}}$
Equation A2
The determination of the activation energy of the base system is more complicated since one has to consider the activation energies for diffusion of Co in Co, Ni in Ni, Ni in Co and Co in
Ni. Thus, we calculated a weighted mean of self-diffusion and the diffusion of the equivalent counterpart element:
$\bar{Q}_{\mathrm{Ni}}^{\mathrm{Co/Ni}} = (1 - x_{\mathrm{Co}})\, Q_{\mathrm{Ni,Ni}} + x_{\mathrm{Co}}\, Q_{\mathrm{Ni,Co}}$
Equation A3
$\bar{Q}_{\mathrm{Co}}^{\mathrm{Co/Ni}} = (1 - x_{\mathrm{Co}})\, Q_{\mathrm{Co,Ni}} + x_{\mathrm{Co}}\, Q_{\mathrm{Co,Co}}$
Equation A4
To get one value for Q (see equation 5 in the manuscript) from $\bar{Q}_{\mathrm{Ni}}^{\mathrm{Co/Ni}}$ and $\bar{Q}_{\mathrm{Co}}^{\mathrm{Co/Ni}}$ we again calculated a weighted arithmetic mean:
$Q = (1 - x_{\mathrm{Co}})\, \bar{Q}_{\mathrm{Ni}}^{\mathrm{Co/Ni}} + x_{\mathrm{Co}}\, \bar{Q}_{\mathrm{Co}}^{\mathrm{Co/Ni}}$
Equation A5
The combination of equations A3, A4 and A5 results in:
$Q = (1 - x_{\mathrm{Co}})^{2}\, Q_{\mathrm{Ni,Ni}} + (x_{\mathrm{Co}} - x_{\mathrm{Co}}^{2})\left(Q_{\mathrm{Ni,Co}} + Q_{\mathrm{Co,Ni}}\right) + x_{\mathrm{Co}}^{2}\, Q_{\mathrm{Co,Co}}$
Equation A6
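For illustration, the weighting scheme of equations A1-A6 can be written compactly as in the sketch below; the numerical inputs (frequency factors and activation energies from refs. [79-82] and the solute data) are not reproduced in this excerpt and have to be supplied by the reader.

```python
# Sketch of the weighting scheme of Equations A1-A6 for a mixed Co/Ni base.
# The numerical inputs (frequency factors D0 and activation energies Q from
# refs. [79]-[82] and the solute data) are not reproduced in this excerpt.

def d0_mixed(x_co, d0_in_ni, d0_in_co):
    """Equation A1: Co-content-weighted harmonic mean of the frequency factors."""
    return ((1.0 - x_co) / d0_in_ni + x_co / d0_in_co) ** -1

def q_solute_mixed(x_co, q_in_ni, q_in_co):
    """Equation A2: weighted arithmetic mean of the activation energy of a solute."""
    return (1.0 - x_co) * q_in_ni + x_co * q_in_co

def q_base_mixed(x_co, q_ni_ni, q_ni_co, q_co_ni, q_co_co):
    """Equations A3-A6: activation energy of the mixed Co/Ni base."""
    q_ni = (1.0 - x_co) * q_ni_ni + x_co * q_ni_co      # Equation A3
    q_co = (1.0 - x_co) * q_co_ni + x_co * q_co_co      # Equation A4
    return (1.0 - x_co) * q_ni + x_co * q_co            # Equation A5, expands to Equation A6
```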
The calculated activation energies for the NCX alloys, acquired by the procedure presented above, are presented in Figure A1. For NC0 (xCo = 0) and NC100 (xCo = 1), equation A6 reduces to $Q_{\mathrm{Ni,Ni}}$ and $Q_{\mathrm{Co,Co}}$, respectively. As described above, the values for the intermediate alloys are weighted according to self-diffusion and diffusion of the equivalent counterpart Co/Ni, so that all four values $Q_{\mathrm{Ni,Ni}}$, $Q_{\mathrm{Ni,Co}}$, $Q_{\mathrm{Co,Ni}}$ and $Q_{\mathrm{Co,Co}}$ contribute. Using the activation energies and frequency factors for the NCX alloys calculated in this way, we could obtain the effective diffusion coefficients of the matrix compositions as given in Figure 5d.

Supplementary material

Part A: Choice of the atomic radii used for the solid solution strengthening calculations

The manuscript describes and discusses the solid solution strengthening of the matrix compositions as determined by a model that was originally proposed by Labusch [1] and later modified by Gypen and Deruyttere [2,3] and Galindo-Nava et al. [4]. It was briefly mentioned in the manuscript that the atomic radii which are used for the calculations need to be chosen carefully. This became evident in the comparison of the results of this study with the findings of Galindo-Nava et al. [4]. Here, especially the atomic radii used for the calculations differ, in particular that of Al, which was assumed to be 0.143 nm at room temperature (0.147 nm at 900 °C) compared to 0.124 nm in [4] (0.128 nm at 900 °C, assuming identical thermal behavior). We chose to use the metallic atomic radii, as they are also implemented in the used Thermo-Calc (TC) database equally for every alloying element, whereas the value used in [4] is closer to the covalent atomic radius of Al. To clarify especially the influence of the atomic radius of Al, for which different values are used in the literature, we also conducted the same calculations with rAl = 0.124 nm at room temperature, the value used by Galindo-Nava et al. [4]. The results are shown in Figure S1 in gray symbols and dashed lines in comparison with the values described and discussed in the manuscript (solid lines). Of course, the weighted solid solution strengthening is calculated to be lower, since the atomic radius difference between Al and Ni (the reference element) is smaller in this case; however, the trend stays the same. Also with the atomic radius of Al calculated to be 0.128 nm at 900 °C, the solid solution strengthening decreases on the Co-rich side, since the dominant factor is the γ and γ′ fraction of the alloy. In contrast, the isolated γ solid solution strength is calculated to decrease with increasing Co-content as well when using rAl = 0.128 nm, whereas it was proposed to increase using rAl = 0.147 nm. This indicates that the atomic radii used in the model by Gypen and Deruyttere [2,3] have to be considered carefully. However, weakening the influence of Al by using rAl = 0.128 nm is in agreement with experimental results on the lattice parameter change in Ni-Al binary systems, which propose a significantly smaller impact of Al compared to W [5]. This would result in smaller lattice strains and therefore in lower solid solution strengthening. As a consequence, it might be necessary to adjust the models proposed by Fleischer [6], Gypen and Deruyttere [2,3] and Galindo-Nava et al. [4] in a way that not the pure-element atomic radii are used. Instead, the atomic volumes which a solute exhibits in a binary system with the base element, as they could be calculated by TC for example, might be more suitable.
Part B: Quantification of the solid solution strengthening using Thermo-Calc
The solid solution strengthening of the matrix phase was also calculated directly using the TCNI10 database and the corresponding property model implemented in Thermo-Calc 2021a.
Internally, this model is based on Walbrühl et al. [7], who assume the same concentration dependence as Labusch [1], but fit their model directly to experimental hardness data. As before, the matrix composition as determined from APT reconstructions was used. σss was evaluated at the creep test temperature of 900 °C. The results are also shown in Figure S 1 (open circles).
Similar to the results described in the manuscript and above, an increasing solid solution strengthening contribution with increasing Co-content is found. However, the absolute values differ significantly. Especially the increase from the pure Ni-alloy NC0 to the Co-containing alloy NC25 is much more pronounced, which is consistent with the increasing creep resistance between these two alloy compositions. Furthermore, the Thermo-Calc property model reveals a maximum in the strengthening effect for NC50, while our calculations applying the model by Gypen and Deruyttere [2,3] estimate a steady increase with a maximum for NC100. The model used by TC was derived by Walbrühl et al. [7] and is also based on a model originally proposed by Labusch [1]. However, a non-linear composition dependence of the strengthening parameter is applied, and their model is fitted directly to experimental hardness data, which is different from the approach of Gypen and Deruyttere [2,3]. We assume that these differences between the models cause the diverging results.
with µi being the shear modulus of solute i and µ the shear modulus of nickel. The constant δi describes the difference in atomic radii and can be derived from δi = …
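Since the expression for δi is truncated in this excerpt, the following sketch only evaluates the conventional size and modulus mismatch parameters relative to Ni with the values of Table 3; the exact definitions and any correction terms used in the manuscript may differ.

```python
# Hedged sketch: conventional size and modulus mismatch parameters relative to Ni,
# evaluated with the 900 C values of Table 3. The exact expressions used in the
# manuscript are cut off in this excerpt and may contain additional terms.

MU_NI, R_NI = 56.4, 0.126                                   # GPa, nm (Table 3)
MU = {"Co": 49.2, "Al": 12.2, "W": 143.3, "Cr": 101.0}      # shear moduli / GPa
R  = {"Co": 0.126, "Al": 0.147, "W": 0.144, "Cr": 0.131}    # atomic radii / nm

eta   = {el: (MU[el] - MU_NI) / MU_NI for el in MU}         # modulus mismatch per solute
delta = {el: (R[el] - R_NI) / R_NI for el in R}             # size mismatch per solute
print(delta["Al"], eta["W"])                                # e.g. Al size misfit, W modulus misfit
```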
Figure 1: SEM (BSE) images of the microstructures of the NCX alloys after solution and aging heat treatments at 1250 °C for 24 h and at 900 °C for 100 h, respectively.

Figure 2: a) Compressive creep properties of the NCX alloys at 900 °C and 250 MPa. Repeated tests were interrupted at strain levels of about 0.2-0.5 % plastic strain, which can be seen from the inset figure that is a magnification of the low-strain part of the graph. b) Minimum strain rates evaluated from the data shown in a).

NC50 does not exhibit any directional coarsening (Figure 3 c) since there is no driving force due to its near-zero lattice misfit.

Figure 3: SEM (BSE) images of the microstructures after creep at 900 °C and 250 MPa to 5-8 % plastic strain, showing the directional coarsening of alloys NC0, NC25, NC75 and NC100 and the non-directional coarsening of alloy NC50.

…(Figure 4 k). Cutting of γ′ and the formation of superlattice stacking faults is observed only occasionally.

Figure 4: Deformation mechanisms of NC0, NC25, NC50, NC75 and NC100 observed by (S)TEM after compressive creep tests at 900 °C and 250 MPa. All micrographs are taken from 〈001〉 cross sections.

Figure 5: Strengthening contributions of the NCX alloy series as measured or calculated. a) Solid solution strengthening of the γ phases at 900 °C calculated after [44,45,47] for the pure γ phase composition (spheres) and with respect to the γ′ volume fraction (squares), b) γ′ volume fraction evaluated from micrographs, c) γ/γ′ lattice misfit measured on polycrystalline samples in [30], d) effective diffusion coefficient of the γ phase and e) stacking fault energy of the γ phase composition as calculated using Thermo-Calc.

Figure A1: Activation energies for the NCX alloys according to the procedure proposed in the appendix. Activation energies from literature for Co in Co [79], Ni in Co [80], Co in Ni [81] and Ni in Ni [82] are shown for comparison.

Figure S1: Solid solution strengthening contribution of the γ phase at 900 °C calculated after [2-4] for the pure γ phase composition (squares, solid line) and with respect to the γ′ volume fraction (diamonds, solid line). These calculations are repeated with a different atomic radius for Al of 0.128 nm as used in [4] (grey squares and diamonds, dashed line). Additionally, Thermo-Calc was used to calculate the solid solution strengthening using a different model as proposed by Walbrühl et al. [7] (open circles).
Table 1: Nominal compositions of NC0, NC25, NC50, NC75 and NC100 in at.%.

NCX      Co      Ni      Al     W      Cr
NC0      -       75.00   9.00   8.00   8.00
NC25     18.75   56.25   9.00   8.00   8.00
NC50     37.50   37.50   9.00   8.00   8.00
NC75     56.25   18.75   9.00   8.00   8.00
NC100    75.00   -       9.00   8.00   8.00
Table 2: Calculated and literature values for estimating the stacking fault energies in single-phase alloys of the experimentally determined matrix compositions (see Table 3) of the alloys NCX aged at 900 °C: molar volumes Vm,γ and Vm,ε of the respective phases, molar surface density ργ of the matrix phase, molar Gibbs energies Gm^ε and Gm^γ and their difference ΔGm^γ→ε, shear modulus µ of Haynes 188, Poisson ratio ν, strain ε33 along the new hcp c axis during the γ→ε transformation, molar strain energy Em^str associated with a stacking fault, and the resulting stacking fault energy γSFE. All values except for the interfacial energy σγ/ε (0 K value) correspond to a temperature of 900 °C.

Quantity                     Comment    NC0         NC25        NC50        NC75        NC100
Vm,γ / cm^3/mole             TCNI10     7.1585      7.2031      7.230       7.2809      7.4209
Vm,ε / cm^3/mole             TCNI10     7.3804      7.3699      7.3650      7.4062      7.4415
ργ / mole/m^2                -          2.92×10^-5  2.91×10^-5  2.90×10^-5  2.89×10^-5  2.85×10^-5
Gm^ε / J/mole                TTNI8      -67317      -70702      -71338      -71599      -67932
Gm^γ / J/mole                TTNI8      -64734      -68174      -69430      -70080      -66698
ΔGm^γ→ε / J/mole             calc.      2583        2528        1908        1519        1234
µ / GPa                      [41]       61 (all alloys)
ν                            -          0.33 (all alloys)
ε33 / %                      [35-40]    -0.67 (all alloys)
Em^str / J/mole              calc.      54.02       33.48       24.22       21.90       7.69
2σγ/ε / mJ/m^2               [42]       -3.4 (all alloys)
2ργΔGm^γ→ε / mJ/m^2          -          151         147         111         88          70
2ργEm^str / mJ/m^2           -          3.16        1.95        1.41        1.27        0.44
γSFE / mJ/m^2                -          151         146         109         86          67
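The last rows of Table 2 are consistent with the usual thermodynamic estimate of the stacking fault energy as an hcp (ε) embryo bounded by partial dislocations, γSFE ≈ 2ργ(ΔGm^γ→ε + Em^str) + 2σγ/ε. The short sketch below redoes this bookkeeping for the tabulated NC0 values; it only illustrates the arithmetic, not the Thermo-Calc evaluation itself.

```python
# Consistency check of the stacking fault energy bookkeeping in Table 2 for NC0:
# gamma_SFE = 2 * rho_gamma * (dG_gamma_to_eps + E_str) + 2*sigma_gamma_eps
rho_gamma = 2.92e-5      # molar surface density of the matrix phase / mole/m^2 (Table 2)
dG        = 2583.0       # Delta G_m (gamma -> epsilon) / J/mole (Table 2)
e_str     = 54.02        # molar strain energy / J/mole (Table 2)
two_sigma = -3.4e-3      # 2*sigma(gamma/epsilon) interfacial term / J/m^2 (Table 2)

gamma_sfe = 2.0 * rho_gamma * (dG + e_str) + two_sigma    # in J/m^2
print(round(gamma_sfe * 1e3), "mJ/m^2")                   # ~151 mJ/m^2, the NC0 entry of Table 2
```

Repeating the same arithmetic with the tabulated values of the other alloys reproduces their γSFE entries within rounding.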
Table 3: Shear moduli µi and atomic radii ri at 900 °C used for the calculation of the solid solution strengthening, and the experimentally determined (APT) compositions xi of the γ phases, taken from [30]. Shear moduli of solutes were linearly extrapolated from refs. [49-53] if not available at 900 °C. Atomic radii were calculated using Thermo-Calc (TCNI10 database). The shear modulus of Haynes 188 [41] was used, since it was not available for the NCX matrix compositions. The calculated values for βi are also presented.

                           Co       Ni       Al       W        Cr       Haynes 188 [41]
µi (900 °C) / GPa          49.2     56.4     12.2     143.3    101.0    61
ri (900 °C) / nm           0.126    0.126    0.147    0.144    0.131    -
xi (NC0) / at.%            -        76.0     3.9      8.1      12.0     -
xi (NC25) / at.%           25.1     49.9     3.8      7.8      13.4     -
xi (NC50) / at.%           45.6     30.5     4.9      6.8      12.2     -
xi (NC75) / at.%           59.7     16.1     7.2      6.1      10.9     -
xi (NC100) / at.%          75.8     -        8.7      6.1      9.4      -
βi / MPa/at.%^(2/3)        5.2      -        524.5    505.8    125.0    -
To quantify diffusional effects during creep, a model derived by Zhu et al. [54] was used to calculate the so-called effective diffusion coefficient. This empirical parameter can be described as the average mobility of vacancies and is defined by equation 5.
…5×10^-8 1/s is reached for NC25 at about 0.5 % plastic strain, followed by a slight continuous softening. The pure Ni-base alloy NC0 shows a more constant strain rate during the creep test. The intermediate alloy NC50 exhibits a sharp minimum at about 0.2 % plastic strain and, subsequently, a significant softening, similar to NC25. The minimal strain rate, however, only reaches roughly 4.0×10^-8 1/s, which is significantly higher compared to NC25. The deformation behavior seems to change completely for the Co-rich alloys NC75 and NC100. Both alloys exhibit a double-minimum curve shape including a local minimum at small plastic strains, followed by an increase in strain rate and again a hardening to a global minimum strain rate at 6 % and 5 % plastic strain, respectively. In summary, the Ni-rich alloys exhibit significantly better creep properties at the test parameters of 900 °C and 250 MPa.
High-temperature strength and deformation of γ/γ′ two-phase Co-Al-W-base alloys. A Suzuki, T M Pollock, Acta Mater. 56A. Suzuki, T.M. Pollock, High-temperature strength and deformation of γ/γ′ two-phase Co-Al-W-base alloys, Acta Mater. 56 (2008) 1288-1297.
https://doi.org/10.1016/j.actamat.2007.11.014.
Creep-induced planar defects in L12-containing Co-and CoNi-base single-crystal superalloys. M S Titus, Y M Eggeler, A Suzuki, T M Pollock, 10.1016/j.actamat.2014.08.033Acta Mater. 82M.S. Titus, Y.M. Eggeler, A. Suzuki, T.M. Pollock, Creep-induced planar defects in L12- containing Co-and CoNi-base single-crystal superalloys, Acta Mater. 82 (2015) 530-539. https://doi.org/10.1016/j.actamat.2014.08.033.
Planar defect formation in the γ′ phase during high temperature creep in single crystal CoNi-base superalloys. Y M Eggeler, J Müller, M S Titus, A Suzuki, T M Pollock, E Spiecker, Acta Mater. 113Y.M. Eggeler, J. Müller, M.S. Titus, A. Suzuki, T.M. Pollock, E. Spiecker, Planar defect formation in the γ′ phase during high temperature creep in single crystal CoNi-base superalloys, Acta Mater. 113 (2016) 335-349.
https://doi.org/10.1016/j.actamat.2016.03.077.
Tension/Compression asymmetry of a creep deformed single crystal Co-base superalloy. M Lenz, Y M Eggeler, J Müller, C H Zenk, N Volz, P Wollgramm, G Eggeler, S Neumeier, M Göken, E Spiecker, 10.1016/j.actamat.2018.12.053Acta Mater. 166M. Lenz, Y.M. Eggeler, J. Müller, C.H. Zenk, N. Volz, P. Wollgramm, G. Eggeler, S. Neumeier, M. Göken, E. Spiecker, Tension/Compression asymmetry of a creep deformed single crystal Co-base superalloy, Acta Mater. 166 (2019) 597-610. https://doi.org/10.1016/j.actamat.2018.12.053.
On the Precipitation-Strengthening Contribution of the Ta-Containing Co3(Al,W)-Phase to the Creep Properties of γ/γ' Cobalt-Base Superalloys. A Bezold, N Volz, F Xue, C H Zenk, S Neumeier, M Göken, 10.1007/s11661-020-05626-2Metall. Mater. Trans. A. 51A. Bezold, N. Volz, F. Xue, C.H. Zenk, S. Neumeier, M. Göken, On the Precipitation- Strengthening Contribution of the Ta-Containing Co3(Al,W)-Phase to the Creep Properties of γ/γ' Cobalt-Base Superalloys, Metall. Mater. Trans. A. 51 (2020) 1567-1574. https://doi.org/10.1007/s11661-020-05626-2.
Double minimum creep in the rafting regime of a single-crystal Co-base superalloy. F Xue, C H Zenk, L P Freund, M Hoelzel, S Neumeier, M Göken, 10.1016/j.scriptamat.2017.08.039Scr. Mater. 142F. Xue, C.H. Zenk, L.P. Freund, M. Hoelzel, S. Neumeier, M. Göken, Double minimum creep in the rafting regime of a single-crystal Co-base superalloy, Scr. Mater. 142 (2018) 129-132. https://doi.org/10.1016/j.scriptamat.2017.08.039.
Understanding raft formation and precipitate shearing during double minimum creep in a γ′-strengthened single crystalline Co-base superalloy. F Xue, C H Zenk, L P Freund, S Neumeier, M Göken, Philos. Mag. F. Xue, C.H. Zenk, L.P. Freund, S. Neumeier, M. Göken, Understanding raft formation and precipitate shearing during double minimum creep in a γ′-strengthened single crystalline Co-base superalloy, Philos. Mag. (2020) 1-28.
https://doi.org/10.1080/14786435.2020.1836415.
Impact of the Co/Ni-Ratio on Microstructure, Thermophysical Properties and Creep Performance of Multi-Component γ′-Strengthened Superalloys. C H Zenk, N Volz, C Zenk, P J Felfer, S Neumeier, Crystals. 10C.H. Zenk, N. Volz, C. Zenk, P.J. Felfer, S. Neumeier, Impact of the Co/Ni-Ratio on Microstructure, Thermophysical Properties and Creep Performance of Multi-Component γ′-Strengthened Superalloys, Crystals. 10 (2020).
https://doi.org/10.3390/cryst10111058.
Creep Behavior of Quinary γ ′-Strengthened Co-Based Superalloys. R K Rhein, P G Callahan, S P Murray, J.-C Stinville, M S Titus, A Van Der Ven, T M Pollock, 10.1007/s11661-018-4768-zMetall. Mater. Trans. A. 49R.K. Rhein, P.G. Callahan, S.P. Murray, J.-C. Stinville, M.S. Titus, A. Van der Ven, T.M. Pollock, Creep Behavior of Quinary γ ′-Strengthened Co-Based Superalloys, Metall. Mater. Trans. A. 49 (2018) 4090-4098. https://doi.org/10.1007/s11661-018-4768-z.
Creep deformation-induced antiphase boundaries in L12-containing single-crystal cobalt-base superalloys. Y M Eggeler, M S Titus, A Suzuki, T M Pollock, 10.1016/j.actamat.2014.04.037Acta Mater. 77Y.M. Eggeler, M.S. Titus, A. Suzuki, T.M. Pollock, Creep deformation-induced antiphase boundaries in L12-containing single-crystal cobalt-base superalloys, Acta Mater. 77 (2014) 352-359. https://doi.org/10.1016/j.actamat.2014.04.037.
High resolution energy dispersive spectroscopy mapping of planar defects in L12-containing Co-base superalloys. M S Titus, A Mottura, G Babu Viswanathan, A Suzuki, M J Mills, T M Pollock, Acta Mater. 89M.S. Titus, A. Mottura, G. Babu Viswanathan, A. Suzuki, M.J. Mills, T.M. Pollock, High resolution energy dispersive spectroscopy mapping of planar defects in L12-containing Co-base superalloys, Acta Mater. 89 (2015) 423-437.
https://doi.org/10.1016/j.actamat.2015.01.050.
Shearing of γ' particles in Co-base and Co-Ni-base superalloys. L Feng, D Lv, R K Rhein, J G Goiri, M S Titus, A Van Der Ven, T M Pollock, Y Wang, 10.1016/j.actamat.2018.09.013Acta Mater. 161L. Feng, D. Lv, R.K. Rhein, J.G. Goiri, M.S. Titus, A. Van der Ven, T.M. Pollock, Y. Wang, Shearing of γ' particles in Co-base and Co-Ni-base superalloys, Acta Mater. 161 (2018) 99-109. https://doi.org/10.1016/j.actamat.2018.09.013.
High-temperature and low-stress creep anisotropy of single-crystal superalloys. L Jácome, P Nörtershäuser, J.-K Heyer, A Lahni, J Frenzel, A Dlouhy, C Somsen, G Eggeler, Acta Mater. 61L. Agudo Jácome, P. Nörtershäuser, J.-K. Heyer, A. Lahni, J. Frenzel, A. Dlouhy, C. Somsen, G. Eggeler, High-temperature and low-stress creep anisotropy of single-crystal superalloys, Acta Mater. 61 (2013) 2926-2943.
https://doi.org/10.1016/j.actamat.2013.01.052.
Ledges and grooves at γ/γ′ interfaces of single crystal superalloys. A B Parsa, P Wollgramm, H Buck, A Kostka, C Somsen, A Dlouhy, G Eggeler, 10.1016/j.actamat.2015.02.005Acta Mater. 90A.B. Parsa, P. Wollgramm, H. Buck, A. Kostka, C. Somsen, A. Dlouhy, G. Eggeler, Ledges and grooves at γ/γ′ interfaces of single crystal superalloys, Acta Mater. 90 (2015) 105- 117. https://doi.org/10.1016/j.actamat.2015.02.005.
Dislocation structures in γ-γ′ interfaces of the single-crystal superalloy SRR 99 after annealing and high temperature creep. M Feller-Kniepmeier, T Link, 10.1016/0921-5093(89Mater. Sci. Eng. A. 113M. Feller-Kniepmeier, T. Link, Dislocation structures in γ-γ′ interfaces of the single-crystal superalloy SRR 99 after annealing and high temperature creep, Mater. Sci. Eng. A. 113 (1989) 191-195. https://doi.org/10.1016/0921-5093(89)90306-7.
Microstructural aspects of high temperature deformation of monocrystalline nickel base superalloys: some open problems. H Mughrabi, 10.1179/174328408X361436Mater. Sci. Technol. 25H. Mughrabi, Microstructural aspects of high temperature deformation of monocrystalline nickel base superalloys: some open problems, Mater. Sci. Technol. 25 (2009) 191-204. https://doi.org/10.1179/174328408X361436.
Primary creep in single crystal superalloys: Origins, mechanisms and effects. C M F Rae, R C Reed, Acta Mater. 55C.M.F. Rae, R.C. Reed, Primary creep in single crystal superalloys: Origins, mechanisms and effects, Acta Mater. 55 (2007) 1067-1081.
https://doi.org/10.1016/j.actamat.2006.09.026.
T.M. Pollock, A.S. Argon, Creep resistance of CMSX-3 nickel base superalloy single crystals, Acta Metall. Mater. 40 (1992) 1-30. https://doi.org/10.1016/0956-7151(92)90195-K.
The effects of different alloying elements on the thermal expansion coefficients, lattice constants and misfit of nickel-based superalloys investigated by X-ray diffraction. F Pyczak, B Devrient, H Mughrabi, F. Pyczak, B. Devrient, H. Mughrabi, The effects of different alloying elements on the thermal expansion coefficients, lattice constants and misfit of nickel-based superalloys investigated by X-ray diffraction, Superalloys 2004. (2004) 827-836.
In situ determination of γ′ phase volume fraction and of relations between lattice parameters and precipitate morphology in Ni-based single crystal superalloy. A Royer, P Bastie, M Veron, 10.1016/S1359-6454(98Acta Mater. 46A. Royer, P. Bastie, M. Veron, In situ determination of γ′ phase volume fraction and of relations between lattice parameters and precipitate morphology in Ni-based single crystal superalloy, Acta Mater. 46 (1998) 5357-5368. https://doi.org/10.1016/S1359- 6454(98)00206-7.
Transmisson electron microscopy of phase composition and lattice misfit in the Re-containing nickel-base superalloy CMSX-10. C Schulze, M Feller-Kniepmeier, 10.1016/S0921-5093(99Mater. Sci. Eng. A. 281C. Schulze, M. Feller-Kniepmeier, Transmisson electron microscopy of phase composition and lattice misfit in the Re-containing nickel-base superalloy CMSX-10, Mater. Sci. Eng. A. 281 (2000) 204-212. https://doi.org/10.1016/S0921-5093(99)00713-3.
The temperature dependent lattice misfit of rhenium and ruthenium containing nickel-base superalloys -Experiment and modelling. S Neumeier, F Pyczak, M Göken, 10.1016/j.matdes.2020.109362Mater. Des. 198109362S. Neumeier, F. Pyczak, M. Göken, The temperature dependent lattice misfit of rhenium and ruthenium containing nickel-base superalloys -Experiment and modelling, Mater. Des. 198 (2021) 109362. https://doi.org/10.1016/j.matdes.2020.109362.
Mechanical properties and lattice misfit of γ/γ′ strengthened Co-base superalloys in the Co-W-Al-Ti quaternary system. C H Zenk, S Neumeier, H J Stone, M Göken, 10.1016/j.intermet.2014.07.006Intermetallics. 55C.H. Zenk, S. Neumeier, H.J. Stone, M. Göken, Mechanical properties and lattice misfit of γ/γ′ strengthened Co-base superalloys in the Co-W-Al-Ti quaternary system, Intermetallics. 55 (2014) 28-39. https://doi.org/10.1016/j.intermet.2014.07.006.
Thermophysical and mechanical properties of advanced single crystalline Co-base superalloys. N Volz, C H Zenk, R Cherukuri, T Kalfhaus, M Weiser, S K Makineni, C Betzing, M Lenz, B Gault, S G Fries, J Schreuer, R Vaßen, S Virtanen, D Raabe, E Spiecker, S Neumeier, M Göken, 10.1007/s11661-018-4705-1Metall. Mater. Trans. A. 49N. Volz, C.H. Zenk, R. Cherukuri, T. Kalfhaus, M. Weiser, S.K. Makineni, C. Betzing, M. Lenz, B. Gault, S.G. Fries, J. Schreuer, R. Vaßen, S. Virtanen, D. Raabe, E. Spiecker, S. Neumeier, M. Göken, Thermophysical and mechanical properties of advanced single crystalline Co-base superalloys, Metall. Mater. Trans. A. 49 (2018) 4099-4109. https://doi.org/10.1007/s11661-018-4705-1.
U. Tetzlaff, H. Mughrabi, Enhancement of the high-temperature tensile creep strength of monocrystalline nickel-base superalloys by pre-rafting in compression, in: T.M. Pollock, R.D. Kissinger, R.R. Bowman, K.A. Green, M. McLean (Eds.), Superalloys 2000, 2000, pp. 273-282. http://www.tms.org/superalloys/10.7449/2000/Superalloys_2000_273_282.pdf (accessed August 8, 2017).
Phase equilibria and microstructure on γ′ phase in Co-Ni-Al-W system. K Shinagawa, T Omori, J Sato, K Oikawa, I Ohnuma, R Kainuma, K Ishida, Mater. Trans. 49K. Shinagawa, T. Omori, J. Sato, K. Oikawa, I. Ohnuma, R. Kainuma, K. Ishida, Phase equilibria and microstructure on γ′ phase in Co-Ni-Al-W system, Mater. Trans. 49 (2008) 1474-1479.
Intermediate Co/Ni-base model superalloys -Thermophysical properties, creep and oxidation. C H Zenk, S Neumeier, N M Engl, S G Fries, O Dolotko, M Weiser, S Virtanen, M Göken, Scr. Mater. 112C.H. Zenk, S. Neumeier, N.M. Engl, S.G. Fries, O. Dolotko, M. Weiser, S. Virtanen, M. Göken, Intermediate Co/Ni-base model superalloys -Thermophysical properties, creep and oxidation, Scr. Mater. 112 (2016) 83-86.
https://doi.org/10.1016/j.scriptamat.2015.09.018.
The role of the base element in γ/γ′ strengthened cobalt-nickel base superalloys. C H Zenk, S Neumeier, M Kolb, N Volz, S G Fries, O Dolotko, I Povstugar, D Raabe, M Göken, 10.1002/9781119075646.ch103Superalloys. M. Hardy, E. Huron, U. Glatzel, B. Griffin, B. Lewis, C.M. Rae, V. Seetharaman, S. TinWarrendale PAThe Minerals, Metals & Materials SocietyC.H. Zenk, S. Neumeier, M. Kolb, N. Volz, S.G. Fries, O. Dolotko, I. Povstugar, D. Raabe, M. Göken, The role of the base element in γ/γ′ strengthened cobalt-nickel base superalloys, in: M. Hardy, E. Huron, U. Glatzel, B. Griffin, B. Lewis, C.M. Rae, V. Seetharaman, S. Tin (Eds.), Superalloys 2016, The Minerals, Metals & Materials Society, Warrendale PA, 2016: pp. 971-980. https://doi.org/10.1002/9781119075646.ch103.
A first-principles study of the effect of Ta on the superlattice intrinsic stacking fault energy of L12-Co3(Al,W). A Mottura, A Janotti, T M Pollock, 10.1016/j.intermet.2012.04.020Intermetallics. 28A. Mottura, A. Janotti, T.M. Pollock, A first-principles study of the effect of Ta on the superlattice intrinsic stacking fault energy of L12-Co3(Al,W), Intermetallics. 28 (2012) 138-143. https://doi.org/10.1016/j.intermet.2012.04.020.
First-principles study of the partitioning and site preference of Re or Ru in Co-based superalloys with interface. M Chen, C.-Y. Wang, 10.1016/j.physleta.2010.05.065Phys. Lett. A. 374M. Chen, C.-Y. Wang, First-principles study of the partitioning and site preference of Re or Ru in Co-based superalloys with interface, Phys. Lett. A. 374 (2010) 3238-3242. https://doi.org/10.1016/j.physleta.2010.05.065.
The Importance of Diffusivity and Partitioning Behavior of Solid Solution Strengthening Elements for the High Temperature Creep Strength of Ni-Base Superalloys. S Giese, A Bezold, M Pröbstle, A Heckl, S Neumeier, M Göken, 10.1007/s11661-020-06028-0Metall. Mater. Trans. A. 51S. Giese, A. Bezold, M. Pröbstle, A. Heckl, S. Neumeier, M. Göken, The Importance of Diffusivity and Partitioning Behavior of Solid Solution Strengthening Elements for the High Temperature Creep Strength of Ni-Base Superalloys, Metall. Mater. Trans. A. 51 (2020) 6195-6206. https://doi.org/10.1007/s11661-020-06028-0.
A general mechanism of martensitic nucleation: Part I. General concepts and the FCC→ HCP transformation. G B Olson, M Cohen, Metall. Trans. A. 7G.B. Olson, M. Cohen, A general mechanism of martensitic nucleation: Part I. General concepts and the FCC→ HCP transformation, Metall. Trans. A. 7 (1976) 1897-1904.
A theory of the transformation in pure cobalt. J W Christian, W Hume-Rothery, Proc. R. Soc. Lond. Ser. Math. Phys. Sci. 206J.W. Christian, W. Hume-Rothery, A theory of the transformation in pure cobalt, Proc. R. Soc. Lond. Ser. Math. Phys. Sci. 206 (1951) 51-64.
https://doi.org/10.1098/rspa.1951.0055.
Theory of the martensitic transformation in cobalt. P Tolédano, G Krexner, M Prem, H.-P Weber, V P Dmitriev, Phys. Rev. B. 64144104P. Tolédano, G. Krexner, M. Prem, H.-P. Weber, V.P. Dmitriev, Theory of the martensitic transformation in cobalt, Phys. Rev. B. 64 (2001) 144104.
https://doi.org/10.1103/PhysRevB.64.144104.
S. Ram, Allotropic phase transformations in HCP, FCC and BCC metastable structures in Co-nanoparticles, RQ10 Tenth Int. Conf. Rapidly Quenched Metastable Mater. 304-306 (2001) 923-927. https://doi.org/10.1016/S0921-5093(00)01647-6.
Gradient-corrected density functional calculation of elastic constants of Fe, Co and Ni in bcc, fcc and hcp structures. G Y Guo, H H Wang, Chin J Phys. 38G.Y. Guo, H.H. Wang, Gradient-corrected density functional calculation of elastic constants of Fe, Co and Ni in bcc, fcc and hcp structures, Chin J Phys. 38 (2000) 949-961.
Unusual stability of fcc Co(110)/Cu(110). G R Harp, R F C Farrow, D Weller, T A Rabedeau, R F Marks, Phys. Rev. B. 48G.R. Harp, R.F.C. Farrow, D. Weller, T.A. Rabedeau, R.F. Marks, Unusual stability of fcc Co(110)/Cu(110), Phys. Rev. B. 48 (1993) 17538-17544.
https://doi.org/10.1103/PhysRevB.48.17538.
Introduction to solid state physics. C Kittel, P Mceuen, P Mceuen, WileyNew YorkC. Kittel, P. McEuen, P. McEuen, Introduction to solid state physics, Wiley New York, 1996.
Haynes188 data sheet, Haynes International. Haynes International, Haynes International, Haynes188 data sheet, Haynes International, n.d. https://www.haynesintl.com/alloys/alloy-portfolio_/High-temperature- Alloys/HAYNES188alloy.aspx.
Stacking fault energy of face-centered cubic metals: thermodynamic and ab initio approaches. R Li, S Lu, D Kim, S Schönecker, J Zhao, S K Kwon, L Vitos, J. Phys. Condens. Matter. 28395001R. Li, S. Lu, D. Kim, S. Schönecker, J. Zhao, S.K. Kwon, L. Vitos, Stacking fault energy of face-centered cubic metals: thermodynamic and ab initio approaches, J. Phys. Condens. Matter. 28 (2016) 395001.
A Statistical Theory of Solid Solution Hardening. R Labusch, 10.1002/pssb.19700410221Phys. Status Solidi B. 41R. Labusch, A Statistical Theory of Solid Solution Hardening, Phys. Status Solidi B. 41 (1970) 659-669. https://doi.org/10.1002/pssb.19700410221.
Multi-component solid solution hardening -Part 1 Proposed model. L A Gypen, A Deruyttere, 10.1007/BF00540987J. Mater. Sci. 12L.A. Gypen, A. Deruyttere, Multi-component solid solution hardening -Part 1 Proposed model, J. Mater. Sci. 12 (1977) 1028-1033. https://doi.org/10.1007/BF00540987.
Multi-component solid solution hardening -Part 2 Agreement with experimental results. L A Gypen, A Deruyttere, J. Mater. Sci. 12L.A. Gypen, A. Deruyttere, Multi-component solid solution hardening -Part 2 Agreement with experimental results, J. Mater. Sci. 12 (1977) 1034-1038.
https://doi.org/10.1007/BF00540988.
Theory of strengthening in fcc high entropy alloys. C Varvenne, A Luque, W A Curtin, 10.1016/j.actamat.2016.07.040Acta Mater. 118C. Varvenne, A. Luque, W.A. Curtin, Theory of strengthening in fcc high entropy alloys, Acta Mater. 118 (2016) 164-176. https://doi.org/10.1016/j.actamat.2016.07.040.
On the prediction of the yield stress of unimodal and multimodal γ ′ Nickel-base superalloys. E I Galindo-Nava, L D Connor, C M F Rae, 10.1016/j.actamat.2015.07.048Acta Mater. 98E.I. Galindo-Nava, L.D. Connor, C.M.F. Rae, On the prediction of the yield stress of unimodal and multimodal γ ′ Nickel-base superalloys, Acta Mater. 98 (2015) 377-390. https://doi.org/10.1016/j.actamat.2015.07.048.
Substitutional solution hardening. R L Fleischer, 10.1016/0001-6160(63)90213-XActa Metall. 11R.L. Fleischer, Substitutional solution hardening, Acta Metall. 11 (1963) 203-209. https://doi.org/10.1016/0001-6160(63)90213-X.
| []
|
[
"NARAIN GUPTA'S THREE NORMAL SUBGROUP PROBLEM AND GROUP HOMOLOGY Dedicated to the memory of Chander Kanta Gupta and Narain Gupta",
"NARAIN GUPTA'S THREE NORMAL SUBGROUP PROBLEM AND GROUP HOMOLOGY Dedicated to the memory of Chander Kanta Gupta and Narain Gupta"
]
| [
"Roman Mikhailov ",
"Inder Bir ",
"S Passi "
]
| []
| []
| This paper is about application of various homological methods to classical problems in the theory of group rings. It is shown that the third homology of groups plays a key role in Narain Gupta's three normal subgroup problem. For a free group F and its normal subgroups R, S, T, and the corresponding ideals in the integral group ring, a complete description of the normal subgroup F ∩ (1 + rst) is given, provided R ⊆ T and the third and the fourth homology groups of R/R ∩ S are torsion groups. | 10.1016/j.jalgebra.2019.02.007 | [
"https://arxiv.org/pdf/1611.01313v1.pdf"
]
| 119,716,252 | 1611.01313 | f39848267b7c302074aa604d38393d9418bb86ec |
NARAIN GUPTA'S THREE NORMAL SUBGROUP PROBLEM AND GROUP HOMOLOGY Dedicated to the memory of Chander Kanta Gupta and Narain Gupta
4 Nov 2016
Roman Mikhailov
Inder Bir
S Passi
NARAIN GUPTA'S THREE NORMAL SUBGROUP PROBLEM AND GROUP HOMOLOGY Dedicated to the memory of Chander Kanta Gupta and Narain Gupta
4 Nov 2016
This paper is about application of various homological methods to classical problems in the theory of group rings. It is shown that the third homology of groups plays a key role in Narain Gupta's three normal subgroup problem. For a free group F and its normal subgroups R, S, T, and the corresponding ideals in the integral group ring, a complete description of the normal subgroup F ∩ (1 + rst) is given, provided R ⊆ T and the third and the fourth homology groups of R/R ∩ S are torsion groups.
Introduction
It is well-known that the second (co)homology of groups plays an important role in the theory of groups; in particular, in the theory of central extensions. The third cohomology of a group classifies k-invariants for crossed modules or homotopy 2-types ( [10], [11], [19]). However, it is not easy to find an explicit application of the third (co)homology in grouptheoretical questions which are formulated without the language of homological algebra. In this paper, we show that the third homology of groups plays a key role in the solution of a problem in free group rings.
Let F be a free group and Z[F ] its integral group ring. For every two-sided ideal a in Z[F ], we have a normal subgroup D(F, a) := F ∩ (1 + a) of F . The identification of such normal subgroups in free groups is a recurring problem in the theory of group rings (see [9], [14], [18]). As demonstrated in our works ( [15], [16], [17]), homology of groups and derived functors of non-additive functors can provide a useful tool for investigating such subgroups. In the present article we use this homological approach to address Narain Gupta's problem ( [9], Problem 6.3, p. 119) in free group rings which, in general, has been rather intractable so far.
Given a normal subgroup R of F, let r denote the two-sided ideal of Z[F] generated by the augmentation ideal ∆(R) of the integral group ring Z[R], i.e., r = ∆(R)Z[F]. Clearly D(F, r) = R. For two normal subgroups R, S of F, it is known that D(F, rs) = [R ∩ S, R ∩ S], the derived subgroup of R ∩ S ([4]; [9], Theorem 1.6, p. 3). If R, S, T are three normal subgroups of F, a currently open problem formulated by Narain Gupta (loc. cit.) asks for the identification of the normal subgroup D(F, rst). The answer to this general problem is known for the following special cases:
D(F, rfs) = [R′ ∩ S′, R ∩ S][R ∩ S′, R ∩ S′][R′ ∩ S, R′ ∩ S] ([7], [16]); (1.3)
D(F, frf) = [R′, F] [21], (1.4)
where, for groups H ⊆ G, H′ and √H denote respectively the derived subgroup of H and the isolator in G of the subgroup H, and {γ_i(G)}_{i≥1} is the lower central series of G. Given a triple of subgroups R, S, T of a free group F, set
I(R, S, T) := [(R ∩ S)′ ∩ (S ∩ T)′, R ∩ T](R ∩ (S ∩ T)′)′((R ∩ S)′ ∩ T)′.
Observe that, for all the above-mentioned known identifications of D(F, rst), we have D(F, rst) = I(R, S, T ).
(1.5)
The object of the present work is to investigate the case when R ⊆ T (or equivalently, in view of the canonical anti-automorphism of Z[F ], when T ⊆ R). It is easy to see that a complete answer for this case, together with the known results (1.2, 1.3, 1.4), will provide identification of D(F, rst) whenever one of the three normal subgroups R, S, T is contained in either of the other two. Our main result is the following Theorem 1.1. If R, S, T are normal subgroups of a free group F , such that R ⊆ T , and the integral homology groups H 3 (R/R ∩ S), H 4 (R/R ∩ S) are torsion groups, then
D(F, rst) = I(R, S, T ) = (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R].
Our proof of the above theorem involves a mix of homological and combinatorial arguments, which we develop in Section 2, and it is completed in Section 3. A striking feature to note here is the role played by the third homology in the identification of normal subgroups determined by ideals in free group rings. In Section 4 we bring out further the role of integral homology and give an example with D(F, rsf)/I(R, S, F ) non-zero, thus showing that (1.5) does not hold in general.
In Section 5, we prove (Theorem 5.2), using combinatoral arguments, that if the normal subgroup R is contained in both S and T , and a ∈ D(F, srt + trs), then a 2 ∈ D(F, rrs + srr + trr + rrt) and therefore, by [21], Theorem 4, a 4 ∈ [R, R, ST ]. Thus, in particular, we have a combinatorial proof of one of Ralph Stöhr's results, which is implicit in his homological approach [21] to Gupta's problem, namely that if a ∈ D(F, frf ), then a 2 ∈ D(rrf + frr).
We conclude with a few observations on the corresponding four normal subgroup problem including the identification
D(F, rsfr) = D(F, rfsr) = [R ∩ S ′ , R ∩ S ′ , R]γ 4 (R),
provided R ⊆ S, which is a generalization of a result of Chander Kanta Gupta [8].
Homological and Combinatorial Preliminaries
Theorem 2.1. If F is a free group, and R, S its normal subgroups with S ⊆ R, then there is a natural isomorphism
H₂(F/S, f/r) ≅ (fs ∩ rf)/(fsf + rs).
Proof. Consider the Gruenberg free resolution ( [5], p. 34)
· · · → sf/s 2 f → s/s 2 → f/sf → Z[F/S] → Z → 0
of Z viewed as a trivial left F/S-module. On tensoring this resolution with the right F/S-module f/r, we have the complex
· · · → f/r ⊗ F/S sf/s 2 f → f/r ⊗ F/S s/s 2 → f/r ⊗ F/S f/sf → f/r.
For a free group F , and ideals b ⊂ a, d ⊂ c, we have ( [20], Lemma 4.9)
(a/b) ⊗_F (c/d) ≅ ac/(bc + ad).
Thus the above complex reduces to the following complex
· · · → fsf/(fs²f + rsf) → fs/(fs² + rs) → f²/(fsf + rf) → f/r.
Hence H₂(F/S, f/r) ≅ (fs ∩ rf)/(fsf + rs).

Remark 2.1. Note that the natural composition
H₂(R/S) ⊗ f/r ↠ H₂(R/S, f/r) → H₂(F/S, f/r),
which, under the identifications H₂(R/S) ≅ (∆²(R) ∩ ∆(S)Z[R])/(∆(R)∆(S) + ∆(S)∆(R)) and H₂(F/S, f/r) ≅ (fs ∩ rf)/(fsf + rs), is induced by the map (α, β) → βα, α ∈ ∆²(R) ∩ ∆(S)Z[R], β ∈ f.

Lemma 2.1. (see [23], Theorem 1.1) If R, S, T are normal subgroups of a free group F and R ⊆ T, then
∆(R)∆(S)∆(T) ∩ ∆(R)∆(R ∩ S) = ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T)′). (2.1)

Proof. We can assume that F = ST = TS. Let u ∈ ∆(R)∆(S)∆(T) ∩ ∆(R)∆(R ∩ S). Since the ideal ∆(R)Z[F] is a free right Z[F]-module,
u = Σ_{y∈Y} (y − 1)u_y, (2.2)
where Y is a free basis of R and
u_y ∈ ∆(S)∆(T)Z[F] ∩ ∆(R ∩ S)Z[F] = ∆(S)∆(T) ∩ Z[F]∆(R ∩ S). (2.3)
Let Z be a transversal for T in F = ST. Since ∆(R)∆(R ∩ S) ⊆ Z[T], projecting the above equation (2.2) under the map θ : Z[F] → Z[T] induced by f = ts → t for f ∈ F, t ∈ T, s in a right transversal for S ∩ T in S, it follows that
u_y ∈ Z[T]. (2.4)
Similarly, using the projection Z[F] → Z[T] induced on using a left transversal for S ∩ T in S, the inclusion (2.3) shows further that
u_y ∈ ∆(S ∩ T)∆(T) ∩ Z[T]∆(R ∩ S). (2.5)
Therefore, u_y ∈ ∆(R ∩ S)∆(T) + ∆(R ∩ (S ∩ T)′) [Lemma 2.1 [23], by setting K = S ∩ T, H = R]. Consequently
u ∈ ∆(R)∆(R ∩ S)∆(T) + ∆(R)∆(R ∩ (S ∩ T)′). (2.6)
Because u ∈ ∆(R), projecting (2.6) under Z[T] → Z[R], it follows that
u ∈ ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T)′). (2.7)
We have thus proved that the left hand side of (2.1) is contained in the right hand side; the reverse inclusion being obvious, the proof of the Lemma is complete.
A similar analysis, as above, yields the following intersection lemma.
Lemma 2.2. Let R ⊆ S. Then ∆(R)∆(F )∆(S)∆(R) ∩ ∆ 3 (R) = ∆ 4 (R) + ∆(R)∆(R ∩ S ′ )∆(R). ✷
For a free group F and its normal subgroup R, and any left Z[F/R]-module M, there are isomorphisms
H i (F/R, R/R ′ ⊗ M) ∼ = H i+2 (F/R, M), i ≥ 1. (2.8)
This is a well-known fact which follows easily from the Magnus embedding R/R ′ ֒→ f/fr of the relation module R/R ′ .
Let R, S be normal subgroups of F . For the group G := R/R ∩ S ∼ = RS/S, one can consider two different relation modules and a natural map between them:
R ∩ S (R ∩ S) ′ → S/S ′ .
This map can be naturally extended to a map between the corresponding relation sequences ( [6], p. 7):
(R ∩ S)/(R ∩ S)′ → ∆(R)/(∆(R ∩ S)∆(S)) → ∆(R)/(∆(R ∩ S)Z[R])
S/S′ → ∆(RS)/(∆(S)∆(RS)) → ∆(RS)/(∆(S)Z[RS])
The Z[G]-modules ∆(R)/(∆(R ∩ S)∆(S)) and ∆(RS)/(∆(S)∆(RS)) are free, hence for any Z[G]-module M, there are natural isomorphisms
H_i(R/R ∩ S, (R ∩ S)/(R ∩ S)′ ⊗ M) ≅ H_i(R/R ∩ S, S/S′ ⊗ M) ≅ H_{i+2}(R/R ∩ S, M), i ≥ 1. (2.9)
We need some well-known facts about certain quadratic endofunctors on the category of abelian groups, namely:
⊗² tensor square, SP² symmetric square, Λ² exterior square, ⊗̃² antisymmetric square, Γ₂ divided square.
For a survey of the properties of these functors and their derived functors, see ([1], [2]). Recall that, for an abelian group A, by definition,
SP²(A) = ⊗²(A)/⟨a ⊗ b − b ⊗ a, a, b ∈ A⟩, Λ²(A) = ⊗²(A)/⟨a ⊗ a, a ∈ A⟩, ⊗̃²(A) = ⊗²(A)/⟨a ⊗ b + b ⊗ a, a, b ∈ A⟩.
The divided square functor Γ 2 is also known as the Whitehead quadratic functor. Given an abelian group A, the abelian group Γ 2 (A) is generated by symbols γ(x), x ∈ A, satisfying the following relations for all x, y, z ∈ A:
γ(0) = 0; γ(x) = γ(−x); γ(x + y + z) − γ(x + y) − γ(x + z) − γ(y + z) + γ(x) + γ(y) + γ(z) = 0.
The exterior and the antisymmetric squares are connected as follows. For every abelian group A, we have a short exact sequence
0 → A ⊗ Z/2 →⊗ 2 (A) → Λ 2 (A) → 0. (2.10)
Similarly, connecting the symmetric and divided square functor, we have the following short exact sequence:
0 → SP 2 (A) → Γ 2 (A) → A ⊗ Z/2 → 0. (2.11)
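As an illustration of these definitions (our own worked example, not part of the original text), the sequences (2.10) and (2.11) can be checked by hand on the smallest nontrivial case A = Z/2:

```latex
% Worked check of (2.10) and (2.11) for A = Z/2 (illustrative only);
% all five quadratic functors are computed from the single generator a of A.
\otimes^2(\mathbb{Z}/2)\cong\mathbb{Z}/2,\qquad
SP^2(\mathbb{Z}/2)\cong\mathbb{Z}/2,\qquad
\Lambda^2(\mathbb{Z}/2)=0,\qquad
\widetilde{\otimes}^2(\mathbb{Z}/2)\cong\mathbb{Z}/2,\qquad
\Gamma_2(\mathbb{Z}/2)\cong\mathbb{Z}/4.
```

With these values, (2.10) becomes 0 → Z/2 → Z/2 → 0 → 0, and (2.11) becomes the non-split extension 0 → Z/2 → Z/4 → Z/2 → 0, since the image of the generator a·a of SP²(Z/2) is the element 2γ(a) of order 2 in Γ₂(Z/2).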
Let E be a free abelian group, I its subgroup and E/I = A. Then there is a natural exact sequence
0 → L 1 SP 2 (A) → Λ 2 (E)/Λ 2 (I) → E ⊗ E/I → SP 2 (A) → 0 (2.12)
where L 1 SP 2 is the first derived functor of SP 2 in the sense of Dold-Puppe, and L 1 SP 2 (A) is equal to the quotient of Tor(A, A) by the subgroup generated by the diagonal elements.
We refer the reader to [16] and [17] for the proof and applications of above kind of sequences in the theory of groups rings.
Another ingredient that we need is the following analog of the results from [13] on Koszul sequences.

Lemma 2.3. For a free abelian group E, I ⊆ E, E/I = A, the homology of the naturally defined complex
SP²(I) → I ⊗ E → ⊗̃²(E)
satisfies the following:
H 0 =⊗ 2 (A), 0 → Tor(A, Z/2) → H 1 → L 1 Λ 2 (A) → 0. (2.13)
Proof. The description of H 0 follows from the natural commutative diagram
SP 2 (E) SP 2 (E) I ⊗ E / / / / ⊗ 2 (E) / / / / ⊗ 2 (A) I ⊗ E / /⊗ 2 (E) / / / /⊗ 2 (A) since the image of SP 2 (E) in ⊗ 2 (A) is the same as the image of SP 2 (A) in ⊗ 2 (A).
Recall that the first homology of the Koszul sequence
Γ 2 (I) → I ⊗ E → Λ 2 (E)
is naturally isomorphic to the derived functor L 1 Λ 2 (A) (see [13]). Observe that the kernel of the map I ⊗ Z/2 → E ⊗ Z/2 is naturally isomorphic to Tor(A, Z/2). The short exact sequence (2.13) follows from the following commutative diagram:
E ⊗ Z/2 → A ⊗ Z/2
SP²(I) → I ⊗ E → ⊗̃²(E) → ⊗²(A)
Γ₂(I) → I ⊗ E → Λ²(E) → Λ²(A)
I ⊗ Z/2.

3. Proof of Theorem 1.1

Since F ∩ (1 + ∆(R)∆(S)) = [R ∩ S, R ∩ S] ⊂ (1 + ∆(R ∩ S)∆(R)), Lemma 2.1 implies that D := D(F, ∆(R)∆(S)∆(T)) = F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T)′))
Observe that, for w ∈ D, w − 1 ∈ ∆(R ∩ S)∆(R), and
∆(R)∆(R ∩ S)∆(R) ⊂ ∆(R ∩ S)∆(R). Therefore D = F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T ) ′ ) ∩ ∆(R ∩ S)∆(R)) (3.1)
We have to prove that the quotient
F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T ) ′ ) ∩ ∆(R ∩ S)∆(R)) (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R] (3.2)
is a torsion group. By Theorem 2.1,
∆(R)∆(R ∩ (S ∩ T ) ′ ) ∩ ∆(R ∩ S)∆(R) ∆(R)∆(R ∩ (S ∩ T ) ′ )∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ ) = H 2 R R ∩ (S ∩ T ) ′ , ∆(R) ∆(R ∩ S) .
(3.3) Consider the homological Hochschild-Serre spectral sequence for the group extension
1 → R ∩ S R ∩ (S ∩ T ) ′ → R R ∩ (S ∩ T ) ′ → R R ∩ S → 1
with coefficients in ∆(R) ∆(R∩S) . Three terms of the E 2 -page of the spectral sequence, which give a contribution to the homology group (3.3), are the following:
E 2 0, 2 = H 0 R R ∩ S , H 2 R ∩ S R ∩ (S ∩ T ) ′ , ∆(R) ∆(R ∩ S) ; E 2 1, 1 = H 1 R R ∩ S , H 1 R ∩ S R ∩ (S ∩ T ) ′ , ∆(R) ∆(R ∩ S) ; E 2 2, 0 = H 2 R R ∩ S , ∆(R) ∆(R ∩ S) = H 3 R R ∩ S .
By hypothesis, E 2 2,0 is a torsion group. Consider the term E 2 1,1 :
E 2 1, 1 = H 1 R R ∩ S , R ∩ S R ∩ (S ∩ T ) ′ ⊗ ∆(R) ∆(R ∩ S) = H 2 R R ∩ S , R ∩ S R ∩ (S ∩ T ) ′ . The exact sequence 0 → R ∩ S R ∩ (S ∩ T ) ′ → S ∩ T (S ∩ T ) ′ → S R ∩ S → 0,(3.4)
yields an exact sequence
H 3 R R ∩ S , S R ∩ S → E 2 1,1 → H 2 R R ∩ S , S ∩ T (S ∩ T ) ′ .
(3.5)
The natural isomorphism
R(S ∩ T ) S ∩ T → R R ∩ S induces an isomorphism (see (2.9)) H 4 R R ∩ S ∼ = H 2 R R ∩ S , S ∩ T (S ∩ T ) ′ .
By hypothesis, all terms in (3.5) are torsion. Consider the natural map
f : H 2 R ∩ S R ∩ (S ∩ T ) ′ , ∆(R) ∆(R ∩ S) = H 2 R ∩ S R ∩ (S ∩ T ) ′ ⊗ ∆(R) ∆(R ∩ S) → H 2 R R ∩ (S ∩ T ) ′ , ∆(R) ∆(R ∩ S) .
The natural image of the map f under the isomorphism (3.3) is generated by elements of the type (f − 1)([r 1 , r 2 ] − 1), with f ∈ R, r 1 , r 2 ∈ R ∩ S (see Remark 2.1). It follows that the quotient group
∆(R)∆(R ∩ (S ∩ T ) ′ ) ∩ ∆(R ∩ S)∆(R) im(f ) + ∆(R)∆(R ∩ (S ∩ T ) ′ )∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ ) is a torsion group. Since im(f ) ⊆ ∆(R)∆(R ∩ S)∆(R), the quotient ∆(R)∆(R ∩ S)∆(R) + ∆(R)∆(R ∩ (S ∩ T ) ′ ) ∩ ∆(R ∩ S)∆(R) ∆(R)∆(R ∩ S)∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )
is a torsion group. Therefore, the quotient (3.2) is torsion if and only if the quotient
F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )) (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R] (3.6) is torsion. Since F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )) ⊆ (R ∩ S) ′ , and ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ ) ⊂ ∆ 2 (R ∩ S), F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )) = F ∩ (1 + ∆(R)∆(R ∩ S)∆(R) ∩ ∆ 2 (R ∩ S)Z[R] + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )).
Recall that
H 4 R R ∩ S = ∆(R)∆(R ∩ S)∆(R) ∩ ∆ 2 (R ∩ S)Z[R] ∆ 2 (R ∩ S)∆(R) + ∆(R)∆ 2 (R ∩ S)
is finite by hypothesis. Therefore, the quotient (3.6) is torsion if and only if the quotient
F ∩ (1 + ∆ 2 (R ∩ S)∆(R) + ∆(R)∆ 2 (R ∩ S) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )) (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R]
is torsion.
We next observe that the following commutative diagram with exact rows follows from On passing to the homology H * (R/R ∩ S, −), we obtain the exact sequence
(2.12) (with E = (R ∩ S)/(R ∩ S) ′ , I = (R ∩ (S ∩ T ) ′ )/(R ∩ S) ′ ∩ (S ∩ T ) ′ ,H 1 R/R ∩ S, R ∩ S R ∩ (S ∩ T ) ′ ⊗ R ∩ S (R ∩ S) ′ → H 1 R/R ∩ S, SP 2 R ∩ S R ∩ (S ∩ T ) ′ → (R ∩ S) ′ (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R] → ∆ 2 (R ∩ S) ∆ 2 (R ∩ S)∆(R) + ∆(R)∆ 2 (R ∩ S) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ ) (3.7)
That is, we get an epimorphism
H 1 R/R ∩ S, SP 2 R ∩ S R ∩ (S ∩ T ) ′ → F ∩ (1 + ∆ 2 (R ∩ S)∆(R) + ∆(R)∆ 2 (R ∩ S) + ∆(R ∩ S)∆(R ∩ (S ∩ T ) ′ )) (R ∩ (S ∩ T ) ′ ) ′ [(R ∩ S) ′ , R] . (3.8)
The next commutative diagram follows from Lemma 2.3:
SP 2 R∩S R∩(S∩T ) ′ / / / / S∩T (S∩T ) ′ ⊗ R∩S R∩(S∩T ) ′ / /⊗ 2 S∩T (S∩T ) ′ SP 2 S∩T (S∩T ) ′ / / / / S∩T (S∩T ) ′ ⊗ S∩T (S∩T ) ′ / / / / ⊗ 2 S∩T (S∩T ) ′ K / / / / SP 2 S∩T (S∩T ) ′ SP 2 ( R∩S R∩(S∩T ) ′ ) / / S∩T (S∩T ) ′ ⊗ S∩T (R∩S)(S∩T ) ′ / / / /⊗ 2 S∩T (R∩S)(S∩T ) ′ ,
where K lives in the short exact sequence
0 → Tor S ∩ T (S ∩ T ) ′ (R ∩ S) , Z/2 → K → L 1 Λ 2 S ∩ T (S ∩ T ) ′ (R ∩ S) → 0;
in particular, K is a torsion group. Recall that the homology in dimension ≥ 1 of a group with coefficients in the symmetric (or exterior, or antisymmetric) square of its relation module is a 2-torsion group. Hence,
H 1 R/R ∩ S, SP 2 S ∩ T (S ∩ T ) ′ = H 1 R(S ∩ T )/(S ∩ T ), SP 2 S ∩ T (S ∩ T ) ′
is a 2-torsion group. Thus we have exact sequences Let us consider the case T = F , and the corresponding subgroup D (F, rsf).
H 3 R R∩S ,⊗ 2 S∩T (R∩S)(S∩T ) ′ H 2 R R∩S , SP 2 (S∩T /(S∩T ) ′ ) SP 2 ( R∩S R∩(S∩T ) ′ )+K / / H 1 R R∩S , SP 2 R∩S R∩(S∩T ) ′ / / (2 − torsion) H 2 R R∩S , S∩T (S∩T ) ′ ⊗ S∩T (R∩S)(S∩T ) ′Theorem 4.1. If the cohomological dimension cd(R/R ∩ S) ≤ 3, then D(F, rsf) (R ∩ S ′ ) ′ [(R ∩ S) ′ , R] ←֓ H 3 (R/R ∩ S) ⊗⊗ 2 S (R ∩ S)S ′ .
[Recall that⊗ 2 denotes the antisymmetric tensor square].
Proof. Let us set
B := ∆²(R ∩ S)∆(R) + ∆(R)∆²(R ∩ S) + ∆(R ∩ S)∆(R ∩ S′).
The sequence (3.7) implies that there is an exact sequence
H₁(R/R ∩ S, (R ∩ S)/(R ∩ S′) ⊗ (R ∩ S)/(R ∩ S)′) → H₁(R/R ∩ S, SP²((R ∩ S)/(R ∩ S′))) → [F ∩ (1 + B)] / [(R ∩ S′)′[(R ∩ S)′, R]] → 0. (4.1)
Using dimension shifting, we get
H 1 R/R ∩ S, R ∩ S R ∩ S ′ ⊗ R ∩ S (R ∩ S) ′ = H 3 R/R ∩ S, R ∩ S R ∩ S ′ .
The short exact sequence (3.4) implies the exact sequence
H 4 R/R ∩ S, S (R ∩ S)S ′ → H 3 R/R ∩ S, R ∩ S R ∩ S ′ → H 3 (R/R ∩ S, S/S ′ ) .
Again by dimension shifting,
H₃(R/R ∩ S, S/S′) = H₃(RS/S, S/S′) = H₅(R/R ∩ S) = 0.
Therefore, by the hypothesis on the cohomological dimension, both sides in the last exact sequence are zero. Consequently,
H 1 R/R ∩ S, SP 2 R ∩ S R ∩ S ′ = F ∩ (1 + B) (R ∩ S ′ ) ′ [(R ∩ S) ′ , R]
Invoking the sequences (3.9), we get an isomorphsim
H 3 R/R ∩ S,⊗ 2 S (R ∩ S)S ′ = H 1 R/R ∩ S, SP 2 R ∩ S R ∩ S ′ .
The result now follows from the Universal Coefficient Theorem.
Remark 4.1. Decomposition of the antisymmetric tensor square as 0 → − ⊗ Z/2 → ⊗ 2 → Λ 2 → 0, leads to the following diagram
H 3 (R/R ∩ S) ⊗ S (R∩S)S ′ ⊗ Z/2 H 3 (R/R ∩ S) ⊗ ⊗ 2 S (R∩S)S ′ / / / / H 3 R/R ∩ S,⊗ 2 S (R∩S)S ′ / / / / (torsion) H 3 (R/R ∩ S) ⊗ Λ 2 S (R∩S)S ′ .
We can thus conclude that
tf H 3 (R/R ∩ S) ⊗ Λ 2 S (R ∩ S)S ′ = tf H 1 R/R ∩ S, SP 2 R ∩ S R ∩ S ′ .
Here tf means the torsion-free rank of an abelian group.
Example. Let F = F(x₁, x₂, x₃, x₄, x₅), R := ⟨x₁, x₂, x₃⟩^F, S := ⟨[x₁, x₂], [x₂, x₃], [x₁, x₃], x₄, x₅⟩^F.
The quotient group R/R ∩ S is then a free abelian group of rank three, with the images of x 1 , x 2 , x 3 as generators, the group S (R∩S)S ′ is a free abelian group of rank two with the images of x 4 , x 5 as generators. Therefore,
H 3 (R/R ∩ S) ⊗ Λ 2 S (R ∩ S)S ′ ∼ = Z.
Hence, the quotient D(F, rsf )
I(R, S, F ) (4.2)
is non-zero.
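The two standard facts behind this computation can be made explicit (our own elaboration): for a free abelian group of rank r, the integral homology and the exterior square are free abelian of binomial rank,

```latex
% Homology of free abelian groups and the exterior square (illustrative).
H_n(\mathbb{Z}^r;\mathbb{Z})\cong\Lambda^n(\mathbb{Z}^r)\cong\mathbb{Z}^{\binom{r}{n}},
\qquad\text{so}\qquad
H_3(\mathbb{Z}^3)\cong\mathbb{Z},\qquad
\Lambda^2(\mathbb{Z}^2)\cong\mathbb{Z},
```

which gives H₃(R/R ∩ S) ⊗ Λ²(S/((R ∩ S)S′)) ≅ Z ⊗ Z ≅ Z in the example above.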
At the moment we are not able to indicate explicitly the elements leading to the nontriviality of the quotient (4.2). Here we present candidates for such elements. The recipe given below shows how the elements from the third homology can be used for constructing elements belonging to generalized dimension subgroups.
Consider a free group F and elements r i ∈ R, t i ∈ R ∩ S such that i [r i , t i ] = 1.
Such elements come from the third homology:
H 3 (R/R ∩ S) = ker{R ∧ (R ∩ S) [,]
→ R}.
Here the sign ∧ means the non-abelian exterior product in the sense of Brown-Loday [3].
For d, e ∈ S, consider the element
w = i [[r −1 i , d], [t i , e]] Proposition 4.1. w − 1 ∈ ∆(R)∆(S)∆(F ).
Proof. Working modulo ∆(R)∆(S)∆(F ), we get
1 − w i ≡ (1 − [r −1 i , d])(1 − [t i , e]) − (1 − [t i , e])(1 − [r −1 i , d]) ≡ − (1 − [t i , e])(1 − [r −1 i , d]) ≡ −(1 − [t i , e])d −1 r i ((1 − r −1 i )(1 − d) − (1 − d)(1 − r −1 i )) ≡ (1 − [t i , e])(1 − r i )(1 − d) ≡ e −1 t −1 i ((1 − t i )(1 − e) − (1 − e)(1 − t i ))(1 − r i )(1 − d)] ≡ (1 − e −1 )(1 − t i )(1 − r i )(1 − d) ≡ − (1 − e −1 )r i t i (1 − [r i , t i ])(1 − d) ≡ −(1 − e −1 )(1 − [r i , t i ])(1 − d). Clearly, 1 − w ≡ i (1 − w i ).
Therefore,
1 − w ≡ −(1 − e −1 )( i (1 − [r i , t i ]))(1 − d) ≡ −(1 − e −1 )(1 − i [r i , t i ])(1 − d) = 0.
A combinatorial proof of a result of Stöhr
As mentioned earlier, the normal subgroup D(F, rfr), which is a special case of Gupta's three subgroup problem, has been identified by Stöhr [21]. The following result on free group rings is implicit in this work based on using homological mehods.
Theorem 5.1. Let F be a free group, and R a normal subgroup of F . If a ∈ D(F, frf ), then a 2 ∈ D(F, rrf + frr).
In this section we give a combinatiorial proof of the above result, and bring out the possibility of higher dimensional variations of its statement. Before proceeding further, let us give a sketch of the main steps from [21] which yield Theorem 5.1.
First identify the tensor square of the relation module R̄ := R/[R, R] as R̄^⊗2 = rr/frr and observe that there is a natural Z/2-action on R̄^⊗2, namely the one which permutes the factors. This action extends to the natural quotient
(R̄^⊗2)_F = rr/(frr + rrf).
One of the main statements in [21] is that the Z/2-action on the subgroup
H₄(G) = (rr ∩ frf)/(rrf + frr) ⊆ rr/(frr + rrf)
is trivial. The proof in [21], which is homological, uses comparison of different projective resolutions. Let a ∈ D(F, frf). Then a ∈ R′, since frf ⊂ fr and D(F, fr) = R′. The Z/2-action which permutes the terms in (R̄^⊗2)_F sends
a − 1 + frr + rrf → a −1 − 1 + frr + rrf.
Since a ∈ D(F, frf ) and a ∈ R ′ , a ∈ D(F, rr ∩ frf). We conclude that a ≡ a −1 mod frr + rrf and therefore a 2 ∈ D(F, frr + rrf).
Since the above conclusion is a result purely in group rings, the following questions arise naturally.
• Does there exist a combinatorial proof of the above fact without the use of homology?
• Is it possible to generalize this result to more complicated ideals and generalized dimension subgroups?
We answer the first question affirmatively, and offer some remarks on the second question.
Let G = F/R, and choose a transversal {w(g)} g∈G for G in F :
w(g) → g, g ∈ G, w(g) ∈ F, with w(1) = 1.
Then we have a function W : G × G → R defined by
w(g)w(h) = w(gh)W (g, h).
This function satisfies a 2-cocycle condition
W (gh, k)W (g, h) w(k) = W (g, hk)W (h, k) for all g, h, k ∈ G. (5.1) Lemma 5.1. For g, h ∈ G, (W (g, h) −1 ) w(gh) −1 W (h −1 , g −1 ) ∈ [R, R].
Proof. We have
w(h −1 )w(g −1 ) = w(h −1 g −1 )W (h −1 , g −1 ), g, h ∈ G, (5.2) w(g −1 ) = w(g) −1 W (g, g −1 ) = W (g −1 , g)w(g) −1 , g ∈ G, (5.3) W (g, g −1 ) w(g) = W (g −1 , g). (5.4)
Substituting in (5.2) we, in turn, have
w(h) −1 W (h, h −1 )w(g) −1 W (g, g −1 ) = w(gh) −1 W (gh, (gh) −1 )W (h −1 , g −1 ) w(h) −1 w(g) −1 W (h, h −1 ) w(g) −1 W (g, g −1 ) = w(gh) −1 W (gh, (gh) −1 )W (h −1 , g −1 ) W (g, h) −1 w(gh) −1 W (h, h −1 ) w(g) −1 W (g, g −1 ) = w(gh) −1 W (gh, (gh) −1 )W (h −1 , g −1 )
Finally, we have the following relation:
W (g, h) −w(gh) −1 W (h, h −1 ) w(g) −1 W (g, g −1 ) = W (gh, (gh) −1 )W (h −1 , g −1 ). (5.5) Equation (5.1) implies that W (gh, h −1 g −1 )W (g, h) w(h −1 g −1 ) = W (g, g −1 )W (h, h −1 g −1 ) = W (g, g −1 )W (h, h −1 ) w(g −1 ) W (h −1 , g −1 ). (5.6)
Substituting the value of W (gh, h −1 g −1 ) from (5.5) to (5.6), we get
[(W (g, h) −1 ) w(gh) −1 W (h −1 , g −1 )] 2 ∈ [R, R].
Since R/[R, R] is free abelian, we conclude that
(W (g, h) −1 ) w(gh) −1 W (h −1 , g −1 ) ∈ [R, R].
Now we are ready to prove Theorem 5.1 using only combinatorial tools, and without homological algebra. In fact, we have the following more general result, from which Theorem 5.1 follows in case S = T = F . Theorem 5.2. Let R, S, T be normal subgroups in F , such that R ⊆ S, T . If a ∈ D(F, srt + trs), then a 2 ∈ D(F, rrs + srr + trr + rrt).
Proof. Let a ∈ D(F, srt + trs) so that we have an expression
1 − a = (1 − f )(1 − r)(1 − g) + (1 − f ′ )(1 − r ′ )(1 − g ′ ),
where the two sums are taken over products with f, g ′ ∈ S, g, f ′ ∈ T, r, r ′ ∈ R. On opening the brackets, we get
1−a = (1−f −r −g +f r +f g +rg −f rg)+ (1−f ′ −r ′ −g ′ +f ′ r ′ +f ′ g ′ +r ′ g ′ −f ′ r ′ g ′ ).
(5.7)
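The multiplication carried out in (5.7) is elementary but easy to get wrong by hand; it can be verified mechanically with noncommuting symbols. A minimal sketch (our own, using SymPy, which is not used in the paper):

```python
from sympy import symbols, expand

# Noncommutative symbols standing for the group elements f, r, g in Z[F].
f, r, g = symbols('f r g', commutative=False)

lhs = expand((1 - f)*(1 - r)*(1 - g))
rhs = 1 - f - r - g + f*r + f*g + r*g - f*r*g
print(expand(lhs - rhs) == 0)   # True: matches the expansion used in (5.7)
```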
Remark 5.1. On invoking Theorem 4, [21], it follows that under the hypothesis of the above theorem, a 4 ∈ [R, R, ST ].
One can prove the following result by proceeding in a way similar to that for the proof of the preceding theorem, and therefore we omit the details. Theorem 5.3. Let R, S, T be normal subgroups of a free group F with R ⊆ S, T . If a ∈ D(F, srs + trt), then a 2 ∈ D(F, rrs + srr + rrt + trr) and a 4 ∈ [R, R, ST ].
The general problem in free group rings, of which the foregoing are special cases, asks for the identification of normal subgroups D(F, a), where a is a sum of ideals of the form r₁ . . . r_n with R₁, . . . , R_n normal subgroups of the given free group F. As a contribution to this general problem, we present the following two results.

Theorem 5.4. Let R ⊆ S, T be normal subgroups of a free group F. If a ∈ D(F, rsst + tssr), then
a² ∈ D(F, rsss + sssr + tsss + ssst + ([S, S] − 1)s) (5.11)
and
a⁶ ∈ [S, S, S, RT]. (5.12)

Proof. While the proof of (5.11) is similar to that of Theorem 5.2, and so we omit it, the assertion (5.12) follows from (5.11) and the following general result:
If R is a normal subgroup of the free group F and a ∈ D(F, rrrf + frrr + ([R, R] − 1)r), then a³ ∈ [R, R, R, F].
L 3 (R ab ) / / SP 2 (R ab ) ⊗ R ab / / / / SP 3 (R ab ),
where L 3 and SP 3 are the third Lie and symmetric power functor respectively, and R ab is the abelianization of R. Applying the homology functor H * (F, −) to this sequence, we get a long exact sequence which connects H 1 and H 0 , which, in turn, implies that D(F, rrrf + frrr + ([R, R] − 1)r) [R, R, R, F ] = Coker{H 1 (F, SP 2 (R ab ) ⊗ R ab ) → H 1 (F, SP 3 (R ab ))}.
The assertion (5.13) follows from the simple fact that, for a free abelian A, the natural composition SP 3 (A) ֒→ SP 2 (A) ⊗ A ։ SP 3 (A) is multiplication by 3.
Our concluding result is a generalization of Kanta Gupta's identification of D(F, rffr) [8].

Theorem 5.5. If R ⊆ S are normal subgroups of a free group F, then D(F, rsfr) = D(F, rfsr) = [R ∩ S′, R ∩ S′, R]γ₄(R).

Proof. Observe that D(F, rsfr) ⊆ D(F, rfr) = γ₃(R). Therefore, by Lemma 2.2, D(F, rsfr) = D(F, ∆⁴(R) + ∆(R)∆(R ∩ S′)∆(R)).
Recall from [15] that, for a free group F and its normal subgroup N, if (F/N) ab is 2-torsion-free, then D(F, fnf + f 4 ) [N, N, F ]γ 4 (F ) ∼ = L 1 SP 3 ((F/N) ab ).
In our situation, the quotient R/R ∩ S ′ ⊆ S/S ′ is torsion-free, therefore, the contribution from the derived functor L 1 SP 3 vanishes and the result follows.
Observe that, the condition R ⊆ S in Theorem 5.5 significantly simplifies the identification.
For arbitrary normal subgroups R, S, we conjecture that
D(F, rssr) = [γ 3 (R ∩ S), R][γ 2 (R ∩ S ′ ), R].
and the fact that the derived functor L 1 SP 2 in (2.12) vanishes since the quotient(R ∩ S)/(R ∩ (S ∩ T ) ′ ⊆ (S ∩ T )/(S ∩ T ) ′ is torsion-free): S∩T ) ′ ⊗ R∩S (R∩S) ′ / / / / SP 2 R∩S R∩(S∩T ) ′ (R∩S) ′ (R∩(S∩T ) ′ ) ′ [(R∩S) ′ ,R∩S] / / / / ∆ 2 (R∩S) ∆ 3 (R∩S)+∆(R∩S)∆(R∩(S∩T ) ′ ) / / / / SP 2 R∩S R∩(S∩T ) ′ .
(
R∩S)(S∩T ) ′ .
)(S∩T ) ′ are trivial Z[R/R ∩ S]-modules and one can use the Universal Coefficient Theorem to decompose the homology groups in the last diagram. By hypothesis, the left hand terms in the last diagram are torsion and therefore the left hand term in (3.8) is a torsion group and so the proof is complete. ✷ 4. The subgroup D(F, rsf )
F
Clearly, B ⊆ ∆(R)∆(S)∆(F ), and we have a natural commutative diagram ∩
Acknowledgement. The research of the first author is supported by the Russian Science Foundation, grant N 16-11-10073.

Proof of Theorem 5.2 (continued). We pick a set of representatives {w(g)}_{g∈F/R} in F for the elements of the quotient group F/R. Then every element w ∈ F can be written uniquely as w = w(w̄)r_w, with w̄ = wR and r_w ∈ R, and so we have
Let π : F → R be the projection given by w → r_w. Since the element a lies in [R, R], π(a) = a. The first sum in (5.7) projects under π to the following sum
In the same way we see that, modulo r²s + sr², the second sum in (5.7) is equivalent to the sum
Hence,
mod r²s + sr² + r²t + tr². (5.9)
On applying the involution f → f⁻¹ on F to the equation (5.7), we have
Repeating the same process as above, we get
mod r²s + sr² + r²t + tr². (5.10)
Subtracting 1 − a⁻¹ from 1 − a, by (5.9) and (5.10) we get
mod r²s + sr² + r²t + tr².
Lemma 5.1 implies that
Hence 1 − a² ∈ r²s + sr² + r²t + tr².
A universal coefficient theorem for quadratic functors. H.-J Baues, T Pirashvili, J. Pure Appl. Alg. 148H.-J. Baues and T. Pirashvili: A universal coefficient theorem for quadratic functors, J. Pure Appl. Alg. 148 (2000), 1-15.
Derived functors of non-additive functors and homotopy theory. L Breen, R Mikhailov, Algebr. Geom. Topol. 11L. Breen and R. Mikhailov: Derived functors of non-additive functors and homotopy theory, Algebr. Geom. Topol., 11 (2011), 327-415.
Van Kampen theorems for diagrams of spaces. R Brown, J.-L Loday, Topology. 26R. Brown, J.-L. Loday: Van Kampen theorems for diagrams of spaces, Topology 26 (1987), 311- 335.
Dennis E Enright, Triangular matrices over group rings. New York UniversityDoctoral ThesisDennis E. Enright: Triangular matrices over group rings, Doctoral Thesis, New York University, 1968.
Karl W Gruenberg, Cohomological Topics in Group Theory. Springer-Verlag143Karl W. Gruenberg: Cohomological Topics in Group Theory, LNM 143, Springer-Verlag, 1970.
Relation modules of finite groups, CBMS No. 25. Karl W Gruenberg, Amer. Math. SocKarl W. Gruenberg: Relation modules of finite groups, CBMS No. 25, Amer. Math. Soc.,1976.
Subgroups of free groups induced by certain products of augmentation ideals. Chander Kanta Gupta, Comm. Algebra. 6Chander Kanta Gupta: Subgroups of free groups induced by certain products of augmentation ideals, Comm. Algebra, 6 (1978), 1231-1238.
Subgroups induced by certain ideals in free group rings. Chander Kanta Gupta, Comm. Algeba. 11Chander Kanta Gupta: Subgroups induced by certain ideals in free group rings, Comm. Algeba, 11 (1983), 2519-2525.
. Narain Gupta, Contemporary Mathematics. 66American Mathematical SocietyFree Group RingsNarain Gupta: Free Group Rings, Contemporary Mathematics, Vol. 66, American Mathematical Society, 1987.
An interpretation of the cohomology groups H n (G, M ). Derek F Holt, J. Algebra. 602Derek F. Holt: An interpretation of the cohomology groups H n (G, M ). J. Algebra 60(2) (1979), 307-320.
Crossed n-fold extensions of groups and cohomology. Johannes Huebschmann, Comment. Math. Helv. 552Johannes Huebschmann: Crossed n-fold extensions of groups and cohomology. Comment. Math. Helv. 55(2) (1980), 302-313.
Some intersection theorems and subgroups determined by certain ideals in integral group rings. Ram Karan, Deepak Kumar, L R Vermani, II. Algebra Colloq. 9Ram Karan, Deepak Kumar and L. R. Vermani: Some intersection theorems and subgroups determined by certain ideals in integral group rings. II. Algebra Colloq. 9, (2002), 135-142.
Computing the homology of Koszul complexes. B Köck, Trans. Amer. Math. Soc. 353B. Köck: Computing the homology of Koszul complexes, Trans. Amer. Math. Soc., 353 (2001), 3115-3147.
Roman Mikhailov, Inder Bir Singh Passi, Lower Central and Dimension Series of Groups, LNM. Springer1952Roman Mikhailov and Inder Bir Singh Passi: Lower Central and Dimension Series of Groups, LNM Vol. 1952, Springer 2009.
Generalized dimension subgroups and derived functors. Roman Mikhailov, Inder Bir, S Passi, J. Pure Appl. Algebra. 220Roman Mikhailov and Inder Bir S. Passi: Generalized dimension subgroups and derived functors, J. Pure Appl. Algebra, 220 (2016), 2143-2163.
The subgroup determined by a certain ideal in a free group ring. Roman Mikhailov, Inder Bir, S Passi, J. Algebra. 449Roman Mikhailov and Inder Bir S. Passi: The subgroup determined by a certain ideal in a free group ring, J. Algebra, 449, (2016), 400-407.
Roman Mikhailov, Inder Bir, S Passi, arxiv: 1605.08196Free group rings and derived functors. Roman Mikhailov and Inder Bir S. Passi: Free group rings and derived functors, arxiv: 1605.08196.
Inder Bir, S Passi, Group Rings and Their Augmentation ideals. Springer-VerlagInder Bir S. Passi: Group Rings and Their Augmentation ideals, Springer-Verlag, 1979.
Ratcliffe: Crossed extensions. G John, Trans. Amer. Math. Soc. 257John G. Ratcliffe: Crossed extensions. Trans. Amer. Math. Soc. 257 (1980), 73-89.
Sergei O Ivanov, Roman Mikhailov, arXiv:1510.09044v1Higher limits, homology theories and fr-codes. math.GRSergei O. Ivanov and Roman Mikhailov: Higher limits, homology theories and fr-codes, arXiv:1510.09044v1 [math.GR].
On Gupta representations of central extensions. R Stöhr, Math. Z. 187R. Stöhr: On Gupta representations of central extensions, Math. Z. 187, (1984), 259-267.
On subgroups determined by ideals of an integral group ring. L R Vermani, Algebra. Some recent advances. Passi, I. B. S.L. R. Vermani: On subgroups determined by ideals of an integral group ring, Passi, I. B. S. (ed.), Algebra. Some recent advances. Basel: Birkhauser. Trends in Mathematics, (1999), 227-242.
Some intersection theorems and subgroups determined by certain ideals in integral group rings. L R Vermani, A Razdan, Algebra Colloq. 2L. R. Vermani and A. Razdan: Some intersection theorems and subgroups determined by certain ideals in integral group rings, Algebra Colloq. 2 (1995), 23-32.
. Inder Bir, S Passi, 14140306Mohali (PunjabCentre for Advanced Study in Mathematics Panjab University ; Chandigarh 160014 India and Indian Institute of Science Education and ResearchIndia Email: [email protected] Bir S. Passi Centre for Advanced Study in Mathematics Panjab University, Sector 14, Chandigarh 160014 India and Indian Institute of Science Education and Research, Mohali (Punjab) 140306 India Email: [email protected]
| []
|
[
"Delayed Rebounds in the Two-Ball Bounce Problem",
"Delayed Rebounds in the Two-Ball Bounce Problem"
]
| [
"Sean P Bartz \nDept. of Chemistry and Physics\nIndiana State University\n47809Terre HauteIN\n"
]
| [
"Dept. of Chemistry and Physics\nIndiana State University\n47809Terre HauteIN"
]
| []
| In the classroom demonstration of a tennis ball dropped on top of a basketball, the surprisingly high bounce of the tennis ball is typically explained using momentum conservation for elastic collisions, with the basketball-floor collision treated as independent from the collision between the two balls. This textbook explanation is extended to inelastic collisions by including a coefficient of restitution. This independent contact model (ICM), as reviewed in this paper, is accurate for a wide variety of cases, even when the collisions are not truly independent. However, it is easy to explore situations that are not explained by the ICM, such as swapping the tennis ball for a pingpong ball. In this paper, we study the conditions that lead to a "delayed rebound effect," a small first bounce followed by a higher second bounce, using techniques accessible to an undergraduate student. The dynamical model is based on the familiar solution of the damped harmonic oscillator.We focus on making the equations of motion dimensionless for numerical simulation, and reducing the number of parameters and initial conditions to emphasize universal behavior. The delayed rebound effect is found for a range of parameters, most commonly in cases where the first bounce is lower than the ICM prediction. | 10.1088/1361-6404/ac5384 | [
"https://arxiv.org/pdf/2007.15005v2.pdf"
]
| 220,871,191 | 2007.15005 | c7df3ec235300b517f1ef7aa0a9e4e5a3fd7282e |
Delayed Rebounds in the Two-Ball Bounce Problem
7 Mar 2022
Sean P Bartz
Dept. of Chemistry and Physics
Indiana State University
47809Terre HauteIN
Delayed Rebounds in the Two-Ball Bounce Problem
7 Mar 2022arXiv:2007.15005v2 [physics.class-ph] (Dated: November 5, 2020)
In the classroom demonstration of a tennis ball dropped on top of a basketball, the surprisingly high bounce of the tennis ball is typically explained using momentum conservation for elastic collisions, with the basketball-floor collision treated as independent from the collision between the two balls. This textbook explanation is extended to inelastic collisions by including a coefficient of restitution. This independent contact model (ICM), as reviewed in this paper, is accurate for a wide variety of cases, even when the collisions are not truly independent. However, it is easy to explore situations that are not explained by the ICM, such as swapping the tennis ball for a pingpong ball. In this paper, we study the conditions that lead to a "delayed rebound effect," a small first bounce followed by a higher second bounce, using techniques accessible to an undergraduate student. The dynamical model is based on the familiar solution of the damped harmonic oscillator.We focus on making the equations of motion dimensionless for numerical simulation, and reducing the number of parameters and initial conditions to emphasize universal behavior. The delayed rebound effect is found for a range of parameters, most commonly in cases where the first bounce is lower than the ICM prediction.
I. INTRODUCTION
In a common classroom demonstration of linear momentum conservation, a tennis ball is held above a basketball, and the two are simultaneously dropped to the floor. The tennis ball rebounds much higher than the drop height, surprising and exciting students. Textbook explanations suggest that when the upper ball is much less massive than the lower ball, it rebounds at three times the impact speed, bouncing to nine times the initial drop height. 1-3
An interested student may wonder about the accuracy of this prediction, as well as its applicability to other combinations of sports balls. For instance, replacing the tennis ball with a ping-pong ball more closely matches the textbook assumptions, suggesting it should bounce higher than the tennis ball. Instead, the ping-pong ball often stays close to the basketball on the first bounce. However, with the balls carefully aligned, the small first bounce is followed by a noticeably higher second bounce. This work shows that this "delayed rebound effect" is robust, and is explained with a simple force model.
The classic justification for the high bounce of the tennis ball assumes that the basketballfloor collision is independent of the basketball-tennis ball collision. The simplifying assumption of this independent contact model (ICM) does not withstand close inspection; the lower ball often remains in contact with the floor when the two balls first make contact. 4 Interestingly, ICM predictions are fairly accurate in many cases when the collisions are not truly independent. 5 However, when the initial separation between the two balls is small, the final velocities differ greatly from the ICM prediction, particularly when the balls are allowed to bounce more than once.
General study of this problem seems to require consideration of an unwieldy number of parameters: the radii and masses of both balls, their elastic and dissipative constants, and the initial drop height and separation between the two balls. In this paper, we show how to reduce this set of eight parameters and two initial conditions to two parameters and a single initial condition. This type of generalization is important for applying computational techniques to understand universal behavior.
FIG. 1. (a) Initial conditions: the lower ball (mass m₁) is dropped from height h = z₁(0), with the upper ball (mass m₂) a distance ∆h = z₂(0) − z₁(0) above it. (b) Coordinate definitions: z₁(t) and z₂(t) are each measured from the corresponding ball's lowest accessible point.
II. INDEPENDENT CONTACT MODEL
The textbook solution to the two-ball drop problem assumes independent, instantaneous collisions between the balls and the floor. We review this independent contact model here as a basis of comparison for the more-realistic dynamic model, and to introduce notation.
We focus on the maximum bounce height of the upper ball, as this is the effect most easily seen in the classroom demonstration.
Let us consider the textbook scenario, with a tennis ball dropped on a basketball. In one case, the lower ball is adult-sized (diameter 24 cm), and in the other case it is child-sized (diameter 20 cm). If the tennis ball achieves the same post-collision velocity in each case, the mass dropped on the basketball will bounce 4 cm higher, simply because the collision point is higher from the floor. To better compare these experiments, we measure the position of each ball from its lowest accessible point, which for the tennis ball is one basketball diameter above the floor. See Fig. 1 for an illustration of this coordinate definition. The positions z 1 = 0 and z 2 = 0 do not refer to the same physical point, removing reference to the radii of the balls. Inelastic collisions are characterized by a coefficient of restitution, defined as the ratio of the relative velocity of the particles post-collision to their pre-collision relative velocity,
ε = (v′₁ − v′₂)/(v₁ − v₂). (1)
Here, primed velocities refer to the velocity just after the collision. Using this relation, along with conservation of momentum, the post-collision velocities 6 are
v′₁ = [v₁ + µ(v₂ + ε(v₂ − v₁))]/(1 + µ), (2)
v′₂ = [µv₂ + v₁ + ε(v₁ − v₂)]/(1 + µ), (3)
where µ = m 2 /m 1 .
To understand the first bounce, we divide the motion into five distinct phases: (i) both balls fall freely until the lower ball reaches the floor with speed
v_i = √(2gh). (4)
The upper ball has position z₂ = ∆h.
(ii) Using (3), and considering the floor to be at rest and infinitely massive, the velocity of the lower ball after colliding with the floor is v 1 = εv i .
(iii) The two balls collide when z 1 = z 2 . Using free-fall kinematics,
z₁(t) = εv_i t − (1/2)gt², (5)
z₂(t) = ∆h − v_i t − (1/2)gt², (6)
the collision time is
t_c = ∆h/(v_i(1 + ε)). (7)
This time is used to calculate the velocities of the balls just before the collision,
v₁(t_c) = εv_i − gt_c = ε√(2gh) − g∆h/(√(2gh)(1 + ε)), (8)
v₂(t_c) = −v_i − gt_c = −v_i − g∆h/(v_i(1 + ε)), (9)
and the position of the collision is
z₁(t_c) = z₂(t_c) = ∆h ε/(1 + ε) − (1/(4h))(∆h/(1 + ε))². (10)
(iv) Applying (3) to the collision between the two balls gives the post-collision velocity of the upper ball,
v′₂ = √(2gh)(ε² + 2ε − µ)/(1 + µ) − g∆h/(√(2gh)(1 + ε)). (11)
(v) We use conservation of energy to determine the maximum height of the upper ball after the first bounce
m₂gh_max = (1/2)m₂v′₂² + m₂gz₂(t_c), (12)
Inserting (10) and (11) yields the expression
h_max = [h(ε² + 2ε − µ)² + ∆h(1 + µ)(µ − ε)] / (1 + µ)². (13)
This result from the ICM is used as a point of comparison for the results of the computational model in Section IV.
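As a quick numerical check of the ICM expressions above, Eq. (13) can be evaluated directly. The following short Python sketch is our own illustration, not code from the paper; the mass and restitution values in the second example are rough, typical numbers chosen only for demonstration. It also confirms the familiar textbook limit of nine times the drop height for ε = 1, µ → 0, ∆h → 0.

```python
def icm_first_bounce_height(h, dh, eps, mu):
    """Maximum height of the upper ball after the first bounce in the
    independent contact model, Eq. (13); heights are measured from the
    upper ball's lowest accessible point (one lower-ball diameter above the floor)."""
    return (h*(eps**2 + 2*eps - mu)**2 + dh*(1 + mu)*(mu - eps)) / (1 + mu)**2

# Classic demonstration limit: elastic collisions, massless upper ball, no gap.
h = 1.0                                              # drop height of the lower ball
print(icm_first_bounce_height(h, 0.0, 1.0, 0.0))     # -> 9.0, i.e. nine times h

# Roughly a tennis ball (~58 g) on a basketball (~600 g), small gap, eps ~ 0.8.
print(icm_first_bounce_height(1.0, 0.05, 0.8, 58/600))
```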
III. LINEAR DASHPOT FORCE
We model the inelastic collisions as a damped spring-mass system. 7 The general form for this force includes a restorative force that is proportional to the compression, and a dissipative term that is proportional to the velocity
F = −kz − γż.(14)
This model, also known as a spring with a linear dashpot, has been successfully implemented as an approximation of the Hertz contact force between viscoelastic spheres. 6,8,9 In an interaction between two such systems, effective elastic and damping constants are calculated according to
k_ij = (1/k_i + 1/k_j)⁻¹, γ_ij = (1/γ_i + 1/γ_j)⁻¹. (15)
Some changes must be made to reflect the differences between bouncing balls and damped springs. First, the ball only experiences a force under compression, unlike a spring which can also be stretched. To capture this, we define the compression
ξ ij = min[0, −z i + z j ],
where z i are the vertical positions of the balls as defined in Fig. 1. The floor is denoted by index 0, while the lower and upper balls are labeled by index 1 and 2, respectively. This quantity is zero unless the balls are in contact.
Additionally, the combination of the force terms must always result in a repulsive force.
To handle cases where the dissipative term overcompensates the elastic term, resulting in an unphysical attractive force, we use the minimum function
F_ij = −min[0, kξ_ij + γ_ij ξ̇_ij]. (16)
To simplify the analysis, the elastic constant k is taken to be the same for both forces, and the desired coefficient of restitution is set by changing the damping constants γ ij .
The equations of motion are
m₁z̈₁ = −m₁g + F₀₁ − F₁₂, (17)
m₂z̈₂ = −m₂g + F₁₂. (18)
To isolate the parameters of physical importance, the equations of motion are written in a dimensionless fashion via the substitutions z_i = X_i m₁g/k and t = τ√(m₁/k). The equations become
X″₁ = −1 + f₀₁ − f₁₂, (19)
X″₂ = −1 + f₁₂/µ, (20)
where (′) indicates a derivative with respect to τ. The dimensionless forces are
f_ij = −min[0, η_ij + 2ζη′_ij], (21)
where ζ = γ_ij/(2√(m₁k)) are the damping ratios and η_ij is the dimensionless compression.
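A compact way to encode Eqs. (19)-(21) for numerical work is sketched below in Python. This is our own illustration: the function and variable names are our choices, and the explicit max-form of the force is algebraically equivalent to the clipped, repulsive-only force defined above.

```python
def derivatives(tau, y, mu, zeta01, zeta12):
    """Right-hand side of the dimensionless equations of motion (19)-(20).
    y = [X1, X2, X1', X2'].  The contact forces follow Eq. (21): they act
    only under compression and are clipped so they can never be attractive."""
    X1, X2, V1, V2 = y

    # Ball 1 / floor contact: compression when X1 < 0.
    f01 = max(0.0, -(X1 + 2*zeta01*V1)) if X1 < 0 else 0.0

    # Ball 1 / ball 2 contact: compression when X2 < X1 (coordinates of Fig. 1).
    eta12 = X2 - X1
    f12 = max(0.0, -(eta12 + 2*zeta12*(V2 - V1))) if eta12 < 0 else 0.0

    return [V1, V2, -1.0 + f01 - f12, -1.0 + f12/mu]
```

Here `zeta01` and `zeta12` are the damping ratios of the floor-ball and ball-ball contacts; a driver that integrates this function appears further below.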
A. Coefficients of restitution
To get a general expression for the coefficient of restitution, we solve (19)-(20) for the duration of contact between the two balls, assuming no contact with the floor. Thus f_{01} = 0 and f_{12} ≠ 0, so the minimum function is unnecessary. The equations become

η_{ij}'' = -\frac{1 + µ}{µ}\left(η_{ij} + 2ζ\,η_{ij}'\right),   (22)

with the initial condition η_{ij}(0) = 0. With the substitutions

β = \frac{1 + µ}{µ}\,ζ, \qquad ω_0^2 = \frac{1 + µ}{µ}, \qquad ω = \sqrt{ω_0^2 - β^2},   (23)
the equation of motion takes on the standard damped harmonic oscillator form, with general
solution

η_{ij} = \frac{η_{ij}'(0)}{2\sqrt{β^2 - ω_0^2}}\, e^{-βτ}\left(e^{τ\sqrt{β^2 - ω_0^2}} - e^{-τ\sqrt{β^2 - ω_0^2}}\right).   (24)

In the case of low damping, ζ^2 < µ/(1 + µ), or β < ω_0, this becomes

η_{ij} = \frac{η_{ij}'(0)}{ω}\, e^{-βτ} \sin ωτ.   (25)
The derivations for low damping follow. The high damping case is similar, using the definition Ω = \sqrt{β^2 - ω_0^2} to avoid imaginary values of ω, and making use of hyperbolic trigonometric functions, beginning with the solution

η_{ij} = \frac{η_{ij}'(0)}{Ω}\, e^{-βτ} \sinh Ωτ.   (26)

The balls cease contact at the time τ_f at which the contact force vanishes, η_{ij}''(τ_f) = 0. With low damping, this becomes

\tan ωτ_f = \frac{-2βω}{ω^2 - β^2}.   (27)
Solving this equation for τ f gives an expression involving the arctangent, which requires care in selecting the correct branch. 10 Using trigonometric definitions, the expression simplifies
to

τ_f = \frac{1}{ω} \arccos\left(\frac{2β^2}{ω_0^2} - 1\right).   (28)
Converting back to physical parameters, the contact time becomes
τ_f = \frac{µ}{(1+µ)\sqrt{µ/(1+µ) - ζ^2}}\, \arccos\left(\frac{2ζ^2 (1+µ)}{µ} - 1\right)   for ζ^2 < µ/(1+µ)

τ_f = \frac{µ}{(1+µ)\sqrt{ζ^2 - µ/(1+µ)}}\, \mathrm{arccosh}\left(\frac{2ζ^2 (1+µ)}{µ} - 1\right)   for ζ^2 > µ/(1+µ)   (29)

To calculate the coefficient of restitution (1), we calculate

ε_{ij} = -\frac{η_{ij}'(τ_f)}{η_{ij}'(0)} = -e^{-βτ_f}\left[\cos(ωτ_f) - \frac{β}{ω} \sin(ωτ_f)\right]   (30)
for low damping. Using (27) and trigonometric definitions, this becomes
ε_{ij} = e^{-βτ_f}\left[\frac{ω^2 - β^2}{ω_0^2} + \frac{β}{ω}\,\frac{2βω}{ω_0^2}\right]   (31)

= e^{-βτ_f},   (32)

which also holds for high damping. Inserting (29),

ε_{ij} = \exp\left(-\frac{ζ}{\sqrt{µ/(1+µ) - ζ^2}} \arccos\left(\frac{2ζ^2 (1+µ)}{µ} - 1\right)\right)   for ζ^2 < µ/(1+µ)

ε_{ij} = \exp\left(-\frac{ζ}{\sqrt{ζ^2 - µ/(1+µ)}}\, \mathrm{arccosh}\left(\frac{2ζ^2 (1+µ)}{µ} - 1\right)\right)   for ζ^2 > µ/(1+µ)   (33)
The high damping case results in ε ij < e −2 in the limit µ → ∞, and even smaller for lower mass ratios. Thus, high damping is not applicable to the examples studied in Section IV. The value of τ f for the lower ball also characterizes the initial conditions of the initial drop height and gap between the balls. In the collision with the infinitely massive floor, we take the limit µ → ∞, and the contact time simplifies to
τ_f(µ → ∞) = \frac{1}{\sqrt{1 - ζ^2}} \arccos(2ζ^2 - 1).   (34)

The interval between the time at which X_1 = 0 and the time at which X_2 = 0 is

τ_d = \sqrt{2 X_2(0)} - \sqrt{2 X_1(0)}.   (35)
This is interpreted as the time at which the balls would collide if they did not compress or bounce. If τ d < τ f , the collisions are simultaneous; the lower ball is still in contact with the floor when the upper ball collides with it.
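In practice, the damping ratio that produces a desired coefficient of restitution is found by inverting (33) numerically. The following sketch (our illustration, using SciPy's scalar root finder; not code from the paper) does this for the low-damping branch, which is the relevant one here since the high-damping branch cannot exceed ε = e^{-2}:

```python
import numpy as np
from scipy.optimize import brentq

def restitution(zeta, mu):
    """Coefficient of restitution of the linear dashpot, Eq. (33), low-damping branch."""
    m = mu / (1.0 + mu)
    return np.exp(-zeta / np.sqrt(m - zeta**2) * np.arccos(2.0 * zeta**2 / m - 1.0))

def damping_ratio_for(eps_target, mu):
    """Damping ratio zeta giving a target restitution (valid for eps_target > e^-2)."""
    m = mu / (1.0 + mu)
    return brentq(lambda z: restitution(z, mu) - eps_target, 1e-9, np.sqrt(m) - 1e-9)

# Example: the representative value eps = 0.816 for a light upper ball (mu = 0.1)
zeta = damping_ratio_for(0.816, mu=0.1)
print(zeta, restitution(zeta, mu=0.1))
```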
IV. NUMERICAL RESULTS FROM THE LINEAR DASHPOT MODEL

Most analysis of the two-ball drop problem focuses on the first bounce, 11 taken here to encompass the time from which the balls are released until the upper ball reaches its next local maximum in height. The collisions of interest for the first bounce include the first collision between the lower ball and the floor, and one or several collisions between the two balls. We extend our analysis to include the second bounce of the upper ball. The lower ball contacts the floor one or more times before the balls collide for the second time, and the second collision may or may not occur with the lower ball in contact with the floor.

We study the maximum height of the upper ball after the first and second bounce. We analyze several different ratios: first bounce height to the ICM prediction (13), first and second bounce heights to the initial drop height (h + ∆h), and second bounce height to first bounce height.

For given values of ε_01, ε_12, and µ, the bounce height ratios are found to be the same for fixed τ_d/τ_f, and the force curves retain the same form as well. Thus, the situation is described entirely by three parameters (ε_01, ε_12, µ) and a single initial condition (τ_d/τ_f). To simplify the analysis, we set ε_01 = ε_12 throughout the rest of this work, as this does not qualitatively affect the results. 6 The ball parameters ε, µ and the initial condition τ_d/τ_f were each divided into 50 increments, covering ranges ε ∈ (0.5, 1), µ ∈ (10^−2, 1/3) and τ_d/τ_f ∈ (10^−2, 1) for analysis of the bounce heights. For the sake of comparison, the plots shown here focus primarily on the representative value of ε = 0.816, but animated visualizations of all data are available. 12 We use a standard numerical ordinary differential equation solver 13,14 to integrate the initial value problem (19)-(20) with the balls released from rest.

A. Details of first bounce collisions

Analysis of the forces (21) from simulations with simultaneous collisions reveals multiple contacts between the two balls under a variety of conditions, consistent with previous analysis. 6 Figure 2 shows a case with four contacts between the balls, occurring entirely while the lower ball is in contact with the floor. The compression of the balls is illustrated by X_1 and X_2 becoming negative in Fig. 2. The number of contacts between the two balls depends sensitively upon the ball parameters and initial condition. The count varies between one and twelve for the representative data shown in Fig. 3, with ε = 0.816. Multiple contacts are mostly found when µ and τ_d/τ_f are small, so the plot axes are scaled logarithmically to show this detail, and the range of τ_d/τ_f is expanded to (10^−4, 1). Comparison to Fig. 4 shows that the number of contacts between the balls does not correlate with the bounce height of the upper ball. Because contact between the two balls is defined by the force f_12 rather than by the balls' positions, experimental setups that measure position only cannot confirm or refute the details of the collisions presented here. 5 Piezoelectric sensors placed between the balls and on the floor present a more promising experimental setup, but the situations studied do not show clear evidence for multiple contacts. 4 However, these particular measurements do not conflict with our simulations, as they do not match conditions predicted to exhibit multiple collisions. Future experiments targeting the parameters that predict multiple contacts would help validate the use of this model.

B. Comparison of first bounce heights to ICM

The post-collision motion of the balls is more readily observable in an experimental setting than the forces between the balls. We focus on the maximum height of the upper ball after each bounce rather than the relative velocity of the balls because the height of the second bounce is influenced by both the post-collision velocity and the height of the second collision.

When the initial gap between the balls is small (τ_d/τ_f ≪ 1), the height of the first bounce is less than the ICM prediction (13). The ICM limit is recovered in the case τ_d/τ_f > 1, as expected when collisions are independent. Intermediate gaps show more detailed structure.

For some combinations of parameters, the bounce height remains below the ICM prediction, while for others the simulated bounce height is greater than that of the ICM. The ICM is overperformed when ε is small and µ is on the large end of the range studied, as shown in Fig. 5. These conditions lead to small bounce heights, below the initial drop height, in any case. The ICM closely approximates post-collision behavior for a variety of cases where the collisions are not truly independent, as τ_d/τ_f < 1. For τ_d/τ_f ≈ 0.7, there is deviation from the ICM for some regions of the parameter space, while other regions are well-approximated by the simple model. Plots of larger values of τ_d/τ_f are not shown because they do not exhibit noticeable contrast, as all results are quite close to the ICM value. For example, the bounce heights are all within 5% of the ICM limit for τ_d/τ_f > 0.85 for the range of ball parameters studied. This unexpected success of the ICM is confirmed by experimental measurements. 4,5
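The comparisons above rest on integrating the dimensionless initial value problem (19)-(20). A minimal sketch of such an integration (our illustration, using SciPy's solve_ivp; the initial conditions and damping ratio are placeholders, and the same ζ is used for both contacts):

```python
import numpy as np
from scipy.integrate import solve_ivp

def contact_force(gap, rel_vel, zeta):
    """Dimensionless clipped force (21): f = -min[0, eta + 2*zeta*eta'], eta = min[0, gap]."""
    if gap >= 0.0:  # bodies not in contact
        return 0.0
    return -min(0.0, gap + 2.0 * zeta * rel_vel)

def rhs(tau, y, mu, zeta):
    """Right-hand side of Eqs. (19)-(20); y = [X1, X1', X2, X2']."""
    X1, V1, X2, V2 = y
    f01 = contact_force(X1, V1, zeta)            # lower ball against the floor
    f12 = contact_force(X2 - X1, V2 - V1, zeta)  # upper ball against the lower ball
    return [V1, -1.0 + f01 - f12, V2, -1.0 + f12 / mu]

# Placeholder release heights (in units of m1*g/k) and parameters
y0 = [200.0, 0.0, 201.0, 0.0]        # X1(0), X1'(0), X2(0), X2'(0): small initial gap
sol = solve_ivp(rhs, (0.0, 100.0), y0, args=(0.1, 0.05), max_step=1e-2)
print(sol.y[2].max())                # peak of X2 over the simulated window
```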
C. Second bounce
In this section, we analyze the height of the upper ball on the second bounce, specifically studying the delayed rebound effect with the second bounce higher than the first. Figure 7 shows a comparison of the second bounce height to the height of the first bounce, using ε = 0.816 as an illustrative example. This plot shows that most cases of the delayed rebound effect occur for small initial drop gaps and small mass ratios, and that the effect becomes less common as ε increases. As expected, the small mass-ratio limits are approximately achieved in the elastic case (ε = 1), when the initial collisions between the floor and the two balls are independent (τ_d/τ_f = 1).

In Fig. 8, the first and second bounce heights are compared. It is evident that the delayed rebound effect is most prominent in cases where the first bounce is low, often lower than the initial drop height. The highest second bounces on an absolute scale occur when the first bounce is also high. Some of these do slightly exceed the first bounce, but these cases are relatively few. The general sense of the delayed rebound effect obtained by visual inspection is confirmed by systematic comparison. For the parameter ranges studied, 12.3% of combinations resulted in the delayed rebound effect. Of these, 93% occurred in cases where the first bounce was lower than the ICM prediction. However, having a first bounce lower than the ICM prediction was not highly predictive; only 22% of these cases have a higher second bounce.
Typically, the lower ball is not in contact with the floor when the balls collide for the second bounce, and the balls make a single contact with each other. When the lower ball is in contact with the floor, multiple contacts between the balls are possible, in a manner that is qualitatively similar to the first bounce. These cases lead to a noticeably lower second bounce than others in nearby parameter space.
V. CONCLUSIONS
We show that the "delayed rebound effect," where the second bounce of two aligned balls is higher than the first, is present in the numerical solutions to a linear dashpot force between the balls. The effect is most prominent when the upper ball has a much smaller mass than the lower ball, and the distance gap between the balls is small when they are released.
Typically, the finite duration of the collisions leads the first bounce to be smaller than predicted by the ICM, so the expected high bounce does not come until the second bounce.
This is consistent with the informal observation that inspired this study: namely, that the delayed rebound effect is more readily produced in ping-pong ball-basketball collisions than in tennis ball-basketball collisions.
Rather than focus on the parameters of a few particular sports balls, we look for universal behavior. The relevant ball parameters are reduced to the mass ratio µ = m_2/m_1 and the coefficient of restitution ε, assumed to be the same for both balls. The initial conditions of drop height and gap between the balls are reduced to a single parameter τ_d/τ_f, which characterizes the time between the lower ball reaching the floor and the two balls making first contact. This approach represents a simplification over previous studies, and can be generalized to more than two balls.
We examined the details of the first bounce collisions, finding multiple contacts between the balls in some cases. However, the multiple impacts did not correlate with the postcollision dynamics of the balls. For the first bounce, we found that small initial gaps resulted in lower bounces than the ICM prediction, while the ICM is approximately correct for larger initial gaps, despite the overlapping collisions.
There are several extensions of this work that can serve as student research projects. This universal study can be applied to realistic ball parameters for both first and second bounces.
Detailed study of two-particle interactions can also be extended to chain collisions of multiple particles in a line 8,15-19 or studied in the chaotic régime. 20 While the linear dashpot force gives qualitatively similar results to the Hertz force on the first bounce, 6 simulating these forces in chain collisions yields qualitatively different results, 21 so an interested student may wish to extend this study to such a force.
This toy model can serve as a student's introduction to impact mechanics, which is relevant in a variety of fields including engineering, granular materials, and molecular dynamics.
The results of this model may be relevant to study of inelastic collapse and clumping in granular flows, 22-26 particularly in the presence of gravity or another driving force. 27 Inelastic collapse involves an infinite number of collisions in finite time, which presents challenges to event-driven modeling, 28 so understanding when the ICM is accurate can help improve simulation efficiency. A thorough experimental study of the delayed rebound effect requires a mechanism to constrain the interacting particles to a single dimension. Precisely aligned spheres, as used in experimental studies of the first bounce, are unlikely to remain aligned for a second bounce. Low friction carts on an inclined track, with springs for repulsion, may be a useful model, although it might be difficult to achieve the range of mass ratios seen in ball drop experiments.

FIG. 1: The coordinates and initial conditions are defined to remove reference to the radii of the balls. (a) The drop height h is the initial distance between the bottom of the lower ball and the floor. The drop gap ∆h is the initial distance between the top of the lower ball and the bottom of the upper ball. (b) Coordinate z_1 is measured from the floor to the bottom of the lower ball, and z_2 is measured from the top of the lower ball when it touches the floor.

FIG. 2: The trajectories of the balls and the force curves during the first collision are shown for a representative case where τ_d/τ_f = 0.010. In this case, µ = 0.010, and ε_01 = ε_12 = 0.9.

FIG. 3: The number of contacts between the two balls during the first bounce with a typical coefficient of restitution ε = 0.816, chosen to be the same for both balls. The axes are logarithmically scaled to illustrate the effects found with small initial gaps and mass ratios.

FIG. 4: Comparison to Fig. 3 shows no clear relationship between the number of contacts and the height the upper ball reaches on its first bounce.

FIG. 5: For moderate initial gaps, as shown here, there are some combinations of ε, µ that result in bounces lower than the ICM prediction, while others result in bounces higher than the ICM. The ratios plotted range from 0.49 to 1.48.

FIG. 6: The results of the simulation begin to converge to the ICM result for a wide range of ε, µ, despite the collisions not being independent, with τ_d/τ_f < 1. The ratios plotted range from 0.87 to 1.27.

FIG. 7: Comparison of the second bounce height to the first bounce height. Red points indicate situations in which the second bounce is higher than the first.

FIG. 8: Comparison of bounce heights to the initial height for a representative value of the coefficient of restitution. The plot in (a) shows that the first bounce exceeds the drop height in most regions of parameter space. In (b), we see more detailed structure for the second bounce.

* [email protected]

1 Walter Roy Mellen. Superball Rebound Projectiles. American Journal of Physics, 36(9):845-
Velocity Amplification in Collision Experiments Involving Superballs. William G Harter, American Journal of Physics. 396William G. Harter. Velocity Amplification in Collision Experiments Involving Superballs. Amer- ican Journal of Physics, 39(6):656-663, June 1971.
Simple explanation of a well-known collision experiment. F Herrmann, P Schmälzle, American Journal of Physics. 498F. Herrmann and P. Schmälzle. Simple explanation of a well-known collision experiment. Amer- ican Journal of Physics, 49(8):761-764, August 1981.
Vertical bounce of two vertically aligned balls. Rod Cross, American Journal of Physics. 7511Rod Cross. Vertical bounce of two vertically aligned balls. American Journal of Physics, 75(11):1009-1016, October 2007.
The two-ball bounce problem. Y Berdeni, A Champneys, R Szalai, Proc. R. Soc. Lond. A. R. Soc. Lond. A47120150286Y. Berdeni, A. Champneys, and R. Szalai. The two-ball bounce problem. Proc. R. Soc. Lond. A, 471(2179):20150286, July 2015.
Two-ball problem revisited: Limitations of event-driven modeling. P Muller, T Poschel, Physical Review E. 83441304Muller, P. and T. Poschel. Two-ball problem revisited: Limitations of event-driven modeling. Physical Review E, 83(4):041304, April 2011.
A mass-spring-damper model of a bouncing ball. M Nagurka, Shuguang Huang, Proceedings of the 2004 American Control Conference. the 2004 American Control Conference1M. Nagurka and Shuguang Huang. A mass-spring-damper model of a bouncing ball. In Pro- ceedings of the 2004 American Control Conference, volume 1, pages 499-504 vol.1, 2004.
The Hertz contact in chain elastic collisions. P Patrício, American Journal of Physics. 7212P. Patrício. The Hertz contact in chain elastic collisions. American Journal of Physics, 72(12):1488-1491, November 2004.
Inelastic collision and the Hertz theory of impact. D Gugan, American Journal of Physics. 6810D. Gugan. Inelastic collision and the Hertz theory of impact. American Journal of Physics, 68(10):920-924, September 2000.
Coefficient of restitution and linear-dashpot model revisited. Thomas Schwager, Thorsten Pöschel, Granular Matter. 96Thomas Schwager and Thorsten Pöschel. Coefficient of restitution and linear-dashpot model revisited. Granular Matter, 9(6):465-469, November 2007.
Magic mass ratios of complete energy-momentum transfer in onedimensional elastic three-body collisions. June-Haak Ee, Jungil Lee, American Journal of Physics. 832June-Haak Ee and Jungil Lee. Magic mass ratios of complete energy-momentum transfer in one- dimensional elastic three-body collisions. American Journal of Physics, 83(2):110-120, January 2015.
Data and visualizations. Sean Bartz, Sean Bartz. Data and visualizations.
A Systematized Collection of ODE Solvers. A C Hindmarsh, Odepack, IMACS Transactions on Scientific Computation. 1A. C. Hindmarsh. ODEPACK, A Systematized Collection of ODE Solvers. IMACS Transactions on Scientific Computation, 1:55-64, 1983.
Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations. L R Petzold, SIAM Journal on Scientific and Statistical Computing. 41L. R. Petzold. Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations. SIAM Journal on Scientific and Statistical Computing, 4(1):136-148, 1983.
Velocity, Momentum, and Energy Transmissions in Chain Collisions. James D Kerwin, American Journal of Physics. 408James D. Kerwin. Velocity, Momentum, and Energy Transmissions in Chain Collisions. Amer- ican Journal of Physics, 40(8):1152-1158, August 1972.
A billiard-theoretic approach to elementary one-dimensional elastic collisions. S Redner, American Journal of Physics. 7212S. Redner. A billiard-theoretic approach to elementary one-dimensional elastic collisions. Amer- ican Journal of Physics, 72(12):1492-1498, 2004.
Astroblaster-a fascinating game of multi-ball collisions. Marián Kireš, Physics Education. 442Marián Kireš. Astroblaster-a fascinating game of multi-ball collisions. Physics Education, 44(2):159-164, February 2009.
Shock Absorption Using Linear Particle Chains With Multiple Impacts. Mohamed Gharib, Ahmet Celik, Yildirim Hurmuzlu, Journal of Applied Mechanics. 78331005Mohamed Gharib, Ahmet Celik, and Yildirim Hurmuzlu. Shock Absorption Using Linear Par- ticle Chains With Multiple Impacts. Journal of Applied Mechanics, 78(3):031005, May 2011.
Maximizing kinetic energy transfer in one-dimensional manybody collisions. Bernard Ricardo, Paul Lee, European Journal of Physics. 36225013Bernard Ricardo and Paul Lee. Maximizing kinetic energy transfer in one-dimensional many- body collisions. European Journal of Physics, 36(2):025013, February 2015.
Two balls in one dimension with gravity. N D Whelan, D A Goodings, J K Cannizzo, Phys. Rev. A. 42N. D. Whelan, D. A. Goodings, and J. K. Cannizzo. Two balls in one dimension with gravity. Phys. Rev. A, 42:742-754, Jul 1990.
The fragmentation of a line of balls by an impact. E J Hinch, S Saint-Jean, Proc. R. Soc. Lond. A. R. Soc. Lond. A455E. J. Hinch and S. Saint-Jean. The fragmentation of a line of balls by an impact. Proc. R. Soc. Lond. A, 455(1989):3201-3220, September 1999.
Inelastic collapse and clumping in a one-dimensional granular medium. Sean Mcnamara, W R Young, Physics of Fluids A: Fluid Dynamics. 43Sean McNamara and W. R. Young. Inelastic collapse and clumping in a one-dimensional gran- ular medium. Physics of Fluids A: Fluid Dynamics, 4(3):496-504, 1992.
Inelastic collisions of three particles on a line as a two-dimensional billiard. Peter Constantin, Elizabeth Grossman, Muhittin Mungan, Physica D: Nonlinear Phenomena. 834Peter Constantin, Elizabeth Grossman, and Muhittin Mungan. Inelastic collisions of three particles on a line as a two-dimensional billiard. Physica D: Nonlinear Phenomena, 83(4):409 - 420, 1995.
Inelastic collapse of three particles. Tong Zhou, Leo P Kadanoff, Phys. Rev. E. 54Tong Zhou and Leo P. Kadanoff. Inelastic collapse of three particles. Phys. Rev. E, 54:623-628, Jul 1996.
Cluster-growth in freely cooling granular media. S Luding, H J Herrmann, Chaos: An Interdisciplinary Journal of Nonlinear Science. 93S. Luding and H. J. Herrmann. Cluster-growth in freely cooling granular media. Chaos: An Interdisciplinary Journal of Nonlinear Science, 9(3):673-681, 1999.
Inelastic collapse of perfectly inelastic particles. Nikola Topic, Thorsten Pöschel, Communications Physics. 285Nikola Topic and Thorsten Pöschel. Inelastic collapse of perfectly inelastic particles. Commu- nications Physics, 2(1):85, July 2019.
Inelastic collapse in one-dimensional driven systems under gravity. Hiroyuki Jun'ichi Wakou, Takahiro Kitagishi, Hiizu Sakaue, Nakanishi, Phys. Rev. E. 8742201Jun'ichi Wakou, Hiroyuki Kitagishi, Takahiro Sakaue, and Hiizu Nakanishi. Inelastic collapse in one-dimensional driven systems under gravity. Phys. Rev. E, 87:042201, Apr 2013.
Event driven algorithms applied to a high energy ball mill simulation. Roland Reichardt, Wolfgang Wiechert, Granular Matter. 93Roland Reichardt and Wolfgang Wiechert. Event driven algorithms applied to a high energy ball mill simulation. Granular Matter, 9(3):251-266, June 2007.
| []
|
[
"EFFICIENT REPRESENTATION LEARNING OF SUBGRAPHS BY SUBGRAPH-TO-NODE TRANSLATION",
"EFFICIENT REPRESENTATION LEARNING OF SUBGRAPHS BY SUBGRAPH-TO-NODE TRANSLATION"
]
| [
"Dongkwan Kim [email protected] \nSchool of Computing\nKAIST\n\n",
"Alice Oh [email protected] \nSchool of Computing\nKAIST\n\n"
]
| [
"School of Computing\nKAIST\n",
"School of Computing\nKAIST\n"
]
| []
| A subgraph is a data structure that can represent various real-world problems. We propose Subgraph-To-Node (S2N) translation, which is a novel formulation to efficiently learn representations of subgraphs. Specifically, given a set of subgraphs in the global graph, we construct a new graph by coarsely transforming subgraphs into nodes. We perform subgraph-level tasks as node-level tasks through this translation. By doing so, we can significantly reduce the memory and computational costs in both training and inference. We conduct experiments on four real-world datasets to evaluate performance and efficiency. Our experiments demonstrate that models with S2N translation are more efficient than state-of-the-art models without substantial performance decrease. | 10.48550/arxiv.2204.04510 | [
"https://arxiv.org/pdf/2204.04510v1.pdf"
]
| 248,085,310 | 2204.04510 | 02e73719b1fdc63c31dd1a3f7eff0052a13227e8 |
EFFICIENT REPRESENTATION LEARNING OF SUBGRAPHS BY SUBGRAPH-TO-NODE TRANSLATION
Dongkwan Kim [email protected]
School of Computing
KAIST
Alice Oh [email protected]
School of Computing
KAIST
EFFICIENT REPRESENTATION LEARNING OF SUBGRAPHS BY SUBGRAPH-TO-NODE TRANSLATION
Accepted at the ICLR 2022 Workshop on Geometrical and Topological Representation Learning
A subgraph is a data structure that can represent various real-world problems. We propose Subgraph-To-Node (S2N) translation, which is a novel formulation to efficiently learn representations of subgraphs. Specifically, given a set of subgraphs in the global graph, we construct a new graph by coarsely transforming subgraphs into nodes. We perform subgraph-level tasks as node-level tasks through this translation. By doing so, we can significantly reduce the memory and computational costs in both training and inference. We conduct experiments on four real-world datasets to evaluate performance and efficiency. Our experiments demonstrate that models with S2N translation are more efficient than state-of-the-art models without substantial performance decrease.
INTRODUCTION
Graph neural networks (GNNs) have been developed to learn representations of nodes, edges, and graphs (Bronstein et al., 2017; Battaglia et al., 2018). Recently, Alsentzer et al. (2020) have proposed SubGNN, a specialized architecture for learning representations of subgraphs. This architecture outperforms prior models; however, it requires a lot of memory and computation to learn the non-trivial structure and various attributes in subgraphs.
In this paper, we propose 'Subgraph-To-Node (S2N)' translation, a novel method to create data structures to solve subgraph-level prediction tasks efficiently. The S2N translation constructs a new graph where its nodes are original subgraphs, and its edges are relations between subgraphs. The GNN models can encode the node representations in the translated graph. Then, we can get the results of the subgraph-level tasks by performing node-level tasks from these node representations.
For example, in a knowledge graph where subgraphs are diseases, nodes are symptoms, and edges are relations between symptoms based on knowledge in the medical domain, the goal of the diagnosis task is to predict the type of a disease (i.e., the class of a subgraph). Using S2N translation, we can make a new graph of diseases, nodes of which are diseases and edges of which are relations between them (e.g., whether two diseases share any symptoms).
The S2N translation enables efficient subgraph representation learning for the following two reasons. First, it provides a small and coarse graph in which the number of nodes is reduced to the number of original subgraphs. We can load large batches of subgraphs on the GPU and parallelize the training and inference. Second, there is a wider range of models to choose from in encoding translated graphs. We confirm that even a simple pipeline of DeepSets (Zaheer et al., 2017) and GCN (Kipf & Welling, 2017) can outperform state-of-the-art models.
We conduct experiments with four real-world datasets to evaluate the performance and efficiency of S2N translation. We measure the number of parameters, throughput (samples per second), and latency (seconds per forward pass) for efficiency (Dehghani et al., 2021). We demonstrate that models with S2N translation are more efficient than the existing approach without a significant performance drop. Even some models perform better than baselines in three of the four datasets. S2N translation. Subgraphs Si and Sj are transformed into nodesvi andvj by Tv, and an edgeêij between them is formed by Te.
iv i = T v ( i ) v j jê ij = T e ( i , j ) (a) The
Graph Encoder
Set Encoder
Prediction
Set Encoder
Set Encoder Set Encoder Shared (b) Models for graphs translated by S2N. We treat the nodê v in the translated graph as a set of nodes in the original subgraph S. Thus, we apply a set encoder first, then a graph encoder (GNN) to their outputs for the prediction. Figure 1: Overview of the Subgraph-To-Node translation and the models for translated graphs.
SUBGRAPH-TO-NODE TRANSLATION
We introduce the Subgraph-To-Node (S2N) translation and our specific design choices. We also suggest model families for the subgraph prediction task using S2N translated graphs.
Notations We first summarize the notations in the subgraph representation learning, particularly in the subgraph classification task. Let G = (V, A, X) be a global graph where V is a set of nodes (|V| = N ), A ∈ {0, 1} N ×N is an adjacency matrix, and X ∈ R N ×F0 is a node feature matrix. A subgraph S = (V sub , A sub ) is a graph formed by subsets of nodes and edges in the global graph G. For the subgraph classification task, there is a set of M subgraphs S = {S 1 , S 2 , ..., S M }, and for
S i = (V sub i , A sub i )
, the goal is to learn its representation and the logit vector y i ∈ R C where C is the number of classes.
Overview of S2N Translation  The S2N translation reduces the memory and computational costs in model training and inference by constructing a new coarse graph that summarizes each original subgraph into a node. As illustrated in Figure 1a, for each subgraph S_i ∈ S in the global graph G, we create a node v̂_i = T_v(S_i) in the translated graph Ĝ; for all pairs (S_i, S_j) of two close subgraphs in G, we make an edge ê_ij = T_e(S_i, S_j) between the corresponding nodes in Ĝ. Here, T_v and T_e are translation functions for nodes and edges in Ĝ, respectively. Formally, the S2N translated graph Ĝ = (V̂, Â), where |V̂| = M and Â ∈ {0, 1}^{M×M}, is defined by

V̂ = {v̂_i | v̂_i = T_v(S_i), S_i ∈ S},   Â[i, j] = ê_ij = T_e(S_i, S_j).   (1)
We can choose any function for T v and T e . They can be simple heuristics or modeled with neural networks to learn the graph structure (Franceschi et al., 2019;Kim & Oh, 2021;Fatemi et al., 2021).
Detailed Design of S2N Translation  In this paper, we choose straightforward designs of T_v and T_e with negligible translation costs. For T_v, we use a function that ignores the internal structure A_i^sub of the subgraph S_i = (V_i^sub, A_i^sub) and treats the node as a set (i.e., V_i^sub). For T_e, we make an edge if there is at least one common node between two subgraphs S_i and S_j. They are defined as follows:

v̂_i = T_v(S_i) = V_i^sub,   ê_ij = T_e(S_i, S_j) = 1 if |V_i^sub ∩ V_j^sub| ≠ 0, and 0 otherwise.   (2)
In some cases, this particular translation provides a more intuitive description for real-world problems than a form of subgraphs. For a fitness social network (EM-User) from Alsentzer et al. (2020) (subgraphs: users, nodes: workouts, edges: whether multiple users complete workouts), it will be translated into a network of users connected if they complete the same workouts. This graph directly expresses the relation between users and follows the conventional approach to express social networks where nodes are users.
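As a minimal sketch (not the authors' implementation), the translation in (2) amounts to a pairwise intersection test over the subgraphs' node sets; the quadratic loop below is purely illustrative:

```python
import itertools
import numpy as np

def s2n_translate(subgraph_node_sets):
    """Build the S2N adjacency matrix of Eqs. (1)-(2): one node per subgraph,
    an edge whenever two subgraphs share at least one node of the global graph.

    subgraph_node_sets : list of sets of global node indices (the V_i^sub).
    """
    M = len(subgraph_node_sets)
    A_hat = np.zeros((M, M), dtype=np.int8)
    for i, j in itertools.combinations(range(M), 2):
        if subgraph_node_sets[i] & subgraph_node_sets[j]:  # non-empty intersection
            A_hat[i, j] = A_hat[j, i] = 1
    return A_hat

# Toy example: three subgraphs over a six-node global graph
print(s2n_translate([{0, 1, 2}, {2, 3}, {4, 5}]))
```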
Models for S2N Translated Graphs
We propose simple but strong model pipelines for S2N translated graphs. Since the node v̂_i is a set of original nodes in S_i, we first use a set encoder E_set: V̂ → R^F (Wagstaff et al., 2021), where F is the dimension of the representation. It takes a set of node features in v̂_i as an input and generates the representation ĥ_i ∈ R^F of v̂_i, that is,

ĥ_i = E_set(v̂_i) = E_set(V_i^sub) = E_set({x_u | x_u = X[u, :], u ∈ V_i^sub}).   (3)

Then, given the node representation ĥ_i, we apply a graph encoder E_graph: R^{M×F} × {0, 1}^{M×M} → R^{M×C} to get the logit vector ŷ_i ∈ R^C. For the input and output of E_graph, we use matrices Ĥ ∈ R^{M×F} and Ŷ ∈ R^{M×C} where the i-th rows are ĥ_i and ŷ_i, respectively.

Ŷ = E_graph(Ĥ, Â).   (4)

For E_graph, we can take any GNNs that perform message-passing between nodes. This node-level message-passing on translated graphs is analogous to message-passing at the subgraph level in SubGNN (Alsentzer et al., 2020).
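A minimal sketch of such a pipeline (our illustration with PyTorch Geometric; the layer sizes, sum pooling, and single GCN layer are placeholders rather than the tuned configuration in Appendix C):

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_add_pool

class S2NModel(nn.Module):
    """DeepSets-style set encoder (Eq. (3)) followed by a GCN over the S2N graph (Eq. (4))."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
        self.rho = nn.Sequential(nn.ReLU(), nn.Linear(hid_dim, hid_dim))
        self.gnn = GCNConv(hid_dim, num_classes)

    def forward(self, x, subgraph_index, edge_index):
        # x: [num_member_nodes, in_dim] features of original nodes, grouped by subgraph
        # subgraph_index: [num_member_nodes] S2N-node id of each row of x
        # edge_index: [2, num_s2n_edges] edges of the translated graph
        h = global_add_pool(self.phi(x), subgraph_index)  # E_set: rho(sum(phi(x_u)))
        h = self.rho(h)
        return self.gnn(h, edge_index)                     # Y_hat = E_graph(H_hat, A_hat)
```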
EXPERIMENTS
This section describes the experimental setup, including datasets, training, evaluation, and models.
Datasets  We use four real-world datasets, PPI-BP, HPO-Neuro, HPO-Metab, and EM-User, introduced in Alsentzer et al. (2020). The task is subgraph classification, where nodes V, edges A, and subgraphs S ∈ S are given in the datasets. There are two sets of input node features X, pretrained with GIN or GraphSAINT, from the same paper. Detailed descriptions and statistics are in Appendix B.
Training and Evaluation In the original setting from the SubGNN paper, evaluation (i.e., validation and test) samples cannot be seen during the training stage. Following this protocol, we create different S2N graphs for each stage using train and evaluation sets of subgraphs (S train and S eval ). For the S2N translation, we use S train only in the training stage, and use both S train ∪ S eval in the evaluation stage. That is, we predict unseen nodes based on structures translated from S train ∪ S eval in the evaluation stage. In this respect, node classification on S2N translated graphs is inductive.
Models for S2N Translated Graphs We use two-or four-layer DeepSets (Zaheer et al., 2017) with sum or max operations as E set for all S2N models. For E graph , we use well-known graph neural networks: GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018). In addition, LINKX (Lim et al., 2021) and FAGCN (Bo et al., 2021), models that perform well in non-homophilous graphs are employed. All GNNs are one-or two-layer models. See Appendix C.1 for their hyperparameters.
Since LINKX is designed for the transductive setting, we make a small change in LINKX to work in the inductive setting. We call this variant LINKX-I. See Appendix C.2 for this modification.
Baselines We use current state-of-the-art models for subgraph classification as baselines: Sub2Vec (Adhikari et al., 2018), Graph-level GIN (Xu et al., 2019), and SubGNN (Alsentzer et al., 2020). We report the best performance among three variants for Sub2Vec (N, S, and NS) and two results by different pretrained embeddings for SubGNN. All baselines results are reprinted from Alsentzer et al. (2020).
RESULTS
In this section, we analyze the characteristics of S2N translated graphs and compare our models and baselines on classification performance and efficiency. Analysis of S2N Translated Graphs Table 1 summarizes dataset statistics before and after S2N translation, including node (Pei et al., 2020) and edge homophily (Zhu et al., 2020). Except for HPO-Neuro, translated graphs have a smaller number of nodes (×0.006 -×0.03) and edges (×0.17 -×0.78) than original graphs. For HPO-Neuro, it has twice as many edges as the original graph, but has ×0.27 fewer nodes. Since the number of edges decreased less than nodes, translated graphs are denser than originals (×9.7 -×297). We also find that they are non-homophilous (low homophily), which means there are many connected nodes of different classes.
Note that we propose multi-label node and edge homophily for multi-label datasets (HPO-Neuro):
h_{node, ml} = \frac{1}{|V|} \sum_{v∈V} \frac{1}{|N(v)|} \sum_{u∈N(v)} \frac{|L_u ∩ L_v|}{|L_u ∪ L_v|}, \qquad h_{edge, ml} = \frac{1}{|A|} \sum_{(u,v)∈A} \frac{|L_u ∩ L_v|}{|L_u ∪ L_v|},   (5)
where L v is a set of labels of v, N (v) is a set of neighbors of v, and A = {(u, v)|A[u, v] = 1}. They generalize the existing multi-class homophily and we discuss more in Appendix D.
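A minimal sketch of (5) (our illustration; it assumes every node has at least one label and that each undirected edge appears in both directions):

```python
from collections import defaultdict

def multilabel_homophily(edges, labels):
    """Multi-label node and edge homophily of Eq. (5).

    edges  : iterable of directed (u, v) pairs, both directions per undirected edge
    labels : dict mapping node id -> non-empty set of labels L_v
    """
    def jaccard(u, v):
        return len(labels[u] & labels[v]) / len(labels[u] | labels[v])

    edge_h = sum(jaccard(u, v) for u, v in edges) / len(edges)

    per_node = defaultdict(list)
    for u, v in edges:
        per_node[v].append(jaccard(u, v))
    node_h = sum(sum(js) / len(js) for js in per_node.values()) / len(labels)
    return node_h, edge_h
```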
Performance  In Table 2, we report the mean and standard deviation of the micro-F1 score over ten runs of our models and baselines. LINKX-I and FAGCN, which are known to work well in non-homophilous graphs, perform on par with or better than the best baseline in 12 of 16 cases. Here, 'performance on par with the baseline' implies no significant difference from the t-test at a level of 0.05 (∼: p-value > .05), which does not mean that our model is superior. For PPI-BP and HPO-Metab, some models even outperform SubGNN with statistical significance (p-value < .05).
Notably, all S2N models outperform SubGNN in the PPI-BP, which has relatively high homophily. GCN and GAT underperform LINKX-I and FAGCN for most experiments.
Efficiency In Figure 2, we show the number of parameters, throughput (subgraphs per second), and latency (seconds per forward pass) of S2N models and SubGNN on HPO-Neuro, HPO-Metab, and EM-User. We cannot experiment with PPI-BP since it takes more than 48 hours in pre-computation. We make three observations in this figure. First, S2N models use fewer parameters and process many samples faster (i.e., higher throughput and lower latency) than SubGNN. In particular, for throughput, S2N models can process 8 to 300 times more samples than SubGNN for the same amount of time. Second, the training throughput is higher than the inference throughput in S2N models. Generally, as in SubGNN, throughput increases in the inference step, which does not require gradient calculation. This is because the S2N models use message-passing between training and inference samples (See §3). Thus, they compute both training and inference samples, requiring more computation for the inference stage. Lastly, as one exception to general trends, the training latency of GAT on HPO-Metab is higher than that of SubGNN. Note that latency ignores the parallelism from large batch sizes (Dehghani et al., 2021). Our model can show relatively high latency since it requires full batch computation. See Appendix E for the experimental setup.
CONCLUSION AND FUTURE RESEARCH
We propose Subgraph-To-Node (S2N) translation, a novel way to learn representations of subgraphs efficiently. Using S2N, we create a new graph where nodes are original subgraphs, edges are relations between subgraphs, and perform subgraph-level tasks as node-level tasks. S2N translation significantly reduces memory and computation costs without performance degradation.
There are limitations in this research. First, we used simple translation functions and did not explore them deeply. How do we define aggregated features and structures in translated graphs? Second, we do not yet know the properties of subgraphs that affect the performance of the S2N translation.
What properties of subgraphs can be learned after translation? We leave these as future directions.
C MODELS
This section describes the model details we used: hyperparameter tuning and LINKX-I design. All models are implemented with PyTorch (Paszke et al., 2019), PyTorch Geometric , and PyTorch Lightning (Falcon & The PyTorch Lightning team, 2019).
C.1 HYPERPARAMETERS
We tune seven hyperparameters using the TPE (Tree-structured Parzen Estimator) algorithm in Optuna (Akiba et al., 2019) with 30 trials: weight decay (10^−9 - 10^−6), the number of layers in E_set (2 or 4), the number of layers in E_graph (1 or 2), the pooling operation in E_set (sum or max), dropout of channels and edges ({0.0, 0.1, ..., 0.5}), and gradient clipping ({0.0, 0.1, ..., 0.5}). We use batch normalization (Ioffe & Szegedy, 2015) for all S2N models except LINKX-I.
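A sketch of what such a search could look like (our illustration; train_and_evaluate is a stand-in for the actual training routine, and the search space simply mirrors the ranges listed above):

```python
import optuna

def train_and_evaluate(params):
    """Placeholder for the real training loop; should return validation micro-F1."""
    return 0.0

def objective(trial):
    params = {
        "weight_decay": trial.suggest_float("weight_decay", 1e-9, 1e-6, log=True),
        "num_set_layers": trial.suggest_categorical("num_set_layers", [2, 4]),
        "num_graph_layers": trial.suggest_categorical("num_graph_layers", [1, 2]),
        "pooling": trial.suggest_categorical("pooling", ["sum", "max"]),
        "dropout_channels": trial.suggest_categorical("dropout_channels", [i / 10 for i in range(6)]),
        "dropout_edges": trial.suggest_categorical("dropout_edges", [i / 10 for i in range(6)]),
        "gradient_clip": trial.suggest_categorical("gradient_clip", [i / 10 for i in range(6)]),
    }
    return train_and_evaluate(params)

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=30)
```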
C.2 INDUCTIVE LINKX (LINKX-I)
Given node features X ∈ R^{N×F_0} and an adjacency matrix A ∈ R^{N×N}, the LINKX (Lim et al., 2021) model computes the logit matrix Y ∈ R^{N×C} by the following equations:

H_A = MLP_A(A) ∈ R^{N×F},   H_X = MLP_X(X) ∈ R^{N×F},   (6)

Y = MLP_f(ReLU(W_f [H_A ∥ H_X] + H_A + H_X)),   where W_f ∈ R^{F×2F}.   (7)
The computation of the first single layer in MLP A = Linear A • ReLU • Linear A • ... is as follows
Linear_A(A) = A W_A,   (A W_A)[i, k] = \sum_{j∈N(i)} W_A[j, k],   W_A ∈ R^{N×F}.   (8)

In our inductive setting, we have Â_train and Â_train+eval, the shapes of which are

Â_train ∈ {0, 1}^{M_train×M_train},   Â_train+eval ∈ {0, 1}^{(M_train+M_eval)×(M_train+M_eval)}.   (9)

If we train MLP_A on Â_train, we cannot process Â_train+eval, because the shapes of the matrix multiplication do not match (i.e., M_train + M_eval ≠ M_train). Thus, in LINKX-I, we use a modified matrix multiplication in MLP_A that aggregates only the parameters corresponding to training nodes. Formally, for a matrix Â_{M_*} ∈ {0, 1}^{M_*×M_*} of arbitrary shape,

(Â_{M_*} W_A)[i, k] = \sum_{j∈N(i) ∧ j∈V_train} W_A[j, k],   (Â_{M_*} W_A) ∈ R^{M_*×F}.   (10)
The remaining parts are the same as LINKX.
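A minimal sketch of the modified adjacency branch (our illustration; it assumes training nodes occupy the first M_train indices of the node ordering, which is not stated in the paper):

```python
import torch
from torch import nn

class InductiveLinearA(nn.Module):
    """First linear layer of MLP_A in LINKX-I, Eq. (10): rows of W_A are summed only
    over neighbors that are training nodes, so the same parameters can be applied to
    the larger train+eval adjacency at evaluation time."""
    def __init__(self, num_train_nodes, out_dim):
        super().__init__()
        self.W_A = nn.Parameter(0.01 * torch.randn(num_train_nodes, out_dim))

    def forward(self, adj):
        # adj: [M_*, M_*] adjacency of the (possibly enlarged) S2N graph; assumes the
        # first num_train_nodes columns correspond to training nodes.
        adj_to_train = adj[:, : self.W_A.size(0)].float()
        return adj_to_train @ self.W_A  # [M_*, out_dim]
```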
D GENERALIZATION OF HOMOPHILY TO MULTI-LABEL CLASSIFICATION
Node (Pei et al., 2020) and edge homophily (Zhu et al., 2020) are defined by,
h_{edge} = \frac{|\{(u, v) \mid (u, v) ∈ A ∧ y_u = y_v\}|}{|A|}, \qquad h_{node} = \frac{1}{|V|} \sum_{v∈V} \frac{|\{(u, v) \mid u ∈ N(v) ∧ y_u = y_v\}|}{|N(v)|},   (11)

where y_v is the label of the node v. In the main paper, we define multi-label node and edge homophily by

h_{edge, ml} = \frac{1}{|A|} \sum_{(u,v)∈A} \frac{|L_u ∩ L_v|}{|L_u ∪ L_v|}, \qquad h_{node, ml} = \frac{1}{|V|} \sum_{v∈V} \frac{1}{|N(v)|} \sum_{u∈N(v)} \frac{|L_u ∩ L_v|}{|L_u ∪ L_v|}.   (12)

If we compute r = |L_u ∩ L_v| / |L_u ∪ L_v| for single-label multi-class graphs, r = 1/1 = 1 for nodes of the same class, and r = 0/2 = 0 for nodes of different classes. That makes h_{edge, ml} = h_{edge} and h_{node, ml} = h_{node} for single-label graphs.
Figure 2: The number of parameters, throughput, and latency of S2N models and SubGNN on HPO-Neuro (top), HPO-Metab (middle), and EM-User (bottom).
ACKNOWLEDGMENTS This research was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921).
Table 1: Statistics of real-world datasets before and after S2N translation.

                           PPI-BP            HPO-Neuro        HPO-Metab        EM-User
# nodes (before → after)   17.1K → 1.6K      14.6K → 4.0K     14.6K → 2.4K     57.3K → 324
# edges (before → after)   317.0K → 55.7K    3.2M → 6.6M      3.2M → 2.5M      4.6M → 87.2K
Density (before → after)   0.002 → 0.021     0.030 → 0.413    0.030 → 0.439    0.003 → 0.830
# classes                  6                 10               6                2
Node / Edge homophily      0.449 / 0.391     0.176 / 0.175    0.195 / 0.189    0.514 / 0.511
Table 2: Summary of classification performance in mean micro-F1 score over 10 random seeds for real-world datasets. Results of the unpaired t-test with the best baseline are denoted by colors and superscripts (∼: no statistically significant difference, i.e., p-value > .05; outperformed with p-value < .05). We mark with daggers (†) the reprinted results from Alsentzer et al. (2020).

Model              Embedding    PPI-BP       HPO-Neuro    HPO-Metab    EM-User
Sub2Vec Best†      -            30.9 ±2.3    22.3 ±6.5    13.2 ±4.7    85.9 ±1.4
Graph-level GIN†   -            39.8 ±5.8    53.5 ±3.2    45.2 ±2.5    56.1 ±5.9
SubGNN†            GIN          59.9 ±2.4    63.2 ±1.0    53.7 ±2.3    81.4 ±4.6
SubGNN†            GraphSAINT   58.3 ±1.7    64.4 ±1.9    42.8 ±3.5    81.6 ±4.0
S2N + GCN          GIN          61.4∼ ±1.6   59.0 ±0.7    51.6 ±1.8    70.2 ±2.3
S2N + GCN          GraphSAINT   60.6∼ ±1.2   59.9 ±0.7    50.6 ±1.9    69.0 ±4.5
S2N + GAT          GIN          60.8∼ ±2.7   53.1 ±1.9    47.9 ±3.4    71.4 ±6.3
S2N + GAT          GraphSAINT   60.4∼ ±1.4   54.6 ±2.0    49.4 ±4.5    80.2 ±4.8
S2N + LINKX-I      GIN          60.9∼ ±1.8   62.9 ±1.1    55.9∼ ±2.6   83.3 ±3.6
S2N + LINKX-I      GraphSAINT   61.3∼ ±1.5   62.9∼ ±1.3   57.9 ±2.1    84.7∼ ±2.9
S2N + FAGCN        GIN          62.8 ±1.2    64.5∼ ±1.3   58.2 ±2.7    80.0 ±4.0
S2N + FAGCN        GraphSAINT   60.7∼ ±3.1   63.3∼ ±1.1   57.5 ±3.3    82.9 ±3.7
https://github.com/mims-harvard/SubGNN
Most existing studies on GNNs address their expressiveness (Niepert et al., 2016; Morris et al., 2019; Bouritsas et al., 2020), scalability (Hamilton et al., 2017; Chiang et al., 2019; Zeng et al., 2020), and augmentation (Qiu et al., 2020; You et al., 2020). However, only a few studies deal with learning representations of subgraphs. The Subgraph Pattern Neural Network (Meng et al., 2018) learns subgraph evolution patterns but does not generalize to subgraphs with varying sizes. The Subgraph Neural Network (SubGNN) (Alsentzer et al., 2020) is the first approach of subgraph representation learning using topology, positions, and connectivity. However, SubGNN requires large memory and computation costs to encode the mentioned information for prediction. Our method allows efficient learning of subgraph representations without a complex model design.

Graph Coarsening  Our S2N translation summarizes subgraphs into nodes, and in that sense, it is related to graph coarsening methods (Loukas & Vandergheynst, 2018; Loukas, 2019; Bravo Hermsdorff & Gunderson, 2019; Jin et al., 2020; Deng et al., 2020; Cai et al., 2021; Huang et al., 2021). They focus on creating coarse graphs while preserving specific properties in a given graph, such as spectral similarity or distance. The difference between them and ours is whether the node boundaries in coarse graphs (or super-nodes) are given or not. Super-nodes are unknown in existing works of graph coarsening; thus, algorithms to decide on super-nodes are required. In S2N translation, we treat subgraphs as super-nodes and can create coarse graphs with simple heuristics.

B DATASETS

Subgraph datasets PPI-BP, HPO-Neuro, HPO-Metab, and EM-User are proposed in Alsentzer et al. (2020), and can be downloaded from the GitHub repository 1. In Table 3, we summarize statistics of datasets in original forms without S2N translation. We describe their nodes, edges, subgraphs, tasks, and references in the following paragraphs.

PPI-BP  The global graph of PPI-BP (Zitnik et al., 2018; Subramanian et al., 2005; Consortium, 2019; Ashburner et al., 2000) is a human protein-protein interaction (PPI) network; nodes are proteins, and edges are whether there is a physical interaction between proteins. Subgraphs are sets of proteins in the same biological process (e.g., alcohol bio-synthetic process). The task is to classify processes into six categories.

HPO-Neuro and HPO-Metab  These two HPO (Human Phenotype Ontology) datasets (Hartley et al., 2020; Köhler et al., 2019; Mordaunt et al., 2020) are knowledge graphs of phenotypes (i.e., symptoms) of rare neurological and metabolic diseases. Each subgraph is a collection of symptoms associated with a monogenic disorder. The task is to diagnose the rare disease: classifying the disease type among subcategories (ten for HPO-Neuro and six for HPO-Metab).

EM-User  The EM-User (Users in EndoMondo) dataset is a social fitness network from Endomondo (Ni et al., 2019). Here, subgraphs are users, nodes are workouts, and edges exist between workouts completed by multiple users. Each subgraph represents the workout history of a user. The task is to profile a user's gender.

We compute throughput (subgraphs per second) and latency (seconds per forward pass) by the following equations:

Training throughput = (# of training subgraphs) / (training wall-clock time (seconds) / # of epochs),
Inference throughput = (# of validation subgraphs) / (validation wall-clock time (seconds) / # of epochs),
Training latency = (training wall-clock time (seconds)) / (# of training batches),
Inference latency = (validation wall-clock time (seconds)) / (# of validation batches).

We use the best hyperparameters (including batch sizes) for each model and take the mean wall-clock time over 50 epochs. Our computation device is an Intel(R) Xeon(R) CPU E5-2640 v4 and a single GeForce GTX 1080 Ti.
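These metrics reduce to simple arithmetic; the small helper below (our illustration, not from the paper) makes the definitions concrete:

```python
def throughput(num_subgraphs, wall_clock_seconds, num_epochs):
    """Subgraphs processed per second, as defined above."""
    return num_subgraphs / (wall_clock_seconds / num_epochs)

def latency(wall_clock_seconds, num_batches):
    """Seconds per forward pass (one batch), as defined above."""
    return wall_clock_seconds / num_batches
```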
Sub2vec: Feature learning for subgraphs. Bijaya Adhikari, Yao Zhang, Naren Ramakrishnan, B Aditya Prakash, Pacific-Asia Conference on Knowledge Discovery and Data Mining. SpringerBijaya Adhikari, Yao Zhang, Naren Ramakrishnan, and B Aditya Prakash. Sub2vec: Feature learn- ing for subgraphs. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 170-182. Springer, 2018.
Optuna: A next-generation hyperparameter optimization framework. Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, Masanori Koyama, Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. the 25th ACM SIGKDD international conference on knowledge discovery & data miningTakuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 2623-2631, 2019.
Subgraph neural networks. Emily Alsentzer, G Samuel, Michelle M Finlayson, Marinka Li, Zitnik, Proceedings of Neural Information Processing Systems. Neural Information Processing SystemsNeurIPSEmily Alsentzer, Samuel G Finlayson, Michelle M Li, and Marinka Zitnik. Subgraph neural net- works. Proceedings of Neural Information Processing Systems, NeurIPS, 2020.
Gene ontology: tool for the unification of biology. Michael Ashburner, Catherine A Ball, Judith A Blake, David Botstein, Heather Butler, Michael Cherry, Allan P Davis, Kara Dolinski, S Selina, Dwight, T Janan, Eppig, Nature genetics. 251Michael Ashburner, Catherine A Ball, Judith A Blake, David Botstein, Heather Butler, J Michael Cherry, Allan P Davis, Kara Dolinski, Selina S Dwight, Janan T Eppig, et al. Gene ontology: tool for the unification of biology. Nature genetics, 25(1):25-29, 2000.
W Peter, Jessica B Battaglia, Victor Hamrick, Alvaro Bapst, Vinicius Sanchez-Gonzalez, Mateusz Zambaldi, Andrea Malinowski, David Tacchetti, Adam Raposo, Ryan Santoro, Faulkner, arXiv:1806.01261Relational inductive biases, deep learning, and graph networks. arXiv preprintPeter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
Beyond low-frequency information in graph convolutional networks. Deyu Bo, Xiao Wang, Chuan Shi, Huawei Shen, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. Beyond low-frequency information in graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 35, pp. 3950-3957, 2021.
Giorgos Bouritsas, Fabrizio Frasca, arXiv:2006.09252Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. arXiv preprintGiorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improv- ing graph neural network expressivity via subgraph isomorphism counting. arXiv preprint arXiv:2006.09252, 2020.
A unifying framework for spectrum-preserving graph sparsification and coarsening. Gecia Bravo Hermsdorff, Lee Gunderson, Advances in Neural Information Processing Systems. 32Gecia Bravo Hermsdorff and Lee Gunderson. A unifying framework for spectrum-preserving graph sparsification and coarsening. Advances in Neural Information Processing Systems, 32, 2019.
Geometric deep learning: going beyond euclidean data. Joan Michael M Bronstein, Yann Bruna, Arthur Lecun, Pierre Szlam, Vandergheynst, IEEE Signal Processing Magazine. 344Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geomet- ric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Graph coarsening with neural networks. Chen Cai, Dingkang Wang, Yusu Wang, International Conference on Learning Representations. Chen Cai, Dingkang Wang, and Yusu Wang. Graph coarsening with neural networks. In Interna- tional Conference on Learning Representations, 2021. URL https://openreview.net/ forum?id=uxpzitPEooJ.
Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningWei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 257-266, 2019.
The gene ontology resource: 20 years and still going strong. Gene Ontology Consortium, Nucleic acids research. 47D1Gene Ontology Consortium. The gene ontology resource: 20 years and still going strong. Nucleic acids research, 47(D1):D330-D338, 2019.
The efficiency misnomer. Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, Yi Tay, arXiv:2110.12894arXiv preprintMostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, and Yi Tay. The efficiency mis- nomer. arXiv preprint arXiv:2110.12894, 2021.
Graphzoom: A multilevel spectral approach for accurate and scalable graph embedding. Chenhui Deng, Zhiqiang Zhao, Yongyu Wang, Zhiru Zhang, Zhuo Feng, International Conference on Learning Representations. Chenhui Deng, Zhiqiang Zhao, Yongyu Wang, Zhiru Zhang, and Zhuo Feng. Graphzoom: A multi- level spectral approach for accurate and scalable graph embedding. In International Confer- ence on Learning Representations, 2020. URL https://openreview.net/forum?id= r1lGO0EKDH.
William Falcon and The PyTorch Lightning team. PyTorch LightningWilliam Falcon and The PyTorch Lightning team. PyTorch Lightning, 3 2019. URL https: //github.com/PyTorchLightning/pytorch-lightning.
| [
"https://github.com/mims-harvard/SubGNN"
]
|
[
"SHALLOW ULTRAVIOLET TRANSITS OF WD 1145+017",
"SHALLOW ULTRAVIOLET TRANSITS OF WD 1145+017"
]
| [
"Siyi Xu \nGemini Observatory\n670 N. A'ohoku Place96720HiloHI\n",
"Na ' Ama Hallakoun \nSchool of Physics and Astronomy\nTel-Aviv University\n6997801Tel-AvivIsrael\n",
"Bruce Gary \nHereford Arizona Observatory\n85615HerefordAZUSA\n",
"Paul A Dalba \nDepartment of Earth & Planetary Sciences\nUniversity of California Riverside\n900 University Ave92521RiversideCAUSA\n",
"John Debes \nSpace Telescope Science Institute\n21218BaltimoreMDUSA\n",
"Patrick Dufour \nInstitut de Recherche sur les Exoplanètes (iREx)\nUniversité de Montréal\nH3C 3J7MontréalQCCanada\n",
"Maude Fortin-Archambault \nInstitut de Recherche sur les Exoplanètes (iREx)\nUniversité de Montréal\nH3C 3J7MontréalQCCanada\n",
"Akihiko Fukui \nDepartment of Earth and Planetary Science\nGraduate School of Science\nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan\n\nInstituto de Astrofísica de Canarias\nVía Láctea s/nE-38205La Laguna, TenerifeSpain\n",
"Michael A Jura \nDepartment of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA\n",
"Beth Klein \nDepartment of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA\n",
"Nobuhiko Kusakabe \nAstrobiology Center\n2-21-1 Osawa181-8588MitakaTokyoJapan\n",
"Amy Steele \nDepartment of Astronomy\nPhysical Sciences Complex\nUniversity of Maryland\nBldg. 4151113, 20742-2421College ParkMDUSA\n",
"Kate Y L Su \nSteward Observatory\nUniversity of Arizona\n933 N. Cherry Avenue85721TucsonAZUSA\n",
"Andrew Vanderburg \nDepartment of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712Speedway, AustinTX\n",
"Noriharu Watanabe \nOptical and Infrared Astronomy Division\nNational Astronomical Observatory\n181-8588MitakaTokyoJapan\n\nDepartment of Astronomical Science\nGraduate University for Advanced Studies (SOKENDAI)\n181-8588MitakaTokyoJapan\n",
"Zhuchang Zhan \nDepartment of Earth, Atmospheric and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Ben Zuckerman \nDepartment of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA\n",
"\nDepartment of Astronomy & Institute for Astrophysical Research\nBoston University\n725 Commonwealth Avenue02215BostonMAUSA\n",
"\nDepartment of Astronomy\nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan\n",
"\nJST\nPRESTO\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan\n",
"\nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n"
]
| [
"Gemini Observatory\n670 N. A'ohoku Place96720HiloHI",
"School of Physics and Astronomy\nTel-Aviv University\n6997801Tel-AvivIsrael",
"Hereford Arizona Observatory\n85615HerefordAZUSA",
"Department of Earth & Planetary Sciences\nUniversity of California Riverside\n900 University Ave92521RiversideCAUSA",
"Space Telescope Science Institute\n21218BaltimoreMDUSA",
"Institut de Recherche sur les Exoplanètes (iREx)\nUniversité de Montréal\nH3C 3J7MontréalQCCanada",
"Institut de Recherche sur les Exoplanètes (iREx)\nUniversité de Montréal\nH3C 3J7MontréalQCCanada",
"Department of Earth and Planetary Science\nGraduate School of Science\nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan",
"Instituto de Astrofísica de Canarias\nVía Láctea s/nE-38205La Laguna, TenerifeSpain",
"Department of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA",
"Department of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA",
"Astrobiology Center\n2-21-1 Osawa181-8588MitakaTokyoJapan",
"Department of Astronomy\nPhysical Sciences Complex\nUniversity of Maryland\nBldg. 4151113, 20742-2421College ParkMDUSA",
"Steward Observatory\nUniversity of Arizona\n933 N. Cherry Avenue85721TucsonAZUSA",
"Department of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712Speedway, AustinTX",
"Optical and Infrared Astronomy Division\nNational Astronomical Observatory\n181-8588MitakaTokyoJapan",
"Department of Astronomical Science\nGraduate University for Advanced Studies (SOKENDAI)\n181-8588MitakaTokyoJapan",
"Department of Earth, Atmospheric and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Physics and Astronomy\nUniversity of California\n90095-1562Los AngelesCAUSA",
"Department of Astronomy & Institute for Astrophysical Research\nBoston University\n725 Commonwealth Avenue02215BostonMAUSA",
"Department of Astronomy\nThe University of Tokyo\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan",
"JST\nPRESTO\n7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan",
"National Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan"
]
| [
"Norio Narita (成田憲保)"
]
| WD 1145+017 is a unique white dwarf system that has a heavily polluted atmosphere, an infrared excess from a dust disk, numerous broad absorption lines from circumstellar gas, and changing transit features, likely from fragments of an actively disintegrating asteroid. Here, we present results from a large photometric and spectroscopic campaign with Hubble, Keck , VLT, Spitzer, and many other smaller telescopes from 2015 to 2018. Somewhat surprisingly, but consistent with previous observations in the u' band, the UV transit depths are always shallower than those in the optical. We develop a model that can quantitatively explain the observed "bluing" and the main findings are: I. the transiting objects, circumstellar gas, and white dwarf are all aligned along our line of sight; II. the transiting object is blocking a larger fraction of the circumstellar gas than of the white dwarf itself. Because most circumstellar lines are concentrated in the UV, the UV flux appears to be less blocked compared to the optical during a transit, leading to a shallower UV transit. This scenario is further supported by the strong anti-correlation between optical transit depth | 10.3847/1538-3881/ab1b36 | [
"https://arxiv.org/pdf/1904.10896v1.pdf"
]
| 129,945,470 | 1904.10896 | 92eb118902214a1fec7e792b8b0933763df676b6 |
SHALLOW ULTRAVIOLET TRANSITS OF WD 1145+017
April 25, 2019 24 Apr 2019
Siyi Xu
Gemini Observatory
670 N. A'ohoku Place96720HiloHI
Na ' Ama Hallakoun
School of Physics and Astronomy
Tel-Aviv University
6997801Tel-AvivIsrael
Bruce Gary
Hereford Arizona Observatory
85615HerefordAZUSA
Paul A Dalba
Department of Earth & Planetary Sciences
University of California Riverside
900 University Ave92521RiversideCAUSA
John Debes
Space Telescope Science Institute
21218BaltimoreMDUSA
Patrick Dufour
Institut de Recherche sur les Exoplanètes (iREx)
Université de Montréal
H3C 3J7MontréalQCCanada
Maude Fortin-Archambault
Institut de Recherche sur les Exoplanètes (iREx)
Université de Montréal
H3C 3J7MontréalQCCanada
Akihiko Fukui
Department of Earth and Planetary Science
Graduate School of Science
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan
Instituto de Astrofísica de Canarias
Vía Láctea s/nE-38205La Laguna, TenerifeSpain
Michael A Jura
Department of Physics and Astronomy
University of California
90095-1562Los AngelesCAUSA
Beth Klein
Department of Physics and Astronomy
University of California
90095-1562Los AngelesCAUSA
Nobuhiko Kusakabe
Astrobiology Center
2-21-1 Osawa181-8588MitakaTokyoJapan
Amy Steele
Department of Astronomy
Physical Sciences Complex
University of Maryland
Bldg. 4151113, 20742-2421College ParkMDUSA
Kate Y L Su
Steward Observatory
University of Arizona
933 N. Cherry Avenue85721TucsonAZUSA
Andrew Vanderburg
Department of Astronomy
The University of Texas at Austin
Stop C14002515, 78712Speedway, AustinTX
Noriharu Watanabe
Optical and Infrared Astronomy Division
National Astronomical Observatory
181-8588MitakaTokyoJapan
Department of Astronomical Science
Graduate University for Advanced Studies (SOKENDAI)
181-8588MitakaTokyoJapan
Zhuchang Zhan
Department of Earth, Atmospheric and Planetary Sciences
Massachusetts Institute of Technology
02139CambridgeMAUSA
Ben Zuckerman
Department of Physics and Astronomy
University of California
90095-1562Los AngelesCAUSA
Department of Astronomy & Institute for Astrophysical Research
Boston University
725 Commonwealth Avenue02215BostonMAUSA
Department of Astronomy
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan
JST
PRESTO
7-3-1 Hongo, Bunkyo-ku113-0033TokyoJapan
National Astronomical Observatory of Japan
2-21-1 Osawa181-8588MitakaTokyoJapan
SHALLOW ULTRAVIOLET TRANSITS OF WD 1145+017
Norio Narita (成田憲保)
Draft version April 25, 2019 (24 Apr 2019). (Received 2019 March 8; Revised 2019 April 15; Accepted 2019 April 19.) Submitted to ApJ. Typeset using LaTeX twocolumn style in AASTeX61. Corresponding author: Siyi Xu. * Deceased.
WD 1145+017 is a unique white dwarf system that has a heavily polluted atmosphere, an infrared excess from a dust disk, numerous broad absorption lines from circumstellar gas, and changing transit features, likely from fragments of an actively disintegrating asteroid. Here, we present results from a large photometric and spectroscopic campaign with Hubble, Keck , VLT, Spitzer, and many other smaller telescopes from 2015 to 2018. Somewhat surprisingly, but consistent with previous observations in the u' band, the UV transit depths are always shallower than those in the optical. We develop a model that can quantitatively explain the observed "bluing" and the main findings are: I. the transiting objects, circumstellar gas, and white dwarf are all aligned along our line of sight; II. the transiting object is blocking a larger fraction of the circumstellar gas than of the white dwarf itself. Because most circumstellar lines are concentrated in the UV, the UV flux appears to be less blocked compared to the optical during a transit, leading to a shallower UV transit. This scenario is further supported by the strong anti-correlation between optical transit depth
and circumstellar line strength. We have yet to detect any wavelength-dependent transits caused by the transiting material around WD 1145+017.
Keywords: circumstellar matter -minor planets, asteroids: general -stars: individual: WD 1145+017 -white dwarfs
INTRODUCTION
There is evidence that planetary systems can be common and active around white dwarfs (e.g. Jura & Young 2014; Veras 2016). To date, WD 1145+017 is the only white dwarf that shows transit features of planetary material, likely from an actively disintegrating asteroid (Vanderburg et al. 2015). The original K2 light curves reveal at least six stable periods, all between 4.5-5.0 hours, near the white dwarf tidal radius. Follow-up photometric observations show that the system is actively evolving and the light curve changes on a daily basis (Rappaport et al. 2016, 2017; Gary et al. 2017). Likely, the transits are caused by dusty fragments 1 coming off the disintegrating asteroid (Veras et al. 2017) and each piece is actively producing dust for a few weeks to many months.
The basic parameters of WD 1145+017 are listed in Table 1. Its photosphere is also heavily "polluted" with elements heavier than helium; such pollution has been observed in 25-50% of all white dwarfs (Zuckerman et al. 2003, 2010; Koester et al. 2014). At least 11 heavy elements have been detected in its atmosphere and the overall composition resembles that of the bulk Earth (Xu et al. 2016). In addition, WD 1145+017 displays strong infrared excess from a dust disk, which has been observed around 40 other white dwarfs (Farihi 2016). The standard model is that these disks are a result of tidal disruption of extrasolar asteroids and atmospheric pollution comes from accretion of the circumstellar material (Jura 2003).
Another unique feature of WD 1145+017 is its ubiquitous broad circumstellar absorption lines, which have not been detected around any other white dwarfs (Xu et al. 2016) 2. They are broad (line widths ∼300 km s−1), asymmetric, and arise mostly from transitions with lower energy levels <3 eV above ground. The circumstellar lines display short-term variability, a reduction of absorption flux coinciding with the transit feature (Redfield et al. 2017; Izquierdo et al. 2018; Karjalainen et al. 2019), as well as long-term variability: they have evolved from being strongly red-shifted to blue-shifted in a couple of years. The long-term variability can be explained by precession of an eccentric ring, either under general relativity or by an external perturber. The transiting fragments include dust particles, and observations at different wavelengths could constrain their size and composition, which is the main motivation for multi-wavelength photometric observations. Previous studies show that the transit depths are the same from V to J band (Alonso et al. 2016; Zhou et al. 2016; Croll et al. 2017). Observations at Ks band and 4.5 µm have revealed shallower transits than those in the optical (Xu et al. 2018a). However, after correcting for the excess emission from the dust disk, the transit depths become the same at all the observed wavelengths. Under the assumption of optically thin transiting material, the authors conclude that there is a dearth of dust particles smaller than 1.5 µm due to their short sublimation time (Xu et al. 2018a).

1 In this paper, we use the term "dusty fragments" to refer to the material that is directly causing the transits. Likely, all these dusty fragments come from one or several asteroid parent bodies in orbit around WD 1145+017. But the asteroid itself is too small to be directly detectable via transits.

2 Variable circumstellar emission features have been detected around some dusty white dwarfs (e.g. Gänsicke et al. 2006; Manser et al. 2016; Dennihy et al. 2018).
The first detection of a color difference in WD 1145+017's transits was a shallower u'-band transit (u' - r' = -0.05 mag) using multi-band fast photometry from ULTRACAM (Hallakoun et al. 2017). The authors demonstrated that limb darkening cannot reproduce the observed difference in the transit depth and that dust extinction is unlikely to be the mechanism either. Finally, they proposed that the most likely cause is the reduced circumstellar gas absorption during transits because of the high concentration of circumstellar lines in the u' band. Due to the active nature of this system, simultaneous photometric and spectroscopic observations are required to test this hypothesis.
In this paper, we report results from a large spectroscopic and photometric campaign of WD 1145+017 with the Hubble Space Telescope (HST), the Keck Telescope, the Very Large Telescope (VLT), Spitzer Space Telescope, and several smaller telescopes. The main goal is to understand the interplay between the transiting fragments, dust disk, and circumstellar gas. This paper is organized as follows. Observations and data reduction of the main dataset are presented in Section 2. The light curves are analyzed in Section 3, which features shallow 4.5 µm and UV transits. All the spectroscopic analysis is presented in Section 4, where we found an anti-correlation between the optical transit depth and the strength of the circumstellar absorption lines. In Section 5, we present a model that could quantitatively explain the shallow UV transits from the change of circumstellar lines. Conclusions are presented in Section 6.
2. OBSERVATIONS AND DATA REDUCTION

2.1. UV Photometry and Spectroscopy
We were awarded observing time with the Cosmic Origins Spectrograph (COS) onboard HST to observe WD 1145+017 (program ID #14467, #14646, #15155) a few times between 2016 and 2018. The observing log is listed in Table 2. The G130M grating was used with a central wavelength of 1291Å and a wavelength coverage of 1125 to 1440Å. To minimize the effect of fixed pattern noise, we adopted different FP-POS steps for each exposure (COS Instrument Handbook). As a result, the wavelength coverage was slightly different in each exposure. A representative COS spectrum, full of absorption features, is shown in Fig. 1.
To extract the light curve from the time-tagged COS observations, we used a Python library created under the Archival Legacy Program (ID: # 13902, PI: J. Ely, "The Light curve Legacy of COS and STIS", see also Sandhaus et al. 2016). This library begins with the event-list datasets produced as part of a routine Cal-COS reduction, and performs additional filtering, calibration, and extraction in order to transform a spectral dataset into the time domain sequence. This extraction can be done with any time sampling and wavelength ranges. Here, we selected the wavelength range that was shared by all the FP-POS exposures at each epoch and a time sampling of 30 sec. The light curve was normalized by dividing by a constant, which is the average flux of the out-of-transit light curve, as shown in Fig. 2.
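The binning step described above is conceptually simple; the following minimal Python sketch shows how time-tagged photon events could be turned into a normalized 30-s light curve. This is not the Archival Legacy Program library used in the paper; the `event_times_s` array and the out-of-transit mask are hypothetical inputs.

```python
import numpy as np

def timetag_lightcurve(event_times_s, bin_s=30.0, out_of_transit_mask=None):
    """Bin photon arrival times (seconds) into a normalized count-rate light curve.

    event_times_s       : 1-D array of photon arrival times in seconds.
    bin_s               : time sampling of the output light curve (30 s in the text).
    out_of_transit_mask : optional boolean mask over the output bins marking the
                          out-of-transit bins used for normalization (default: all bins).
    """
    t0, t1 = event_times_s.min(), event_times_s.max()
    edges = np.arange(t0, t1 + bin_s, bin_s)
    counts, _ = np.histogram(event_times_s, bins=edges)
    rate = counts / bin_s                         # counts per second in each bin
    mid = 0.5 * (edges[:-1] + edges[1:])          # bin centers
    if out_of_transit_mask is None:
        out_of_transit_mask = np.ones_like(rate, dtype=bool)
    norm = rate / rate[out_of_transit_mask].mean()  # normalize to the out-of-transit level
    return mid, norm
```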
In the 97 minute orbit of HST, WD 1145+017 is visible for about 50 minutes. The orbital period of the fragment is ≈ 270 minutes, about a factor of 3 times the HST orbital period. In 2016, five consecutive HST orbits were executed, covering three separate parts of the orbital phase. In 2017, we improved our observing strategy by setting up two groups of observations with 3 orbits each, separated by 10 orbits in between. This set-up allows us to have an almost complete phase coverage. In 2018, we used a similar set-up as those in 2017. Unfortunately due to a gyro failure, only 3 orbits worth of data were obtained.
Spitzer Photometry
WD 1145+017 was observed with Spitzer/IRAC at 4.5 µm under program #13065. Following our previous set-up on the same target (Xu et al. 2018a), the science observation had 1140 exposures with 30 sec frame time in stare mode. The total on-target time was 9.5 hr, covering a little over two full 4.5-hr cycles. Data reduction was performed following procedures outlined in Xu et al. (2018a), and the light curve is shown in Fig. 3. There is one transit (Dip A) marginally detected with Spitzer around phase 0.3.
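Because the light curves are displayed versus orbital phase, the photometry has to be folded on the ~4.5-hr period. A hedged sketch of this step is below; the reference epoch `t0_hr` is a hypothetical placeholder, not the value used in the paper.

```python
import numpy as np

def phase_fold(times_hr, t0_hr=0.0, period_hr=4.5):
    """Fold observation times (hours) on the ~4.5-hr orbital period; phases in [0, 1)."""
    return ((times_hr - t0_hr) / period_hr) % 1.0

def binned_profile(phase, flux, nbins=50):
    """Median flux in equal phase bins, e.g. for plotting a folded light curve."""
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    return np.array([np.median(flux[idx == k]) if np.any(idx == k) else np.nan
                     for k in range(nbins)])
```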
Optical Photometry
We have arranged optical photometric monitoring around the same time of HST and Spitzer observations. We present the highest quality optical light curves here; the logs are listed in Table 2. We briefly describe each observation.
2016 Meyer observation: The 0.6m Paul and Jane Meyer Observatory Telescope in the Whole Earth Telescope (WET) network (Provencal et al. 2012) was used for observing WD 1145+017 during the 2016 HST window. The exposure time is 60 sec and weather conditions were moderate with some passing clouds. This epoch of observation has been reported in Xu et al. (2018a), together with simultaneous VLT/K s band and Spitzer 4.5 µm observations. 2017 MuSCAT observation: MuSCAT is mounted at the 188-cm telescope at the Okayama Astrophysical Observatory in Japan (Narita et al. 2015). We observed the target in g, r, and z s bands simultaneously with 60 sec exposure time in each filter. No color difference was detected and the g-band light curve is presented, which has the highest quality.
2017 DCT observation: We observed WD 1145+017 from the 4.3-m Discovery Channel Telescope (DCT) using the Large Monolithic Imager (LMI, Massey et al. 2013). A series of 30 s exposures in V-band were taken using 2x2 pixel binning under thin cirrus. The data reductions were similar to those described in Dalba & Muirhead (2016); Dalba et al. (2017). Aperture photometry was performed on the target as well as suitable reference stars to generate light curves. This procedure was repeated for a range of photometric apertures. The aperture and the set of reference stars that maximized the photometric precision away from the dimming events were used to generate the final light curve.
2018 NASACam Observation: On both Apr 25 and 26, the V-band filter was used with an exposure time of 60 sec. These data were reduced and analyzed following the DCT observations. As shown in Fig. 3, the optical light curve was rather noisy due to the proximity (<30 deg) of the nearly fully illuminated (>75%) Moon. Nevertheless, Dip A is well detected in the optical light curve and there is also a weaker Dip B around phase 0.8. Our long-term monitoring of WD 1145+017 over this period (Gary et al. in prep) shows that Dip B is changing quickly, while Dip A is relatively stable. We will focus on Dip A for the following analysis.
2018 Perkins Observation: The PRISM instrument (Janes et al. 2004) mounted on the 1.8-m Perkins telescope at Lowell Observatory was used to observe WD 1145+017. The exposure times were 45 s. On April 29, the sky was clear but it was windy. The wind continued on April 30 and patchy cirrus was present. On both nights, the seeing was consistently greater than 2. 0. These observations were reduced and analyzed in the same fashion as the DCT observations.
In addition, WD 1145+017 has been monitored on a regular basis by amateur astronomers using the 14-inch telescope at the Hereford Arizona Observatory (HAO) and a 32-inch telescope at Arizona (some of the observations have been reported in Rappaport et al. 2016;Alonso et al. 2016;Rappaport et al. 2017;Gary et al. 2017; see details in those references). We also utilized light curves obtained from the University of Arizona's 61 inch telescope, a 14-inch telescope at Cyprus, a 32inch IAC80 telescope on the Canary Islands, as well as a 20-inch telescope in Chile. We use optical photometric observations that were taken closest in time with our spectroscopic monitoring described in the next section.
Optical Spectroscopy
WD 1145+017 has been observed intensively with different optical spectrographs from 2015-2018. A summary of the observing log is listed in Table 3.
Keck/HIRES: High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994) on the Keck I telescope has a blue collimator and a red collimator. HIRESb was used more frequently for observing WD 1145+017 because it covers shorter wavelengths, where most circumstellar lines are located, approximately the green region (X-SHOOTER UVB arm) shown in the lower panel of Fig. 1. For HIRES observations, the C5 decker was used with a slit width of 1. 148, returning a spectral resolution of 40,000. The exposure times vary to ensure a signal-to-noise ratio of at least 10 in a single exposure. Data reduction was performed with the MAKEE package. We also continuum normalized the spectra with IRAF following procedures described in an early study of WD 1145+017 (Xu et al. 2016). Thanks to the high spectral resolution of HIRES, we can resolve the profiles of individual absorption lines and separate the photospheric component from the circumstellar component.
Keck/ESI: WD 1145+017 has also been observed with the Echellette Spectrograph and Imager (ESI; Sheinis et al. 2002) on the Keck II telescope. A slit width of 0. 3 was used, which returns a spectral resolution of 14,000. Similar to the HIRES analysis, data reduction was performed using both MAKEE and IRAF. The main advantage of ESI is its wider wavelength coverage and shorter integration times due to its lower spectral resolution. The ESI dataset is well suited to probe short-term (tens of minutes) variations of the circumstellar lines.
VLT/X-SHOOTER: WD 1145+017 has been observed with X-SHOOTER (Vernet et al. 2011) on the VLT at the same time as the 2016 HST observation. The weather conditions were decent with some thin clouds. X-SHOOTER has three arms, i.e. UVB, VIS, and NIR, which provide simultaneous wavelength coverage from the atmospheric cutoff in the blue to K band. WD 1145+017 is too faint to be detected in the NIR arm and here we focus on the UVB and VIS arms. A series of short exposures was taken to monitor the variations of the circumstellar lines. The data were reduced using the XSHOOTER pipeline 2.7.0b by the ESO quality control group. The final combined spectrum is presented in Fig. 1.
TRANSIT ANALYSIS
To model the light curve, we follow previous studies of fitting asymmetric transit (e.g. Rappaport et al. 2014) by adopting a series of asymmetric hyperbolic secant (AHS) functions with the following form:
f(p) = f_0 \left[ 1 - f_{\rm dip}(p) \right] \equiv f_0 \left[ 1 - \sum_i \frac{2 f_i}{e^{(p - p_i)/\phi_{i1}} + e^{-(p - p_i)/\phi_{i2}}} \right]    (1)

where i labels the AHS components, p_i is the phase of the deepest point in a transit, and φ_{i1} and φ_{i2} represent the phase durations of the ingress and egress, respectively.
Assuming the light curves have the same shape but different depth at different wavelengths, we can fit the light curve at another wavelength as:
f_\lambda(p) = f_{\lambda,0} \left[ 1 - d_\lambda \, f_{\rm dip}(p) \right]    (2)

There are only two free parameters: f_{λ,0}, which characterizes the continuum level, and d_λ, which represents the transit depth ratio. f_{\rm dip}(p) is taken from the best fit parameters of Equ. (1).
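To make Equs. (1)-(2) concrete, here is a minimal Python sketch of a single-dip AHS model and of the two-parameter fit at a second wavelength. The phase/flux arrays and initial guesses are placeholders, and the actual fits in the paper may include several AHS components.

```python
import numpy as np
from scipy.optimize import curve_fit

def ahs_dip(p, f_i, p_i, phi1, phi2):
    """One asymmetric hyperbolic secant component of f_dip(p) in Equ. (1)."""
    return 2.0 * f_i / (np.exp((p - p_i) / phi1) + np.exp(-(p - p_i) / phi2))

def model_opt(p, f0, f_i, p_i, phi1, phi2):
    """Optical light curve: f(p) = f0 * [1 - f_dip(p)]."""
    return f0 * (1.0 - ahs_dip(p, f_i, p_i, phi1, phi2))

# Hypothetical usage with placeholder arrays phase_opt/flux_opt and phase_uv/flux_uv:
# popt, _ = curve_fit(model_opt, phase_opt, flux_opt, p0=[1.0, 0.3, 0.35, 0.01, 0.02])
# dip_pars = popt[1:]
# model_uv = lambda p, f_lam0, d_lam: f_lam0 * (1.0 - d_lam * ahs_dip(p, *dip_pars))
# (f_lam0, d_lam), _ = curve_fit(model_uv, phase_uv, flux_uv, p0=[1.0, 0.5])  # Equ. (2)
```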
4.5µm and Optical Transits
We fitted Dip A with one AHS function and the best fit models are shown as black lines in Fig. 3. We calculated the transit depth ratios between 4.5 µm and optical, as listed in Table 4. We also list the ratios calculated for 2017 and 2018 epochs (Xu et al. 2018a) and take the average value d 4.5µm /d opt of 0.235 ± 0.024. After correcting for the contribution from the dust disk at 4.5 µm, the transit depth ratio between 4.5 µm and optical is 0.995 ± 0.119, consistent with unity.
In Fig. 4, we compared the new measurements with Mie scattering cross sections of astronomical silicates calculated by Draine & Lee (1984) and Laor & Draine (1993). Astronomical silicate is not a real mineral; its real and imaginary refractive indices are derived from a combination of lab measurements and actual astronomical observations of circumstellar and interstellar silicate dust (Draine & Lee 1984). Our transit observations cover 0.12 µm to 4.5 µm and yet no wavelength dependence caused by the transiting material has been detected. The result is consistent with our previous finding that either the transiting material has very few small grains or it is optically thick (Xu et al. 2018a).
In addition, the out-of-transit flux at 4.5 µm in 2018 is 52.7 ± 4.7 µJy, which is consistent with 55.0 ± 3.2 µJy reported in 2016 and 2017 (Xu et al. 2018a). The infrared fluxes of WD 1145+017 are surprisingly stable given the transit behavior has changed dramatically during the past few years. A recent study by Swan et al. (2019) has found that WD 1145+017 is variable in the WISE W1 band (3.4 µm). The cause is unclear due to the sparse sampling of WISE and the variability could either come from the transits or the disk material. Infrared variability of white dwarf dust disks has been reported up to 30% at the IRAC bands (Xu & Jura 2014;Xu et al. 2018b;Farihi et al. 2018) and the proposed scenario is tidal disruption and dust production/destruction. It is evident that dust is constantly produced and destroyed around WD 1145+017 (Kenyon & Bromley 2017a,b). However, it is still a puzzle given the stable 4.5 µm flux.
UV and Optical Transits
For the 2016, February 2017, and 2018 observations, we started by fitting the optical light curves with AHS functions because they have good phase coverage. For the June 2017 observations, we started with the UV light curve because the optical light curve only covers a small phase range. The best fit models are shown in Fig. 2 and the UV-to-optical transit depth ratios are listed in Table 5. Here, we are reporting the average ratio for a given epoch and this number could differ for each dip. The UV transit depths are always shallower than those of the optical, which is rather surprising given the constant transit depth from optical to 4.5 µm presented in Section 3.1. In addition, the UV-to-optical transit ratios also appear to be changing at different epochs.
To quantify the strength of a transit, we introduce D,
D = \frac{1}{p_2 - p_1} \int_{p_1}^{p_2} f_0 \, f_{\rm dip}(p) \, dp,    (3)

where p_1 and p_2 delimit the phase interval of interest, and f_0 and f_{\rm dip}(p) are taken from Equ. (1). It is similar to the mean transit depth used in Hallakoun et al. (2017). For each epoch, we selected two phase ranges and calculated the corresponding transit depths, as listed in Table 5. The transit depths are different for different dips.
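A numerical version of Equ. (3) is sketched below, assuming a fitted dip model from the previous step; `dip_func` is any callable returning f_dip(p).

```python
import numpy as np

def mean_transit_depth(f0, dip_func, p1, p2, n=2000):
    """Average transit depth D over the phase interval [p1, p2], Equ. (3)."""
    p = np.linspace(p1, p2, n)
    return np.trapz(f0 * dip_func(p), p) / (p2 - p1)
```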
SPECTROSCOPIC ANALYSIS
With this extensive spectroscopic dataset, we updated the white dwarf models to consistently fit the optical and UV spectra of WD 1145+017. We have also developed new models for the circumstellar lines. Details will be presented in Fortin-Archambault et al. (in prep) and Steele et al (in prep). In the following analysis, we present some preliminary results of the calculation to help us understand different components of the absorption features.
Long-term Variability
To assess the long-term variability of the absorption features, we selected three representative regions, i.e. Mg II doublet around 4481Å, Si II 6347Å, and Fe II 5169Å, whose transitions come from a lower energy level of 8.86 eV, 8.12 eV, and 2.89 eV, respectively. Both Mg II 4481Å and Si II 6347Å primarily have a photospheric contribution due to the high lower energy level of the transition, while Fe II 5169Å has both photospheric and circumstellar contributions (Xu et al. 2016). The spectra covering those regions are shown in Fig. 5. From 2015 to 2018, the shapes of the circumstellar lines have changed significantly, from being both blue-shifted and red-shifted (April 2015), to mostly red-shifted (March 2016), to mostly blue-shifted (March 2017), and back to being both blue-shifted and red-shifted (May 2018). Likely, our observation has covered a whole precession period of ∼ 3 years.
In stellar spectroscopy, equivalent width is often used to characterize the strength of an absorption line. However, it is not suitable here because of the irregular line shape. We quantify the average strength of an absorption line over the wavelength range ∆λ as
F_{\rm abs} = 1 - \frac{\sum_i F_{{\rm abs},i} \, d\lambda_i}{F_{\rm WD} \, \Delta\lambda},    (4)

where \sum_i d\lambda_i = \Delta\lambda is the total absorbing wavelength/velocity range. F_WD is the white dwarf continuum flux without any absorption from heavy elements, which is approximately 1 for a normalized spectrum. F_abs,i is the flux of the absorption feature at each wavelength λ_i and dλ_i is the corresponding wavelength interval. The pink shaded area in Fig. 5 marks the velocity range used for the calculation. For Mg II and Si II, F_abs = F_phot, the average flux of photospheric absorption, because there is little contribution from circumstellar absorption. For Fe II, F_abs = F_phot + F_cs, because both photospheric and circumstellar material contribute to the absorption feature. F_cs represents the average flux of circumstellar absorption over a given wavelength range. The average absorption line strength as a function of observing date is shown in Fig. 6. Even though the shapes of the circumstellar lines have changed significantly, the average line strength remains the same from 2015 to 2018, both for the circumstellar and the photospheric components. Therefore, there are no changes in the compositions of the white dwarf photosphere or the circumstellar material.
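The line-strength metric of Equ. (4) reduces to a weighted sum over the pixels of a continuum-normalized spectrum. A minimal sketch follows; the wavelength limits and input arrays are placeholders rather than the actual analysis code.

```python
import numpy as np

def line_strength(wave, norm_flux, w_lo, w_hi, f_wd=1.0):
    """Average absorption strength F_abs over [w_lo, w_hi], Equ. (4).

    wave, norm_flux: continuum-normalized spectrum, so F_WD is ~1.
    """
    sel = (wave >= w_lo) & (wave <= w_hi)
    w, f = wave[sel], norm_flux[sel]
    dlam = np.gradient(w)                 # per-pixel wavelength intervals dlambda_i
    return 1.0 - np.sum(f * dlam) / (f_wd * (w_hi - w_lo))
```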
Line Strength Comparison
The number density of absorption lines is higher in the UV/blue compared to the optical (see Fig. 1 for an example). We selected three regions to assess the overall contribution from photospheric and circumstellar absorption: A. 1330-1420Å (COS segment A), B. 3200-3900Å (similar to the ULTRACAM u' band reported in Hallakoun et al. 2017), C. 6200-6900Å (similar to the ULTRACAM r' band). We can compare the flux of WD 1145+017 in those wavelength ranges with a metal-free white dwarf with the same system parameters (F_WD, which is defined as 100%) and a polluted white dwarf with only photospheric absorption (for F_phot, models are from Fortin-Archambault et al. in prep). The results are listed in Table 6.
In the UV, the photospheric and circumstellar lines are ubiquitous, and absorb 43% of the white dwarf flux. In comparison, in the optical, the absorbed flux is relatively small compared to the white dwarf flux and the effect becomes even smaller at longer wavelengths. As a result, circumstellar lines will have a much larger effect on the overall UV flux than the broad-band optical flux, as has been first discussed in Hallakoun et al. (2017).
We caution that this kind of calculation is very sensitive to the choice of the continuum flux, particularly for the UV observations where there is essentially no measured continuum. We estimated the uncertainty, using the relatively clean part of the spectrum, to be about 5% for COS observations and 2% for HIRES observations. That is the dominant source of error in Table 6. Note-FWD is the flux of a metal-free white dwarf with the same parameters as WD 1145+017 (e.g. black line in Fig. 1). F abs is calculated directly from the observed spectra, which represents the total amount of photospheric and circumstellar absorption. F phot is absorption from photospheric absorption, which is calculated from white dwarf models with the same parameters and abundances as WD 1145+017 (Fortin-Archambault et al. in prep). F cs is calculated as F abs -F phot .
Short-term Variability
There are several epochs where we have continuous spectroscopic observations that cover a deep transit, suitable for assessing short-term variability of the absorption lines. An example is shown in Fig. 7. During the transit, the absorption feature becomes weaker. Thanks to the continuous optical monitoring of WD 1145+017, we were able to design our spectroscopic observations such that they cover both in-transit and out-of-transit spectra, as shown in Fig. 8. We found that circumstellar lines become shallower during a transit but there is little change in photospheric line strength. This effect is most prominent when there is a deep transit. Previous spectroscopic studies took place around the time when the circumstellar lines were mostly red-shifted, and only a reduction of line strength in the red-shifted component has been reported (Redfield et al. 2017; Izquierdo et al. 2018; Karjalainen et al. 2019); this study shows that the same behavior applies to the blue-shifted component.
As discussed in section 4.1, the average strength of the absorption lines (both photospheric and circumstellar) has remained the same for the past few years. Now we can use every single exposure to assess any possible correlation between the transit depth and the absorption line strength. We focus again on Fe II 5169Å and Si II 6347Å. For every spectrum, we calculated the absorption line strength F abs using Equ. (4) as well as the average optical transit depth D opt from the optical light curve during the time interval when the spectrum was taken using Equ. (3) 4 . The result is shown in Fig. 9. There is a strong anti-correlation between the line strength of Fe II 5169Å and the optical transit depth, while the absorption line strengths do not vary much for Si II 6347Å. We performed the same analysis around other absorption features and found a similar pattern: no short-term variability around absorption lines with only a photospheric component and an anti-correlation between line strength and transit depths among lines with both photospheric and circumstellar contributions.
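The anti-correlation itself is a standard Pearson test on the per-exposure pairs of transit depth and line strength; a minimal sketch, assuming the two arrays have already been assembled with helpers like those above:

```python
from scipy.stats import pearsonr

def depth_line_correlation(d_opt, f_abs):
    """Pearson correlation between per-exposure optical transit depth and line strength.
    Returns (r, p-value); r is about -0.68 for Fe II 5169 in Fig. 9."""
    return pearsonr(d_opt, f_abs)
```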
INTERPRETATION
Now we explore a toy model that can explain the interplay between circumstellar line strength and transit depth. With an orbital period of 4.5 hr, the dusty fragment has a semi-major axis of 90 R_WD. The circumstellar lines can be modeled by a series of eccentric gas rings, and the large line width suggests that most of the gas is located close to the white dwarf (e.g. 20-30 R_WD in Cauley et al. 2018), within the orbit of the transiting fragment. In addition, our model assumes the fragment and the circumstellar gas to be co-planar. A cartoon illustration of our proposed geometry is shown in Fig. 10. The transiting fragments are ∼1000 K and they emit a negligible amount of flux in the UV and optical. The out-of-transit flux F_out can be calculated as:
F_{\rm out} = F_{\rm WD} - F_{\rm phot} - F_{\rm cs},    (5)
where the definitions of F WD , F phot , and F cs are the same as in Section 4. Depending on the temperature, circumstellar gas could have some emission as well. F cs represents the combined flux for the circumstellar gas. From the observed spectra, we know the net result is absorption. During a transit, the dusty fragment can block different fractions of the white dwarf and the circumstellar gas. We introduce two new parameters α and β: α characterizes the fraction of white dwarf flux visible during a transit while β characterizes the fraction of circumstellar absorption visible during a transit. The flux during a transit can be calculated as:
F_{\rm in} = \alpha \, (F_{\rm WD} - F_{\rm phot}) - \beta \, F_{\rm cs}    (6)
If the circumstellar material covers uniformly the whole white dwarf, α would be equal to β during a transit. We can calculate the average transit depth as:
D = \frac{\int (F_{\rm out} - F_{\rm in}) \, dp}{\int F_{\rm out} \, dp} = 1 - \frac{\int_{p_1}^{p_2} \left[ \alpha(p) \, (F_{\rm WD} - F_{\rm phot}) - \beta(p) \, F_{\rm cs} \right] dp}{\left[ F_{\rm WD} - F_{\rm phot} - F_{\rm cs} \right] (p_2 - p_1)} = 1 - \frac{\bar{\alpha} \, (F_{\rm WD} - F_{\rm phot}) - \bar{\beta} \, F_{\rm cs}}{F_{\rm WD} - F_{\rm phot} - F_{\rm cs}}    (7)

where \bar{\alpha} \equiv \int_{p_1}^{p_2} \alpha(p) \, dp / (p_2 - p_1) and \bar{\beta} \equiv \int_{p_1}^{p_2} \beta(p) \, dp / (p_2 - p_1). For a given phase interval (p_2 - p_1), ᾱ represents the average fraction of detectable light from the white dwarf and β̄ represents the fraction of visible circumstellar gas. They are geometrical parameters and independent of wavelength. The average transit depth D depends on ᾱ, β̄, F_WD, F_phot, and F_cs.
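As a sanity check on Equs. (5)-(7), the forward model for the band-averaged transit depth is only a few lines; the flux values passed in are whatever band-integrated F_WD, F_phot, and F_cs one adopts (e.g. from Table 6), so the sketch below is illustrative rather than the paper's actual code.

```python
def model_transit_depth(alpha_bar, beta_bar, f_wd, f_phot, f_cs):
    """Average transit depth D from the toy model, Equ. (7).
    f_phot and f_cs are the (positive) absorbed fluxes in the band of interest."""
    f_out = f_wd - f_phot - f_cs                            # Equ. (5), out of transit
    f_in = alpha_bar * (f_wd - f_phot) - beta_bar * f_cs    # Equ. (6), phase-averaged
    return 1.0 - f_in / f_out
```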
In the optical, F_phot ≪ F_WD and F_cs ≪ F_WD (see Table 6). We can simplify Equ. (7) as:

D_{\rm opt} \approx 1 - \bar{\alpha}.    (8)

The average optical transit depth D_opt has been measured in Table 5, and we can calculate the corresponding ᾱ.
In the UV, F_phot and F_cs are comparable to F_WD, so we need to use Equ. (7):

D_{\rm uv} = 1 - \frac{\bar{\alpha} \, (F_{\rm WD,uv} - F_{\rm phot,uv}) - \bar{\beta} \, F_{\rm cs,uv}}{F_{\rm WD,uv} - F_{\rm phot,uv} - F_{\rm cs,uv}}    (9)

Table 5 can be used to calculate D_uv and ᾱ. F_WD,uv, F_phot,uv, and F_cs,uv have been reported in Table 6 in the column labeled COS. Now we can solve for β̄ using Equ. (9) and the results are also listed in Table 5. We see that ᾱ and β̄ are often unequal; this can be explained as the dusty fragment blocking different fractions of the circumstellar gas and the white dwarf light. In addition, both ᾱ and β̄ varied for different dips, suggesting that the coverages of the dusty fragment over the circumstellar gas and the white dwarf are different.
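Inverting the photometry, ᾱ follows from D_opt via Equ. (8) and β̄ from D_uv via Equ. (9). In the sketch below, the UV flux fractions are illustrative placeholders standing in for the Table 6 (COS column) values, not the actual numbers.

```python
def alpha_beta_from_photometry(d_opt, d_uv, f_wd_uv=1.00, f_phot_uv=0.36, f_cs_uv=0.07):
    """Solve Equ. (8) and (9) for alpha_bar and beta_bar (UV fractions are placeholders)."""
    alpha_bar = 1.0 - d_opt                                   # Equ. (8)
    f_out_uv = f_wd_uv - f_phot_uv - f_cs_uv
    beta_bar = (alpha_bar * (f_wd_uv - f_phot_uv) - (1.0 - d_uv) * f_out_uv) / f_cs_uv
    return alpha_bar, beta_bar
```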
In Fig. 9, we explored an anti-correlation between the optical transit depth, D_opt, and the absorption strength, F_abs. Spectroscopically, β̄ can be calculated as:

\bar{\beta} = \frac{F_{\rm abs,in} - F_{\rm phot,in}}{F_{\rm abs,out} - F_{\rm phot,out}}.    (10)

We have shown in Sections 4.1 and 4.3 that the photospheric line strength is constant (F_phot,in = F_phot,out). Around Fe II 5169, F_abs,out = 0.144 ± 0.018 (from Fig. 6) and (F_cs/F_phot) ≈ 2 (from fitting the line profile, Fortin-Archambault et al. in prep). F_abs,in has been measured for each individual exposure in Fig. 9. As a result, β̄ can also be directly calculated for an individual spectrum. ᾱ is related to the optical transit depth through Equ. (8).
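The spectroscopic route, Equ. (10), needs only the in- and out-of-transit line strengths and the (constant) photospheric contribution; a minimal sketch under those assumptions:

```python
def beta_from_spectroscopy(f_abs_in, f_abs_out, f_phot):
    """Equ. (10) with F_phot,in = F_phot,out = f_phot (photospheric strength is constant)."""
    return (f_abs_in - f_phot) / (f_abs_out - f_phot)
```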
We can now re-arrange the measurements in Fig. 9 to represent ᾱ and β̄ in Fig. 11.
There are two ways to determine β̄: from spectroscopic measurements (Equ. 10) and from photometric measurements (Equ. 9). We have shown that both sets of measurements yield a similar result in Fig. 11: ᾱ and β̄ are correlated. Such a correlation has been hinted at in Hallakoun et al. (2017), who showed in their Fig. 7 that "bluing" is most prominent in the deepest transit and is less visible for shallower transits.
As shown in Fig. 11, β̄ is always smaller than ᾱ, indicating that the transiting object blocks a larger fraction of the circumstellar gas than of the white dwarf flux. This is expected as long as the circumstellar gas does not uniformly cover the whole white dwarf surface. Because the circumstellar lines are highly concentrated in the UV, we detect much less UV circumstellar absorption during a transit. As a result, the observed flux appears to be higher in the UV compared to the optical and therefore the UV-to-optical transit depth ratios are less than unity. This is somewhat analogous to a starspot occultation during a planetary transit: because a starspot is fainter than the surrounding area, the light curve gets a bit brighter when the planet transits across the starspot compared to other parts of the stellar surface.
This model can quantitatively explain the shallower UV transits observed around WD 1145+017. After correcting for the change of the circumstellar lines, the UV-to-optical transit depth ratios are consistent with unity from 2016 to 2018.
Note that an important conclusion of the model is that the disintegrating object, circumstellar gas, and the white dwarf are aligned along our line of sight: the system is edge-on. This is expected because the circumstellar gas is likely produced by the sublimation or collision of the fragments from the disintegrating objects (Xu et al. 2018a). There could be some gas around the same region as the dusty fragments, and this analysis still holds if the circumstellar gas extends beyond the orbit of the dusty fragment. Shallower UV transits are expected as long as the transiting fragment is blocking a larger fraction of the circumstellar gas compared to the white dwarf surface.
CONCLUSION
In this paper, we report multi-epoch photometric and spectroscopic observations of WD 1145+017, a white dwarf with an actively disintegrating asteroid. The main conclusions are summarized as follows:
• There is a strong anti-correlation between circumstellar line strength and transit depth. Regardless of being blue-shifted or red-shifted, the circumstellar lines become significantly weaker during a deep transit. This can be explained when the transiting fragment is blocking a larger fraction of the circumstellar gas than the white dwarf flux during a transit.
• The shallow UV transit is a result of short-term variability of the circumstellar lines. The UV transit depth is always shallower than that in the optical. We presented a model that can quantitatively explain this phenomenon through the reduction of circumstellar line strength during a transit and the high concentration of circumstellar lines in the UV.
• The orbital planes of the gas disk and the dusty fragment are likely to be aligned. An important conclusion of our model is the alignment between the transiting fragment and circumstellar gas -the system is edge-on. This is consistent with the picture that the gas is likely to come from the fragment and is eventually accreted onto the white dwarf.
• We have yet to detect any differences in the transit depths directly caused by the transiting material itself. We have not detected any wavelength dependence caused by the transiting material at wavelengths from 0.1 µm to 4.5 µm. The transiting material must be either optically thick at all of these wavelengths or mostly consist of large particles.
One main puzzle left for WD 1145+017 is the location and evolution of the dust disk, which could be constrained by future observations. Infrared spectroscopy could put limits on the temperature, size, and composition of the dust disk while infrared monitoring can probe the evolution of the disk.
Figure 2. UV and optical light curves of WD 1145+017 from 2016 to 2018. The optical and UV data were taken within one night. The black line represents the best fit model for a given night. The pink and grey shaded areas mark the phase ranges used to calculate the average transit depths in Table 5.
Figure 3. Spitzer 4.5 µm and optical light curves of WD 1145+017 in 2018. The black line represents the best fit model to the data.
Figure 4. Mie extinction cross-section ratios versus wavelength using astronomical silicates (Draine & Lee 1984; Laor & Draine 1993) for particle radii s from 0.2 to 10 µm. For each curve, a bulge peaking at s × π/2 is a characteristic of the Mie scattering cross section. The other two peaks at 10 µm and 20 µm are from the absorption cross section of astronomical silicates. Black dots are the measured transit depth ratios in WD 1145+017; grains smaller than 2 µm are inconsistent with the observations.
Figure 5. A compilation of spectra around Fe II 5169Å, the Mg II doublet 4481Å, and Si II 6347Å in the reference frame of the observer. The red line represents our preliminary white dwarf photospheric model spectrum computed to match the instrument resolution (Fortin-Archambault et al. in prep). The pink line marks the average radial velocity of the photospheric lines, which is at 42 km s−1. The pink shaded area marks the wavelength range for calculating the absorption line strength F_abs defined in Equ. (4). The absorption feature around 250 km s−1 in the Fe II 5169 panel is caused by circumstellar absorption of Mg I at 5173Å, which is visible during some epochs.
Figure 6. Average line strength on a given night as a function of the observing date. The red dashed line indicates the average value and the pink shaded area marks the standard deviation. The strength of the absorption lines has been the same since 2015.
Figure 7. Simultaneous photometric and spectroscopic observations of WD 1145+017 on March 28, 2016. The top panel is the optical light curve taken with the 61-inch telescope at Arizona; the middle panel shows the Keck/ESI spectra centered around the Fe II 5169 region; the bottom panel shows the individual spectra divided by the average spectrum around Fe II 5169. During a transit, the absorption feature becomes shallower.
Figure 8. In- and out-of-transit spectra around Fe II 5169Å. The in-transit spectra are shown in pink while the out-of-transit spectra are shown in grey. The average optical transit depth D is listed in pink. From top left to bottom right, the panels are arranged in order of increasing average transit depth D. When there is a deep transit, circumstellar lines become shallower, while photospheric lines remain the same.

Figure 9. Absorption line strength for individual exposures as a function of transit depth for Si II 6347Å (102 data points) and Fe II 5169Å (141 data points). The dots are color-coded by the year of the observation. The typical error bar is shown in the upper right corner of each panel. There is a strong anti-correlation between the absorption line strength of Fe II 5169Å and the optical transit depth, with a Pearson's correlation coefficient of -0.68, while such a correlation does not exist for Si II 6347Å (correlation coefficient of -0.06).
Figure 10. A cartoon illustration of our proposed configuration. The orbital plane of the gas disk is aligned with that of the dusty fragment. During a transit, the fragment is blocking both the white dwarf and the circumstellar gas but with different fractions.

Figure 11. The correlation between ᾱ (the average fraction of white dwarf flux visible during a transit) and β̄ (the average fraction of circumstellar absorption visible during a transit). ᾱ can be directly measured from the optical light curve. For β̄, the colored dots are from the spectroscopic measurements shown in Fig. 9 using Equ. 10, while the purple stars are from the photometric measurements with Equ. 9. Both sets of measurements follow the same trend and β̄ is always smaller than ᾱ. The 1:1 ratio line is shown in grey.
Table 1. Basic Parameters of WD 1145+017

Parameter       Value                     Ref
Coord (J2000)   11:48:33.6 +01:28:59.4
Spectral Type   DBZA
V               17.0 mag
T_WD            15020 ± 520 K             (1)
log g           8.07 ± 0.05               (1)
M_WD            0.63 ± 0.05 M_⊙           (1)
Distance        141.7 ± 2.5 pc            (2)

Note—(1) Izquierdo et al. (2018); (2) Gaia DR2
Table 2. Observing Log of Photometric Observations

Tel./Inst.   λ (µm)   Date (UT)
COS          0.13     Mar 28, 23:15 - Mar 29, 06:19, 2016
Meyer        0.48     Mar 28, 21:00 - Mar 29, 04:37, 2016
COS          0.13     Feb 17, 11:29 - Feb 18, 07:09, 2017
MuSCAT       0.48     Feb 18, 13:18 - Feb 18, 20:35, 2017
COS          0.13     Jun 6, 07:02 - Jun 7, 04:07, 2017
DCT          0.55     Jun 6, 03:35 - Jun 6, 05:32, 2017
IRAC         4.5      Apr 25, 10:15 - Apr 25, 20:05, 2018
NASACam      0.55     Apr 25, 3:51 - Apr 25, 9:19, 2018
NASACam      0.55     Apr 26, 4:58 - Apr 26, 8:21, 2018
COS          0.13     Apr 30, 22:57 - May 1, 18:42, 2018
Perkins      0.65     Apr 29, 03:55 - May 1, 08:10, 2018
Table 3. Optical Spectroscopic Observations of WD 1145+017

Instrument       Wavelength     Resolution    Date (UT)     Exposure Times
Keck/HIRESb      3200-5750Å     40,000        2015 Apr 11   2400s×3
Keck/ESI         4700-9000Å     14,000        2015 Apr 25   1180s×2
Keck/HIRESr      4700-9000Å     40,000        2016 Feb 3    1800s×9
Keck/HIRESb      3200-5750Å     40,000        2016 Mar 3    1259s, 2400s×5, 2000s, 1440s, 2400s×2
Keck/ESI         3900-9300Å     14,000        2016 Mar 28   1300s, 900s×2, 600s×25
VLT/X-SHOOTER    3100-10,000Å   6200/7400^a   2016 Mar 29   280s/314s^a ×29
Keck/HIRESb      3100-5950Å     40,000        2016 Apr 1    1200s×9
Keck/ESI         3900-9300Å     14,000        2016 Nov 18   600s×6
Keck/ESI         3900-9300Å     14,000        2016 Nov 19   600s×11
Keck/HIRESr      4715-9000Å     40,000        2016 Nov 26   1800s×2
Keck/HIRESr      4780-9200Å     40,000        2016 Dec 22   1500s
Keck/ESI         3900-9300Å     14,000        2017 Mar 6    600s×3, 480s×3
Keck/ESI         3900-9300Å     14,000        2017 Mar 7    600s×6
Keck/ESI         3900-9300Å     14,000        2017 Apr 17   500s×8
Keck/HIRESb      3100-5950Å     40,000        2018 Jan 1    900s×5
Keck/HIRESb      3100-5950Å     40,000        2018 Apr 24   1200s×5, 1000s, 1350s×2
Keck/HIRESb      3100-5950Å     40,000        2018 May 18   1200s×5

a The first number is for the UVB arm while the second number is for the VIS arm.
Table 4. Spitzer and Optical Transit Measurements

Epoch     Dip   d_4.5µm/d_opt
2016      A1    0.256 ± 0.032
2017      B2    0.309 ± 0.075
2017      B3    0.239 ± 0.026
2018      A     0.137 ± 0.044
Average         0.235 ± 0.024
yet its infrared fluxes remained unchanged. In addition, the canonical geometrically thin, optically thick disk aligned with the transiting material can not reproduce the strong infrared excess (Xu et al. 2016). Either the dust disk is misaligned with the transiting object or the disk has a significant scale height. Numerical simulations have shown that white dwarf dust disks can have a vertical structure when there is a constant input
Table 5. UV and Optical Transit Measurements

Date                 d_uv/d_opt^a     Phase       D_opt^b           ᾱ^c               β̄^c
2016 Mar 28-29       0.531 ± 0.020    0.29-0.39   0.295 ± 0.038     0.705 ± 0.038     0.373 ± 0.103
                                      0.20-0.40   0.209 ± 0.026     0.791 ± 0.026     0.556 ± 0.071
2017 Feb 18-19       0.625 ± 0.018    0.30-0.40   0.150 ± 0.015     0.850 ± 0.015     0.715 ± 0.038
                                      0.40-0.60   0.188 ± 0.016     0.812 ± 0.016     0.642 ± 0.045
2017 Jun 6-7         0.924 ± 0.068    0.15-0.25   0.021 ± 0.006     0.979 ± 0.006     0.975 ± 0.009
                                      0.10-0.30   0.011 ± 0.003     0.989 ± 0.003     0.987 ± 0.004
2018 Apr 30-May 1    0.746 ± 0.036    0.80-0.90   0.102 ± 0.009     0.898 ± 0.009     0.835 ± 0.020
                                      0.20-0.40   0.107 ± 0.030     0.893 ± 0.030     0.828 ± 0.051

a The average optical-to-UV transit depth ratio for a given epoch.
b D_opt is the average optical transit depth for the given phase range, as defined in Equ. (3).
c ᾱ and β̄ are defined in section 5.
Table 6. WD 1145+017 Flux Comparison

           2016 Mar 29      2016 Apr 01      2016 Feb 3
           COS              HIRESb           HIRESr
           (1300-1420Å)     (3200-3800Å)     (6100-6700Å)
F_WD       100 ± 5%         100 ± 2%         100 ± 2%
F_abs      42.7 ± 5.5%      4.2 ± 2.7%       1.3 ± 2.8%
F_phot     19.0 ± 5.4%      1.5 ± 2.8%       0.7 ± 2.8%
F_cs       23.8 ± 2.3%      2.7 ± 2.7%       0.6 ± 2.8%
The 0.12 µm COS light curve is analyzed in Section 5.
Typically, the spectroscopic and photospheric observations were taken within one night.
Acknowledgements. The authors would like to thank S. Rappaport for useful discussions on different aspects of the manuscript. We also greatly appreciate help from R. Alonso. A large amount of data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. These results also made use of the Discovery Channel Telescope at Lowell Observatory. Lowell is a private, non-profit institution dedicated to astrophysical research and public appreciation of astronomy and operates the DCT in partnership with Boston University, the University of Maryland, the University of Toledo, Northern Arizona University and Yale University. The Large Monolithic Imager was built by Lowell Observatory using funds provided by the National Science Foundation (AST-1005313). A portion of this research was supported by NASA and NSF grants to UCLA. This work is partly supported by JSPS KAKENHI Grant Numbers JP18H01265, 18H05439 and JP16K13791, and JST PRESTO Grant Number JPMJPR1775. Software: Matplotlib (Hunter 2007)
Alonso, R., Rappaport, S., Deeg, H. J., & Palle, E. 2016, A&A, 589, L6
Cauley, P. W., Farihi, J., Redfield, S., et al. 2018, ApJ, 852, L22
Croll, B., Dalba, P. A., Vanderburg, A., et al. 2017, ApJ, 836, 82
Dalba, P. A., & Muirhead, P. S. 2016, ApJL, 826, L7
Dalba, P. A., Muirhead, P. S., Croll, B., & Kempton, E. M.-R. 2017, AJ, 153, 59
Dennihy, E., Clemens, J. C., Dunlap, B. H., et al. 2018, ApJ, 854, 40
Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89
Farihi, J. 2016, NewAR, 71, 9
Farihi, J., van Lieshout, R., Cauley, P. W., et al. 2018, MNRAS, 481, 2601
Gänsicke, B. T., Marsh, T. R., Southworth, J., & Rebassa-Mansergas, A. 2006, Science, 314, 1908
Gänsicke, B. T., Aungwerojwit, A., Marsh, T. R., et al. 2016, ApJL, 818, L7
Gary, B. L., Rappaport, S., Kaye, T. G., Alonso, R., & Hambschs, F.-J. 2017, MNRAS, 465, 3267
Hallakoun, N., Xu, S., Maoz, D., et al. 2017, MNRAS, 469, 3213
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
Izquierdo, P., Rodríguez-Gil, P., Gänsicke, B. T., et al. 2018, MNRAS, 481, 703
Janes, K. A., Clemens, D. P., Hayes-Gehrke, M. N., et al. 2004, in Bulletin of the American Astronomical Society, Vol. 36, American Astronomical Society Meeting Abstracts #204, 672
Jura, M. 2003, ApJL, 584, L91
Jura, M., & Young, E. D. 2014, Annual Review of Earth and Planetary Sciences, 42, 45
Karjalainen, M., de Mooij, E. J. W., Karjalainen, R., & Gibson, N. P. 2019, MNRAS, 482, 999
Kenyon, S. J., & Bromley, B. C. 2017a, ArXiv e-prints, arXiv:1706.08579
Kenyon, S. J., & Bromley, B. C. 2017b, ApJ, 850, 50
Koester, D., Gänsicke, B. T., & Farihi, J. 2014, A&A, 566, A34
Laor, A., & Draine, B. T. 1993, ApJ, 402, 441
Manser, C. J., Gänsicke, B. T., Marsh, T. R., et al. 2016, MNRAS, 455, 4467
Massey, P., Dunham, E. W., Bida, T. A., et al. 2013, in American Astronomical Society Meeting Abstracts, Vol. 221, American Astronomical Society Meeting Abstracts #221, 345.02
Narita, N., Fukui, A., Kusakabe, N., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 045001
Provencal, J. L., Montgomery, M. H., Kanaan, A., et al. 2012, ApJ, 751, 91
Rappaport, S., Barclay, T., DeVore, J., et al. 2014, ApJ, 784, 40
Rappaport, S., Gary, B. L., Kaye, T., et al. 2016, MNRAS, 458, 3904
Rappaport, S., Gary, B. L., Vanderburg, A., et al. 2017, ArXiv e-prints, arXiv:1709.08195
Redfield, S., Farihi, J., Cauley, P. W., et al. 2017, ApJ, 839, 42
Sandhaus, P. H., Debes, J. H., Ely, J., Hines, D. C., & Bourque, M. 2016, ApJ, 823, 49
Sheinis, A. I., Bolte, M., Epps, H. W., et al. 2002, PASP, 114, 851
Swan, A., Farihi, J., & Wilson, T. G. 2019, arXiv e-prints, arXiv:1901.09468
Vanderburg, A., Johnson, J. A., Rappaport, S., et al. 2015, Nature, 526, 546
Veras, D. 2016, Royal Society Open Science, 3, 150571
Veras, D., Carter, P. J., Leinhardt, Z. M., & Gänsicke, B. T. 2017, MNRAS, 465, 1008
Vernet, J., Dekker, H., D'Odorico, S., et al. 2011, A&A, 536, A105
Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. D. L. Crawford & E. R. Craine, Vol. 2198, 362
Xu, S., & Jura, M. 2014, ApJL, 792, L39
Xu, S., Jura, M., Dufour, P., & Zuckerman, B. 2016, ApJL, 816, L22
Xu, S., Rappaport, S., van Lieshout, R., et al. 2018a, MNRAS, 474, 4795
Xu, S., Su, K. Y. L., Rogers, L. K., et al. 2018b, ApJ, 866, 108
Zhou, G., Kedziora-Chudczer, L., Bailey, J., et al. 2016, MNRAS, 463, 4422
Zuckerman, B., Koester, D., Reid, I. N., & Hünsch, M. 2003, ApJ, 596, 477
Zuckerman, B., Melis, C., Klein, B., Koester, D., & Jura, M. 2010, ApJ, 722, 725
| []
|
[
"Time periodic solutions of compressible fluid models of Korteweg type",
"Time periodic solutions of compressible fluid models of Korteweg type"
]
| [
"Zhengzheng Chen ",
"Qinghua Xiao ",
"Huijiang Zhao ",
"\nSchool of Mathematics and Statistics\nSchool of Mathematics and Statistics\nWuhan University\n430072WuhanChina\n",
"\nSchool of Mathematics and Statistics\nWuhan University\n430072WuhanChina\n",
"\nWuhan University\n430072WuhanChina\n"
]
| [
"School of Mathematics and Statistics\nSchool of Mathematics and Statistics\nWuhan University\n430072WuhanChina",
"School of Mathematics and Statistics\nWuhan University\n430072WuhanChina",
"Wuhan University\n430072WuhanChina"
]
| []
| This paper is concerned with the existence, uniqueness and time-asymptotic stability of time periodic solutions to the compressible Navier-Stokes-Korteweg system effected by a time periodic external force in R n . Our analysis is based on a combination of the energy method and the time decay estimates of solutions to the linearized system. | null | [
"https://arxiv.org/pdf/1203.6529v1.pdf"
]
| 119,634,324 | 1203.6529 | 22425c1905026f0e5a67adb7111450f7376fafaf |
Time periodic solutions of compressible fluid models of Korteweg type
29 Mar 2012
Zhengzheng Chen
Qinghua Xiao
Huijiang Zhao
School of Mathematics and Statistics
School of Mathematics and Statistics
Wuhan University
430072WuhanChina
School of Mathematics and Statistics
Wuhan University
430072WuhanChina
Wuhan University
430072WuhanChina
Time periodic solutions of compressible fluid models of Korteweg type
29 Mar 2012Navier-Stokes-Korteweg systemCapillary fluidsTime periodic solutionEnergy estimatesAMS Subject Classifications 2010: 35M10, 35Q35, 35B10
This paper is concerned with the existence, uniqueness and time-asymptotic stability of time periodic solutions to the compressible Navier-Stokes-Korteweg system affected by a time periodic external force in R^n. Our analysis is based on a combination of the energy method and the time decay estimates of solutions to the linearized system.
Introduction
The compressible Navier-Stokes-Korteweg system for the density ρ > 0 and velocity u = (u_1, u_2, · · ·, u_n) ∈ R^n is written as: ρ_t + ∇ · (ρu) = 0, (ρu)_t + ∇ · (ρu ⊗ u) + ∇P(ρ) − µ∆u − (ν + µ)∇(∇ · u) = κρ∇∆ρ + ρf(t, x).
(1.1)
Here, (t, x) ∈ R + × R n , P = P (ρ) is the pressure, µ, ν are the viscosity coefficients, κ is the capillary coefficient, and f (t, x) = (f 1 , f 2 , f 3 )(t, x) is a given external force. System (1.1) can be used to describe the motion of the compressible isothermal fluids with capillarity effect of materials, see the pioneering work by Dunn and Serrin [1], and also [2,3,4].
In this paper, we consider the problem (1.1) for (ρ, u) around a constant state (ρ ∞ , 0) for n ≥ 5, where ρ ∞ is a positive constant. Throughout this paper, we make the following basic assumptions:
(H1): µ, ν and κ are positive constants satisfying ν + (2/n)µ ≥ 0.
(H2): P (ρ) is smooth in a neighborhood of ρ ∞ satisfying P ′ (ρ ∞ ) > 0.
(H3): f is time periodic with period T > 0. The main purpose of this paper is to show that the problem (1.1) admits a time periodic solution around the constant state (ρ ∞ , 0) which has the same period as f . By combining the energy method and the optimal decay estimates of solutions to the linearized system, we prove the existence of a time periodic solution in some suitable function space. Notice that some similar results have been obtained for the compressible Navier-Stokes equations and Boltzmann equation, cf. [8,9,10,11].
Precisely, let N ≥ n + 2 be a positive integer, and define the solution space by
X_M(0, T) = { (ρ, u)(t, x) : ρ(t, x) ∈ C(0, T; H^N(R^n)) ∩ C¹(0, T; H^{N−2}(R^n)), u(t, x) ∈ C(0, T; H^{N−1}(R^n)) ∩ C¹(0, T; H^{N−3}(R^n)), ∇ρ(t, x) ∈ L²(0, T; H^{N+1}(R^n)), ∇u(t, x) ∈ L²(0, T; H^N(R^n)), |||(ρ, u)||| ≤ M }, (1.2)
for some positive constant M, with the norm
|||(ρ, u)|||² = sup_{0≤t≤T} ( ‖ρ(t)‖²_N + ‖u(t)‖²_{N−1} ) + ∫₀ᵀ ( ‖∇ρ(t)‖²_{N+1} + ‖∇u(t)‖²_N ) dt. (1.3)
Then the existence of the time periodic solution can be stated as follows.
(ρ per − ρ ∞ , u per ) ∈ X M 0 (0, T )
Furthermore the periodic solution is unique in the following sense: if there is another time periodic solution (ρ per 1 , u per 1 ) satisfying (1.1) with the same f , and (ρ per 1 − ρ ∞ , u per 1 ) ∈ X M 0 (0, T ), then (ρ per 1 , u per 1 ) = (ρ per , u per ).
To study the stability of the time periodic solution (ρ per , u per ) obtained in Theorem 1.1, we consider the problem (1.1) with the following initial data
(ρ, u)(t, x)| t=0 = (ρ 0 , u 0 )(x) → (ρ ∞ , 0), as |x| → ∞.
(1.5)
Here ρ_0(x) and u_0(x) are a small perturbation of the time periodic solution (ρ per , u per ). We then have the following stability result.
Theorem 1.2. Under the assumptions of Theorem 1.1, let (ρ per , u per ) be the time periodic solution thus obtained. If the initial data (ρ_0, u_0) are such that ‖(ρ_0 − ρ per(0), u_0 − u per(0))‖_{N−1} is sufficiently small, then the Cauchy problem (1.1), (1.5) has a unique classical solution (ρ, u) globally in time, which satisfies
ρ − ρ per ∈ C(0, ∞; H^{N−1}(R^n)) ∩ C¹(0, ∞; H^{N−3}(R^n)), u − u per ∈ C(0, ∞; H^{N−2}(R^n)) ∩ C¹(0, ∞; H^{N−4}(R^n)). (1.6)
Moreover, there exists a constant C 0 > 0 such that
‖(ρ − ρ per)(t)‖²_{N−1} + ‖(u − u per)(t)‖²_{N−2} + ∫₀ᵗ ( ‖∇(ρ − ρ per)(τ)‖²_{N−1} + ‖∇(u − u per)(τ)‖²_{N−2} ) dτ ≤ C_0 ( ‖ρ_0 − ρ per(0)‖²_{N−1} + ‖u_0 − u per(0)‖²_{N−2} ), (1.7) for any t ≥ 0, and ‖(ρ − ρ per, u − u per)‖_{L∞} → 0 as t → ∞.
(1.8)
Now we outline the main ingredients used in the proof of our main results. For the proof of Theorem 1.1, thanks to the time decay estimates of solutions to the linear system (2.7) (see Lemma 2.1 below), we can show that the integral in (4.5) is convergent. Based on this and the elaborate energy estimates given in Section 3, we prove the existence of a time periodic solution by the contraction mapping principle. Here, similarly to the case of the compressible Navier-Stokes equations, Theorem 1.1 is obtained only in the case n ≥ 5 because of the convergence of the integral in (4.5). Thus, how to deal with the case n < 5, especially the physical case n = 3, is still an open problem. Theorem 1.2 is established by the energy method. The key ingredient in the proof of Theorem 1.2, among other things, is to get the a priori estimates, which can be done similarly to the estimates in Section 3.
There have been a lot of studies on the mathematical theory of the compressible Navier-Stokes-Korteweg system. For example, Hattori and Li [12,13] proved the local existence and the global existence of smooth solutions in Sobolev space. Danchin and Desjardins [7] studied the existence of suitably smooth solutions in critical Besov space. Bresch, Desjardins and Lin [5] considered the global existence of weak solution, then Haspot improved their results in [6]. The local existence of strong solutions was proven in [14]. Recently, Wang and Tan [15] established the optimal decay rates of global smooth solutions without external force. Li [16] discussed the global existence and optimal L 2 -decay rate of smooth solutions with potential external force.
The rest of the paper is organized as follows. In Section 2, we will reformulate the problem and give some preliminaries for later use. In Section 3, we give the energy estimates on the linearized system (2.4). The proof of Theorem 1.1 is given in Section 4. In the last section, we will study the stability of the time periodic solution.
Notations: Throughout this paper, for simplicity, we will omit the variables t, x of functions if it does not cause any confusion. C denotes a generic positive constant which may vary in different estimates. ⟨·, ·⟩ is the inner product in L²(R^n). The norm in the usual Sobolev space H^s(R^n) is denoted by ‖·‖_s for s ≥ 0. When s = 0, we will simply write ‖·‖. Moreover, we denote ‖·‖_{H^s} + ‖·‖_{L¹} by ‖·‖_{H^s ∩ L¹}. If g = (g_1, g_2, · · ·, g_n), then ‖g‖ = Σ_{k=1}^n ( ‖g_k‖² )^{1/2}. ∇ = (∂_1, ∂_2, · · ·, ∂_n) with ∂_i = ∂_{x_i}, i = 1, 2, · · ·, n, and for any integer l ≥ 0, ∇^l g denotes all x-derivatives of order l of the function g. Finally, for a multi-index α = (α_1, α_2, · · ·, α_n), it is standard that ∂^α_x = ∂^{α_1}_{x_1} ∂^{α_2}_{x_2} · · · ∂^{α_n}_{x_n}, |α| = Σ_{i=1}^n α_i.
Reformulated system and preliminaries
We reformulate the system (1.1) in this section. Firstly, set
γ = P ′ (ρ ∞ ), κ ′ = ρ ∞ γ κ, µ ′ = µ ρ ∞ , ν ′ = ν + µ ρ ∞ , λ 1 = γ ρ ∞ , λ 2 = ρ ∞ γ ,
and define the new variables σ = ρ − ρ ∞ , v = λ 2 u, then the system (1.1) is reformulated as
σ t + γ∇ · v = G 1 (σ, v), v t − µ ′ ∆v − ν ′ ∇(∇ · v) + γ∇σ − κ ′ ∇∆σ = G 2 (σ, v) + λ 2 f, (2.1) where G 1 (σ, v) = −λ 1 ∇ · (σv), G 2 (σ, v) = − σ ρ ∞ (σ + ρ ∞ ) (µ∆v + ν∇(∇ · v)) − λ 1 (v · ∇)v − λ 2 P ′ (σ + ρ ∞ ) σ + ρ ∞ − P ′ (ρ ∞ ) ρ ∞ ∇σ.
Notice that G 1 and G 2 have the following properties:
G 1 (σ, v) ∼ ∇σ · v + σ∇ · v, G 2 (σ, v) ∼ σ∆v + σ∇(∇ · v) + (v · ∇)v + σ∇σ. (2.2)
Here ∼ means that two side are of same order.
Set U = (σ, v), G = (G 1 , G 2 ), F = (0, λ 2 f ) and A = 0 γdiv γ∇ − κ ′ ∇∆ − µ ′ ∆ − ν ′ ∇div ,
then the system (2.1) takes the form
U t + AU = G(U ) + F. (2.3)
We first consider the linearized system of (2.1):
σ t + γ∇ · v = G 1 (Ũ ), v t − µ ′ ∆v − ν ′ ∇(∇ · v) + γ∇σ − κ ′ ∇∆σ = G 2 (Ũ ) + λ 2 f, (2.4)
for any given functionsŨ = (σ,ṽ) satisfying
σ ∈ H N +2 (R n ),ṽ ∈ H N +1 (R n ).
Notice that the system (2.4) can be written as
U t + AU = G(Ũ ) + F. (2.5)
By the Duhamel's principle, the solution to the system (2.4) can be written in the mild form as
U (t) = S(t, s)U (s) + t s S(t, τ )(G(Ũ ) + F )(τ )dτ, t ≥ s, (2.6)
where S(t, s) is the corresponding linearized solution operator defined by
S(t, s) = e (t−s)A , t ≥ s.
Indeed, the corresponding homogeneous linear system to (2.4) is
σ t + γ∇ · v = 0, v t − µ ′ ∆v − ν ′ ∇(∇ · v) + γ∇σ − κ ′ ∇∆σ = 0, σ| t=s = σ s (x), v| t=s = v s (x).
(2.7)
By repeating the argument in the proof of Theorem 1.3 in [15], we can get the following result for the problem (2.7). The details are omitted here.
Lemma 2.1. Let l ≥ 0 be an integer. Assume that (σ, v) is the solution of the problem (2.7) with the initial data σ_s ∈ H^{l+1} ∩ L¹ and v_s ∈ H^l ∩ L¹. Then
‖σ(t)‖ ≤ C(1 + t)^{−n/4} ( ‖(σ_s, v_s)‖_{L¹} + ‖(σ_s, v_s)‖ ),
‖∇^{k+1}σ(t)‖ ≤ C(1 + t)^{−n/4−(k+1)/2} ( ‖(σ_s, v_s)‖_{L¹} + ‖(∇^{k+1}σ_s, ∇^k v_s)‖ ),
‖∇^k v(t)‖ ≤ C(1 + t)^{−n/4−k/2} ( ‖(σ_s, v_s)‖_{L¹} + ‖(∇^{k+1}σ_s, ∇^k v_s)‖ ),
where k is an integer satisfying 0 ≤ k ≤ l.
Energy estimates
In this section, we will perform some energy estimates on solutions (σ, v) to problem (2.4). Throughout this section, we assume that
f (t, x) ∈ H N −1 (R n ) ∩ L 1 (R n ) for all t ≥ 0.
For later use, we list some standard inequalities as follows (cf. [8]).
‖u‖²_{L∞} ≤ C ‖∇^{m+1}u‖ ‖∇^{m−1}u‖ for n = 2m,   ‖u‖²_{L∞} ≤ C ‖∇^{m+1}u‖ ‖∇^m u‖ for n = 2m + 1.
Lemma 3.2. Let m be the integer defined in Lemma 3.1 and f, g, h ∈ H [ n 2 ]+1 (R n ) , then we have
(i) R n f · g · h dx ≤ ǫ ∇ m−1 f 2 2 + C ǫ g 2 h 2 , (ii) R n f · g · h dx ≤ ǫ f 2 2 + C ǫ ∇ m−1 g 2 2 h 2 ,
for any ǫ > 0. Here and hereafter, C ǫ denotes a positive constant depending only on ǫ.
We first give the energy estimate on the low order derivatives of (σ, v).
Lemma 3.3. Let n ≥ 5, N ≥ n + 2, then there exists two suitably small constants d 0 > 0 and ǫ 0 > 0 such that for 0 < ǫ ≤ ǫ 0 , it holds d dt U (t) 2 + ∇σ(t) 2 + d 0 v, ∇σ (t) + ∇v(t) 2 + ∇σ(t) 2 1 ≤ ǫC ∇ 3 σ(t) 2 m−2 + ∇ 2 v(t) 2 m−1 + C ǫ C Ũ (t) 2 m+1 ∇Ũ (t) 2 1 + f (t) 2 L 1 ∩L 2 , (3.1)
where m is defined in Lemma 3.1 and C depends only on ρ ∞ , µ, ν and κ.
Proof. Multiplying (2.4) 1 and (2.4) 2 by σ and v, respectively, and integrating them over R n , we have from integrating by parts that
1 2 d dt U 2 + µ ′ ∇v 2 + ν ′ ∇ · v 2 = G 1 (Ũ ), σ + G 2 (Ũ ), v + κ ′ ∇∆σ, v + λ 2 f, v = I 0 + I 1 + I 2 + I 3 . (3.2)
From (2.2) and Lemma 3.2, we have
I 0 ≤ ǫ ∇ m−1 σ 2 2 + C ǫ C ∇σ 2 ṽ 2 + σ 2 ∇ṽ 2 ≤ ǫ ∇ m−1 σ 2 2 + C ǫ C Ũ 2 ∇Ũ 2 ,(3.3)
and
I 1 ≤ ǫ ∇ m−1 v 2 2 + C ǫ C Ũ 2 ∇Ũ 2 1 . (3.4)
For I 2 , integrating by parts and using (2.4) 1 , (2.2) and Lemma 3.2, we deduce that
I 2 = −κ ′ ∆σ, ∇ · v = κ ′ γ ∆σ, σ t − G 1 (Ũ ) = − κ ′ 2γ d dt ∇σ 2 − κ ′ γ ∆σ, G 1 (Ũ ) ≤ − κ ′ 2γ d dt ∇σ 2 + ǫ ∇ 2 σ 2 + C ǫ C ∇Ũ 2 ∇ m−1Ũ 2 2 .
(3.5)
For I 3 , Lemma 3.1 gives
I 3 ≤ ǫ ∇ m−1 v 2 2 + C ǫ C f 2 L 1 . (3.6) Since n ≥ 5, N ≥ n + 2, we have m − 1 ≥ 1. Substituting (3.3)-(3.6) into (3.2) yields d dt U 2 + ∇σ 2 + ∇v 2 + ∇ · v 2 ≤ ǫC ∇ m−1 σ 2 2 + ∇ 2 σ 2 + ǫC ∇ 2 v 2 m−1 + C ǫ C Ũ 2 m+1 ∇Ũ 2 1 + f 2 L 1 , (3.7)
provided that ǫ is small enough, where C depends only on ρ ∞ , µ, ν and κ. Next, we estimate ∇σ 2 . Taking the L 2 inner product with ∇σ on both side of (2.4) 2 and then integrating by parts, we have
γ ∇σ 2 + κ ′ ∇ 2 σ 2 = − v t , ∇σ + µ ′ ∆v, ∇σ + ν ′ ∇(∇ · v), ∇σ + G 2 (Ũ ) + λ 2 f, ∇σ = I 4 + I 5 + I 6 + I 7 .
(3.8) Similar to (3.5), the term I 4 can be controlled by
I 4 = − d dt v, ∇σ − ∇ · v, σ t = − d dt v, ∇σ − ∇ · v, −γ∇ · v + G 1 (Ũ ) ≤ − d dt v, ∇σ + 2γ ∇ · v 2 + C ∇ m−1Ũ 2 2 ∇Ũ 2 .
(3.9)
Integrating by parts and using the Cauchy-Schwartz inequality, it is easy to get
I 5 + I 6 ≤ κ ′ 4 ∇ 2 σ 2 + C( ∇v 2 + ∇ · v 2 ). (3.10)
Finally, (2.2) and the Cauchy-Schwartz inequality imply that
I 7 ≤ γ 2 ∇σ 2 + C ∇ m−1Ũ 2 2 ∇Ũ 2 1 + f 2 . (3.11) Combining (3.8)-(3.11), we obtain d dt v, ∇σ + ∇σ 2 + ∇ 2 σ 2 ≤ C( ∇v 2 + ∇ · v 2 ) + C ∇ m−1Ũ 2 2 ∇Ũ 2 1 + f 2 .
(3.12)
where the constant C depends only on ρ ∞ , µ, ν and κ. Multiplying Next, we derive the energy estimate on the high order derivatives of (σ, v). We establish the following lemma.
1 > 0 such that for 0 < ǫ ≤ ǫ 1 , it holds d dt ∇σ(t) 2 N + ∇v(t) 2 N −1 + d 1 N |α|=1 ∂ α x v, ∂ α x ∇σ (t) + ∇ 2 σ(t) 2 N + ∇ 2 v(t) 2 N −1 ≤ ǫC ∇σ(t) 2 + C ǫ C ∇Ũ (t) 2 N −2 ∇Ũ (t) 2 N + f (t) 2 N −1 ,(3.
13)
where C is depending only on ρ ∞ , µ, ν and κ.
Proof. For each multi-index α with 1 ≤ |α| ≤ N , applying ∂ α x to (2.4) 1 and (2.4) 2 and then taking the L 2 inner product with ∂ α x σ and ∂ α x v on the two resultant equations respectively, we have from integrating by parts that
1 2 d dt ∂ α x σ 2 + ∂ α x v 2 + µ ′ ∂ α x ∇v 2 + ν ′ ∂ α x ∇ · v 2 = ∂ α x G 1 (Ũ ), ∂ α x σ + ∂ α x G 2 (Ũ ), ∂ α x v + κ ′ ∂ α x ∇∆σ, ∂ α x v + λ 2 ∂ α x f, ∂ α x v = I 8 + I 9 + I 10 + I 11 .
(3.14)
Now, we estimate I 8 -I 11 term by term. For I 8 , we deduce from (2.2) and the Cauchy-Schwartz inequality that
I 8 ≤ ǫ ∂ α x σ 2 + C ǫ ∂ α x G 1 (Ũ ) 2 ≤ ǫ ∂ α x σ 2 + C ǫ C ∂ α x (∇σ ·ṽ) 2 + ∂ α x (σ∇ ·ṽ) 2 .
(3.15) By Leibniz's formula and Minkowski's inequality, we get
∂ α x (∇σ ·ṽ) 2 ≤ C( (∂ α x ∇σ) ·ṽ 2 + ∇σ · ∂ α xṽ 2 ) + C 0<|β|=|α|−1 C α β ∂ β x ∇σ · ∂ α−β xṽ 2 +C 0<|β|≤|α|−2, |α−β|≤ N 2 C α β ∂ β x ∇σ · ∂ α−β xṽ 2 +C 0<|β|≤|α|−2, |α−β|> N 2 C α β ∂ β x ∇σ · ∂ α−β xṽ 2 = J 0 + J 1 + J 2 + J 3 .
(3.16)
Here C α β denotes the binomial coefficients corresponding to multi-indices. For J 0 , lemma 3.1 gives
J 0 ≤ C ṽ 2 L ∞ ∂ α x ∇σ 2 + ∇σ 2 L ∞ ∂ α xṽ 2 ≤ C ∇ṽ 2 N −5 ∇ 2σ 2 N −1 + ∇ 2σ 2 N −5 ∇ṽ 2 N −1 ,(3.17)
where, in the last inequality of (3.17), we have used the fact that m − 1 ≥ 1 and m + 1 ≤ N − 4 due to N ≥ n + 2 and n ≥ 5. Similarly, it holds that
J 1 ≤ C 0<|β|=|α|−1 ∂ α−β xṽ 2 L ∞ ∂ β x ∇σ 2 ≤ C ∇ 2ṽ 2 N −5 ∇ 2σ 2 N −2 . (3.18)
For the terms J 2 and J 3 , notice that for any β ≤ α with |α − β| ≤ N 2 ,
|α − β| + m + 1 ≤ N 2 + n 2 + 1 ≤ N 2 + N 2 = N,
and for any β ≤ α with |α − β| > N 2 ,
|β| + m + 2 = |α| − |α − β| + m + 2 < N − N 2 + n 2 + 2 ≤ N + 1.
which implies |β| + m + 2 ≤ N since |β| and m are positive integers. Hence, we deduce from Lemma 3.1 that
J 2 ≤ C 0<|β|≤|α|−2, |α−β|≤ N 2 ∂ α−β xṽ 2 L ∞ ∂ β x ∇σ 2 ≤ C ∇ 2ṽ 2 N −2 ∇ 2σ 2 N −3 ,(3.19)
and
J 3 ≤ C 0<|β|≤|α|−2, |α−β|> N 2 ∂ β x ∇σ 2 L ∞ ∂ α−β xṽ 2 ≤ C ∇ 2ṽ 2 N −3 ∇ 2σ 2 N −2 .∂ α x (∇σ ·ṽ) 2 ≤ C ∇ṽ 2 N −5 ∇ 2σ 2 N −1 + ∇ 2Ũ 2 N −3 ∇Ũ 2 N −1 . (3.21) Similarly, it holds ∂ α x (σ∇ ·ṽ) 2 ≤ C ∇σ 2 N −5 ∇ 2ṽ 2 N −1 + ∇ 2Ũ 2 N −3 ∇Ũ 2 N −1 .I 8 ≤ ǫ ∂ α x σ 2 + C ǫ C ∇Ũ 2 N −2 ∇Ũ 2 N . (3.23)
For the term I 9 , let α 0 ≤ α with |α 0 | = 1, then
I 9 = − ∂ α−α 0 x G 2 , ∂ α+α 0 x v ≤ ǫ ∂ α+α 0 x v 2 + C ǫ ∂ α−α 0 x G 2 2 . (3.24)
Similar to the estimate of (3.21), we have
∂ α−α 0 x G 2 2 ≤ C Ũ 2 N −1 ∇Ũ 2 N . (3.25)
Thus, it follows from (3.24) and (3.25) that
I 9 ≤ ǫ ∂ α+α 0 x v 2 + C Ũ 2 N −1 ∇Ũ 2 N . (3.26)
Notice that (3.21) and (3.22) imply
∂ α x G 1 2 ≤ C ∂ α x (∇σ ·ṽ) 2 + ∂ α x (σ∇ ·ṽ) 2 ≤ C ∇Ũ 2 N −2 ∇Ũ 2 N . (3.27)
Therefore, we derive from (2.4) 1 , (3.27) and the Cauchy-Schwartz inequality that
I 10 = − κ ′ γ ∂ α x ∆σ, −∂ α x σ t + ∂ α x G 1 (Ũ ) = − κ ′ γ ∂ α x ∇σ, ∂ α x ∇σ t − κ ′ γ ∂ α x ∆σ, ∂ α x G 1 (Ũ ) ≤ − κ ′ 2γ d dt ∂ α x ∇σ 2 + ǫ ∂ α x ∆σ 2 + C ǫ C ∇Ũ 2 N −2 ∇Ũ 2 N .
(3.28)
Moreover, it holds that
I 11 = −λ 2 ∂ α+α 0 x v, ∂ α−α 0 x f ≤ ǫ ∂ α+α 0 x v 2 + C ǫ C f 2 N −1 . (3.29)
where α 0 is defined in (3.24). Combining (3.14), (3.23), (3.26), (3.28) and (3.29
), if ǫ is small enough, we have d dt ∂ α x σ 2 1 + ∂ α x v 2 + ∂ α x ∇v 2 + ∂ α x ∇ · v 2 ≤ ǫC ∂ α x σ 2 + ǫC ∂ α x ∆σ 2 + C ǫ C ∇Ũ 2 N −2 ∇Ũ 2 N + f 2 N −1 ,(3.30)
where C depends only on ρ ∞ , µ, ν and κ. Now we turn to estimate ∂ α x ∆σ 2 for 1 ≤ |α| ≤ N . As we did for the first order derivative estimate, applying ∂ α x to (2.4) 2 and then taking the L 2 inner product with ∂ α x ∇σ on the resultant equation, we get from integrating by parts that
κ ′ ∂ α x ∆σ 2 + γ ∂ α x ∇σ 2 = − ∂ α x v t , ∂ α x ∇σ + µ ′ ∂ α x ∆v, ∂ α x ∇σ + ν ′ ∂ α x ∇(∇ · v), ∂ α x ∇σ + ∂ α x G 2 (Ũ ), ∂ α x ∇σ + λ 2 ∂ α x f, ∂ α x ∇σ = I 12 + I 13 + I 14 + I 15 + I 16 .
(3.31)
The first term I 12 is controlled by
I 12 = − d dt ∂ α x v, ∂ α x ∇σ + ∂ α x v, ∂ α x ∇σ t = − d dt ∂ α x v, ∂ α x ∇σ − ∂ α x ∇ · v, ∂ α x (−γ∇ · v + G 1 (Ũ )) ≤ − d dt ∂ α x v, ∂ α x ∇σ + 2γ ∂ α x ∇ · v 2 + C ∇Ũ 2 N −2 ∇Ũ 2 N .
(3.32)
Here, in the last inequality of (3.32), we have used (3.27). By integrating by parts, the Cauchy-Schwartz inequality and (3.25), the other terms I 13 -I 15 can be estimated as follows.
I 13 + I 14 ≤ κ ′ 4 ∂ α x ∇ 2 σ 2 + C ∂ α x ∇v 2 + ∂ α x ∇ · v 2 ,(3.
33)
I 15 ≤ κ ′ 4 ∂ α+α 0 x ∇σ 2 + C Ũ 2 N −1 ∇Ũ 2 N , (3.34) I 16 ≤ κ ′ 4 ∂ α+α 0 x ∇σ 2 + C f 2 N −1 . (3.35)
where α 0 is given in (3.24). Combining (3.31)-(3.35), we obtain
d dt ∂ α x v, ∂ α x ∇σ + κ ′ ∂ α x ∆σ 2 + γ ∂ α x ∇σ 2 ≤ C ∂ α x ∇v 2 + ∂ α x ∇ · v 2 + C Ũ 2 N −1 ∇Ũ 2 N + f 2 N −1 .d dt ∂ α x σ 2 1 + ∂ α x v 2 + d 1 ∂ α x v, ∂ α x ∇σ + ∂ α x ∇σ 2 1 + ∂ α x ∇v 2 ≤ ǫC ∂ α x σ 2 + CC ǫ ∇Ũ 2 N −2 ∇Ũ 2 N + f 2 N −1 ,d 1 > 0 such that d dt σ(t) 2 N +1 + v(t) 2 N + d 0 v, ∇σ (t) + d 1 N |α|=1 ∂ α x v, ∂ α x ∇σ (t) + ∇σ(t) 2 N +1 + ∇v(t) 2 N ≤ C Ũ (t) 2 N −1 ∇Ũ (t) 2 N + f (t) 2 H N−1 ∩L 1 , (3.38)
where C depends only on ρ ∞ , µ, ν and κ.
Proof. Notice that, from the fact that m − 1 ≥ 1 and m + 1 ≤ N − 4, we have
∇ 3 σ 2 m−2 + ∇ 2 v 2 m−1 ≤ C ∇ 2Ũ 2 N −6 ,
and Ũ m+1 ≤ Ũ 2 N −4 . Adding (3.37) to (3.1), we obtain (3.38) immediately by the smallness of ǫ. This completes the proof of Corollary 3.1.
Existence of time periodic solution
In this section, we will combine the linearized decay estimate Lemma 2.1 with the energy estimates Corollary 3.1 to show the existence of time periodic solution to (1.1). Now, we are ready to prove Theorem 1.1 as follows. Proof of Theorem 1.1. The proof is divided into two steps.
Step 1. Suppose that there exists a time periodic solution U per (t) := (σ per (x, t), v per (x, t)), t ∈ R of the system (2.1) with period T , and U per (t) ∈ X M 0 (0, T ) for some constant M 0 > 0. Then it solves (2.3) with initial date U s = U per (s) for any given time s ∈ R. Choosing s = −kT for k ∈ N. Clearly, U per (−kT ) = U per (0), thus (2.3) can be written in the mild form as
U per (t) = S(t, −kT )U per (0) + t −kT S(t, τ )(G(U per )(τ ) + F (τ ))dτ. (4.1)
Denote S(t, −kT )U per (0) := (σ per 1 (t), v per 1 (t)). Applying Lemma 2.1 to S(t, −kT )U per (0), we have
σ per 1 (t) N ≤ (1 + t + kT ) − n 4 (σ per 0 , v per 0 ) L 1 + σ per 0 2 N + v per 0 2 N −1 −→ 0 as k → ∞. (4.2)
and v per
1 (t) N −1 ≤ (1 + t + kT ) − n 4 (σ per 0 , v per 0 ) L 1 + σ per 0 2 N + v per 0 2 N −1 −→ 0 as k → ∞. (4.3) Since L 2 ∩L 1 is dense in L 2 , (4.2) and (4.3) still hold for U per (0) = (σ per 0 , v per 0 ) ∈ H N (R n )×H N −1 (R n )
. On the other hand, denote S(t, τ )(G(U per )(τ ) + F (τ )) := (S 1 (t, τ ), S 2 (t, τ )).
By using Lemma 2.1 again, we get
S 1 (t, τ ) N ≤ (1 + t − τ ) − n 4 K 0 , S 2 (t, τ ) N −1 ≤ (1 + t − τ ) − n 4 K 0 ,(4.4)
where
K 0 = (G 1 (U per ), G 2 (U per ) + λ 2 f )(τ ) L 1 + G 1 (U per )(τ ) N + (G 2 (U per ) + λ 2 f )(τ ) N −1 .
Then (4.4) guarantees the convergence of the integral in (4.1) since n 4 > 1 when n ≥ 5. Thus, letting k → ∞ in (4.1), we obtain
U per (t) = t −∞ S(t, τ )(G(U per ) + F )(τ )dτ.
(4.5)
For any U = (σ, v) ∈ X M 0 (0, T ), define Ψ[U ](t) = t −∞ S(t, τ )(G(U ) + F )(τ )dτ.
Then (4.5) shows that U per is a fixed point of Ψ[U ]. Conversely, suppose that Ψ has a unique fixed point, denoted by U 1 (t) = (σ 1 , v 1 )(t). We show that U 1 (t) is time periodic with period T . To this end, setting U 2 (t) = U 1 (t + T ). Since the period of f is T , the period of F is T too. Thus, we have
U 2 (t) = U 1 (t + T ) = Ψ[U 1 ](t + T ) = t+T −∞ S(t + T, τ )(G(U 1 )(τ ) + F (τ ))dτ = t −∞ S(t + T, s + T ) (G(U 1 )(s + T ) + F (s + T )) ds = t −∞ S(t, s) (G(U 2 )(s) + F (s)) ds = Ψ[U 2 ](t) (4.6)
where we have used S(t + T, s + T ) = S(t, s).
Then by uniqueness, U 2 = U 1 , which proves the periodicity of U 1 (t). Since U 1 (t) is differentiable with respect to t, it is the desired periodic solution of the system (2.1).
Step 2. Now, it remains to show that if (H1)-(H3) hold, and
sup 0≤t≤T f (t) H N−1 ∩L 1
is sufficiently small, then Ψ has a unique fixed point in the space X M 0 (0, T ) for some appropriate constant M 0 > 0. The proof is divided into two parts.
(i) Assume thatŨ = (σ,ṽ) in the system (2.4) is time periodic with period T . Denote U = Ψ[Ũ ] with U = (σ, v). Then by the same argument as (4.6), one can show that U is also time periodic with period T . Notice that U satisfies the system (2.4). Thus, for n ≥ 5 and N ≥ n + 2, Corollary 3.1 holds. Integrating (3.38)
in t over [0, T ] to get T 0 ∇σ(t) 2 N +1 + ∇v(t) 2 N dt ≤ C T 0 Ũ (t) 2 N −1 ∇Ũ (t) 2 N + f (t) 2 N −1 + f (t) 2 L 1 dt ≤ C sup 0≤t≤T Ũ (t) 2 N −1 T 0 ∇Ũ (t) 2 N dt + T 0 f (t) 2 H N−1 ∩L 1 dt ≤ C|||Ũ (t)||| 4 + CT sup 0≤t≤T f (t) 2 H N−1 ∩L 1 . (4.7)
On the other hand, by Lemma 2.1, we have
σ(t) N ≤ t −∞ (1 + t − τ ) − n 4 K 1 dτ, v(t) N −1 ≤ t −∞ (1 + t − τ ) − n 4 K 1 dτ,(4.8)
where
K 1 = (G 1 (Ũ ), G 2 (Ũ ) + λ 2 f )(τ ) L 1 + G 1 (Ũ )(τ ) N + (G 2 (Ũ ) + λ 2 f )(τ ) N −1 . (4.9)
From (2.2), (3.25) and (3.27), we easily deduce that
(G 1 (Ũ )(τ ) L 1 ≤ C ∇Ũ (τ ) Ũ (τ ) , (G 1 (Ũ )(τ ) N ≤ C ∇Ũ (τ ) N −2 ∇Ũ (τ ) N ,G 2 (Ũ ) + λ 2 f )(τ ) L 1 ≤ C ∇Ũ (τ ) 1 Ũ (τ ) + C f (τ ) L 1 , G 2 (Ũ ) + λ 2 f )(τ ) N −1 ≤ C Ũ (τ ) N −1 ∇Ũ (τ ) N + C f (τ ) N −1 .
(4.10)
Combining (4.8)-(4.10), we obtain
σ(t) N ≤ C t −∞ (1 + t − τ ) − n 4 Ũ (τ ) N −1 ∇Ũ (τ ) N + f (τ ) H N−1 ∩L 1 dτ ≤ C ∞ j=0 A j + C t −∞ (1 + t − τ ) − n 4 f (τ ) H N−1 ∩L 1 dτ ≤ C ∞ j=0 A j + C sup 0≤t≤T f (t) H N−1 ∩L 1 , (4.11) where A j = C t−jT t−(j+1)T (1 + t − τ ) − n 4 Ũ (τ ) N −1 ∇Ũ (τ ) N dτ ≤ C t−jT t−(j+1)T (1 + t − τ ) − n 2 dτ 1 2 t−jT t−(j+1)T Ũ (τ ) 2 N −1 ∇Ũ (τ ) 2 N dτ 1 2 ≤ C(1 + jT ) − n 4 sup 0≤τ ≤T Ũ (τ ) N −1 T 0 ∇Ũ (τ ) 2 N dτ 1 2 ≤ C(1 + jT ) − n 4 |||Ũ ||| 2 (4.12)
Since n 4 > 1 when n ≥ 5, substituting (4.12) into (4.11) gives
σ(t) N ≤ C|||Ũ ||| 2 + C sup 0≤t≤T f (t) H N−1 ∩L 1 . (4.13) Similarly, it holds that v(t) N −1 ≤ C|||Ũ ||| 2 + C sup 0≤t≤T f (t) H N−1 ∩L 1 .
(4.14)
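As a brief side remark (not in the original paper), the convergence of the series of the A_j used in (4.11)-(4.12) can be made explicit by comparison with an integral; this is exactly where the restriction n ≥ 5, i.e. n/4 > 1, enters:
\sum_{j=0}^{\infty}(1+jT)^{-\frac{n}{4}} \;\le\; 1+\int_{0}^{\infty}(1+sT)^{-\frac{n}{4}}\,ds \;=\; 1+\frac{4}{T\,(n-4)}\;<\;\infty .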
Thus, we deduce from (4.7), (4.13) and (4.14) that
|||Ψ[Ũ ]||| ≤ C 1 |||Ũ ||| 2 + C 2 sup 0≤t≤T f (t) H N−1 ∩L 1 ,(4.15)
where C 1 and C 2 are some positive constants depending only on ρ ∞ , µ, ν, κ and T .
(ii) LetŨ 1 = (σ 1 ,ṽ 1 ) andŨ 2 = (σ 2 ,ṽ 2 ) be time periodic functions with period T in the space X M 0 (0, T ), where M 0 > 0 will be determined below. Then similar to (i), we can get
|||Ψ[Ũ 1 ] − Ψ[Ũ 2 ]||| ≤ C 3 |||Ũ 1 ||| + |||Ũ 2 ||| |||Ũ 1 −Ũ 2 |||,(4.16)
where C 3 is a positive constant depending only on ρ ∞ , µ, ν, κ and T . Choose M 0 > 0 and a sufficiently small constant δ > 0 such that
C 1 M 2 0 + C 2 δ ≤ M 0 , and 2C 3 M 0 < 1 (4.17) That is, 1 − √ 1 − 4C 1 C 2 δ 2C 1 ≤ M 0 ≤ min 1 + √ 1 − 4C 1 C 2 δ 2C 1 , 1 2C 3 , 1 (4.18) Notice that 1 − √ 1 − 4C 1 C 2 δ 2C 1 −→ 0 as δ −→ 0.
Then there exists a constant δ 0 > 0 depending only on ρ ∞ , µ, ν, κ and T such that if 0 < δ ≤ δ 0 , the set of M 0 that satisfying (4.18) is not empty. For 0 < δ ≤ δ 0 , when M 0 satisfies (4.18), Ψ is a contraction map in the complete space X M 0 (0, T ), thus Ψ has a unique fixed point in X M 0 (0, T ). This completes the proof of Theorem 1.1.
Stability of time periodic solution
This section is devoted to proving Theorem 1.2 on the stability of the obtained time periodic solution. We shall establish the global existence of smooth solutions to the Cauchy problem (1.1), (1.5). First, let (ρ per , u per ) be the time periodic solution obtained in Theorem 1.1 and (ρ, u) be the solution of the Cauchy problem (1.1), (1.5). Denote
(σ per , v per ) = (ρ per − ρ ∞ , λ 2 u per ), (σ, v) = (ρ − ρ ∞ , λ 2 u). Let (σ,v) = (σ − σ per , v − v per ), then (σ,v) satisfies σ t + γ∇ ·v = G 1 (σ + σ per ,v + v per ) − G 1 (σ per , v per ), v t − µ ′ ∆v − ν ′ ∇(∇ ·v) + γ∇σ − κ ′ ∇∆σ = G 2 (σ + σ per ,v + v per ) − G 2 (σ per , v per ),(5.1)
with the initial datē
σ| t=0 =σ 0 (x) = ρ 0 (x) − ρ per (0),v| t=0 =v 0 (x) = λ 2 (u 0 (x) − u per (0)). (5.2)
Define the solution space byX(0, ∞), where for 0 ≤ t 1 ≤ t 2 ≤ ∞,
X(t 1 , t 2 ) = (σ,v)(t, x) σ(t, x) ∈ C(t 1 , t 2 ; H N −1 (R n )) ∩ C 1 (t 1 , t 2 ; H N −3 (R n )), v(t, x) ∈ C(t 1 , t 2 ; H N −2 (R n )) ∩ C 1 (t 1 , t 2 ; H N −4 (R n )), ∇σ(t, x) ∈ L 2 (t 1 , t 2 ; H N −1 (R n )), ∇v(t, x) ∈ L 2 (t 1 , t 2 ; H N −2 (R n )), (5.3)
with the norm (σ,v)(t) ∦ 2 := sup
t 1 ≤t≤t 2 σ(t) 2 N −1 + v(t) 2 N −2 + t 2 t 1 ∇σ(t) 2 N −1 + ∇v(t) 2 N −2 dt. (5.4)
Notice that (σ per , v per ) ∈X(0, T ). By using the dual argument and iteration technique as [12], one can prove the following local existence of the Cauchy problem (5.1), (5.2). We omit the proof here for brevity. where C 4 is a positive constant independent of (σ 0 ,v 0 ) ∦.
As usual, the global existence will be obtained by a combination of the local existence result Lemma 5.1 and the a priori estimate below. (σ,v)(t) ∦≤ δ, (5.5) it holds that
σ(t) 2 N −1 + v(t) 2 N −2 + t 0 ∇σ(τ ) 2 N −1 + ∇v(τ ) 2 N −2 dτ ≤ C 5 σ 0 2 N −1 + v 0 2 N −2 (5.6)
for all t ∈ [0, T 1 ].
Proof. Noticing that some smallness conditions can be imposed on (σ per , v per ), without loss of generality, we may assume |||(σ per , v per )||| ≤ ǫ with ǫ > 0 being sufficiently small. Then by the similar argument as in the proof of Lemmas 3.3-3.4, we can obtain
d dt Ū 2 + ∇σ 2 + d 2 v, ∇σ + ∇v 2 + ∇σ 2 1 ≤ ǫC ∇ 3 σ 2 N −7 + ∇ 2v 2 N −6 ,(5.7)
and d dt
∇σ 2 N −2 + ∇v 2 N −3 + d 3 N −2 |α|=1 ∂ α xv , ∂ α x ∇σ + ∇ 2σ 2 N −2 + ∇ 2v 2 N −3 ≤ ǫC ∇σ 2 + ∇v 2 ,(5.8)
where d 2 > 0 and d 3 > 0 are some suitably small constants, and C is a constant depending only on ρ ∞ , µ, ν and κ. Adding (5.8) to (5.7), it holds
d dt σ 2 N −1 + v 2 N −2 + d 2 v, ∇σ + d 3 N −2 |α|=1 ∂ α xv , ∂ α x ∇σ + ∇σ 2 N −1 + ∇v 2 N −2 ≤ 0,(5.9)
provided that ǫ is sufficiently small. Integrating (5.9) in t over (0, t), one can immediately get (5.6) since
σ 2 N −1 + v 2 N −2 + d 2 v, ∇σ + d 3 N −2 |α|=1 ∂ α xv , ∂ α x ∇σ ∼ σ 2 N −1 + ∇v 2 N −2 .
by the smallness of d 2 and d 3 . This completes the proof of Lemma 5.2. Proof of Theorem 1.2. By Lemmas 5.1-5.2 and the continuity argument, the Cauchy problem (5.1), (5.2) admits a unique solution (σ,v) globally in time, which satisfies (1.6) and (1.7). Then all the statements in Theorem 1.2 follow immediately. This completes the proof of Theorem 1.2.
Theorem 1. 1 .
1Let n ≥ 5, N ≥ n + 2. Assume the assumptions (H1)-(H3) hold, and f (t, x) ∈ C(0, T ; H N −1 (R n ) ∩ L 1 (R n )). Then there exists a small constant δ 0 > 0 and a constant M 0 > 0 which are dependent on ρ ∞ , such that if sup 0≤t≤T f (t) H N−1 ∩L 1 ≤ δ 0 , (1.4) then the problem (1.1) admits a time periodic solution (ρ per , u per ) with period T , satisfying
Lemma 3. 1 .
1Let m be a positive integer and u ∈ H [ n 2 ]+1 (R n ), then
( 3 .
312) with a small constant d 0 > 0 and then adding the resultant equation to (3.7), one can get (3.1) immediately by the smallness of d 0 and ǫ. This completes the proof of Lemma 3.3.
Lemma 3. 4 .
4Let n ≥ 5, N ≥ n + 2, then there exists two suitably small constants d 1 > 0 and ǫ
with a suitably small constant d 1 > 0 and then adding the resultant equation to (3.30) gives
d 1 and ǫ are small enough, where C depends only on ρ ∞ , µ, ν and κ. Summing up α with 1 ≤ |α| ≤ N in (3.37), then (3.13) follows immediately by the smallness of ǫ. This completes the proof of Lemma 3.4.As a consequence of Lemmas 3.3-3.4, we have the following Corollary.
Corollary 3. 1 .
1Let n ≥ 5, N ≥ n + 2, then there exists two suitably small constants d 0 > 0 and
Lemma 5.1. (Local existence) Under the assumptions of Theorem 1.1, suppose that(σ 0 ,v 0 ) ∈ H N −1 (R n ) × H N −2 (R n ) and inf ρ 0 (x) > 0.Then there exists a positive constant T 0 depending only on (σ 0 ,v 0 ) ∦ such that the Cauchy problem (5.1), (5.2) admits a unique classical solution(σ,v) ∈X(0, T 0 ) which satisfies (σ,v)(t) ∦≤ C 4 (σ 0 ,v 0 ) ∦,
Lemma 5.2. (A priori estimate) Under the assumptions of Lemma 5.1, suppose that the Cauchy problem (5.1), (5.2) has a unique classical solution (σ,v) ∈X(0, T 1 ) for some positive constant T 1 . Then there exists two small constants δ > 0 and C 5 > 0 which are independent of T 1 such that if sup 0≤t≤T 1
[1] J. E. Dunn, J. Serrin, On the thermomechanics of interstitial working, Arch. Rational Mech. Anal., 88 (1985), 95-133.
[2] D. M. Anderson, G. B. McFadden, G. B. Wheeler, Diffuse-interface methods in fluid mechanics, Ann. Rev. Fluid Mech., 30 (1998), 139-165.
[3] J. W. Cahn, J. E. Hilliard, Free energy of a nonuniform system, I. Interfacial free energy, J. Chem. Phys., 28 (1958), 258-267.
[4] M. E. Gurtin, D. Polignone, J. Vinals, Two-phase binary fluids and immiscible fluids described by an order parameter, Math. Models Methods Appl. Sci., 6 (6) (1996), 815-831.
[5] D. Bresch, B. Desjardins, C. K. Lin, On some compressible fluid models: Korteweg, lubrication and shallow water systems, Comm. Partial Differential Equations, 28 (2003), 843-868.
[6] B. Haspot, Existence of global weak solution for compressible fluid models of Korteweg type, J. Math. Fluid Mech., 13 (2011), 223-249.
[7] R. Danchin, B. Desjardins, Existence of solutions for compressible fluid models of Korteweg type, Ann. Inst. Henri Poincaré Anal. Non Linéaire, 18 (2001), 97-133.
[8] H. F. Ma, S. Ukai, T. Yang, Time periodic solutions of compressible Navier-Stokes equations, J. Differential Equations, 248 (2010), 2275-2293.
[9] S. Ukai, Time periodic solutions of Boltzmann equation, Discrete Contin. Dynam. Systems, 14 (2006), 579-596.
[10] S. Ukai, T. Yang, The Boltzmann equation in the space L² ∩ L^∞_β: global and time periodic solutions, Analysis and Applications, 4 (3) (2006), 263-310.
[11] R. J. Duan, S. Ukai, T. Yang, H. J. Zhao, Optimal decay estimates on the linearized Boltzmann equations with time dependent force and their applications, Comm. Math. Phys., 277 (1) (2008), 189-236.
[12] H. Hattori, D. Li, Solutions for two dimensional system for materials of Korteweg type, SIAM J. Math. Anal., 25 (1994), 85-98.
[13] H. Hattori, D. Li, Global solutions of a high dimensional system for Korteweg materials, J. Math. Anal. Appl., 198 (1996), 84-97.
[14] M. Kotschote, Strong solutions for a compressible fluid model of Korteweg type, Ann. Inst. Henri Poincaré Anal. Non Linéaire, 25 (4) (2008), 679-696.
[15] Y. J. Wang, Z. Tan, Optimal decay rates for the compressible fluid models of Korteweg type, J. Math. Anal. Appl., 379 (2011), 256-271.
[16] Y. P. Li, Global existence and optimal decay rate of the compressible Navier-Stokes-Korteweg equations with external force, J. Math. Anal. Appl., 388 (2012), 1218-1232.
| []
|
[
"SOME APPLICATIONS OF LINEAR ALGEBRA AND GEOMETRY IN REAL LIFE",
"SOME APPLICATIONS OF LINEAR ALGEBRA AND GEOMETRY IN REAL LIFE"
]
| [
"Vittoria Bonanzinga [email protected] \nMediterranean University of Reggio Calabria\nItaly\n"
]
| [
"Mediterranean University of Reggio Calabria\nItaly"
]
| []
| In this paper, some real-world motivated examples are provided illustrating the power of linear algebra tools as the product of matrices, determinants, eigenvalues and eigenvectors. In this sense, some practical applications related to computer graphics, geometry, areas, volumes are presented, along with some problems connected to sports and investments. | null | [
"https://arxiv.org/pdf/2202.10833v1.pdf"
]
| 247,025,565 | 2202.10833 | 4592abe8f1b427d61d597f3f1576c41f29f242d6 |
SOME APPLICATIONS OF LINEAR ALGEBRA AND GEOMETRY IN REAL LIFE
Vittoria Bonanzinga [email protected]
Mediterranean University of Reggio Calabria
Italy
SOME APPLICATIONS OF LINEAR ALGEBRA AND GEOMETRY IN REAL LIFE
linear algebramatrixdeterminantanalytic geometrycomputer graphicsGeoGebraeigenvalueeigenvector Mathematics Subject Classification 2020: Primary 97H6097M50Secondary: 15A0315A1515A18
In this paper, some real-world motivated examples are provided illustrating the power of linear algebra tools as the product of matrices, determinants, eigenvalues and eigenvectors. In this sense, some practical applications related to computer graphics, geometry, areas, volumes are presented, along with some problems connected to sports and investments.
The practical applications of mathematics, before being introduced in the Geometry courses for Engineering, were successfully tested by the author in the course of Mathematics Fundamentals for the basic training of the Master's Degree Course in Primary Education in the first semester of the academic year 2019/2020, as reported in [2]. To stimulate and support students in the learning process, in order to reduce the dropout rate and ensure the achievement of educational success, good didactic planning is essential, in which motivation plays a key role. Indeed, students often encounter difficulties in learning mathematics because it is decontextualized, abstract and incomprehensible. The contextualization of real problems that can be solved with the use of linear algebra was a basic element to increase interest and motivation. The educational model presented was inspired by some results obtained within the European Rules-Math Project and was shared by the DELTA (Digital Education for Learning and Teaching Advances) research group of the University of Turin.
APPLICATION IN REAL LIFE OF SOME PROBLEMS OF LINEAR ALGEBRA
Among the innovative elements of the didactic methodologies used there is the contextualization in real life of linear algebra and geometry problems, [9], [10]. Some examples are presented.
Application of matrices in sport
Three friends, Steven, Marc and George decide to train to participate in a triathlon competition (running, swimming and cycling). Every day Steven trains 30 minutes for running, 20 minutes for swimming and 100 minutes for cycling, while Marc trains 25 minutes for running, 30 minutes for swimming and 60 minutes for cycling, and George trains 20 minutes for running, 45 minutes for swimming and 55 minutes for cycling.
i. Write matrix A, which has the training minutes for each sport of Steven, Marc and George as rows. ii. Considering the matrix shown below = 10.1 9.2 12.2 7.2 6.5 8.7 5.3 4.6 6.4 which represents the calories burned by Steven of 70 kg, Marc of 65 kg, and George of 85 kg in relation to their weight for each minute of training; in the first row there are the calories burned for each minute of training in running, in the second row the calories burned for swimming and in the third for cycling. The first column refers to Steven, the second to Marc and the third to George. Calculate the product rows by columns • . What do the elements of the main diagonal of the resulting matrix correspond to?
iii. Which of the three friends, based on his training, consumes the most calories in a day?
iv. What does the element of the second row and third column of the matrix obtained at point ii) correspond to?
Solution
i. The rows of A contain the training minutes of Steven, Marc and George, so A = [ 30  20  100 ; 25  30  60 ; 20  45  55 ].
ii. The elements on the main diagonal of the matrix C = A · M correspond to the total calories burned by each boy: c_11 = 977 is Steven's total calories burned, c_22 = 701 is Marc's total calories burned, c_33 = 987.5 is George's total calories burned.
iii. George burns the most calories in one day.
iv. The element of the second row and third column corresponds to the total calories that George would burn if he trained for Marc's minutes, since the element c_23 is obtained by multiplying the second row of matrix A, containing Marc's training minutes, by the third column of the matrix M, containing the calories burned by George, in relation to his weight, for each minute of training.
Determinants, areas and volumes
We give a geometric interpretation of the determinant of a matrix of order 2 and of the determinant of a matrix of order 3 and we present some applications to compute areas and volumes using determinants. The absolute value of the determinant of a matrix of order 2 is equal to the area of the parallelogram spanned by the row vectors of the matrix, and the absolute value of the determinant of a matrix of order 3 is simply equal to the volume of the parallelepiped spanned by the row vectors of that matrix, [1], [8]. In dimension 2, we have the following Theorem 1 If A is a matrix of order 2, its rows determine a parallelogram P in R 2 . The area of the parallelogram P is the absolute value of the determinant of the matrix whose rows are the vectors forming two adjacent sides of the parallelogram:
Area P = |det A|; writing A = [ a  b ; c  d ], this is |ad − bc|.
For instance, the area of the parallelogram formed by the two-dimensional vectors (1,5) and (6,2) is
Area P = |det [ 1  5 ; 6  2 ]| = |2 − 30| = 28.
In general, we consider a parallelogram P, we choose any three vertices A(x_A, y_A), B(x_B, y_B) and C(x_C, y_C) of the parallelogram, and the two adjacent vectors CA(x_A − x_C, y_A − y_C) and CB(x_B − x_C, y_B − y_C); then
Area P = |det [ x_A − x_C   y_A − y_C ; x_B − x_C   y_B − y_C ]| = |(x_A − x_C) · (y_B − y_C) − (y_A − y_C) · (x_B − x_C)|.
Area P can also be written in this elegant way using the coordinates of any three vertices of a parallelogram:
Area P = |det [ x_A  y_A  1 ; x_B  y_B  1 ; x_C  y_C  1 ]|.
For instance, if A=(1,5), C=(7,7) and B=(6,2) are three vertices of a parallelogram P then
Area P = |det [ 1 − 7   5 − 7 ; 6 − 7   2 − 7 ]| = |det [ −6  −2 ; −1  −5 ]| = |30 − 2| = 28.
If A′ = [ 4  6 ; 6  2 ], then the row vectors of the matrix A′ generate a parallelogram P′, and
Area P′ = |det [ 4  6 ; 6  2 ]| = |8 − 36| = 28.
If we compare the first example, where the row vectors are R_1(1,5) and R_2(6,2), with the second example, where the row vectors are R′_1(4,6) and R′_2(6,2), we can observe that
R_1 − R′_1 = (−3, −1)
and the corresponding determinant of the matrix [ −3  −1 ; 6  2 ] is zero. This last result is a consequence of a general property of the determinant of a matrix A = [ R_1 ; R_2 ]:
det [ R_1 + hR_2 ; R_2 ] = det [ R_1 ; R_2 ] + det [ hR_2 ; R_2 ] = det [ R_1 ; R_2 ]   for all h ∈ ℝ.
From the geometric point of view, the previous result means that the area of the parallelogram does not change if we change the row vectors using elementary transformations. In particular, if we fix the base of the parallelogram and we change a row vector using elementary transformations, the height of the parallelogram does not change and so the area of the parallelogram P and P' coincide. It is possible to see the previous result using GeoGebra, [6], drawing the two parallelograms, the first one P of vertices O(0,0), A(1,5), B(6,2), C(7,7) and the second P' of vertices O(0,0), A'(4,6), B(6,2), C'(10,8):
Figure 2
By viewing the parallelogram formed by two congruent triangles, we obtain the formula to compute the area of a triangle, if we know the coordinates of its vertices. Precisely, if A(x A ,y A ), B(x B ,y B ) and C(x C ,y C ) are the vertices of a triangle T then
Area(T) = (1/2) |det [ x_A  y_A  1 ; x_B  y_B  1 ; x_C  y_C  1 ]|.
For instance, if T is a triangle with vertices A(1,5), B(6,2) and C(7,7) then
Area(T) = (1/2) |det [ 1  5  1 ; 6  2  1 ; 7  7  1 ]| = (1/2) · 28 = 14.
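The two area formulas above are easy to verify numerically; the following small sketch (illustrative only, not from the paper) recomputes the examples with NumPy determinants.

```python
# Illustrative sketch: areas in the plane from determinants.
import numpy as np

def parallelogram_area(r1, r2):
    """Area spanned by two row vectors = |det| of the 2x2 matrix with those rows."""
    return abs(np.linalg.det(np.array([r1, r2], dtype=float)))

def triangle_area(a, b, c):
    """Half the absolute determinant of the 3x3 matrix with rows (x, y, 1)."""
    m = np.array([[a[0], a[1], 1.0],
                  [b[0], b[1], 1.0],
                  [c[0], c[1], 1.0]])
    return 0.5 * abs(np.linalg.det(m))

print(parallelogram_area((1, 5), (6, 2)))      # 28.0
print(triangle_area((1, 5), (6, 2), (7, 7)))   # 14.0
```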
In dimension 3, we have the following Theorem 2. If A is a matrix of order 3, its rows, if linearly independent, determine a parallelepiped P in R^3. The volume of the parallelepiped P is the absolute value of the determinant of the matrix A whose rows are three vectors forming three edges of the parallelepiped.
Volume P = |det [ a  b  c ; a′  b′  c′ ; a″  b″  c″ ]|, where (a, b, c), (a′, b′, c′) and (a″, b″, c″) are the row vectors of A.
For instance, the volume of the parallelepiped P determined by the vectors (1,5,0), (6,2,0) and (3,2,4) is
Volume P = |det [ 1  5  0 ; 6  2  0 ; 3  2  4 ]| = |4 · (2 − 30)| = 112 cubic units.
Figure 3
As a consequence of the previous result, we can compute the volume of a tetrahedron T, given its vertices, using a determinant. Since the volume of a tetrahedron is 1/6 of the volume of the corresponding parallelepiped, if A(x_A, y_A, z_A), B(x_B, y_B, z_B), C(x_C, y_C, z_C) and D(x_D, y_D, z_D) are the vertices of a tetrahedron T, then the three non-coplanar vectors
DA(x_A − x_D, y_A − y_D, z_A − z_D), DB(x_B − x_D, y_B − y_D, z_B − z_D), DC(x_C − x_D, y_C − y_D, z_C − z_D)
give the desired volume:
Vol(T) = (1/6) |det [ x_A − x_D   y_A − y_D   z_A − z_D ; x_B − x_D   y_B − y_D   z_B − z_D ; x_C − x_D   y_C − y_D   z_C − z_D ]|.
For instance, if A(1,5,0), B(6,2,0), C(3,2,4), D(0,0,0) are the vertices of a tetrahedron ABCD, then the volume is
Vol(T) = (1/6) |det [ 1  5  0 ; 6  2  0 ; 3  2  4 ]| = (1/6) |4 · (2 − 30)| = 112/6 = 56/3.
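Analogously, the 3 × 3 determinant formulas can be checked numerically; the sketch below (again illustrative, not part of the paper) reproduces the parallelepiped and tetrahedron examples.

```python
# Illustrative sketch: volumes from 3x3 determinants.
import numpy as np

def parallelepiped_volume(r1, r2, r3):
    """Volume spanned by three row vectors = |det| of the 3x3 matrix with those rows."""
    return abs(np.linalg.det(np.array([r1, r2, r3], dtype=float)))

def tetrahedron_volume(a, b, c, d):
    """One sixth of the parallelepiped volume spanned by DA, DB, DC."""
    rows = np.array([np.subtract(a, d), np.subtract(b, d), np.subtract(c, d)], dtype=float)
    return abs(np.linalg.det(rows)) / 6.0

print(parallelepiped_volume((1, 5, 0), (6, 2, 0), (3, 2, 4)))          # 112.0
print(tetrahedron_volume((1, 5, 0), (6, 2, 0), (3, 2, 4), (0, 0, 0)))  # 18.666... = 56/3
```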
Figure 4
Remark. We can observe in dimension 3 that the volume of a parallelepiped is the area of its base times its height: here the "base" is the parallelogram determined by the row vectors R_1 and R_2 of the matrix A of order 3, and the "height" is the perpendicular distance of R_3 from the base. We consider a row replacement of the form R_3 = R_3 + cR_i, for i = 1, 2 and for all c ∈ ℝ. Translating R_3 by a multiple of R_i moves R_3 in a direction parallel to the base. This changes neither the base nor the height. Thus, the volume of a parallelepiped is unchanged by row replacements. For instance, if A = [ 1  5  0 ; 6  2  0 ; 3  2  4 ], then the corresponding volume of the parallelepiped P spanned by the row vectors of A is given by the absolute value of the determinant of A, which is 112. Replacing R_3 with R_3 + 2R_1 we have A′ = [ 1  5  0 ; 6  2  0 ; 5  12  4 ], and the corresponding volume of the parallelepiped P′ is unchanged. Using GeoGebra, we can visualize the parallelepipeds P and P′ as follows:
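A one-line numerical check of the Remark (assuming the same matrices A and A′ as above) confirms that the row replacement leaves the volume unchanged; this is only an added illustration, not part of the paper.

```python
# Check that R3 -> R3 + 2*R1 does not change |det|, i.e. the volume.
import numpy as np

A  = np.array([[1, 5, 0], [6, 2, 0], [3, 2, 4]], dtype=float)
A2 = A.copy()
A2[2] = A2[2] + 2 * A2[0]        # new third row: (5, 12, 4)

print(abs(np.linalg.det(A)), abs(np.linalg.det(A2)))   # both approximately 112
```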
Strawberry Fields forever -Investments, eigenvalues and eigenvectors
Let us look at a practical example in the agriculture sphere. A farmer owns land where he grows strawberries. Part of the strawberries are used in three different sectors, part A for the cake production sector, part B for the jam production sector and the last part, part C, is used for the fair in the local village. Section A produces a revenue equal to four times the expenditure made for the strawberries, section B produces revenue of double the expenditure, while part C produces revenue equal to two thirds, since it is based only on offers made during the fair. You want to obtain a revenue proportional to the money invested in strawberry seedlings, whereby, with the revenues, other strawberries are grown which are redistributed equally in the 3 sectors. If the farm initially invests € 4200 in seedlings, how much does it get after a year in the optimal situation? Let x, y and z be the investments in sectors A, B and C respectively.
!"#$% ! !"#$ 4 2 !! ! = + 3 + !! ! !"#!"#$!%&#!'( + + ! ! + ! ! + !! ! + + ! ! = 2 + ! ! + ! ! + ! ! + !! ! .
So the transformation of the investment after one year is given by the following linear map:
f : ℝ³ → ℝ³, f(x, y, z) = (2x + y/3, x + 4y/3, x + y/3 + 2z/3). Let
M = [(2, 1/3, 0); (1, 4/3, 0); (1, 1/3, 2/3)]
be the matrix associated with the linear map f with respect to the canonical bases in the domain and in the codomain. Let X be the vector of the initial distribution; we would like the distribution after one year to be proportional to X, so we look for the eigenvalues and eigenvectors of M. The characteristic polynomial is
det(M − λI) = (2/3 − λ)(λ² − (10/3)λ + 7/3),
whose roots are λ₁ = 2/3, λ₂ = 1 and λ₃ = 7/3. The eigenspace related to λ₁ = 2/3 is E₁ = {(0, 0, t) : t ∈ ℝ}. The eigenspace related to λ₂ = 1 is E₂ = {(−t/3, t, 0) : t ∈ ℝ}. The eigenspace related to λ₃ = 7/3 is E₃ = {(t, t, 4t/5) : t ∈ ℝ}. For the eigenvector (t, t, 4t/5): if we invest € 4200, then t + t + 4t/5 = € 4200, hence we get t = € 1500 and 4t/5 = € 1200. Thus, the optimal solution is to invest € 1500 in each of sectors A and B and € 1200 in sector C; after one year, the amount of money will increase and we will have € 3500 in sectors A and B, while € 2800 in sector C. Example 3 provides a simple modelling of a process that can also be applied in the industrial sphere, in the production of goods and in the organization of resources.
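The eigenvalues, eigenvectors and the optimal split of the € 4200 investment can be reproduced numerically; the following is an illustrative sketch (added here, assuming NumPy), using the matrix M defined above:

import numpy as np

# Matrix of the investment map for sectors A, B, C
M = np.array([[2.0, 1/3, 0.0],
              [1.0, 4/3, 0.0],
              [1.0, 1/3, 2/3]])

eigvals, eigvecs = np.linalg.eig(M)
print(np.sort(eigvals))        # approximately [2/3, 1, 7/3]

# Optimal initial split of 4200 euro along the eigenvector (t, t, 4t/5)
t = 4200 / (1 + 1 + 4/5)
x0 = np.array([t, t, 4 * t / 5])   # [1500, 1500, 1200]
print(M @ x0)                      # [3500, 3500, 2800] = (7/3) * x0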
Computer graphics and transformations
In the field of computer graphics, the following 2D and 3D transformations are frequently used: translations, rotations, scaling and reflections. We consider some examples in dimension 2. Given the quadrilateral ABCD obtained by joining the points A(1, 1), B(2, 3), C(4, 3) and D(5, 1), we describe the transformation of the figure ABCD considering the translation of the given figure by a vector of components (3, 2), and the rotation of the given figure by an angle θ = π/2. We describe these transformations and we represent the figures described above. We represent the quadrilateral ABCD on an xy plane: the vector components of this quadrilateral are therefore v₁ = (1, 1), v₂ = (2, 3), v₃ = (4, 3) and v₄ = (5, 1). A translation is an isometry, that is, a geometric transformation that leaves the distances unchanged by moving all the points by a fixed distance in the same direction. The translation T_v is the map T_v : ℝ² → ℝ² defined by T_v(x, y) = (x + p, y + q), where p and q are the components of the translation vector.
Rotation of the figure by an angle θ = 90°
As defined a few pages back, a rotation is based on three elements:
1) the direction of rotation, given by the sign of the angle: a) if positive, counter-clockwise; b) if negative, clockwise; 2) the amplitude of the rotation, which is exactly the measure of the angle θ; 3) the centre of rotation, which in this case coincides with the origin. If we want to change the centre of rotation, a translation must be performed first.
Let us represent the figure rotated by 90° about the origin, starting from the transformations of the vertices:
R(θ)(v₁) = (1·cos 90° − 1·sin 90°, 1·sin 90° + 1·cos 90°) = (−1, 1), R(θ)(v₂) = (2·cos 90° − 3·sin 90°, 2·sin 90° + 3·cos 90°) = (−3, 2).
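The rotated images of all four vertices can be computed at once; the sketch below (added for illustration, assuming NumPy) applies the 90° rotation matrix to v₁, …, v₄:

import numpy as np

theta = np.pi / 2  # 90 degrees, counter-clockwise about the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vertices = np.array([[1, 1], [2, 3], [4, 3], [5, 1]], dtype=float)  # A, B, C, D
rotated = vertices @ R.T
print(np.round(rotated))  # rows: (-1, 1), (-3, 2), (-3, 4), (-1, 5)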
Translation
A translation is an isometry, that is, a geometric transformation that leaves distances unchanged by moving all points by a fixed distance in the same direction; by definition it is characterized by a magnitude, an orientation and a direction.
The translation T_v is the map T_v : ℝ² → ℝ², and therefore:
T_v(x, y) = (x + p, y + q).
The components p and q are the values by which we want to translate the vector v = (x, y). In short, the translation allows us to move a given vector without modifying its inclination, and consequently its direction.
Rotation
A rotation is a transformation of the plane (or of Euclidean space) that moves objects rigidly and leaves at least one point fixed. Whatever the number of dimensions of the space, the elements of a rotation are:
a. the direction (clockwise or counter-clockwise); b. the amplitude of the rotation angle; c. the centre of rotation (the point around which the rotational movement takes place).
Let θ ∈ ℝ. We denote by R(θ) : ℝ² → ℝ² the linear map that associates to a vector V ∈ ℝ² the vector obtained from V after a rotation by an angle θ about the origin:
R(θ)(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ), the components of V after the θ-rotation.
We consider some examples in dimension 3. Given the tetrahedron ABCD obtained by joining the points A(1,5,0), B(6,2,0), C(3,2,4), D(0,0,0), we describe the transformation of the figure ABCD considering the translation of the given figure by a vector of components (4, 3, −2), the rotation of the given figure by an angle θ = π/2 about the z-axis and the reflection of the tetrahedron ABCD through the xy-plane. We describe these transformations and we represent the figures described above. The tetrahedron ABCD is represented in Figure 4. The vector components of the tetrahedron ABCD are therefore v₁ = (1,5,0), v₂ = (6,2,0), v₃ = (3,2,4) and v₄ = (0,0,0). In dimension 3, the translation T_v is the map T_v : ℝ³ → ℝ³ defined by
T_v(x, y, z) = (x + p, y + q, z + r),
where p, q and r are the components of the translation vector. If p = 4, q = 3 and r = −2, then by applying the translation to the vectors v₁ = (1,5,0), v₂ = (6,2,0), v₃ = (3,2,4) and v₄ = (0,0,0) we obtain the vectors of components (5,8,−2), (10,5,−2), (7,5,2) and (4,3,−2) respectively, illustrated in Figure 9. The reflection of the tetrahedron ABCD through the xy-plane is a map of ℝ³ → ℝ³ described by the function r(x, y, z) = (x, y, −z); applying the reflection to the vectors v₁ = (1,5,0), v₂ = (6,2,0), v₃ = (3,2,4) and v₄ = (0,0,0) we obtain respectively the vectors of components (1,5,0), (6,2,0), (3,2,−4) and (0,0,0), described in the following figure:
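These three space transformations can also be checked numerically; a minimal illustrative sketch (added here, assuming NumPy):

import numpy as np

V = np.array([[1, 5, 0], [6, 2, 0], [3, 2, 4], [0, 0, 0]], dtype=float)  # A, B, C, D

# Translation by (4, 3, -2)
translated = V + np.array([4, 3, -2])

# Rotation by 90 degrees about the z-axis
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
rotated = V @ Rz.T

# Reflection through the xy-plane: (x, y, z) -> (x, y, -z)
reflected = V * np.array([1, 1, -1])

print(translated)         # rows: (5,8,-2), (10,5,-2), (7,5,2), (4,3,-2)
print(np.round(rotated))  # rows: (-5,1,0), (-2,6,0), (-2,3,4), (0,0,0)
print(reflected)          # rows: (1,5,0), (6,2,0), (3,2,-4), (0,0,0)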
Now let us interpret the results in the light of our problem. The eigenvalue λ₁ = 2/3 is a possible solution to our problem: investing all € 4200 in sector C; but this would not be a wise choice, as every year the investment would decrease more and more. The eigenvalue λ₂ = 1 does not give any solution to our problem, as the eigenvectors (−t/3, t, 0) are of no interest: we cannot invest a negative amount of money. We therefore consider the eigenvalue λ₃ = 7/3 with the generic eigenvector (t, t, 4t/5).
Figure 6 - Quadrilateral ABCD
Exercise for the Geometry exam of 16 September 2021. In computer graphics the following 2D transformations are considered: translations, rotations, scaling and reflections. Given the quadrilateral ABCD obtained by joining the points A(1;1), B(2;3), C(4;3) and D(5;1), describe the transformation of the figure ABCD using the above transformations. Use matrices to obtain the translation of the given figure, the rotation of the given figure, the scaling and the reflection of the quadrilateral ABCD. Describe these transformations and represent the figures described above. Solution. First of all, we represent the quadrilateral ABCD on an xy plane: the vector components of this quadrilateral are therefore v₁ = (1,1), v₂ = (2,3), v₃ = (4,3) and v₄ = (5,1). Modifying this figure therefore means modifying each vector component that belongs to it.
p and q are the components of the translation vector. If p = 3 and q = 2, then by applying the translation to the vectors v₁ = (1,1), v₂ = (2,3), v₃ = (4,3) and v₄ = (5,1) we obtain the vectors of components (4,3), (5,5), (7,5) and (8,3) respectively, illustrated in Figure 7.
Figure 7 - Quadrilateral ABCD and its translation
Rotation in the plane is a map of ℝ² → ℝ² described by the function R(θ)(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ).
Figure 8 - Quadrilateral ABCD and its rotation
Figure 9 - Tetrahedron ABCD and its translation
Rotation in space of the given figure by an angle θ about the z-axis is a map of ℝ³ → ℝ³ described by the function R_z(θ)(x, y, z) = (x cos θ − y sin θ, x sin θ + y cos θ, z). Applying it with θ = π/2 to the vectors v₁ = (1,5,0), v₂ = (6,2,0), v₃ = (3,2,4) and v₄ = (0,0,0), we obtain respectively the vectors of components (−5,1,0), (−2,6,0), (−2,3,4) and (0,0,0), described in the following figure:
Figure 10 - Tetrahedron ABCD and its rotation
Figure 11 - Tetrahedron ABCD and its reflection
Acknowledgement. Vittoria Bonanzinga is a member of the National Group for Algebraic and Geometric Structures and their Applications (GNSAGA/INdAM).
On direct and inverse spectral problems for sloshing of a two-layer fluid in an open container

Nikolay Kuznetsov ([email protected])
Laboratory for Mathematical Modelling of Wave Phenomena, Institute for Problems in Mechanical Engineering, Russian Academy of Sciences, V.O., Bol'shoy pr. 61, St. Petersburg 199178, Russian Federation

19 Aug 2016. arXiv:1608.05652 (https://arxiv.org/pdf/1608.05652v1.pdf); DOI: 10.17586/2220-8054-2016-7-5-854-864

Abstract. We study direct and inverse eigenvalue problems for a pair of harmonic functions with a spectral parameter in boundary and coupling conditions. The direct problem is relevant to sloshing frequencies of free oscillations of a two-layer fluid in a container. The upper fluid occupies a layer bounded above by a free surface and below by a layer of fluid of greater density. Both fluids are assumed to be inviscid, incompressible, and heavy, whereas the free surface and the interface between fluids are supposed to be bounded.
Introduction
Linear water wave theory is a widely used approach for describing the behaviour of surface waves in the presence of rigid boundaries. In particular, this theory is a common tool for determining sloshing frequencies and modes in containers occupied by a homogeneous fluid, that is, having constant density. The corresponding boundary spectral problem usually referred to as the sloshing problem has been the subject of a great number of studies over more than two centuries (a historical review can be found, for example, in [6]). In the comprehensive book [7], an advanced technique based on spectral theory of operators in a Hilbert space was presented for studying this problem.
In the framework of the mathematical theory of linear water waves, substantial work has been done in the past two decades for understanding the difference between the results valid for homogeneous and two-layer fluids (in the latter case the upper fluid occupies a layer bounded above by a free surface and below by a layer of fluid whose density is greater than that in the upper one). These results concern wave/structure interactions and trapping of waves by immersed bodies (see, for example, [4], [11], [9] and references cited therein), but much less is known about the difference between sloshing in containers occupied by homogeneous and two-layer fluids. To the author's knowledge, there is only one related paper [8] with rigorous results for multilayered fluids, but it deals only with the spectral asymptotics in a closed container. Thus, the first aim of the present paper is to fill in this gap at least partially.
Another aim is to consider the so-called inverse sloshing problem; that is, the problem of recovering some physical parameters from known spectral data. The parameters to be recovered are the depth of the interface between the two layers and the density ratio that characterises stratification. It is demonstrated that for determining these two characteristics for fluids occupying a vertical-walled container with a horizontal bottom, one has to measure not only the two smallest sloshing eigenfrequencies, which must satisfy certain inequalities, but also to analyse the corresponding free surface elevations.
Statement of the direct problem
Let two immiscible, inviscid, incompressible, heavy fluids occupy an open container whose walls and bottom are rigid surfaces. We choose rectangular Cartesian coordinates (x 1 , x 2 , y) so that their origin lies in the mean free surface of the upper fluid and the y-axis is directed upwards. Then the whole fluid domain W is a subdomain of the lower half-space {−∞ < x 1 , x 2 < +∞, y < 0}. The boundary ∂W is assumed to be piece-wise smooth and such that every two adjacent smooth pieces of ∂W are not tangent along their common edge. We also suppose that each horizontal crosssection of W is a bounded two-dimensional domain; that is, a connected, open set in the corresponding plane. (The latter assumption is made for the sake of simplicity because it excludes the possibility of two or more interfaces between fluids at different levels.) The free surface F bounding above the upper fluid of density ρ 1 > 0 is the non-empty interior of ∂W ∩ {y = 0}. The interface I = W ∩ {y = −h}, where 0 < h < max{|y| : (x 1 , x 2 , y) ∈ ∂W }, separates the upper fluid from the lower one of density ρ 2 > ρ 1 . We denote by W 1 and W 2 the domains W ∩ {y > −h} and W ∩ {y < −h} respectively; they are occupied by the upper and lower fluids respectively. The surface tension is neglected and we suppose the fluid motion to be irrotational and of small amplitude. Therefore, the boundary conditions on F and I may be linearised. With a time-harmonic factor, say cos ωt, removed, the velocity potentials u (1) (x 1 , x 2 , y) and u (2) (x 1 , x 2 , y) (they may be taken to be real functions) for the flow in W 1 and W 2 respectively must satisfy the following coupled boundary value problem:
u^(j)_{x₁x₁} + u^(j)_{x₂x₂} + u^(j)_{yy} = 0 in W_j, j = 1, 2,    (1)
u^(1)_y = ν u^(1) on F,    (2)
ρ (u^(2)_y − ν u^(2)) = u^(1)_y − ν u^(1) on I,    (3)
u^(2)_y = u^(1)_y on I,    (4)
∂u^(j)/∂n = 0 on B_j, j = 1, 2.    (5)
Here ρ = ρ 2 /ρ 1 > 1 is the non-dimensional measure of stratification, the spectral parameter ν is equal to ω 2 /g, where ω is the radian frequency of the water oscillations and g is the acceleration due to gravity; B j = ∂W j \ (F ∪Ī) is the rigid boundary of W j . By combining (3) and (4), we get another form of the spectral coupling condition (3):
(ρ − 1) u^(2)_y = ν (ρ u^(2) − u^(1)) on I.    (6)
We also suppose that the orthogonality conditions
∫_F u^(1) dx = 0 and ∫_I (ρ u^(2) − u^(1)) dx = 0, dx = dx₁ dx₂,    (7)
hold, thus excluding the zero eigenvalue of (1)-(5). When ρ = 1, conditions (3) and (4) mean that the functions u (1) and u (2) are harmonic continuations of each other across the interface I. Then problem (1)- (5) complemented by the first orthogonality condition (7) (the second condition (7) is trivial) becomes the usual sloshing problem for a homogeneous fluid. It is well-known since the 1950s that the latter problem has a positive discrete spectrum. This means that there exists a sequence of positive eigenvalues {ν W n } ∞ 1 of finite multiplicity (the superscript W is used here and below for distinguishing the sloshing eigenvalues that correspond to the case, when a homogeneous fluid occupies the whole domain W , from those corresponding to a two-layer fluid which will be denoted simply by ν n ). In this sequence the eigenvalues are written in increasing order and repeated according to their multiplicity; moreover, ν W n → ∞ as n → ∞. The corresponding eigenfunctions {u n } ∞ 1 ⊂ H 1 (W ) form a complete system in an appropriate Hilbert space. These results can be found in many sources, for example, in the book [7].
Variational principle
Let W be bounded. It is well known that the sloshing problem in W for homogeneous fluid can be cast into the form of a variational problem and the corresponding Rayleigh quotient is as follows:
R_W(u) = ∫_W |∇u|² dx dy / ∫_F u² dx.    (8)
For obtaining the fundamental eigenvalue ν W 1 one has to minimize R W (u) over the subspace of the Sobolev space H 1 (W ) consisting of functions that satisfy the first orthogonality condition (7). In order to find ν W n for n > 1, one has to minimize (8) over the subspace of H 1 (W ) such that each its element u satisfies the first condition (7) along with the following equalities F u u j dx = 0, where u j is either of the eigenfunctions u 1 , . . . , u n−1 corresponding to the eigenvalues ν W 1 , . . . , ν W n−1 . In the case of a two-layer fluid we suppose that the usual embedding theorems hold for both subdomains W j , j = 1, 2 (the theorem about traces on smooth pieces of the boundary for elements of H 1 included). This impose some restrictions on ∂W , in particular, on the character of the intersections of F and I with ∂W ∩ {y < 0}. Then using (6), it is easy to verify that the Rayleigh quotient for the two-layer sloshing problem has the following form:
R(u^(1), u^(2)) = [ ∫_{W₁} |∇u^(1)|² dx dy + ρ ∫_{W₂} |∇u^(2)|² dx dy ] / [ ∫_F (u^(1))² dx + (ρ − 1)⁻¹ ∫_I (ρ u^(2) − u^(1))² dx ].    (9)
To determine the fundamental sloshing eigenvalue ν 1 one has to minimize R(u (1) , u (2) ) over the subspace of H 1 (W 1 ) ⊕ H 1 (W 2 ) defined by both orthogonality conditions (7).
In order to find ν n for n > 1, one has to minimize (9) over the subspace of H 1 (W 1 ) ⊕ H 1 (W 2 ) such that every element u (1) , u (2) of this subspace satisfies the equalities
F u (1) u (1) j dx = 0 and I ρu (2) − u (1) ρu (2) j − u (1) j dx = 0, j = 1, . . . , n − 1,
along with both conditions (7). Here u
(1) j , u (2) j
is either of the eigensolutions corresponding to ν 1 , . . . , ν n−1 .
Now we are in a position to prove the following assertion.
Proposition 1. Let ν W 1 and ν 1 be the fundamental eigenvalues of the sloshing problem in the bounded domain W for homogeneous and two-layer fluids respectively. Then the inequality ν 1 < ν W 1 holds.
The restriction that W is bounded is essential as the example considered in Proposition 4 below demonstrates.
Proof. If u 1 is an eigenfunction corresponding to ν W 1 , then
ν W 1 = W |∇u 1 | 2 dxdy F u 2 1 dx .
Let u (1) and u (2) be equal to the restrictions of ρu 1 and u 1 to W 1 and W 2 , respectively. Then the pair u (1) , u (2) is an admissible element for the Rayleigh quotient (9). Substituting it into (9), we obtain that
R(ρu 1 , u 1 ) = W1 |∇u 1 | 2 dxdy + ρ −1 W2 |∇u 1 | 2 dxdy F u 2 1 dx .
Comparing this equality with the previous one and taking into account that ρ > 1, one finds that R(ρu 1 , u 1 ) < ν W 1 . Since ν 1 is the minimum of (9), we conclude that ν 1 < ν W 1 .
Containers with vertical walls and horizontal bottoms
Let us consider the fluid domain
W = {x = (x 1 , x 2 ) ∈ D, y ∈ (−d, 0)},
where D is a piece-wise smooth two-dimensional domain (the container's horizontal cross-section) and d ∈ (0, ∞] is the container's constant depth. Thus, the container's side wall ∂D × (−d, 0) is vertical, the bottom {x ∈ D, y = −d} is horizontal, whereas the free surface and the interface are F = {x ∈ D, y = 0} and I = {x ∈ D, y = −h} respectively, 0 < h < d.
For a homogeneous fluid occupying such a container, the sloshing problem is equivalent to the free membrane problem. Indeed, putting
u(x, y) = v(x) cosh k(y + d) u(x, y) = v(x) e ky when d = ∞ ,
one reduces problem (1)-(5) with ρ = 1, complemented by the first orthogonality condition (7) to the following spectral problem:
∇²_x v + k² v = 0 in D, ∂v/∂n_x = 0 on ∂D, ∫_D v dx = 0,    (10)
where ∇ x = (∂/∂x 1 , ∂/∂x 2 ) and n x is a unit normal to ∂D in R 2 . It is clear that ν W is an eigenvalue of the former problem if and only if k 2 is an eigenvalue of (10) and
ν^W = k tanh kd when d < ∞ (ν^W = k when d = ∞), k > 0.    (11)
It is well-known that problem (10) has a sequence of positive eigenvalues {k 2 n } ∞ 1 written in increasing order and repeated according to their finite multiplicity, and such that k 2 n → ∞ as n → ∞. The corresponding eigenfunctions form a complete system in H 1 (D). Let us describe the same reduction procedure in the case when W is occupied by a two-layer fluid and d < ∞. Putting
u (1) (x, y) = v(x) [A cosh k(y + h) + B sinh k(y + h)],(12)u (2) (x, y) = v(x) C cosh k(y + d),(13)
where A, B and C are constants, one reduces problem (1)-(5) and (7), ρ > 1, to problem (10) combined with the following quadratic equation:
ν 2 cosh kd − νk [sinh kd + (ρ − 1) cosh kh sinh k(d − h)] + k 2 (ρ − 1) sinh kh sinh k(d − h) = 0, k > 0. (14)
Thus ν is an eigenvalue of the former problem if and only if ν satisfies (14), where k 2 is an eigenvalue of (10). Indeed, the quadratic polynomial in ν on the left-hand side of (14) is the determinant of the following linear algebraic system for A, B and C:
A = C cosh k(d − h) − ν −1 (ρ − 1) k sinh k(d − h) , B = C sinh k(d − h),(15)A (k sinh kh − ν cosh kh) + C sinh k(d − h) (k cosh kh − ν sinh kh) = 0.(16)
The latter arises when one substitutes expressions (12) and (13) into the boundary condition (2) and the coupling conditions (3) and (4). This homogeneous system defines eigensolutions of the sloshing problem provided there exists a non-trivial solution, and so the determinant must vanish which is expressed by (14). Let us show that the roots ν (+) and ν (−) of (14) are real in which case
ν^(±) = k (b ± √D) / (2 cosh kd) > 0,    (17)
where the inequality is a consequence of the formulae
b = sinh kd + (ρ − 1) cosh kh sinh k(d − h),    (18)
D = b² − 4 (ρ − 1) cosh kd sinh kh sinh k(d − h).    (19)
Since D is a quadratic polynomial of ρ − 1, it is a simple application of calculus to demonstrate that it attains the minimum at
ρ − 1 = 2 cosh kd sinh kh − sinh kd cosh kh cosh 2 kh sinh k(d − h) ,
and after some algebra one finds that this minimum is equal to
4 cosh kd sinh kh sinh k(d − h) cosh 2 kh > 0,
which proves the assertion. Thus we arrive at the following.
Proposition 2. If W is a vertical cylinder with horizontal bottom, then the sloshing problem for a two-layer fluid occupying W has two sequences of eigenvalues
ν (+) n ∞ 1 and ν (−) n ∞ 1 defined by (17) with k = k n > 0, where k 2
n is an eigenvalue of problem (10). The same eigensolution (u (1) , u (2) ) corresponds to both ν (+) n and ν (−) n , where u (1) and u (2) (sloshing modes in W 1 and W 2 respectively) are defined by formulae (12) and (13) with v belonging to the set of eigenfunctions of problem (10) that correspond to k 2 n ; furthermore, C is an arbitrary non-zero real constant, whereas A and B depend on C through (15).
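To make formulas (17)-(19) concrete, the following minimal numerical sketch (added here for illustration, with arbitrary parameter values that are not taken from the paper; NumPy assumed) evaluates ν^(−) and ν^(+) and checks that the homogeneous-fluid value k tanh kd of formula (11) lies between them:

import numpy as np

def nu_pm(k, d, h, rho):
    # Roots nu(-), nu(+) of the quadratic (14), via formulas (17)-(19)
    b = np.sinh(k * d) + (rho - 1.0) * np.cosh(k * h) * np.sinh(k * (d - h))
    D = b**2 - 4.0 * (rho - 1.0) * np.cosh(k * d) * np.sinh(k * h) * np.sinh(k * (d - h))
    nu_minus = k * (b - np.sqrt(D)) / (2.0 * np.cosh(k * d))
    nu_plus = k * (b + np.sqrt(D)) / (2.0 * np.cosh(k * d))
    return nu_minus, nu_plus

# Illustrative values (not from the paper)
k, d, h, rho = 1.0, 2.0, 1.0, 1.5
nu_m, nu_p = nu_pm(k, d, h, rho)
print(nu_m, k * np.tanh(k * d), nu_p)   # nu(-) < k tanh(kd) < nu(+)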
Next we analyse the behaviour of ν (±) n as a function of ρ. Proof. In order to prove the proposition it is sufficient to show that
∂(b ± √ D ) ∂ρ = sinh k(d − h) cosh kh ± D −1/2 cosh kh sinh kd +(ρ − 1) cosh 2 kh sinh k(d − h) − 2 cosh kd sinh kh > 0 .(20)
Since
∂(b + √ D ) ∂ρ ρ=1 = 2 sinh 2 k(d − h) sinh kd > 0 and ∂(b − √ D ) ∂ρ ρ→∞ = 0 ,
inequality (20) is a consequence of the following one:
± ∂ 2 (b ± √ D) ∂ρ 2 = 4 cosh kd sinh kh sinh 3 k(d − h) D 3/2 > 0 for all ρ > 1.
The second assertion immediately follows from the first one and formulae (17)-(19).
Combining Proposition 3 and formula (11), we arrive at the following assertion.
Corollary 1. The inequalities ν (−) n < ν W n < ν (+) n
hold for each n = 1, 2, . . . and every ρ > 1.
Dividing (17) by k and letting k = k n to infinity, it is straightforward to obtain the following.
Lemma 1. For every ρ > 1 the asymptotic formula
ν (±) n ∼ ρ + 1 ± |ρ − 3| 4 k n as n → ∞,
holds with the exponentially small remainder term; here k 2 n is an eigenvalue of (10).
In other words there are three cases:
(i) if ρ = 3, then ν (±) n ∼ k n as n → ∞; (ii) if ρ > 3, then ν (−) n ∼ k n and ν (+) n ∼ (ρ − 1) k n /2 as n → ∞; (iii) if ρ ∈ (1, 3), then ν (−) n ∼ (ρ − 1) k n /2 and ν (+) n ∼ k n as n → ∞.
Combining these relations and the asymptotic formula ν W n ∼ k n as n → ∞ (it is a consequence of formula (11) defining ν W n when a homogeneous fluid occupies W ), we obtain the following.
Corollary 2. As n → ∞, we have that ν (−) n ∼ ν W n when ρ ≥ 3, whereas ν (+) n ∼ ν W n provided ρ ∈ (1, 3].
Another corollary of Lemma 1 concerns the distribution function N (ν) for the spectrum of problem (1)-(5) and (7). This function is equal to the total number of eigenvalues ν n that do not exceed ν. An asymptotic formula for N (ν) immediately follows from Lemma 1 and the asymptotic formula for the distribution of the spectrum for the Neumann Laplacian (see [5], Chapter 6).
Corollary 3. The distribution function N (ν) of the spectrum for the sloshing of a two-layer fluid in a vertical cylinder of cross-section D has the following asymptotics
N(ν) ∼ [ 4/(ρ − 1)² + 1 ] |D| ν² / (4π) as ν → ∞.
Here |D| stands for the area of D.
It should be also mentioned that in [8] the asymptotics for N (ν) was obtained for a multi-layer fluid occupying a bounded closed container.
It follows from Lemma 1 and Corollary 2 that the asymptotic formula for the distribution function of the spectrum ν W n ∞ 1 is similar to the above one, but the first term in the square brackets must be deleted. Moreover, in the case of homogeneous fluid the same asymptotic formula (up to the remainder term) holds for arbitrarily shaped fluid domains (see [7], Section 3.3). Since the first term in the square brackets tends to infinity as ρ → 1, the transition from the two-layer fluid to the homogeneous one in the asymptotic formula for N (ν) is a singular limit in the sense described in [3]. Similar effect occurs for modes trapped by submerged bodies in two-layer and homogeneous fluids as was noted in [11].
In conclusion of this section, it should be noted that in the case of an infinitely deep vertical cylinder it is easy to verify that ν = k is an eigenvalue of the sloshing problem for a two-layer fluid if and only if k 2 is an eigenvalue of problem (10). Comparing this assertion with that at the beginning of this section we obtain the following.
Proposition 4.
In an infinitely deep vertical-walled container, the sloshing problem for a two-layer fluid has the same set of eigenvalues and the same eigenfunctions of the form v(x) e ky , k > 0, as the sloshing problem for a homogeneous fluid in the same container; here k 2 is an eigenvalue and v is the corresponding eigenfunction of problem (10).
Inverse problem
Let a given container W be occupied by a two-layer fluid, but now we assume that the position of the interface between layers and the density of the lower layer are unknown. The density of the upper layer is known because one can measure it directly. The sequence of eigenvalues ν W n ∞ 1 corresponding to the homogeneous fluid is also known because it depends only on the domain W . The inverse problem we are going to consider is to recover the ratio of densities ρ and the depth of the interface h from measuring some sloshing frequencies on the free surface. Say, let the fundamental eigenvalue ν 1 is known along with the second-largest one.
The formulated inverse problem is not always solvable. Indeed, according to Proposition 4, it has no solution when W is an infinitely deep container with vertical walls. Moreover, the inverse problem is trivial for all domains when it occurs that ν 1 = ν W 1 . In this case Proposition 1 implies that the fluid is homogeneous, that is, ρ = 1 and h = d. Therefore, we restrict ourselves to the case of vertically-walled containers having the finite depth d in what follows.
Reduction to transcendental equations
In view what was said above, the inverse problem for W = D × (−d, 0) can be stated as follows. Find conditions that allow us to determine ρ > 1 and h ∈ (0, d) when the following two eigenvalues are known: the fundamental one ν 1 and the smallest eigenvalue ν N that is greater than ν 1 . Thus N is such that k 2 n = k 2 1 for all n = 1, . . . , N − 1, which means that the fundamental eigenvalue k 2 1 of problem (10) is of multiplicity N − 1 (of course, ν 1 has the same multiplicity). For example, if D is a disc, then the multiplicity of k 2 1 is two (see [2], Section 3.1), and so ν N = ν 3 in this case. According to formula (17), we have that ν 1 = ν (−) 1 . Hence the first equation for ρ and h is as follows:
b 1 − D 1 = 2 ν 1 k 1 cosh k 1 d.(21)
Here b 1 and D 1 are given by formulae (18) and (19) respectively with k = k 1 .
To write down the second equation for ρ and h we have the dilemma whether
ν N = ν (−) N or ν N = ν (+) 1 ?(22)
Let us show that either of these options is possible. Indeed, Proposition 3 implies that
ν N = ν (−) N
provided ρ − 1 is sufficiently small. On the other hand, let us demonstrate that there exists a triple (ρ, d, h) for which ν N = ν (+) 1 . For this purpose we have to demonstrate that the inequality ν (−)
N = k N b N − √ D N 2 cosh k N d ≥ k 1 b 1 + √ D 1 2 cosh k 1 d = ν (+) 1
holds for some ρ, d and h. As above b j and D j , j = 1, N , are given by formulae (18) and (19), respectively, with k = k j . Let h = d/2, then we have
4 ν (±) j = k j (ρ + 1) tanh k j d ± (ρ + 1) 2 tanh 2 k j d + 8 (ρ − 1) 1 − cosh k j d cosh k j d 1/2 , and so 4 ν (−) N − ν (+) 1 → k N (ρ + 1 − |ρ − 3|) − k 1 (ρ + 1 + |ρ − 3|) as d → ∞.
The limit is piecewise linear function of ρ, attains its maximum value 4(k N − k 1 ) at ρ = 3 and is positive for ρ ∈ (1 + 2 (k 1 /k N ), 1 + 2 (k N /k 1 )). Summarising, we arrive at the following. when ρ ∈ (1+2 (k 1 /k N ), 1+2 (k N /k 1 )), h = d/2 and d is sufficiently large (of course, its value depends on ρ and D).
Obviously, assertion (ii) can be extended to values of h that are sufficiently close to d/2.
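Both alternatives in (22) can be observed numerically; the sketch below (an illustration added here, with assumed values of k₁, k_N, d, h and ρ rather than values from the paper) reuses formulas (17)-(19):

import numpy as np

def nu_pm(k, d, h, rho):
    # Roots nu(-), nu(+) of (14), formulas (17)-(19)
    b = np.sinh(k*d) + (rho - 1) * np.cosh(k*h) * np.sinh(k*(d - h))
    D = b**2 - 4 * (rho - 1) * np.cosh(k*d) * np.sinh(k*h) * np.sinh(k*(d - h))
    return k*(b - np.sqrt(D))/(2*np.cosh(k*d)), k*(b + np.sqrt(D))/(2*np.cosh(k*d))

k1, kN = 1.0, 2.0          # illustrative eigenvalue roots of problem (10)

# (i) weak stratification: nu_N^(-) stays below nu_1^(+)
d, h, rho = 2.0, 1.0, 1.01
print(nu_pm(kN, d, h, rho)[0] < nu_pm(k1, d, h, rho)[1])   # True

# (ii) strong stratification, h = d/2 and large depth: the order is reversed
d, rho = 10.0, 3.0
h = d / 2
print(nu_pm(kN, d, h, rho)[0] > nu_pm(k1, d, h, rho)[1])   # True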
Options for the second equation
Let us develop a procedure for determining which of the two equalities (22) can be chosen to complement equation (21) in order to find ρ and h. Our procedure is based on an analysis of the free surface elevations corresponding to the measured values ν 1 and ν N . Indeed, when a two-layer fluid oscillates at the frequency defined by some ν j , the free surface elevation is proportional to the trace u (1) j (x, 0) (see, for example, [10], Section 227).
According to formula (12), the trace u is also proportional to a linear combination of v 1 , . . . , v N −1 . Since these functions are known, one has to determine whether the measured free-surface elevation corresponding to ν N can be represented in the form of such a combination and only in such a form. If this is the case, then ν N = ν (+) 1 < ν (−) N and the following equation
b 1 + D 1 = 2 ν N k 1 cosh k 1 d(23)
forms the system for ρ and h together with (21). Besides, it can occur that the measured free-surface elevation corresponding to ν N can be represented in two forms, one of which is a linear combination of v 1 , . . . , v N −1 , whereas the other one involves the function v N as well as other eigenfunctions that correspond to the eigenvalue k 2 N of problem (10)
b N − D N = 2 ν N k N cosh k N d.(24)
Of course, it is better to use the system that comprises equations (21) and (23) because the right-hand side terms in these equations are proportional. If the measured free-surface elevation corresponding to ν N cannot be represented as a linear combination of v 1 , . . . , v N −1 , then ν N = ν
(−) N < ν (+)
1 , in which case the elevation is a linear combination of eigenfunctions that correspond to the eigenvalue k 2 N of problem (10) the second largest after k 2 1 . In this case, equation (21) must be complemented by (24).
Thus we arrive at the following procedure for reducing the inverse sloshing problem to a system of two equations.
Procedure. Let v 1 , . . . , v N −1 be the set of linearly independent eigenfunctions of problem (10) corresponding to k 2 1 . If the observed elevation of the free surface that corresponds to the measured value ν N has a representation as a linear combination of v 1 , . . . , v N −1 , then ρ and d must be determined from equations (21) and (23). Otherwise, equations (21) and (24) must be used.
The simplest case is when the fundamental eigenvalue of problem (10) is simple, that is, N = 2. Then the above procedure reduces to examining whether the free surface elevations corresponding to ν 1 and ν 2 are proportional or not. In the case of proportionality, equations (21) and (23) must be used. Equations (21) and (24) are applicable when there is no proportionality.
Solution of the transcendental systems
In this section we consider the question how to solve systems (21) and (24), and (21) and (23) for finding ρ and h.
System (21) and (23)
Equations (21) and (23) can be easily simplified. Indeed, the sum and difference of these equations are as follows:
b₁ = ((ν_N + ν₁)/k₁) cosh k₁d and D₁ = ((ν_N − ν₁)/k₁)² cosh² k₁d.
Substituting the first expression into the second equation (see formulae (18) and (19)), we obtain
(ρ − 1) sinh k₁h sinh k₁(d − h) = (ν_N ν₁ / k₁²) cosh k₁d,    (25)
whereas the first equation itself has the following form:
(ρ − 1) cosh k₁h sinh k₁(d − h) = ((ν_N + ν₁)/k₁) cosh k₁d − sinh k₁d.    (26)
The last two equations immediately yield
tanh k₁h = ν_N ν₁ / (k₁ (ν_N + ν₁ − ν^W_1)),
where formula (11) is applied. Thus we are in a position to formulate the following.
Proposition 6. Let ν₁ and ν_N ≠ ν₁ be the smallest two sloshing eigenvalues measured for a two-layer fluid occupying W = D × (−d, 0). Let also
0 < ν_N ν₁ / (k₁ (ν_N + ν₁ − ν^W_1)) < tanh k₁d,
where k₁² is the fundamental eigenvalue of problem (10) in D and ν^W_1 is defined by formula (11) with k = k₁. If the Procedure guarantees that ρ and h satisfy equations (21) and (23), then
h = (1/k₁) tanh⁻¹ [ ν_N ν₁ / (k₁ (ν_N + ν₁ − ν^W_1)) ],
whereas ρ is determined either by (25) or by (26) with this h.
We recall that tanh⁻¹ z = ½ ln((1 + z)/(1 − z)) (see [1], Section 4.6).
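The inversion in Proposition 6 can be checked against synthetic data; the following sketch (illustrative only, with assumed values not taken from the paper) generates ν₁ = ν^(−) and ν_N = ν^(+) at k = k₁ from a known pair (h, ρ) using formulas (17)-(19) and recovers that pair:

import numpy as np

def nu_pm(k, d, h, rho):
    # Roots of the quadratic (14), formulas (17)-(19)
    b = np.sinh(k*d) + (rho - 1) * np.cosh(k*h) * np.sinh(k*(d - h))
    D = b**2 - 4 * (rho - 1) * np.cosh(k*d) * np.sinh(k*h) * np.sinh(k*(d - h))
    return k*(b - np.sqrt(D))/(2*np.cosh(k*d)), k*(b + np.sqrt(D))/(2*np.cosh(k*d))

def recover_h_rho(nu1, nuN, k1, d):
    # Proposition 6: invert the system (21), (23)
    nu1W = k1 * np.tanh(k1 * d)                                   # formula (11)
    h = np.arctanh(nu1 * nuN / (k1 * (nu1 + nuN - nu1W))) / k1
    rho = 1 + nu1 * nuN * np.cosh(k1*d) / (k1**2 * np.sinh(k1*h) * np.sinh(k1*(d - h)))  # eq. (25)
    return h, rho

# Synthetic "measurement": nu1 = nu^(-), nuN = nu^(+) at k = k1 (illustrative values)
k1, d, h_true, rho_true = 1.0, 2.0, 0.8, 1.4
nu_minus, nu_plus = nu_pm(k1, d, h_true, rho_true)
print(recover_h_rho(nu_minus, nu_plus, k1, d))                    # approximately (0.8, 1.4)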
System (21) and (24)
Since equations (21) and (24) have the same form, we treat them simultaneously. Eliminating square roots, we get
(ρ − 1) sinh k j (d − h) (ν j cosh k j h − k j sinh k j h) = ν j k j (ν j cosh k j d − k j sinh k j d) , j = 1, N,
which is linear with respect to ρ − 1. Taking into account formula (11), we write this system in the form:
(ρ − 1) sinh k_j(d − h) (k_j sinh k_jh − ν_j cosh k_jh) = (ν_j/k_j) (ν^W_j − ν_j) cosh k_jd, j = 1, N,    (27)
where the right-hand side term is positive in view of Corollary 1. We eliminate ρ − 1 from system (27), thus obtaining the following equation for h:
(ν₁/k₁) (ν^W_1 − ν₁) cosh k₁d · sinh k_N(d − h) (k_N sinh k_Nh − ν_N cosh k_Nh) − (ν_N/k_N) (ν^W_N − ν_N) cosh k_Nd · sinh k₁(d − h) (k₁ sinh k₁h − ν₁ cosh k₁h) = 0.    (28)
Let us denote by U (h) the expression on the left-hand side and investigate its behaviour for h ≥ 0, because solving equation (28) is equivalent to finding zeroes of U (h) that belong to (0, d).
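Equation (28) can be explored numerically; the sketch below (an illustration added here, with assumed values of k₁, k_N, d, h, ρ and assuming NumPy and SciPy are available) builds synthetic eigenvalues ν₁ = ν₁^(−), ν_N = ν_N^(−) from a known stratification, checks that the true interface depth is a zero of U, and then locates all zeros by a sign-change scan; the inverse problem is well posed only when that zero is unique (cf. Proposition 7 below):

import numpy as np
from scipy.optimize import brentq

def nu_minus(k, d, h, rho):
    # smaller root nu^(-) of (14), formulas (17)-(19)
    b = np.sinh(k*d) + (rho - 1) * np.cosh(k*h) * np.sinh(k*(d - h))
    D = b**2 - 4 * (rho - 1) * np.cosh(k*d) * np.sinh(k*h) * np.sinh(k*(d - h))
    return k * (b - np.sqrt(D)) / (2 * np.cosh(k*d))

# Illustrative values: k1, kN from problem (10), true stratification (h, rho)
k1, kN, d, h_true, rho_true = 1.0, 1.6, 2.0, 0.9, 1.3
nu1 = nu_minus(k1, d, h_true, rho_true)
nuN = nu_minus(kN, d, h_true, rho_true)
nu1W, nuNW = k1 * np.tanh(k1 * d), kN * np.tanh(kN * d)   # formula (11)

def U(h):
    # left-hand side of equation (28)
    t1 = (nu1/k1)*(nu1W - nu1)*np.cosh(k1*d) * np.sinh(kN*(d - h)) * (kN*np.sinh(kN*h) - nuN*np.cosh(kN*h))
    t2 = (nuN/kN)*(nuNW - nuN)*np.cosh(kN*d) * np.sinh(k1*(d - h)) * (k1*np.sinh(k1*h) - nu1*np.cosh(k1*h))
    return t1 - t2

print(U(h_true))   # ~ 0: the true interface depth is a zero of U

# Locate all zeros of U on (0, d) by scanning for sign changes and refining each bracket
grid = np.linspace(1e-3, d - 1e-3, 400)
vals = U(grid)
roots = [brentq(U, a, b) for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
print(roots)       # h_true = 0.9 should appear among the computed roots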
It is obvious that U (d) = 0, and we have that
U (0) = −ν N ν 1 ν W 1 − ν 1 k 1 cosh k 1 d sinh k N d − ν W N − ν N k N cosh k N d sinh k 1 d .
After applying formula (11), this takes the form:
U (0) = ν W N ν 1 − ν N ν W 1 ν N ν 1 k N k 1 cosh k N d cosh k 1 d ,(29)
and so U (0) is positive, negative or zero simultaneously with ν W N ν 1 − ν N ν W 1 . We have that
U ′ (h) = ν 1 k N cosh k 1 d k 1 (ν W 1 − ν 1 ) [k N sinh k N (d − 2h) + ν N cosh k N (d − 2h)] − ν N k 1 cosh k N d k N (ν W N − ν N ) [k 1 sinh k 1 (d − 2h) + ν 1 cosh k 1 (d − 2h)] , U ′′ (h) 2 = ν N k 2 1 cosh k N d k N (ν W N − ν N ) [k 1 cosh k 1 (d − 2h) + ν 1 sinh k 1 (d − 2h)] − ν 1 k 2 N cosh k 1 d k 1 (ν W 1 − ν 1 ) [k N cosh k N (d − 2h) + ν N sinh k N (d − 2h)] .
Then formula (11) yields the following asymptotic formula:
U (h) ∼ (d − h) (ν W N − ν N ) (ν W 1 − ν 1 ) ν 1 k N k 1 − ν N k 1 k N cosh k N d cosh k 1 d as d − h → +0.(30)
Since equation (28) is obtained under the assumption that ν N = ν (−) N and ν 1 = ν (−) 1 , Corollary 1 yields that each factor in the asymptotic formula is positive except for the difference in the square brackets.
The next lemma gives a condition providing a relationship between the value U (0) and the behaviour of U (h) for h < d and sufficiently close to d.
Lemma 2. If the following inequality holds:
ν 1 k N k 1 − ν N k 1 k N ≤ 0,(31)
then U (0) < 0 and U (h) < 0 when h < d and sufficiently close to d.
Proof. Let us prove the inequality U (0) < 0 first. Since
ν W N ν 1 − ν N ν W 1 = ν 1 k N tanh k N d − ν N k 1 tanh k 1 d,
according to formula (11). Furthermore, it follows from (31) that
ν W N ν 1 − ν N ν W 1 ≤ ν N k 2 1 d tanh k N d k N d − tanh k 1 d k 1 d < 0,(32)
because z −1 tanh z is a monotonically decreasing function on (0, +∞) and k 1 < k N . Then (29) implies that U (0) < 0. If inequality (31) is strict, then the second assertion immediately follows from the asymptotic formula (30).
In the case of equality in (31), the asymptotic formula (30) must be extended to include the second-order term with respect to d − h (see the second derivative above). Thus we obtain that
U (h) ∼ (d − h) 2 ν N k 2 1 cosh k N d k N (ν W N − ν N ) [ k 1 cosh k 1 d − ν 1 sinh k 1 d ] − ν 1 k 2 N cosh k 1 d k 1 (ν W 1 − ν 1 ) [ k N cosh k N d − ν N sinh k N d ] as d − h → +0.
Applying the equality ν N = ν 1 (k N /k 1 ) 2 along with formula (11), we write the expression in braces as follows:
ν 1 k N k −1 1 cosh k N d cosh k 1 d (ν W N − ν N ) (k 2 1 − ν 1 ν W 1 ) − (ν W 1 − ν 1 ) (k 2 N − ν N ν W N ) ,
and we have in the square brackets
k 2 1 ν W N − k 2 N ν W 1 + ν W N ν W 1 ν N − ν W N ν W 1 ν 1 + ν W 1 ν N ν 1 − ν W N ν N ν 1 .
Substituting ν N = ν 1 (k N /k 1 ) 2 , we see that this expression is the following quadratic polynomial in ν 1 :
ν W 1 − ν W N (k N /k 1 ) 2 ν 2 1 + ν W N ν W 1 (k N /k 1 ) 2 − 1 ν 1 + ν W N k 2 1 − ν W 1 k 2 N .
Its first and third coefficients are negative (for the latter one this follows from formula (32) because it is equal to the expression in the square brackets multiplied by a positive coefficient). On the other hand, the second coefficient is positive. Therefore, the last expression is negative when ν 1 > 0, which implies that the right-hand side of the last asymptotic formula is negative. This completes the proof of the second assertion.
Immediate consequences of Lemma 2 are the following two corollaries. Proof. Inequality (31) implies that U (0) < 0 and U (h) < 0 for h < d, but sufficiently close to d. Hence U (h) either has no zeroes on (0, d), or has more than one zero.
Corollary 5. Let ν 1 and ν N ∈ (ν 1 , ν W N ) be the smallest two measured sloshing eigenvalues for a two-layer fluid occupying W = D × (−d, 0). Then a necessary condition that equation (28) has a unique solution h is the simultaneous validity of the following two inequalities:
ν₁ k_N/k₁ − ν_N k₁/k_N > 0 and ν^W_N ν₁ − ν_N ν^W_1 < 0.    (33)
Proof. Let equation (28) have a unique solution on (0, d). According to Corollary 4, inequality (31) contradicts to this assumption, and so the first inequality (33) must hold. Then the asymptotic formula (30) implies that U (h) > 0 when h < d and is sufficiently close to d. Hence the assumption that equation (28) have a unique solution on (0, d) implies that either the second inequality (33) is true or ν W N ν 1 = ν N ν W 1 . Let us show that this equality is impossible which completes the proof.
Indeed, according to formula (29), the latter equality means that U (0) = 0, and so
U (h) ∼ h ν W N ν W 1 − ν N ν 1 ν 1 k N k 1 − ν N k 1 k N cosh k N d cosh k 1 d as h → +0.
Here the formula for U ′ is used along with (11) and the fact that ν W N ν 1 = ν N ν W 1 . Since the first inequality (33) is already shown to be true, we have that U (h) > 0 when h = 0, but is sufficiently close to +0. Since we also have that U (h) > 0 when h < d and is sufficiently close to d, we arrive at a contradiction to the assumption that equation (28) has a unique solution on (0, d).
Now we are in a position to formulate the following Proposition 7. Let ν 1 and ν N ∈ (ν 1 , ν W N ) be the smallest two sloshing eigenvalues measured for a two-layer fluid occupying W = D × (−d, 0). If inequalities (33) hold for ν 1 and ν N , then either of the following two conditions is sufficient for equation (28) to have a unique solution h ∈ (0, d) :
(i) U ′ (h) vanishes only once for h ∈ (0, d);
(ii) U ′′ (h) < 0 on (0, d).
Proof. Inequalities (33) and formulae (29) and (30) imply that U (0) < 0 and U (h) > 0 for h < d and sufficiently close to d. Then either of the formulated conditions is sufficient to guarantee that equation (28) has a unique solution on (0, d).
It is an open question whether equation (28) can have more than one solution (consequently, at least three solutions), when inequalities (33) are fulfilled.
(i) variational principle and its corollary concerning inequality between the fundamental sloshing eigenvalues for homogeneous and two-layer fluids occupying the same bounded domain.
(ii) Analysis of the behaviour of eigenvalues for containers with vertical walls and horizontal bottoms. It demonstrates that there are two sequences of eigenvalues with the same eigenfunctions corresponding to eigenvalues having the same number in each of these sequences. The elements of these sequences are expressed in terms of eigenvalues for the Neumann Laplacian in the two-dimensional domain which is a horizontal cross-section of the container.
(iii) In the particular case of infinitely deep container with vertical boundary, eigenvalues and eigenfunctions for homogeneous and two-layer fluids are the same for any depth of the interface. This makes senseless the inverse sloshing problem in a two-layer fluid occupying such a container.
Inverse sloshing problem for a two-layer fluid, that occupies a container of finite constant depth with vertical walls, is formulated as the problem of finding the depth of the interface and the ratio of fluid densities from the smallest two eigenvalues measured by observing them at the free surface. This problem is reduced to two transcendental equations depending on the measured eigenvalues. There are two systems of such equations and to obtain these systems one has to take into account the behaviour of the observed free surface elevation. Sufficient conditions for solubility of both systems have been found.
Proposition 3. For every n = 1, 2, . . . the functions ν^(−)_n and ν^(+)_n are increasing as ρ goes from 1 to infinity. Their ranges are (0, k_n tanh k_nh) and (k_n tanh k_nd, ∞) respectively.
Proposition 5. Let k²_N be the smallest eigenvalue of problem (10) other than k²_1, and let ν^(−)_N be the sloshing eigenvalue defined by (17)-(19) with k = k_N. Then (i) ν_N = ν^(−)_N provided ρ − 1 > 0 is sufficiently small (of course, its value depends on d, h and the domain D);
is a linear combination of linearly independent eigenfunctions v 1 (x), . . . , v N −1 (x) corresponding to the fundamental eigenvalue k 2 1 of problem (10); of course, its multiplicity is taken into account. By Proposition 2 the free surface elevation associated with ν (+) 1
N
along with v 1 , . . . , v N −1 . It is clear that this happens when ν N = ν . Indeed, if all coefficients at the former functions vanish, then the profile is represented by v 1 , . . . , v N −1 , otherwise not. In this case, equation (21) can be complemented by either equation (23) or the following one:
Corollary 4. If inequality (31) holds, then equation (28) for h (and the inverse sloshing problem for a two-layer fluid occupying W) either has no solution or has more than one solution.
[1] Abramowitz, M., Stegun, I. A. Handbook of Mathematical Functions. Dover, Mineola, N Y: 1965. 1046 pp.
[2] Bandle, C. Isoperimetric Inequalities and Applications. Pitman, London: 1980. 228 pp.
[3] Berry, M. Singular limits. Physics Today. 2002. N 5. 10-11.
[4] Cadby, J. R., Linton, C. M. Three-dimensional water-wave scattering in two-layer fluids. J. Fluid Mech. 2000. 423. 155-173.
[5] Courant, R., Hilbert, D. Methods of Mathematical Physics. Vol. 1. Interscience, N Y: 1953. xv+561 pp.
[6] Fox, D. W., Kuttler, J. R. Sloshing frequencies. Z. angew. Math. Phys. 1983. 34. 668-696.
[7] Kopachevsky, N. D., Krein, S. G. Operator Approach to Linear Problems of Hydrodynamics. Birkhäuser, Basel - Boston - Berlin: 2001. xxiv+384 pp.
[8] Karazeeva, N. A., Solomyak, M. Z. Asymptotics of the spectrum of the contact problem for elliptic equations of the second order. Selecta Math. Sovietica. 1987. 6 (1). 151-161.
[9] Kuznetsov, N., McIver, M., McIver, P. Wave interaction with two-dimensional bodies floating in a two-layer fluid: uniqueness and trapped modes. J. Fluid Mech. 2003. 490. 321-331.
[10] Lamb, H. Hydrodynamics. Cambridge University Press, Cambridge: 1932. xv+738 pp.
[11] Linton, C. M., Cadby, J. R. Trapped modes in a two-layer fluid. J. Fluid Mech. 2003. 481. 215-234.
Extracting the 21-cm Power Spectrum and the reionization parameters from mock datasets using Artificial Neural Networks

Madhurima Choudhury¹, Abhirup Datta¹ and Suman Majumdar¹,²
¹ Department of Astronomy, Astrophysics, Indian Institute of Technology Indore, India
² Department of Physics, Blackett Laboratory, Imperial College, London SW7 2AZ, UK

MNRAS, 2021. Accepted XXX. Received YYY; in original form ZZZ. Preprint 10 February 2022. Compiled using MNRAS LaTeX style file v3.0. arXiv:2112.13866 (https://arxiv.org/pdf/2112.13866v2.pdf)
Key words: cosmology: reionization, first stars - cosmology: observations - methods: statistical

ABSTRACT
Detection of the H I 21-cm power spectrum is one of the key science drivers of several ongoing and upcoming low-frequency radio interferometers. However, the major challenge in such observations comes from bright foregrounds, whose accurate removal or avoidance is key to the success of these experiments. In this work, we demonstrate the use of artificial neural networks (ANNs) to extract the H I 21-cm power spectrum from synthetic datasets and extract the reionization parameters from the H I 21-cm power spectrum. For the first time, using a suite of simulations, we present an ANN based framework capable of extracting the H I signal power spectrum directly from the total observed sky power spectrum (which contains the 21-cm signal, along with the foregrounds and effects of the instrument). To achieve this, we have used a combination of two separate neural networks sequentially. As the first step, ANN1 predicts the 21-cm power spectrum directly from foreground corrupted synthetic datasets. In the second step, ANN2 predicts the reionization parameters from the predicted H I power spectra from ANN1. Our ANN-based framework is trained at a redshift of 9.01, and for k-modes in the range 0.17 < k < 0.37 Mpc⁻¹. We have tested the network's performance with mock datasets that include foregrounds and are corrupted with thermal noise, corresponding to 1080 hrs of observations of the -1 and . Using our ANN framework, we are able to recover the H I power spectra with an accuracy of ≈ 95 − 99% for the different test sets. For the predicted astrophysical parameters, we have achieved an accuracy of ≈ 81 − 90% and ≈ 50 − 60% for the test sets corrupted with thermal noise corresponding to the -1 and , respectively.
INTRODUCTION
The redshifted 21-cm line of neutral hydrogen is a sensitive probe to investigate the different phases of the evolution of our Universe. Observations of this 21-cm line will directly enable us to map the young Universe, over a range of cosmic times and give us deep insight into the morphology of the ionisation structures which were carved out by the first sources of light, as well as about the origin and evolution of these first generation sources. (Morales & Wyithe 2010;Pritchard & Loeb 2012;Furlanetto & Oh 2016). The state of the intergalactic medium, right from the period when the first stars and galaxies were formed (Cosmic Dawn or CD) through the period when the Universe became completely ionized (Epoch of Reionization or EoR) and evolved to the Universe we see today, can be probed with the 21-cm line. The measurement of H 21-cm power spectrum using large interferometric arrays currently hold the greatest potential to observe the redshifted H 21-cm line (Bharadwaj & Sethi 2001;Bharadwaj & Ali 2005;Morales 2005b; Zaldarriaga et al. 2004) ★ E-mail: [email protected] and probe the large-scale distribution of neutral hydrogen across a range of redshifts. Measurements of the H 21-cm power spectrum is a major goal of several ongoing and future experiments. Several radio interferometers such as the Giant Meterwave Radio Telescope (GMRT, Swarup et al. (1991)); the Low Frequency Array (LOFAR, van Haarlem et al. (2013)); the Murchison Wide-field Array (MWA, Tingay et al. (2013)); and the Donald C. Backer Precision Array to Probe the Epoch of Reionization (PAPER, Parsons et al. (2010)) have carried out observations to help constrain the 21-cm power spectrum from the Epoch of Reionization. Future experiments like the Hydrogen Epoch of Reionization Array (HERA, DeBoer et al. (2017)) and the Square Kilometer Array (SKA, Koopmans et al. (2015)) also aim to measure the EoR 21-cm power spectrum with much improved sensitivities promising to give a deeper insight into the physics of the evolution of the Universe. Experiments such as, the Experiment to Detect the Global EoR Signature (EDGES), (Bowman et al. 2018); Shaped Antenna measurement of the background RAdio Spectrum (SARAS), (Singh et al. 2021); the Large-Aperture Experiment to Detect the Dark Ages (LEDA), (Greenhill & Bernardi 2012); SCI-HI, (Voytek et al. 2014); the Broadband Instrument for Global Hydrogen Reionisation Signal (BIGHORNS) (Sokolowski et al. 2015), and the Cosmic Twilight Polarimeter, CTP (Nhan et al. 2018) aim to measure the sky averaged global 21-cm signature.
However, the detection of this faint H I 21-cm signal is quite challenging. This is because it is buried in a sea of galactic and extragalactic foregrounds, which are several orders of magnitude brighter than the signal. In addition, the instrument response also varies with increased observation times. The Earth's ionosphere further distorts the signal by introducing direction-dependent effects, making ground-based observations even more challenging. Several novel techniques have been explored to remove bright foregrounds from both interferometric as well as total power experiments. The 21-cm observations heavily rely on the accuracy of foreground removal and on instruments with high sensitivity. The foreground sources comprise diffuse Galactic synchrotron emission from our Galaxy (Shaver et al. 1999), free-free emission from ionizing haloes (Oh & Mack 2003), synchrotron emission and radiation from comptonization processes from faint radio-loud quasars (Di Matteo et al. 2002), synchrotron emission from low-redshift galaxy clusters (Di Matteo et al. 2004), etc. Constructing precise catalogues of the low-frequency radio sky and utilizing them for calibration and foreground removal is a popular method, but still the latest catalogues cannot entirely suppress the contamination by foregrounds. Other methods include modelling of the foregrounds as a sparse basis, without the need to actually identify the source of foreground emission in the sky. These methods can lead to over-subtraction, resulting in loss of the signal (Harker et al. 2009; Tauscher et al. 2018). Another approach is to sample the joint posterior between the signal and the foreground, using Bayesian techniques (Sims & Pober 2019). FastICA is a method based on independent component analysis (ICA), which assumes that the foreground components are statistically independent. For a detailed description we direct the readers to Chapman et al. (2012). FastICA provides a foreground subtraction method in which it allows the foregrounds to choose their own shape instead of assuming a specific form a priori. This type of 'blind source separation (BSS) method' has also been applied successfully to other cosmological observations, such as in CMB foreground removal. GMCA is another method based on independent component analysis, which uses diversity in morphology to separate out the various components. A comparison between the various methods of foreground mitigation can be found in Chapman & Jelić (2019). A recent method was proposed by Mertens et al. (2018), claiming that foreground modelling via Gaussian processes might be able to recover the low k-modes of the power spectrum measurements of the EoR signal. However, recent power spectrum limits at z = 9.1 by Mertens et al. (2020) demonstrate that the Gaussian Process Regression (GPR) based algorithm is also prone to signal loss, and improvements in the signal processing chain are needed to reduce the losses. All these above mentioned methods require a priori assumptions about the foreground model to perform the analysis.
Over the past few years, applications of machine learning (ML) techniques in various aspects of cosmology, astrophysics, statistics, inference and imaging have come into being. For example, Schmit & Pritchard (2018) developed an emulator for the 21-cm power spectrum using artificial neural networks (ANN) for a wide range of parameters. This emulator has been further used to constrain the EoR parameters using the 21-cm power spectrum within a Bayesian inference framework. Tiwari et al. (2021) have extended the same idea and developed an ANN based emulator for the 21-cm bispectrum, a higher-order statistic used to quantify the strong and time-evolving non-Gaussianity in the signal (Majumdar et al. 2018, 2020; Kamran et al. 2021a,b). They have further used this bispectrum emulator within a Bayesian framework to demonstrate that the bispectrum can put better constraints on the EoR parameters compared to the power spectrum. Cohen et al. (2019) developed an emulator for the 21-cm Global signal using ANN, which connects the astrophysical parameters to the predicted global signal. Convolutional neural networks (CNNs) were implemented by Hassan et al. (2019) to identify reionization sources from 21-cm maps. Deep learning models have been used to emulate the entire time evolving 21-cm brightness temperature maps from the epoch of reionization in Chardin et al. (2019), where the authors have further tested their predicted 21-cm maps against the brightness temperature maps produced by the radiative transfer simulations. Gillet et al. (2019) have recovered the astrophysical parameters directly from 21-cm images, using deep learning with CNN. Jennings et al. (2019) have compared machine learning techniques for predicting the 21-cm power spectrum from reionization simulations. Li et al. (2019) have implemented a convolutional de-noising autoencoder (CDAE) to recover the EoR signal by training on SKA images simulated with realistic beam effects. While the above examples are focussed on the imaging domain, Choudhury et al. (2020) have used ANN to predict the astrophysical information by extracting the parameters from the 21-cm Global signal in the presence of foregrounds and instrument response. Choudhury et al. (2021) have incorporated physically motivated 21-cm signal and foreground models, and have used ANN to extract the astrophysical parameters from 21-cm Global signal observations directly. They have also applied their ANN to extract the 21-cm parameters from the EDGES data. In an earlier work, Shimabukuro & Semelin (2017) used ML algorithms for extracting the parameters of the 21-cm power spectrum; however, they did not consider foregrounds in their analysis.
The extraction of astrophysical and cosmological information is heavily dependent on accurate modelling, parametrization, and use of very well-calibrated instruments. In this paper, we propose an alternate method of signal extraction using ANN. We demonstrate that a very basic ANN based framework can be formulated to extract the H 21-cm power spectrum from the total observed sky power spectrum directly.
The structure of this paper is as follows. We start with a basic overview of the 21-cm brightness temperature fluctuations and expressions for the power spectrum of the EoR redshifted 21-cm signal, and describe the details of the semi-numerical simulations that we have used to generate the realizations of the EoR 21-cm signal, in Section § 2. In Section § 3, we describe the challenges faced while trying to detect this faint signal and explain how the foreground power spectrum is generated. We also briefly describe how the noise power spectrum is computed. Following this, we describe our artificial neural network based framework, and elaborate on how we construct the training and test sets in Section § 4. In Section § 5, we present our results: the predicted signal parameters from 21-cm power spectrum simulations and the recovered power spectrum from the simulated mock observations. Finally, in Section § 6, we discuss and summarize our results and elaborate on the scope for future work.
THE 21-CM SIGNAL
The hyperfine splitting of the lowest energy level of hydrogen gives rise to the rest-frame ν_21 = 1.42 GHz radio signal corresponding to a wavelength of 21 cm. The 21-cm signal is produced by the neutral hydrogen atoms in the IGM, collectively observed as a contrast against the background CMB radiation (Pritchard & Loeb (2012) gives a detailed review). This contrast is the differential brightness temperature, which is the primary observable in 21-cm experiments, given by:
\delta T_b(\nu) = \frac{T_S - T_\gamma}{1+z}\left(1 - e^{-\tau_{\nu_0}}\right) \approx 27\, x_{\rm HI}\,(1+\delta_b)\left(\frac{\Omega_b h^2}{0.023}\right)\left(\frac{0.15}{\Omega_m h^2}\,\frac{1+z}{10}\right)^{1/2}\left(1-\frac{T_\gamma(z)}{T_S}\right)\left[\frac{\partial_r v_r}{(1+z)\,H(z)}\right]^{-1}\,{\rm mK}, \qquad (1)
where x_HI is the neutral fraction of hydrogen, δ_b is the fractional over-density of baryons, Ω_b and Ω_m are the baryon and total matter density respectively, in units of the critical density, H(z) is the Hubble parameter and T_γ(z) is the CMB temperature at redshift z, T_S is the spin temperature of neutral hydrogen, and ∂_r v_r is the velocity gradient along the line of sight. The underlying cosmology and rich astrophysics associated with the evolution of the 21-cm signal make it a very promising probe of the less explored phases of the evolution of the Universe. It is a prospective tool which will enable us to characterize the formation and the evolution of the first astrophysical sources and, potentially, properties of dark matter across cosmic time. Most recent experiments target to measure the 21-cm signal either as a sky averaged Global signal or as a power spectrum. The 21-cm power spectrum from the epoch of reionization can be quantified using the fluctuations of the brightness temperature:
\delta T_b(\mathbf{x}) = T_b(\mathbf{x}) - \bar{T}_b(z). \qquad (2)
The spatial fluctuations of the 21-cm signal in a volume can be decomposed into Fourier modes, given by:
\delta T_b(\mathbf{r}) = \int \frac{d^3 k}{(2\pi)^3}\, e^{i\mathbf{k}\cdot\mathbf{r}}\, \delta\tilde{T}_b(\mathbf{k}). \qquad (3)
The expectation value of this quantity, δT̃_b(k), can be expressed as:
\left\langle \delta\tilde{T}_b(\mathbf{k})\, \delta\tilde{T}^{*}_b(\mathbf{k}') \right\rangle = (2\pi)^3\, \delta_D(\mathbf{k}-\mathbf{k}')\, P_{21}(k) \qquad (4)
Here, P_21(k) is the 21-cm power spectrum, which represents the fluctuations of the brightness temperature and tells us about its statistical properties. The dimensionless power spectrum of the brightness temperature is given by Δ²(k) = k³ P(k)/(2π²). Throughout this paper, we work with the dimensionless power spectrum, Δ², also interchangeably referred to as the 'Signal PS', in units of mK² (see Fig. 1).
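As a concrete illustration of how the spherically averaged, dimensionless power spectrum Δ²(k) can be estimated from a simulated brightness-temperature cube, a minimal Python sketch follows; the toy Gaussian cube, the grid size and the binning choices are illustrative placeholders rather than the settings used for the actual simulations.

```python
import numpy as np

def dimensionless_power_spectrum(delta_tb, box_mpc, n_bins=15):
    """Spherically averaged Delta^2(k) [mK^2] of a real-valued cube delta_tb [mK]."""
    n = delta_tb.shape[0]
    vol = box_mpc**3
    dT_k = np.fft.fftn(delta_tb) * (box_mpc / n)**3        # approximate continuous FT
    pk3d = np.abs(dT_k)**2 / vol                            # P(k) in mK^2 Mpc^3
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_mpc / n)        # rad / Mpc
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = pk3d.ravel()
    bins = np.logspace(np.log10(2 * np.pi / box_mpc), np.log10(kmag.max()), n_bins + 1)
    idx = np.digitize(kmag, bins)
    k_cen, delta2 = [], []
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():
            kc = kmag[sel].mean()
            k_cen.append(kc)
            delta2.append(kc**3 * pk[sel].mean() / (2 * np.pi**2))
    return np.array(k_cen), np.array(delta2)

# usage with a toy Gaussian cube standing in for a ReionYuga brightness-temperature map
cube = np.random.normal(0.0, 5.0, (64, 64, 64))
k, d2 = dimensionless_power_spectrum(cube, box_mpc=215.0)
```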
Simulating the 21cm signal from the EoR
Modelling the 21-cm signal involves taking into consideration various details of the reionization process. These include answering questions such as: when and how did reionization start; how long did it last; was it uniform or patchy; what were the main sources of ionizing photons; etc. While analytical models can be very useful to quickly generate the 21-cm signal with a high dynamic range, they have the disadvantage of not being able to properly deal with the spatial distribution of the reionization process. Numerical models are able to provide an improved description of the 21-cm signal, also taking care of the spatial distribution of the associated fields, the disadvantage being that they are comparatively quite slow to run. Hybrid models are based on a combination of the best of both these methods, and are faster and also able to deal with the spatial resolution of the fields to a large extent. There are a number of publicly available semi-numerical codes to simulate the evolution of the 21-cm signal, such as 21cmFAST (Mesinger et al. 2011) and SimFAST21 (Santos et al. 2010). In the publicly available work by Mesinger et al. (2016), the Evolution of 21cm Structure (EoS), full simulations (spanning across CD-EoR) considering spin temperature fluctuations have been carried out using two different galaxy models. Considering a maximally optimistic scenario, where the foregrounds have been cleaned very efficiently, they have predicted the duration of reionization. The semi-numerical code used in this paper (ReionYuga, described in the following paragraph) focuses specifically on the EoR.
In this work, we have used the semi-numerical code ReionYuga 1 (Majumdar et al. 2014; Mondal et al. 2015) to generate coeval ionization cubes at redshift z = 9.01, considering an inside-out model of reionization (which closely follows Choudhury (2009)). This model is primarily based on the assumption that H I follows the underlying matter density contrast. The dark matter haloes which have masses above a certain threshold (M_h,min) are the hosts of the sources producing ionizing radiation. To begin with, a dark matter (DM) distribution is generated using a particle-mesh N-body code. The corresponding N-body simulation has 4288^3 grids with a grid-spacing of 0.07 Mpc and uses DM particles each of mass 1.09 × 10^8 M⊙. In the following step, a Friends-of-Friends algorithm is used to identify haloes in those dark matter distributions, and a halo catalogue is prepared. In the final module, an ionization field is produced using an excursion set formalism (Furlanetto et al. 2004), which closely follows the assumption of homogeneous recombination (Choudhury 2009). The H I distribution is mapped by those particles whose neutral hydrogen masses are calculated from the neutral hydrogen fraction x_HI, interpolated from its eight adjacent grid points. Following this, for each coeval cube, the positions, peculiar velocities and the H I masses of these particles are saved. There are three ionization parameters which we can vary in this prescription, which are:
• M_h,min: Minimum halo mass, which represents the lower limit of the mass of a halo which can collapse and cool down to form the first generation of sources. This cut-off is decided by the various cooling mechanisms which are taken into consideration in the model. The value of this parameter decides the onset and duration of the reionization process.
• ζ, Ionizing efficiency: This parameter takes into account a combination of the mostly degenerate astrophysical parameters such as the escape fraction of ionizing photons f_esc, the star formation efficiency f_* and the recombination rate N_rec (Choudhury 2009) in the prescription for estimating the number of ionizing photons N_γ. For a halo with mass M_h ≥ M_h,min, N_γ can be written as:
N_\gamma(M_h \geq M_{h,min}) = \zeta\, \frac{\Omega_b}{\Omega_m}\, \frac{M_h}{m_p}, \qquad (5)
where ζ ∼ f_* · f_esc · n_γ, and Ω_m, Ω_b and m_p are the dark matter density parameter, the baryon density parameter and the mass of the proton, respectively (a short numerical sketch of this relation is given after this list). A larger value of ζ should imply that more efficient ionizing sources are available and hence a larger N_γ, which would accelerate the process of reionization.
• R mfp , mean free path of the ionizing radiation: The ionizing photons travel a certain distance from the point where they were produced, ionising the intervening medium. Thus R mfp determines the size of the ionized regions. Varying this parameter, keeping other parameters fixed, has minimal effect on the shape of the power spectrum (Greig & Mesinger 2015;Park et al. 2019). So we do not include this parameter in our list of target output parameters.
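As a quick numerical illustration of Eq. (5), a minimal sketch follows; the halo mass, the value of ζ and the density parameters below are arbitrary example values, not values taken from this work.

```python
# number of ionizing photons produced by a halo, following Eq. (5)
M_SUN_KG = 1.989e30
M_PROTON_KG = 1.673e-27

def n_gamma(m_halo_msun, zeta, omega_b=0.049, omega_m=0.31):
    """N_gamma = zeta * (Omega_b / Omega_m) * M_h / m_p, for M_h >= M_h,min."""
    return zeta * (omega_b / omega_m) * (m_halo_msun * M_SUN_KG) / M_PROTON_KG

# example: a 1e10 M_sun halo with ionizing efficiency zeta = 50
print(f"{n_gamma(1e10, zeta=50.0):.3e}")   # of order 1e68 ionizing photons
```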
Preparing the data by varying the ionization parameters
Our simulations were performed on a cubic box of comoving volume [215 Mpc]^3, at redshift z = 9.01, using ReionYuga. To generate the gallery of 21-cm power spectra required for our work (see Fig. 1), we have varied the ionizing efficiency, ζ, in the range [18 − 200], and the minimum halo mass, M_h,min, in the range [10.87 × 10^9 − 1.195 × 10^11 M⊙], generating different reionization histories (see Tab. 1 for the range of parameters). Each of the 21-cm power spectra in Fig. 1 corresponds to a different combination of the parameters (ζ, M_h,min), gridded uniformly across the mentioned range. We choose our redshift of interest around the estimated midpoint of reionization, where the 21-cm signal is expected to peak and the Universe is assumed to be approximately 50% ionized. For our work, we choose the k-modes to be in the range 0.17 < k < 0.35 Mpc^−1. Most current and upcoming EoR experiments aim to observe the 21-cm signal in this redshift regime, with better sensitivities at lower k-modes. We use this set of 21-cm power spectra to construct the various training and test sets for this work, which is explained in detail in § 5.
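A minimal sketch of how a uniform grid of (ζ, M_h,min) combinations for the training set could be assembled is given below; the 28 × 41 gridding is only an illustrative factorization of the 1148 samples and is not necessarily the gridding actually used.

```python
import numpy as np
from itertools import product

zeta_values = np.linspace(18.0, 200.0, 28)                              # ionizing efficiency
mhmin_values = np.logspace(np.log10(1.09e9), np.log10(1.19e11), 41)     # minimum halo mass [M_sun]

# each (zeta, M_h,min) pair labels one simulated 21-cm power spectrum
parameter_grid = list(product(zeta_values, mhmin_values))
print(len(parameter_grid))   # 1148 combinations in this illustrative gridding
```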
OBSERVATIONAL CHALLENGES
One of the major challenges in the detection of the 21-cm signal is the bright galactic and extragalactic foregrounds. These foregrounds are primarily constituted of synchrotron radiation, which is several orders of magnitude brighter than the H I signal. It is extremely challenging to model and subtract out the sources constituting the foregrounds, due to ionospheric fluctuations and a lack of detailed knowledge of the instrument and systematics. Strategically, there could be two ways of dealing with this issue. One is to model the foregrounds very well and subtract them from the observed sky signal, while another would be to opt for foreground avoidance. A lot of effort has been made in the past decade on foreground removal for detecting the 21-cm power spectrum from the EoR. For example, Morales (2005a); Jelić et al. (2008); Liu et al. (2009a,b); Chapman et al. (2012); Paciga et al. (2013) have discussed the foreground subtraction technique in detail. In this approach, a foreground model (empirically obtained from prior observations) is subtracted from the data and the residual data is used to compute the 21-cm power spectrum. In contrast, foreground avoidance (Datta et al. 2010; Parsons et al. 2012; Pober et al. 2013; Ali et al. 2015; Trott et al. 2016) is an alternative approach based on the idea that contamination from any foreground with smooth spectral behaviour is confined only to a wedge in cylindrical (k_⊥, k_∥) space, due to chromatic coupling of an interferometer with the foregrounds. The H I power spectrum can be estimated from the uncontaminated modes outside the wedge region, termed the EoR window, where the H I signal is dominant over the foregrounds. With their merits and demerits, these two approaches are considered complementary (Chapman et al. 2016).
Developing algorithms and techniques to characterize and remove the foregrounds efficiently is among the major goals of most of the current and future radio telescopes. Generally, foreground sources have smooth power-law continuum spectra. Averaging over many such sources with different spectral indices and spectral structure would yield a smooth power-law foreground. Owing to this fact, detection of the spectral structure corresponding to a distribution of regions containing neutral and ionized hydrogen would be feasible. The foreground power spectrum is usually represented as an angular power spectrum in ℓ. Since we have considered coeval simulation boxes at redshift 9.01 only, and are interested in the spherically averaged power spectrum, we need to represent the foreground power spectrum also in terms of k. In the following subsection, we will describe how we have simulated the foreground power spectrum for our work.
Simulating the foreground PS
The foreground power spectrum is usually modelled as a power law in both ℓ and ν. It can be expressed as (Santos et al. 2005):
C_\ell(\nu) \approx A\,(\ell/1000)^{-\beta}\,(\nu_f/\nu)^{2\bar{\alpha}}. \qquad (6)
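A minimal sketch of this power-law foreground model is given below; the pivot frequency ν_f = 130 MHz and the default index values are illustrative assumptions within the parameter ranges discussed next, not values fixed by this work.

```python
import numpy as np

def foreground_cl(ell, nu_mhz, A=700.0, beta=2.5, alpha_bar=2.8, nu_f_mhz=130.0):
    """Angular power spectrum C_ell(nu) = A (ell/1000)^(-beta) (nu_f/nu)^(2*alpha_bar)."""
    return A * (ell / 1000.0)**(-beta) * (nu_f_mhz / nu_mhz)**(2.0 * alpha_bar)

ell = np.arange(100, 5000)
cl_142 = foreground_cl(ell, nu_mhz=142.0)   # ~142 MHz corresponds to z ~ 9 for the 21-cm line
```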
The spectral index ᾱ depends on the energy distribution of relativistic electrons, and it varies slightly across the sky, with a spectral steepening at larger frequencies. Following Tegmark et al. (1997), the mean value of ᾱ is usually taken to be 2.8. The 408 MHz Haslam map suggests that the power law index, ᾱ, varies approximately between 2.5 − 2.8. From Santos et al. (2005), we take the value of the amplitude A to be 700. We have varied ᾱ in the range 2.75 − 2.85 and β in the range 2.20 − 2.79 to generate the set of foreground power spectra for this work. Our choice of the range of β is motivated by actual observations of the ELAIS N1 field (Chakraborty et al. 2019). The power spectrum of the fluctuations at two different frequencies can be written as:
C_\ell(\nu_1, \nu_2) \equiv \left\langle a_{\ell m}(\nu_1)\, a^{*}_{\ell m}(\nu_2) \right\rangle. \qquad (7)
In the flat-sky approximation, following the formulation in Mondal et al. (2018) and Datta et al. (2007), P(k) is the Fourier transform of C_ℓ(Δν), and can be expressed as,
P(k_\perp, k_\parallel) = r^2\, r'_\nu \int d(\Delta\nu)\, e^{-i k_\parallel r'_\nu \Delta\nu}\, C_\ell(\Delta\nu), \qquad (8)
where k_∥ and k_⊥ = ℓ/r are the components of k = √(k_∥² + (ℓ/r)²), which are parallel and perpendicular to the line of sight, respectively; r is the comoving distance, given by r = ∫_0^z dz' [c/H(z')], and r'_ν = dr/dν. Following Datta et al. (2007) and Mondal et al. (2018), and solving this integration for P(k), we convert the foreground angular power spectrum to P_FG(k).
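A minimal numerical sketch of this flat-sky conversion, discretizing the Δν integral of Eq. (8) as a direct sum, is given below; the comoving distance r, its derivative r'_ν and the input C_ℓ(Δν) are placeholders rather than values used in the paper.

```python
import numpy as np

def cl_to_pk(cl_of_dnu, dnu_grid_mhz, ell, r_mpc, rprime_mpc_per_mhz):
    """Convert C_ell(Delta nu) to P(k_perp, k_par) following Eq. (8), on a k_par grid."""
    dnu = dnu_grid_mhz[1] - dnu_grid_mhz[0]
    k_perp = ell / r_mpc
    # k_par grid conjugate to the comoving separation r'_nu * Delta nu
    k_par = 2 * np.pi * np.fft.rfftfreq(len(dnu_grid_mhz), d=dnu * rprime_mpc_per_mhz)
    phases = np.exp(-1j * np.outer(k_par, rprime_mpc_per_mhz * dnu_grid_mhz))
    p_k = (r_mpc**2 * rprime_mpc_per_mhz) * (phases @ cl_of_dnu) * dnu
    return k_perp, k_par, p_k.real

# usage: a smooth, foreground-like C_ell(Delta nu) that decorrelates slowly with Delta nu
dnu_grid = np.linspace(0.0, 8.0, 64)                       # MHz
cl = 700.0 * np.exp(-dnu_grid / 40.0)                      # placeholder smooth spectrum
kperp, kpar, pk = cl_to_pk(cl, dnu_grid, ell=2000, r_mpc=9500.0, rprime_mpc_per_mhz=16.0)
```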
Simulating the Noise PS
In this subsection, we briefly describe the formalism to obtain the sensitivity to the 21-cm power spectrum, and direct the readers to Pober et al. (2013); Parsons et al. (2012), for details. In this method, a uv coverage of the observation is generated by gridding each baseline into the uv plane and including the effects of the Earth's rotation over the course of the observation. The sensitivity to any mode of the dimensionless power spectrum is given by:
\Delta^2_N(k) \approx X^2 Y\, \frac{k^3}{2\pi^2}\, \frac{\Omega}{2t}\, T^2_{\rm sys}, \qquad (9)
where t is the integration time for sampling a particular (u, v, η) mode, and the factor of two in the denominator comes from the explicit inclusion of two orthogonal polarization components to measure the total unpolarized signal. Ω is a factor related to the solid angle of the primary beam and T_sys is the system temperature. X²Y is a cosmological scalar converting (u, v, η) units into comoving wavenumber units. In this work, we have used a slightly modified version of 21cmSense (Parsons et al. 2012; Pober et al. 2013) to obtain the uv-coverage for the instruments HERA and SKA (actually SKA1 LOW), by using the corresponding calibration files. We have obtained the thermal noise power spectra for these experiments corresponding to a total of 1080 hours of observation for SKA (in 6 hours of tracking mode observation per day, for 180 days) and HERA (1080 hours in drift mode), for this work. Now that we have the simulated signal, foreground and noise power spectra ready, we describe our ANN framework and the construction of the corresponding training and test sets in the following sections.
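A minimal sketch of how the single-mode sensitivity of Eq. (9) can be evaluated is given below; all numerical values (X²Y, Ω, integration time and T_sys) are illustrative placeholders, and the actual noise curves used in this work come from the modified 21cmSense runs described above.

```python
import numpy as np

def delta2_noise(k, x2y, omega_beam, t_int_sec, t_sys_k):
    """Thermal-noise power spectrum Delta^2_N(k) [mK^2] following Eq. (9)."""
    return x2y * k**3 / (2 * np.pi**2) * omega_beam / (2.0 * t_int_sec) * (t_sys_k * 1e3)**2

k = np.linspace(0.17, 0.35, 20)          # Mpc^-1, the range considered in this work
d2n = delta2_noise(k, x2y=540.0, omega_beam=0.76, t_int_sec=1e5, t_sys_k=350.0)
```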
A NEURAL NETWORK FRAMEWORK TO EXTRACT THE 21-CM PS
Artificial neural networks (ANN) are one of the several available machine learning techniques which can be used for developing supervised learning-based frameworks. ANNs are capable of mapping associations between the given input data and the target parameters of interest, without the knowledge of their explicit analytical relationships. A basic neural network has three kinds of layers: an input layer, one or more hidden layers, and an output layer. Each neuron in a layer is connected with every neuron in the next layer, and the connection is associated with a weight and a bias. During the training process, the network optimises a chosen cost function by repeatedly back-propagating the errors and adjusting the weights and biases. A chunk of the data is put aside as the validation dataset and the performance of the network is validated using this set. For a detailed description of the training process, please see Choudhury et al. (2020). Once training and validation are completed, a test set is used to predict the output parameters. We have used publicly available Python-based packages, including scikit-learn (Pedregosa et al. 2011), in this work to design the network.
In this paper, we compute R2-scores for each of the output parameters as the performance metric. The R2-score is defined as:
R^2 = 1 - \frac{\Sigma\,(y_{\rm pred} - y_{\rm ori})^2}{\Sigma\,(y_{\rm ori} - \bar{y}_{\rm ori})^2} \qquad (10)
where ȳ_ori is the average of the original parameter, and the summation is over the entire test set. The score R² = 1 implies a perfect inference of the parameters, while R² can vary between 0 and 1. Training of artificial neural networks (ANN) in a supervised manner is very target-specific, and has very different network architectures for different applications. While we use a simple multi-layer perceptron type of ANN in this work and also in the previous ANN-based works (Choudhury et al. 2020, 2021), the methodology and the ANN architecture to extract the parameters from the observations of the sky-averaged 21-cm Global signal are very different from our implementation for the power spectrum synthetic observations. The parameter extraction process becomes less straightforward when we have to deal with the extraction of the faint H I 21-cm power spectrum parameters, taking into consideration fluctuations in the foreground. While ML methods are extremely fast and computationally very efficient, the ANN model performances need to be carefully analysed. There can be issues of "overfitting", which is typically the case when the trained model fails to generalize what it has learned during the training process. Such a scenario can be overcome by suitable regularization techniques. There can also be an "underfitting" scenario, where the training error does not reduce even after several iterations. This can be solved by making the network more complex (e.g., by adding more layers) and also by using well represented training datasets. The ANN is trained to find an association or mapping between the input data and the target features. In parameter estimation, the use of ANN allows us to explore a large parameter space conveniently without requiring a specified prior and also by-passes the calculation of the likelihood, expediting the process tremendously. In the following section, we first describe the basic architecture of the ANN and elaborate on the details of how the training and test datasets were created. We then explain the ANN framework which we have used to extract the H I PS from the synthetic datasets, and discuss the results that we have obtained.
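As a quick sanity check, the R2-score of Eq. (10) can be computed directly and compared against scikit-learn's implementation; a minimal sketch with made-up numbers:

```python
import numpy as np
from sklearn.metrics import r2_score

def r2(y_ori, y_pred):
    """R2-score as defined in Eq. (10)."""
    y_ori, y_pred = np.asarray(y_ori), np.asarray(y_pred)
    ss_res = np.sum((y_pred - y_ori)**2)
    ss_tot = np.sum((y_ori - y_ori.mean())**2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([50.0, 80.0, 120.0, 160.0])    # example "original" parameter values
y_hat = np.array([55.0, 78.0, 118.0, 150.0])     # example "predicted" values
assert np.isclose(r2(y_true, y_hat), r2_score(y_true, y_hat))
```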
RESULTS
In this work, we have used a basic multi-layer perceptron neural network to calculate the parameters from the H I power spectrum data. The number of input neurons of the ANN corresponds to the dimension of each of the input power spectra. In our study, we have restricted our analyses to the k-modes where we expect the sensitivity of the upcoming telescopes to be optimal. Hence, we have considered only 0.17 < k < 0.35 Mpc^−1 for upcoming experiments like HERA and the SKA. Our training set consists of 1148 sample 21-cm PS (see Fig. 1). Using the simulated signal power spectrum, the foreground, and the thermal noise corresponding to the instrument in question, we construct datasets to train the artificial neural network.
Case 1: Only 21-cm Signal without foregrounds
Figure 2. This flowchart describes the framework used to predict the parameters ζ and M_h,min from the mock datasets from which the foregrounds are assumed to have been perfectly removed.
Firstly, we demonstrate a basic ANN framework to predict the astrophysical parameters from the H I power spectrum (Fig. 2). Such an implementation was earlier demonstrated by Shimabukuro & Semelin (2017). This is a proof-of-concept check to see how well the parameters can be extracted from the H I power spectrum data. Each sample in the training dataset for this case consists of only the H I power spectrum and is constructed as follows:
PS tot (k) = PS H (k);(11)
where PS_tot(k) is the total power spectrum. The ANN consists of an input layer, 3 hidden layers (activated by tanh, elu, tanh respectively) and an output layer with a linear activation function. We have used 'mean squared error' ('mse') as the loss function and 'adagrad' as the optimizer (Duchi et al. 2011) for this network. Adagrad is a gradient-squared based optimizing algorithm which is used to minimize the loss function without getting stuck in local minima, saddle points, or plateau regions. To construct the test datasets, we consider a randomly drawn set consisting of 100 H I power spectra. This H I 21-cm PS set is kept fixed for all the different test sets constructed in this work. In addition to the H I power spectrum, PS_HI, we add thermal noise corresponding to 1080 hours of observation of HERA and SKA, represented by PS_noise. We construct three different test sets, the total power spectrum PS_tot(k) being defined as:
(i) Case 1a (Signal only): PS_tot(k) = PS_HI(k)
(ii) Case 1b (Signal + Noise, SKA): PS_tot(k) = PS_HI(k) + PS_noise,SKA(k)
(iii) Case 1c (Signal + Noise, HERA): PS_tot(k) = PS_HI(k) + PS_noise,HERA(k)
The noise contribution is computed using a slightly modified version of 21cmSense, corresponding to 1080 hours of observation for each of the telescopes. In Fig. 3, we have plotted the predicted parameters when the test set does not contain any added noise. In Fig. 5, we have used test sets with the thermal noise corresponding to SKA and HERA, which are shown in each of the panels. We have plotted the minimum halo mass in log-scale, log M_h,min. It can be noted from the plots that the R2-scores of the predictions corresponding to the no-noise test sets are much higher than the ones in which the noise corruption has been added, as expected. Table 2 summarizes the details of the predictions for each of the cases.
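A minimal Keras sketch of the regression network described above (three hidden layers with tanh, elu and tanh activations, a linear output, 'mse' loss and the 'adagrad' optimizer) is given below; the hidden-layer widths are placeholders, since they are not specified here, and the toy data only illustrates the call pattern.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_k = 128          # number of k-modes in each input power spectrum
model = keras.Sequential([
    keras.Input(shape=(n_k,)),
    layers.Dense(64, activation="tanh"),    # placeholder width
    layers.Dense(32, activation="elu"),     # placeholder width
    layers.Dense(16, activation="tanh"),    # placeholder width
    layers.Dense(2, activation="linear"),   # outputs: zeta and log10(M_h,min)
])
model.compile(optimizer="adagrad", loss="mse")

# toy arrays standing in for the 1148 simulated power spectra and their parameter labels
x = np.random.rand(1148, n_k)
y = np.random.rand(1148, 2)
model.fit(x, y, epochs=5, batch_size=128, validation_split=0.1, verbose=0)
```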
Case 2: 21-cm signal in presence of foregrounds
As we progress to build more realistic training datasets for the upcoming radio experiments, mock datasets include the effect of the foregrounds as well as the thermal noise corruptions. For the training sets, we use the foregrounds that have been generated using the prescription described in §3.1. Though it would be more realistic to add the foregrounds in the image domain, and then compute the power spectra to build the training sets, we have assumed that by adding the FG power spectra across the available k-modes, we can mimic the total power spectrum from foreground corrupted datasets to some extent. While working with synthetic datasets, the simulated FG maps are ideally added in the image plane, and then the power spectrum is computed. Such a formalism has been followed by Li et al. (2019) to separate the EoR signal from an EoR+FG field. Though this is a more ideal way of simulating observational effects, we have presented an alternate approach to obtain the power spectra, without going into the imaging domain. As the flat sky approximation holds good on the scales considered in this work, we assume that adding the FG power spectra per k-mode would also be a good representation of the total power spectrum from foreground corrupted datasets to some extent, without actually going into the imaging domain and introducing more imaging related corruptions. The purpose of making such an assumption is to demonstrate that a neural network can be used to recover the H I 21-cm power spectra from a foreground dominated dataset without going into the imaging domain. Mesinger et al. (2016) have used full EoS simulations but have considered only two possible cases: a maximally optimistic scenario, with no effect of foreground or perfect foreground cleaning.
PS tot (k) = PS H1 (k) + PS FG (k),(12)
here, PS H1 and PS FG are the 21-cm signal power spectrum and the foreground power spectrum respectively. The training datasets for this case are shown in Fig. 4. We see that the H signal PS is totally buried in the foregrounds, which are orders of magnitude higher. The test sets are constructed in a similar manner, by adding the noise PS corresponding to HERA and SKA as:
PS tot (k) = PS H1 (k) + PS FG (k) + PS noise (k).(13)
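A minimal sketch of how the total power spectra of Eqs. (12) and (13) can be assembled into training and test arrays is given below, assuming the component spectra are already available on the same 128 k-modes; the array contents here are random placeholders standing in for the simulated components.

```python
import numpy as np

n_samples, n_k = 1148, 128
# placeholders standing in for the simulated components on a common k-grid
ps_hi = np.random.rand(n_samples, n_k)          # HI 21-cm signal PS [mK^2]
ps_fg = np.random.rand(n_samples, n_k) * 1e4    # foreground PS, orders of magnitude larger
ps_noise_ska = np.random.rand(100, n_k)         # thermal noise PS for the 100 test samples

# training set: signal + foreground (Eq. 12); targets are the clean signal PS
x_train = ps_hi + ps_fg
y_train = ps_hi

# a Case 2b-like test set: signal + foreground + SKA noise (Eq. 13)
x_test = ps_hi[:100] + ps_fg[:100] + ps_noise_ska
```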
When we attempted to train the ANN to predict the associated 21-cm signal parameters directly from these training sets, we were not able to recover the parameters and achieve good accuracy. The best accuracies that we could achieve were only ∼ 30 − 35% for ζ and M_h,min, and that too only for the case where no noise was added to the test sets. As an alternate approach, we have proposed a two-step ANN framework to predict the 21-cm signal from the total observed power spectrum.
Invoking the 2-step ANN framework: When we attempted to extract the reionization parameters directly from the 21-cm signal+foreground datasets, the accuracies of the predictions fell considerably. Thus we suggest performing the training in two segments: in the first step we extract the H I power spectrum from the foreground corrupted datasets. Next, using this as the input into another ANN, we predict the astrophysical parameters (Fig. 7). In this framework, at first, we use ANN1 to extract the H I 21-cm PS directly from the total sky power spectra. ANN1 is a multilayer perceptron, with 128 input nodes, corresponding to the number of k-modes we have considered in our training set data. In our model, we use 3 hidden layers, with 'tanh' activation functions, with a learning rate of 0.01. The final output layer also contains 128 nodes. The overall network has a node structure of 128 − {1024 − 512 − 216} − 128. The output from this ANN1 is the H I signal power spectrum that we are interested in. We construct three different test sets to check the performance of the ANN framework:
(i) Case 2a (Signal + Foreground): PS_tot(k) = PS_HI(k) + PS_FG(k)
(ii) Case 2b (Signal + Foreground + Noise, SKA): PS_tot(k) = PS_HI(k) + PS_FG(k) + PS_noise,SKA(k)
(iii) Case 2c (Signal + Foreground + Noise, HERA): PS_tot(k) = PS_HI(k) + PS_FG(k) + PS_noise,HERA(k)
Here, PS_tot and PS_HI are the total power spectrum and the H I power spectrum respectively, and PS_noise,SKA and PS_noise,HERA denote the thermal noise power spectra corresponding to SKA and HERA respectively. ANN1 predicts the 21-cm PS from the test sets. We have plotted the predictions from ANN1 in the left panels of Figs. 9, 10 and 11. Computing the performance metrics: In order to compare the accuracy of the 21-cm PS predicted by ANN1 for the different test cases, we have computed the RMSE for each of the samples in our test dataset. We use the relation:
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{y_o - y_p}{y_o}\right)^2},
to calculate the RMSE. Here, y_o and y_p denote the original and predicted values of the H I PS at a particular value of k. The summation is over the k's, so N is the dimension of the power spectrum (which is 128 in our case). This normalised RMSE gives us a measure of the offset or error in the predicted H I PS. We also observed that sampling our chosen parameter space (corresponding to the reionization parameters) with the latin hypercube sampling (LHS) method did not significantly improve the performance of the ANNs, particularly for the case where training is done with the foreground-corrupted power spectra. The calculated RMSE for each of the samples in the test sets corresponding to the no-noise, HERA-noise and SKA-noise cases are plotted in Fig. 6. We achieve high accuracies of ≈ 95 − 99% in recovering the H I power spectra for the different test sets.
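For concreteness, a minimal Keras sketch of the ANN1 architecture quoted above (128 − {1024 − 512 − 216} − 128, with tanh hidden activations and a learning rate of 0.01) is given below; the use of Adagrad here mirrors the Case 1 network and is an assumption, as the optimizer for ANN1 is not stated.

```python
from tensorflow import keras
from tensorflow.keras import layers

ann1 = keras.Sequential([
    keras.Input(shape=(128,)),                  # total (signal + FG [+ noise]) power spectrum
    layers.Dense(1024, activation="tanh"),
    layers.Dense(512, activation="tanh"),
    layers.Dense(216, activation="tanh"),
    layers.Dense(128, activation="linear"),     # recovered HI 21-cm power spectrum
])
ann1.compile(optimizer=keras.optimizers.Adagrad(learning_rate=0.01), loss="mse")

# two-step usage after training:
#   ps_pred = ann1.predict(x_total)     # step 1: extract the HI PS from the total PS
#   params  = ann2.predict(ps_pred)     # step 2: feed it to the Case 1 regression network
```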
The predicted 21-cm signal power spectra for all the samples of the test sets are shown in the left panels of Figs. 9, 10, 11. The dotted plots represent the predicted power spectra and the solid lines are the original input power spectra. We obtain very good accuracy from the first stage of the framework, ANN1, whose output is then passed into another ANN, which we have labelled as ANN2. ANN2 is the same model as discussed in § 5.1, which has been trained for the no-foreground case to predict the parameters associated with the input signal. The flowchart describing the two-step framework is shown in Fig. 7. The predicted parameters are shown in the right panels of Figs. 9, 10, 11. We can see that the error in the first step of the framework is carried over to the prediction of the parameters. This is particularly evident from the test sets containing HERA-noise. The recovered 21-cm PS have an RMSE much higher than for the no-noise and the SKA-noise cases (see Fig. 6). In Fig. 8, we have picked three randomly selected samples from the test sets, and plotted the predicted power spectra. We observe that the predictions of the H I power spectrum are the best for the no-noise case and for the SKA-noise case. However, the sensitivities (thermal noise levels) for HERA are higher than those of SKA by an order of magnitude around the lowest k-modes. This is reflected in the predictions of the H I signal PS for the test sets corrupted with thermal noise. For the SKA-noise test sets, the reconstruction of the H I 21-cm power spectra is very accurate, and hence the accuracy of the predicted parameters from the second step ANN2 is also appreciable, being ≈ 80 − 92%.
DISCUSSIONS
In this paper, we present a new ANN based machine learning framework to extract the redshifted 21-cm power spectrum and the associated astrophysical parameters from radio observations at relevant low frequencies. To the best of our knowledge, this work is the first demonstration of such an ANN framework on simulated data in the presence of foregrounds. We have simulated H I 21-cm power spectra at redshift z = 9.01, by varying the reionization parameters ζ and M_h,min, using the semi-numerical code ReionYuga.
We have considered two separate cases in our work. In the first case, our training set consists of the simulated 21-cm power spectra (PS). We have implemented ANN to extract the reionization parameters from the H I 21-cm power spectrum, and obtained high accuracies for the corresponding test sets, which consist of thermal noise corresponding to 1080 hours of observation of HERA and SKA in addition to the 21-cm PS: ≈ 83 − 99% for log M_h,min and ≈ 77 − 94% for ζ. This kind of framework can be extended and applied to datasets from observations in the foreground avoidance regime or to datasets from which the foreground has been very carefully modelled and removed. In the second case, we have used foreground-dominated datasets for training and have introduced a dual ANN framework to perform the 21-cm power spectrum and parameter extraction. The ANN model from Case 1 is integrated into a two-step ANN framework, to extract the signal power spectrum and then predict the parameters from datasets dominated by foregrounds. The ANN framework is ignorant about the underlying formulation of how the training sets have been generated. It is trained to extract the H I PS from a dataset that includes foregrounds and eventually learn about the associated reionization parameters. The 21-cm power spectrum extraction was nearly accurate in the noise free case (i.e., Case 2a). However, we observed that by adding thermal noise corresponding to two different experiments (SKA and HERA), the accuracy of prediction of the H I PS reduced considerably, particularly for HERA. Even a 10% deviation (as shown in Fig. 8) from the original H I 21-cm PS for HERA, when carried forward to the next stage of the ANN framework (where the astrophysical parameters are predicted), resulted in a much lower R2-score for the predicted parameters as compared to the no-noise case (Case 2a). The reduced accuracy for HERA, from our ANN framework, could be due to the fact that the k-mode sensitivities are much more limited for HERA as compared to SKA for a fixed observation time of 1080 hours. The performance of the network would increase if the training datasets included more realistic simulations of the sky (signal+foreground), incorporating detailed instrument models and primary beam information. We are taking into account such realistic synthetic observations along with more sophisticated CD/EoR signal models in an upcoming work. We plan to incorporate these advanced synthetic observations in our future ANN based frameworks.
Figure 5. The above plot shows the original versus predicted values of the parameters ζ and M_h,min (in M⊙), from the ANN when the test sets contain only thermal noise corresponding to 1080 hours of observations of SKA (Case 1b) and HERA (Case 1c) respectively, in addition to the signal. The R2 score calculated for each of the parameters is in the top left hand corner box of each plot. We see that the parameters are recovered much more accurately from the case where SKA-noise has been added, as compared to HERA.
ANN can be used to extract features from any kind of data by constructing functions which associate the input with the target parameters. In contrast to the existing techniques of parameter estimation (for example, Bayesian frameworks), ANNs do not require a specific, pre-defined prior. The training sets can be thought of as representative of the prior, as used in Bayesian inference. This allows us to use a varied range of shapes of the 21-cm PS as training sets. The use of ML expedites the computational process for parameter estimation considerably. In this work, the training of the networks was done using 1148 samples. To train an ANN, for example, for the maximum number of epochs (∼ 5000) and a batch size corresponding to ∼ 1148, it took ∼ 5 − 7 minutes on a computer with 40 cores and 62 GB RAM. With smaller batch sizes, it can take up to ∼ 20 − 30 minutes on the same machine. The use of ANN is computationally very efficient, and can easily predict from the various test sets in less than a minute. In a follow-up work in the future, we would like to design a complete signal extraction pipeline which would be trained with several different models of the signal and foreground, and will also include the effects of the instrument response, such that it will be a robust tool for predicting various astrophysical parameters associated with the EoR 21-cm signal.
ACKNOWLEDGEMENTS
MC acknowledges the support of DST for providing the INSPIRE fellowship (IF160153). AD acknowledges the support of EMR-II under CSIR.
DATA AVAILABILITY
Data will be available on request.
Figure 7. This flowchart describes the two-step ANN framework which can extract the 21-cm power spectrum and the associated reionization parameters from foreground dominated datasets. The framework incorporates two neural networks: ANN1 predicts the 21-cm power spectrum from the total power spectrum. ANN2 predicts the associated astrophysical parameters from the output of ANN1.
This paper has been typeset from a TeX/LaTeX file prepared by the author.
Figure 1. The set of 21-cm power spectra at z = 9.01, which are generated by varying the reionization parameters: (a) the ionizing efficiency, ζ; (b) the minimum halo mass, M_h,min, within our chosen range.
Figure 3. Case 1a: Predicted values of the astrophysical parameters log M_h,min (in M⊙) (left) and ζ (right), along with the computed R2-scores, from the test set which does not contain any added noise or foregrounds.
Figure 4. The training dataset for the 2-step ANN framework, constructed by combining the 21-cm power spectra and foregrounds. The training set is limited to k-modes in the range 0.17 < k < 0.35 Mpc−1, which is suitably chosen to accommodate the maximum sensitivities corresponding to HERA and SKA.
Figure 6. The RMSE for each of the predicted 21-cm PS in the foreground-corrupted test datasets is calculated and plotted in the above plot. We see that the RMSE corresponding to the case where the test set samples contain noise corresponding to HERA, for 1080 hours of observation, is much higher than the cases where the added noise corresponds to SKA (1080 hours of observation) and when no noise is added.
Figure 8. The RMSE for each of the predicted 21-cm power spectra from the foreground-corrupted test datasets is calculated and plotted in the above plot. We see that the RMSE corresponding to the test set samples containing noise corresponding to HERA for 1080 hours of observation is much higher than for the test set containing noise corresponding to SKA for 1080 hours of observation and for the case when no noise is added. The x-axis is just a numeric label corresponding to each of the test set samples.
Figure 9. Case 2a: The predictions of the 21-cm power spectra from the foreground-corrupted test set without thermal noise are shown in the above figure. The left panel shows the output of ANN1, i.e., the original and predicted H I power spectra. The right panel shows the predicted parameters ζ and M_h,min (in M⊙) from ANN2.
Figure 11. Case 2c: In the left panel, the predicted 21-cm power spectra from the test set corrupted by foregrounds and thermal noise corresponding to 1080 hours of observations using HERA are shown. It can be observed that there is a clear deviation from the original PS in each of the predicted power spectra from ANN1. The right panel shows the predicted parameters ζ and M_h,min (in M⊙) from ANN2.
Table 1. Range of values of the parameters used to generate different ionization histories.
Parameter    Range
ζ            18 − 200
M_h,min      1.09 × 10^9 − 1.19 × 10^11 M⊙
Table 2. Case 1: Training set does not contain added foregrounds or noise, while the test set contains noise corresponding to 1080 hours of observation of HERA and SKA. The calculated R2 scores are listed in this table for HERA and SKA. However, when the test sets do not contain any added noise, the R2 scores are 0.993 and 0.946 for log(M_h,min) and ζ respectively.
Test Sets    ζ        log M_h,min
No-noise     0.946    0.993
SKA          0.846    0.868
HERA         0.774    0.830
Fore-
Noise
Noise
ANN
signal
-ground (SKA) (HERA)
1a
×
×
×
Single
1b
×
×
Single
1c
×
×
Single
2a
×
×
Dual
2b
×
Dual
2c
×
Dual
Table 3. In this table, we summarise all the various cases that we have
explored in this paper.
https://github.com/rajeshmondal18/ReionYuga MNRAS 000, 1-11(2021)
MNRAS 000, 1-11(2021)
Figure 10. Case 2b: The left panel shows the original and the predicted 21-cm power spectra, from the foreground-corrupted test set, containing thermal noise corresponding to 1080 hours of observations using SKA. The right panel shows the predicted parameters and M h,min in M , from ANN2
. Z S Ali, 10.1088/0004-637X/809/1/61ApJ. 80961Ali Z. S., et al., 2015, ApJ, 809, 61
. S Bharadwaj, S S Ali, 10.1111/j.1365-2966.2004.08604.xMNRAS. 3561519Bharadwaj S., Ali S. S., 2005, MNRAS, 356, 1519
. S Bharadwaj, S K Sethi, 10.1007/BF02702273Journal of Astrophysics and Astronomy. 22293Bharadwaj S., Sethi S. K., 2001, Journal of Astrophysics and Astronomy, 22, 293
. J D Bowman, A E E Rogers, R A Monsalve, T J Mozdzen, N Mahesh, 10.1038/nature25792Nature. 55567Bowman J. D., Rogers A. E. E., Monsalve R. A., Mozdzen T. J., Mahesh N., 2018, Nature, 555, 67
. A Chakraborty, 10.1093/mnras/stz1580Monthly Notices of the Royal Astronomical Society. 4874102Chakraborty A., et al., 2019, Monthly Notices of the Royal Astronomical Society, 487, 4102
. E Chapman, V Jelić, arXiv:1909.12369arXiv e-printsChapman E., Jelić V., 2019, arXiv e-prints, p. arXiv:1909.12369
. E Chapman, 10.1111/j.1365-2966.2012.21065.xMonthly Notices of the Royal Astronomical Society. 4232518Chapman E., et al., 2012, Monthly Notices of the Royal Astronomical Soci- ety, 423, 2518
. E Chapman, S Zaroubi, F B Abdalla, F Dulwich, V Jelić, B Mort, 10.1093/mnras/stw161Monthly Notices of the Royal Astronomical Society. 4582928Chapman E., Zaroubi S., Abdalla F. B., Dulwich F., Jelić V., Mort B., 2016, Monthly Notices of the Royal Astronomical Society, 458, 2928
. J Chardin, G Uhlrich, D Aubert, N Deparis, N Gillet, P Ocvirk, J Lewis, 10.1093/mnras/stz2605MNRAS. 4901055Chardin J., Uhlrich G., Aubert D., Deparis N., Gillet N., Ocvirk P., Lewis J., 2019, MNRAS, 490, 1055
. T R Choudhury, Current Science. 97841Choudhury T. R., 2009, Current Science, 97, 841
. M Choudhury, A Datta, A Chakraborty, 10.1093/mnras/stz3107MNRAS. 4914031Choudhury M., Datta A., Chakraborty A., 2020, MNRAS, 491, 4031
. M Choudhury, A Chatterjee, A Datta, T R Choudhury, 10.1093/mnras/stab180MNRAS. 5022815Choudhury M., Chatterjee A., Datta A., Choudhury T. R., 2021, MNRAS, 502, 2815
. A Cohen, A Fialkov, R Barkana, R Monsalve, arXiv:1910.06274arXiv e-printsCohen A., Fialkov A., Barkana R., Monsalve R., 2019, arXiv e-prints, p. arXiv:1910.06274
. K K Datta, T R Choudhury, S Bharadwaj, 10.1111/j.1365-2966.2007.11747.xMNRAS. 378119Datta K. K., Choudhury T. R., Bharadwaj S., 2007, MNRAS, 378, 119
. A Datta, J D Bowman, C L Carilli, 10.1088/0004-637X/724/1/526ApJ. 724526Datta A., Bowman J. D., Carilli C. L., 2010, ApJ, 724, 526
. D R Deboer, 10.1088/1538-3873/129/974/045001PASP. 12945001DeBoer D. R., et al., 2017, PASP, 129, 045001
. Di Matteo, T Perna, R Abel, T Rees, M J , 10.1086/324293ApJ. 564576Di Matteo T., Perna R., Abel T., Rees M. J., 2002, ApJ, 564, 576
. Di Matteo, T Ciardi, B Miniati, F , 10.1111/j.1365-2966.2004.08443.xMNRAS. 3551053Di Matteo T., Ciardi B., Miniati F., 2004, MNRAS, 355, 1053
. J Duchi, E Hazan, Y Singer, J. Mach. Learn. Res. 12Duchi J., Hazan E., Singer Y., 2011, J. Mach. Learn. Res., 12, 2121-2159
. S R Furlanetto, S P Oh, 10.1093/mnras/stw104MNRAS. 4571813Furlanetto S. R., Oh S. P., 2016, MNRAS, 457, 1813
. S R Furlanetto, A Sokasian, L Hernquist, 10.1111/j.1365-2966.2004.07187.xMNRAS. 347187Furlanetto S. R., Sokasian A., Hernquist L., 2004, MNRAS, 347, 187
. N Gillet, A Mesinger, B Greig, A Liu, G Ucci, 10.1093/mnras/stz010MNRAS. 484282Gillet N., Mesinger A., Greig B., Liu A., Ucci G., 2019, MNRAS, 484, 282
. L J Greenhill, G Bernardi, arXiv:1201.1700Greenhill L. J., Bernardi G., 2012, arXiv e-prints, p. arXiv:1201.1700
. B Greig, A Mesinger, 10.1093/mnras/stv571MNRAS. 4494246Greig B., Mesinger A., 2015, MNRAS, 449, 4246
. G Harker, 10.1111/j.1365-2966.2009.15081.xMNRAS. 3971138Harker G., et al., 2009, MNRAS, 397, 1138
. S Hassan, A Liu, S Kohn, La Plante, P , 10.1093/mnras/sty3282MNRAS. 4832524Hassan S., Liu A., Kohn S., La Plante P., 2019, MNRAS, 483, 2524
. V Jelić, 10.1111/j.1365-2966.2008.13634.xMNRAS. 3891319Jelić V., et al., 2008, MNRAS, 389, 1319
. W D Jennings, C A Watkinson, F B Abdalla, J D Mcewen, 10.1093/mnras/sty3168MNRAS. 4832907Jennings W. D., Watkinson C. A., Abdalla F. B., McEwen J. D., 2019, MNRAS, 483, 2907
. M Kamran, S Majumdar, R Ghara, G Mellema, S Bharadwaj, J R Pritchard, R Mondal, I T Iliev, arXiv:2108.08201Kamran M., Majumdar S., Ghara R., Mellema G., Bharadwaj S., Pritchard J. R., Mondal R., Iliev I. T., 2021a, arXiv e-prints, p. arXiv:2108.08201
. M Kamran, R Ghara, S Majumdar, R Mondal, G Mellema, S Bharadwaj, J R Pritchard, I T Iliev, 10.1093/mnras/stab216MNRAS. 5023800Kamran M., Ghara R., Majumdar S., Mondal R., Mellema G., Bharadwaj S., Pritchard J. R., Iliev I. T., 2021b, MNRAS, 502, 3800
Advancing Astrophysics with the Square Kilometre Array (AASKA14). L Koopmans, 1Koopmans L., et al., 2015, Advancing Astrophysics with the Square Kilo- metre Array (AASKA14), p. 1
. W Li, 10.1093/mnras/stz582mnras. 4852628Li W., et al., 2019, mnras, 485, 2628
. A Liu, M Tegmark, M Zaldarriaga, 10.1111/j.1365-2966.2009.14426.xMonthly Notices of the Royal Astronomical Society. 3941575Liu A., Tegmark M., Zaldarriaga M., 2009a, Monthly Notices of the Royal Astronomical Society, 394, 1575
. A Liu, M Tegmark, M Zaldarriaga, 10.1111/j.1365-2966.2009.14426.xMonthly Notices of the Royal Astronomical Society. 3941575Liu A., Tegmark M., Zaldarriaga M., 2009b, Monthly Notices of the Royal Astronomical Society, 394, 1575
. S Majumdar, G Mellema, K K Datta, H Jensen, T R Choudhury, S Bharadwaj, M M Friedrich, 10.1093/mnras/stu1342MNRAS. 443Majumdar S., Mellema G., Datta K. K., Jensen H., Choudhury T. R., Bharad- waj S., Friedrich M. M., 2014, MNRAS, 443, 2843-2861
. S Majumdar, 10.1093/mnras/stv2812MNRAS. 456Majumdar S., et al., 2015, MNRAS, 456, 2080-2094
. S Majumdar, J R Pritchard, R Mondal, C A Watkinson, S Bharadwaj, G Mellema, 10.1093/mnras/sty535MNRAS. 4764007Majumdar S., Pritchard J. R., Mondal R., Watkinson C. A., Bharadwaj S., Mellema G., 2018, MNRAS, 476, 4007
. S Majumdar, M Kamran, J R Pritchard, R Mondal, A Mazumdar, S Bharadwaj, G Mellema, 10.1093/mnras/staa3168MNRAS. 4995090Majumdar S., Kamran M., Pritchard J. R., Mondal R., Mazumdar A., Bharad- waj S., Mellema G., 2020, MNRAS, 499, 5090
. F G Mertens, A Ghosh, L V E Koopmans, 10.1093/mnras/sty1207MNRAS. 4783640Mertens F. G., Ghosh A., Koopmans L. V. E., 2018, MNRAS, 478, 3640
. F G Mertens, 10.1093/mnras/staa327MNRAS. 4931662Mertens F. G., et al., 2020, MNRAS, 493, 1662
. A Mesinger, S Furlanetto, R Cen, 10.1111/j.1365-2966.2010.17731.xMNRAS. 411955Mesinger A., Furlanetto S., Cen R., 2011, MNRAS, 411, 955
. A Mesinger, B Greig, E Sobacchi, 10.1093/mnras/stw831MNRAS. 4592342Mesinger A., Greig B., Sobacchi E., 2016, MNRAS, 459, 2342
. R Mondal, S Bharadwaj, S Majumdar, 10.1093/mnras/stv2772MNRAS. 456Mondal R., Bharadwaj S., Majumdar S., 2015, MNRAS, 456, 1936-1947
. R Mondal, S Bharadwaj, K K Datta, 10.1093/mnras/stx2888MNRAS. 4741390Mondal R., Bharadwaj S., Datta K. K., 2018, MNRAS, 474, 1390
. M F Morales, 10.1086/426730ApJ. 619678Morales M. F., 2005a, ApJ, 619, 678
. M F Morales, 10.1086/426730ApJ. 619678Morales M. F., 2005b, ApJ, 619, 678
. M F Morales, J S B Wyithe, 10.1146/annurev-astro-081309-130936ARA&A. 48127Morales M. F., Wyithe J. S. B., 2010, ARA&A, 48, 127
. B D Nhan, D D Bordenave, R F Bradley, J O Burns, K Tauscher, D Rapetti, P J Klima, arXiv:1811.04917Nhan B. D., Bordenave D. D., Bradley R. F., Burns J. O., Tauscher K., Rapetti D., Klima P. J., 2018, arXiv e-prints, p. arXiv:1811.04917
. S P Oh, K J Mack, 10.1111/j.1365-2966.2003.07133.xMNRAS. 346871Oh S. P., Mack K. J., 2003, MNRAS, 346, 871
. G Paciga, 10.1093/mnras/stt753Monthly Notices of the Royal Astronomical Society. 433639Paciga G., et al., 2013, Monthly Notices of the Royal Astronomical Society, 433, 639
. J Park, A Mesinger, B Greig, N Gillet, 10.1093/mnras/stz032MNRAS. 484933Park J., Mesinger A., Greig B., Gillet N., 2019, MNRAS, 484, 933
. A R Parsons, 10.1088/0004-6256/139/4/1468AJ. 1391468Parsons A. R., et al., 2010, AJ, 139, 1468
. A Parsons, J Pober, M Mcquinn, D Jacobs, J Aguirre, 10.1088/0004-637X/753/1/81ApJ. 75381Parsons A., Pober J., McQuinn M., Jacobs D., Aguirre J., 2012, ApJ, 753, 81
. F Pedregosa, Journal of Machine Learning Research. 122825Pedregosa F., et al., 2011, Journal of Machine Learning Research, 12, 2825
. J C Pober, 10.1088/0004-6256/145/3/65AJ. 14565Pober J. C., et al., 2013, AJ, 145, 65
. J R Pritchard, A Loeb, 10.1088/0034-4885/75/8/086901Reports on Progress in Physics. 7586901Pritchard J. R., Loeb A., 2012, Reports on Progress in Physics, 75, 086901
. M G Santos, A Cooray, L Knox, 10.1086/429857ApJ. 625575Santos M. G., Cooray A., Knox L., 2005, ApJ, 625, 575
. M G Santos, L Ferramacho, M B Silva, A Amblard, A Cooray, 10.1111/j.1365-2966.2010.16898.xmnras. 4062421Santos M. G., Ferramacho L., Silva M. B., Amblard A., Cooray A., 2010, mnras, 406, 2421
. C J Schmit, J R Pritchard, 10.1093/mnras/stx3292MNRAS. 4751213Schmit C. J., Pritchard J. R., 2018, MNRAS, 475, 1213
. P A Shaver, R A Windhorst, P Madau, A G De Bruyn, A&A. 345380Shaver P. A., Windhorst R. A., Madau P., de Bruyn A. G., 1999, A&A, 345, 380
. H Shimabukuro, B Semelin, 10.1093/mnras/stx734MNRAS. 4683869Shimabukuro H., Semelin B., 2017, MNRAS, 468, 3869
. P H Sims, J C Pober, 10.1093/mnras/stz1888MNRAS. 4882904Sims P. H., Pober J. C., 2019, MNRAS, 488, 2904
. S Singh, arXiv:2112.06778arXiv e-printsSingh S., et al., 2021, arXiv e-prints, p. arXiv:2112.06778
. M Sokolowski, 10.1017/pasa.2015.3Publ. Astron. Soc. Australia324Sokolowski M., et al., 2015, Publ. Astron. Soc. Australia, 32, e004
. G Swarup, S Ananthakrishnan, V K Kapahi, A P Rao, C R Subrahmanya, V K Kulkarni, Current Science. 6095Swarup G., Ananthakrishnan S., Kapahi V. K., Rao A. P., Subrahmanya C. R., Kulkarni V. K., 1991, Current Science, 60, 95
. K Tauscher, D Rapetti, J O Burns, E Switzer, 10.3847/1538-4357/aaa41fThe Astrophysical Journal. 853187Tauscher K., Rapetti D., Burns J. O., Switzer E., 2018, The Astrophysical Journal, 853, 187
. M Tegmark, J Silk, M J Rees, A Blanchard, T Abel, F Palla, 10.1086/303434ApJ. 4741Tegmark M., Silk J., Rees M. J., Blanchard A., Abel T., Palla F., 1997, ApJ, 474, 1
. S J Tingay, 10.1017/pasa.2012.007Publ. Astron. Soc. Australia307Tingay S. J., et al., 2013, Publ. Astron. Soc. Australia, 30, e007
. H Tiwari, A K Shaw, S Majumdar, M Kamran, M Choudhury, arXiv:2108.07279Tiwari H., Shaw A. K., Majumdar S., Kamran M., Choudhury M., 2021, arXiv e-prints, p. arXiv:2108.07279
. C M Trott, 10.3847/0004-637X/818/2/139ApJ. 818139Trott C. M., et al., 2016, ApJ, 818, 139
. T C Voytek, A Natarajan, Jáuregui García, J M Peterson, J B López-Cruz, O , 10.1088/2041-8205/782/1/L9ApJ. 7829Voytek T. C., Natarajan A., Jáuregui García J. M., Peterson J. B., López-Cruz O., 2014, ApJ, 782, L9
. M Zaldarriaga, S R Furlanetto, L Hernquist, M P Van Haarlem, 10.1051/0004-6361/201220873ApJ. 6082A&AZaldarriaga M., Furlanetto S. R., Hernquist L., 2004, ApJ, 608, 622 van Haarlem M. P., et al., 2013, A&A, 556, A2
| [
"https://github.com/rajeshmondal18/ReionYuga"
]
|
[
"AN EFFICIENT ALGORITHM FOR THE RIEMANNIAN 10j SYMBOLS",
"AN EFFICIENT ALGORITHM FOR THE RIEMANNIAN 10j SYMBOLS"
]
| [
"J Daniel ",
"Greg Egan "
]
| []
| []
| The 10j symbol is a spin network that appears in the partition function for the Barrett-Crane model of Riemannian quantum gravity. Elementary methods of calculating the 10j symbol require O(j 9 ) or more operations and O(j 2 ) or more space, where j is the average spin. We present an algorithm that computes the 10j symbol using O(j 5 ) operations and O(j 2 ) space, and a variant that uses O(j 6 ) operations and a constant amount of space. An implementation has been made available on the web. | 10.1088/0264-9381/19/6/310 | [
"https://arxiv.org/pdf/gr-qc/0110045v3.pdf"
]
| 14,908,906 | gr-qc/0110045 | d546dcc7215156a2018d094d29d66970b473a762 |
AN EFFICIENT ALGORITHM FOR THE RIEMANNIAN 10j SYMBOLS
24 Jan 2002
J. Daniel Christensen
Greg Egan
AN EFFICIENT ALGORITHM FOR THE RIEMANNIAN 10j SYMBOLS
24 Jan 2002
The 10j symbol is a spin network that appears in the partition function for the Barrett-Crane model of Riemannian quantum gravity. Elementary methods of calculating the 10j symbol require O(j 9 ) or more operations and O(j 2 ) or more space, where j is the average spin. We present an algorithm that computes the 10j symbol using O(j 5 ) operations and O(j 2 ) space, and a variant that uses O(j 6 ) operations and a constant amount of space. An implementation has been made available on the web.
Introduction
The Barrett-Crane model of four-dimensional Riemannian quantum gravity [6] has been of significant interest recently [1,2,10,12]. The model is discrete and well-defined, and the partition function for the Perez-Rovelli version has been rigorously shown to converge [11] for a fixed triangulation of spacetime. The Riemannian model serves as a step along the way to understanding the less tractable but physically more realistic Lorentzian version [7]. However, despite its simplicity, we are currently lacking explicit numerical computations of the partition function and of expectation values of observables in the Riemannian model. These are necessary to test its large-scale behaviour and other physical properties.
It has been shown [3] that the amplitudes in the Barrett-Crane model are always non-negative, and therefore that the expectation values of observables can be approximated using the Metropolis algorithm. This greatly reduces the number of samples that must be taken, and thus the remaining obstacle is the time required to compute each sample. This paper presents a very efficient algorithm for doing these computations. The algorithm is used in [4] and [5] to understand the asymptotic behaviour of the 10j symbols and the dependence of the partition function on a cutoff.
To explain further, we need to describe the Barrett-Crane model in more detail. It has been formulated by Baez [2] as a discrete spin foam model, in which faces in the dual 2-skeleton of a fixed triangulation of spacetime are labeled by spins. The dual 2-skeleton consists of a dual vertex at the center of each 4-simplex of the triangulation, five dual edges incident to each dual vertex (one for each tetrahedron in the boundary of the 4-simplex), and ten dual faces incident to each dual vertex (one for each triangle in the boundary of the 4-simplex).
Baez notes that the partition function for this model is the sum, over all labelings of the dual faces by spins, of an expression that contains the product of a 10j symbol for each dual vertex. A 10j symbol, described in detail in Section 2, is a Spin(4) spin network. Roughly speaking, a spin network is a graph whose vertices are labelled by tensors, and whose edges indicate how to contract these tensors. A spin network evaluates to a complex number in the way explained in Section 3. In short, the 10j symbol is a function taking ten input spins and producing a complex number. It is at the heart of the calculation of the partition function, and thus an algorithm for calculating the 10j symbols efficiently is quite important.

Date: January 23, 2002. The authors would like to thank John Baez for many useful conversations about the material in this paper.
In Section 2 we recall the definition of the 10j symbol using spin networks. Then in Section 3 we briefly describe the elementary algorithms for evaluating these spin networks, and give their running times and memory use. We conclude with Section 4, which presents our algorithms and their time and space needs.
The 10j symbol
In the dual 2-skeleton of a triangulation of a 4-manifold, each dual vertex belongs to five dual edges, and each pair of these dual edges borders a dual face. A 10j symbol is a Spin(4) spin network with five vertices (corresponding to the five dual edges) and ten edges, one connecting each pair of vertices (corresponding to the ten dual faces), with the edges labeled by spins. In the context of a Spin(4) spin network, a spin j labeling an edge denotes the representation j ⊗ j of Spin(4) ≅ SU(2) × SU(2), where j is the spin-j representation of SU(2). Such representations are called "balanced." We use the convention that spins are non-negative half-integers.
Here is a picture of the 10j symbol, with the vertices numbered 0 through 4, and the spins divided into two groups: $j_{1,i}$ are the spins on the edges joining vertex $i$ to vertex $i+1$ (modulo 5), and $j_{2,i}$ are the spins on the edges joining vertex $i$ to vertex $i+2$ (modulo 5).
The five vertices of the network are equal to Barrett-Crane intertwiners. These are the unique intertwiners (up to a factor) between four balanced representations of Spin(4) with the property that their expansion as a sum of tensor products of trivalent SU(2) networks only contains balanced representations on the internal edge, regardless of which pairs of external edges are joined [13]. Barrett and Crane give the formula for these intertwiners in [6]:
[Diagrammatic equation: the four-valent Barrett-Crane vertex with external edges $j_1, j_2, j_3, j_4$ is defined as $\sum_l \Delta_l$ times the tensor product of two copies of the trivalent SU(2) tree that joins $j_1$ and $j_2$ through an internal edge $l$ to $j_3$ and $j_4$.]
Here the sum is over all admissible values of $l$, i.e. those that satisfy the Clebsch-Gordan condition for both SU(2) vertices. So $l$ ranges from $\max(|j_1 - j_2|,\, |j_3 - j_4|)$ to $\min(j_1 + j_2,\, j_3 + j_4)$ in integer steps. If the difference between these bounds is not an integer, the Spin(4) vertex will be zero. When $l$ satisfies these conditions, there is a unique intertwiner up to normalization which can be used to label the trivalent SU(2) vertices. (These intertwiners are normalized so that the theta network in the numerator of equation (4) has value 1.) $\Delta_l$ is the value of a loop in the spin-$l$ representation, which is just $(-1)^{2l}(2l+1)$, the superdimension of the representation.
The uniqueness result of Reisenberger [13] tells us that if we replace the vertical edges in the above definition by horizontal edges, the result differs at most by a constant factor. In fact, Barrett and Crane [6] stated that the two definitions give exactly the same Spin(4) vertex, and Yetter [14] has proved this.
Any closed spin network evaluates to a complex number, by contracting the tensors at the vertices according to the pairings specified by the edges. Thus the 10j symbol is a complex number. (In fact, one can show that it is always a real number.)
To avoid confusion, we want to make it clear that we are working with the "classical" (non q-deformed) evaluation of our spin networks. We will frequently reference the book [9] by Kauffman and Lins; while it explicitly discusses the q-deformed version, the formulas we use apply to the classical evaluation as well.
Elementary algorithms
To set the context, we begin by explaining some elementary algorithms for computing the 10j symbol. These algorithms all share the feature that they evaluate a spin network by choosing bases for the representations labelling the edges, computing the components of the tensors representing the intertwiners, and computing the contraction of the tensors in some way. They make no use of special features of these tensors, except for the vanishing property mentioned below.
The first three methods each have two versions, one which works directly with the Spin(4) network (1), and one that converts it into a five-fold sum over SU(2) networks, by expanding each Barrett-Crane intertwiner:
$$\sum_{l_0, \dots, l_4} \Big( \prod_{k=0}^{4} \Delta_{l_k} \Big) \times \big[\text{decagonal SU(2) network}\big] \times \big[\text{identical decagonal SU(2) network}\big] \qquad (2)$$

[Each decagonal network has outer edges labelled $j_{1,0}, \dots, j_{1,4}$, inner edges $j_{2,0}, \dots, j_{2,4}$, internal edges $l_0, \dots, l_4$, and vertices $0, 0', 1, 1', \dots, 4, 4'$; the diagrams themselves are not reproducible in plain text.]
Here, $l_i$ is the spin labeling the new edge introduced by the expansion of the intertwiner at vertex $i$, and it ranges in integer steps from $L_i := \max(|j_{1,i} - j_{2,i}|,\, |j_{1,i-1} - j_{2,i-2}|)$ to $H_i := \min(j_{1,i} + j_{2,i},\, j_{1,i-1} + j_{2,i-2})$, where the vertex numbers are all to be interpreted modulo 5. (If $H_i - L_i$ is not a non-negative integer, then the sum over $l_i$ is empty and the 10j symbol is zero. In fact, if vertex $i$ is non-zero, then $|j_{1,i} - j_{2,i}|$, $|j_{1,i-1} - j_{2,i-2}|$, $j_{1,i} + j_{2,i}$ and $j_{1,i-1} + j_{2,i-2}$ must all differ by integers.)
There are at most O(j) terms in each sum, where j is the average of the ten spins.
Since the two decagonal networks are the same, one only needs to evaluate one of them and square the answer. The first elementary method is one we call direct contraction. One simply labels each edge in the spin network with a basis vector from the representation labelling the edge, and multiplies together the corresponding components of the tensors. Then this is summed up over all labellings. In fact, one can restrict to a smaller set of labellings: the bases can be chosen so that for each choice of two basis vectors on two of the three edges meeting an SU(2) vertex, there is at most one choice of basis vector on the third edge giving a non-zero tensor component. The Spin(4) vertices also have the property that the bases can be chosen so that when three of the basis vectors adjacent to a vertex are specified, the last one is determined.
The second elementary method is staged contraction. In the Spin(4) version of this method, one starts with the tensor at vertex 0, contracts with the tensor at vertex 1, and then vertex 4, and then vertex 2, and finally vertex 3, again taking care to save space and time by using the vanishing properties of the tensors. Similarly, one can iteratively contract the tensors in the decagonal SU(2) network. At intermediate stages one is storing tensors with a large number of components.
The third elementary method is 3cut. Here one takes a ray from the center of (1) and cuts the three edges it crosses. Then one takes the trace of the operator this defines on the three-fold tensor product. In more detail, one sums over basis vectors for the factors in this tensor product, computing the effect of the network on these basis vectors, and using the vanishing properties. The memory required for this method (and the next) is dominated by the memory needed to store the tensors themselves.
The fourth and final elementary method is 2cut. This one only makes sense for the decagonal network, since one proceeds by taking a ray from the center of the decagon which crosses just two edges, cutting those two edges, and taking the trace of the resulting operator, using the vanishing properties.
Here is a table which gives an upper bound on the number of operations (additions and multiplications) that these algorithms use, and the amount of memory they require, as a function of a typical spin j. The space requirements include the space to store the Barrett-Crane tensors.
              direct contraction   staged contraction   3cut        2cut
Spin(4) Time  j^12                 j^12                 j^12        N/A
        Space j^6                  j^10                 j^6         N/A
SU(2)   Time  j^11                 j^9                  j^10        j^9
        Space j^2                  j^4                  j^2         j^2
Either  Time  m p^6                m p^(2v+4)           m p^(v+5)   m p^4
        Space p^(v+2)              p^(v+4)              p^(v+2)     p^2

In the last two rows, we represent the entries in a uniform way for either version by writing v = 1 for the Spin(4) version of each algorithm and v = 0 for the SU(2) version. Then we let $p = j^{v+1}$ and $m = j^{5-5v}$. This shows how they are related. For example, the SU(2) methods always get a factor of $j^5$ in time from the five loops coming from expanding the Barrett-Crane vertices. Also, the Spin(4) methods get powers of $j^2$ (because $\dim j \otimes j = (2j+1)^2$), while the SU(2) methods get powers of $j$ (because $\dim j = 2j+1$).
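As a quick arithmetic check of this parameterization, the short Python snippet below (our illustration, not code from the paper; the method list and helper names are ours) expands the "Either" rows for v = 1 and v = 0 and prints the resulting exponents of j. They reproduce the explicit Spin(4) and SU(2) rows above, except the Spin(4)/2cut entry, which the table marks N/A because 2cut only applies to the decagonal SU(2) network.

```python
# Sketch: expand the "Either" rows of the table for v = 1 (Spin(4)) and
# v = 0 (SU(2)), using p = j^(v+1) and m = j^(5-5v).
# Printed values are exponents of j.

METHODS = ["direct contraction", "staged contraction", "3cut", "2cut"]

def exponents(v):
    p, m = v + 1, 5 - 5 * v                 # exponents of j in p and m
    time_p = [6, 2 * v + 4, v + 5, 4]       # power of p in each time bound
    space_p = [v + 2, v + 4, v + 2, 2]      # power of p in each space bound
    time = {meth: m + e * p for meth, e in zip(METHODS, time_p)}
    space = {meth: e * p for meth, e in zip(METHODS, space_p)}
    return time, space

for v, name in [(1, "Spin(4)"), (0, "SU(2)")]:
    time, space = exponents(v)
    print(f"{name:7s} time : {time}")
    print(f"{name:7s} space: {space}")
```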
The 2cut method has the best worst-case behaviour, in both space and time.
In the next section we present an algorithm which has running time $O(j^5)$ and requires $O(j^2)$ space, and give variants with running time $O(j^6)$ and $O(j^7)$ and which use a constant amount of space.

The difference between $j^5$ and the running time, $j^9$, for the best of the elementary methods is significant. For example, with all spins equal to 20, our $j^5$ algorithm runs in under six minutes on a 300 MHz microprocessor. A back of the envelope calculation suggests that this would take about 30 years with a $j^9$ algorithm.
Our new algorithms
In this section we describe a new method for computing 10j symbols. The key feature of this method is that it does not proceed by computing the tensor components for the intertwiners. Instead it uses recoupling to simplify the network to one that can be evaluated directly. Thus this method makes use of special properties of the tensors that occur in the 10j symbols.
There are three versions of this method. We explain one of these in some detail, and briefly describe the variants at the appropriate points. We use the notation of equation (1), and consider the expansion (2) as a sum of squares of decagonal networks.
The decagonal networks can be deformed into the "ladders" shown below, where the vertices at the bottom of each ladder are to be identified with those at the top of the same ladder. Any sign introduced by this deformation is produced twice and so can be ignored.
[The ladder diagrams are not reproducible in plain text.]
To simplify these networks further, we recouple the sub-networks consisting of a horizontal edge and half of each vertical edge incident to its endpoints, rewriting them as sums of sub-networks with the same external edges, using the following recoupling formula for SU(2) spin networks [9,Ch. 7]:
[Diagrammatic recoupling formula: the "horizontal" network with external edges $a, b, c, d$ and internal edge $j$ equals $\sum_k \left\{ \begin{matrix} a & b & k \\ c & d & j \end{matrix} \right\}$ times the "vertical" network with the same external edges and internal edge $k$.]

The horizontal and vertical networks appearing above are two different ways of writing intertwiners from one two-fold tensor product of irreducible representations to another; both kinds of networks form bases for the space of intertwiners, and the 6j symbols appearing in the formula are defined to be the change-of-basis coefficients.
At first glance, it might look as if this recoupling would introduce sums over ten new spins, labeling the ten new vertical edges. However, the total networks that result from the recoupling consist of chains of sub-networks shaped like
[two trivalent vertices joined by a pair of parallel edges labelled $b$ and $c$, with external edges $a$ and $d$]   (3)
By Schur's Lemma, such sub-networks will only be non-zero when the incoming and outgoing edges have identical spins. So the recoupled networks can be written as a sum over just two new spins, $m_1$ and $m_2$. This sum is over all values such that every vertex in the diagram satisfies the Clebsch-Gordan condition, so both $m_1$ and $m_2$ will independently range in integer steps from $\max_i(|l_i - j_{2,i-1}|)$ to $\min_i(l_i + j_{2,i-1})$. Here we use the fact that if the five Barrett-Crane vertices are non-zero, then as $i$ varies, the ten quantities $|l_i - j_{2,i-1}|$ and $l_i + j_{2,i-1}$ all differ by integers. Indeed, $|l_i - j_{2,i-1}| \equiv l_i - j_{2,i-1} \equiv l_i + j_{2,i-1}$ modulo integers, and by the paragraph after equation (2), $l_i + j_{2,i-1} \equiv |j_{1,i} - j_{2,i}| + j_{2,i-1} \equiv j_{1,i} + j_{2,i-1} + j_{2,i} \equiv j_{1,i+1} + j_{2,i+1} + j_{2,i} \equiv l_{i+1} + j_{2,i}$ modulo integers, where in the third step we use that vertex $i+1$ is non-zero.
At this point there is a choice which determines which version of the algorithm one obtains. If the sum over $m_1$ and $m_2$ is left inside the sum over the $l_i$, then it can be written as the square of a sum over a single $m$. As described below, each of the terms in this sum can be computed with $O(j)$ operations and a constant amount of memory, where $j$ is the average of the ten spins. Thus, this method produces an algorithm that runs in $O(j^7)$ time and takes a constant amount of space.
In general it turns out to be more efficient to make the sum over the $m$'s outermost, in order to reinterpret the sum over the $l_i$ as the trace of a matrix product. The range for $m_1$ and $m_2$ must encompass all potentially admissible values, and the range for each $l_i$ can then be adjusted for the current values of $m_1$ and $m_2$. The original range for $l_i$ was from $L_i$ to $H_i$ (see the paragraph after equation (2)), so the $m$'s can never be greater than $\min_i(H_i + j_{2,i-1})$ without violating the triangle inequality at one of the vertices. The lower bound is given by $\max_i\big(\min_{l_i}\{|l_i - j_{2,i-1}| : L_i \le l_i \le H_i\}\big)$, where the minimum breaks down into three cases: if $j_{2,i-1} \ge H_i$, it is $j_{2,i-1} - H_i$; if $j_{2,i-1} \le L_i$, it is $L_i - j_{2,i-1}$; and if $L_i < j_{2,i-1} < H_i$, it is either $0$ or $\tfrac{1}{2}$, depending on whether $2j_{2,i-1} \equiv 2L_i \pmod 2$ or not.

Each of the $l_i$ is then restricted to take account of the current $m$ values, ranging from $\max(L_i,\, |m_1 - j_{2,i-1}|,\, |m_2 - j_{2,i-1}|)$ to $\min(H_i,\, m_1 + j_{2,i-1},\, m_2 + j_{2,i-1})$.
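A small Python helper (ours, not the authors' code; it assumes the usual parity constraints hold so that all bounds differ by integers) makes these ranges explicit.

```python
# Sketch of the bounds described above; helper names are ours.

def min_abs_diff(L, H, j2):
    """min over l in {L, L+1, ..., H} of |l - j2| (the three cases above)."""
    if j2 >= H:
        return j2 - H
    if j2 <= L:
        return L - j2
    # L < j2 < H: 0 if 2*j2 and 2*L have the same parity, else 1/2
    return 0.0 if (round(2 * j2) - round(2 * L)) % 2 == 0 else 0.5

def m_bounds(L_list, H_list, j2_list):
    """Global (lower, upper) range of m from the per-vertex L_i, H_i, j_{2,i-1}."""
    lower = max(min_abs_diff(L, H, j2) for L, H, j2 in zip(L_list, H_list, j2_list))
    upper = min(H + j2 for H, j2 in zip(H_list, j2_list))
    return lower, upper

def l_values(L, H, j2, m1, m2):
    """Admissible l_i for the current m1, m2, stepping by 1 (may be empty)."""
    lo = max(L, abs(m1 - j2), abs(m2 - j2))
    hi = min(H, m1 + j2, m2 + j2)
    steps = int(round(hi - lo))
    return [lo + s for s in range(steps + 1)] if hi >= lo else []
```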
By Schur's Lemma, each sub-network of the form (3) is equal to a multiple of the identity, so each of the recoupled ladders, with top and bottom edges joined, is simply a multiple of a loop. Kauffman and Lins [9] give the following formula, which can be checked by taking the trace of both sides:
[Diagrammatic equation (4): a strand of spin $a$ carrying a "bubble" with internal edges $b$ and $c$ equals $\theta(a, b, c) / \Delta_a$ times the plain strand of spin $a$.]

The numerator of the fraction is written $\theta(a, b, c)$, and the denominator is $\Delta_a = (-1)^{2a}(2a+1)$, the superdimension of the representation. Kauffman and Lins also give a formula for the 6j symbol in terms of tetrahedral and $\theta$ networks:

$$\left\{ \begin{matrix} a & b & i \\ c & d & j \end{matrix} \right\} = \frac{\mathrm{Tet}\!\left[ \begin{matrix} a & b & i \\ c & d & j \end{matrix} \right] \Delta_i}{\theta(a, d, i)\, \theta(b, c, i)} \qquad (5)$$
Equations (4) and (5) allow us to write the following expression for the 10j symbol:
$$\sum_{m_1, m_2} (2m_1 + 1)(2m_2 + 1)\,(-1)^{2(L_0 + j_{2,4}) - m_1 - m_2} \sum_{l_0, \dots, l_4} \prod_{k=0}^{4} \big(M^{m_1, m_2}_k\big)^{l_{k+1}}_{l_k} \qquad (6)$$

where

$$\big(M^{m_1, m_2}_k\big)^{l_{k+1}}_{l_k} = \Delta_{l_k} \frac{\mathrm{Tet}\!\left[ \begin{matrix} l_k & j_{2,k} & m_1 \\ l_{k+1} & j_{2,k-1} & j_{1,k} \end{matrix} \right] \mathrm{Tet}\!\left[ \begin{matrix} l_k & j_{2,k} & m_2 \\ l_{k+1} & j_{2,k-1} & j_{1,k} \end{matrix} \right]}{\theta(j_{2,k}, l_{k+1}, m_1)\, \theta(j_{2,k}, l_{k+1}, m_2)} \qquad (7)$$
The twists implicit in the identification of the top and bottom parts of each network introduce signs of $(-1)^{l_0 + j_{2,4} - m_1}$ and $(-1)^{l_0 + j_{2,4} - m_2}$; since $2l_0 \equiv 2L_0 \pmod 2$, the product is $(-1)^{2(L_0 + j_{2,4}) - m_1 - m_2}$. The loop values for the spin-$m_1$ and spin-$m_2$ representations have signs of $(-1)^{2m_1}$ and $(-1)^{2m_2}$, but since $2m_1 \equiv 2m_2 \pmod 2$, the product of these two signs is always unity. We have made use of the symmetries of the tetrahedral networks to put the coefficients in a uniform order for all terms.
The sum over the $l_i$ in Equation (6) is the trace of the product of the five matrices $M^{m_1,m_2}_k$. For each pair of values of $m_1$ and $m_2$, these matrices can be computed using closed formulas for the tetrahedral and $\theta$ networks given by Kauffman and Lins in [9]. The formula for the tetrahedral networks involves a sum with $O(j)$ terms, so computing each matrix requires $O(j^3)$ operations (see the footnote below). The trace of the matrix product can also be found in $O(j^3)$ steps. There are two factors of $j$ coming from the sums over $m_1$ and $m_2$, yielding an overall count of $O(j^5)$ operations. This method requires $O(j^2)$ space to store the matrices.
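The loop structure of this method is easy to sketch in code. The Python fragment below is our reconstruction, not the authors' C++ subroutine: tet6() and theta() stand for the closed Kauffman-Lins formulas (not implemented here), l_values() is the range helper sketched earlier, L and H hold the per-vertex bounds L_i and H_i, m_grid lists the candidate m values from the bounds above, and the normalization of the vertices (discussed below) is omitted.

```python
# Sketch of equation (6): outer sums over m1, m2; inner trace of a product
# of five matrices built from equation (7). tet6() and theta() are assumed
# helpers; only the loop structure and the trace are meant literally.
import numpy as np

def delta(l):
    """Superdimension of the spin-l representation: (-1)^(2l) (2l + 1)."""
    return (-1.0) ** int(round(2 * l)) * (2 * l + 1)

def ten_j(j1, j2, L, H, m_grid, tet6, theta):
    """j1[k] = j_{1,k} and j2[k] = j_{2,k} for k = 0..4 (negative indices wrap)."""
    total = 0.0
    for m1 in m_grid:
        for m2 in m_grid:
            # admissible l_k values for this (m1, m2); vertex k uses j_{2,k-1}
            ls = [l_values(L[k], H[k], j2[k - 1], m1, m2) for k in range(5)]
            if any(len(r) == 0 for r in ls):
                continue
            prod = np.eye(len(ls[0]))
            for k in range(5):
                lk_vals, lk1_vals = ls[k], ls[(k + 1) % 5]
                M = np.empty((len(lk_vals), len(lk1_vals)))
                for a, lk in enumerate(lk_vals):
                    for b, lk1 in enumerate(lk1_vals):
                        M[a, b] = (delta(lk)
                                   * tet6(lk, j2[k], m1, lk1, j2[k - 1], j1[k])
                                   * tet6(lk, j2[k], m2, lk1, j2[k - 1], j1[k])
                                   / (theta(j2[k], lk1, m1) * theta(j2[k], lk1, m2)))
                prod = prod @ M
            sign = (-1.0) ** int(round(2 * (L[0] + j2[4]) - m1 - m2))
            total += (2 * m1 + 1) * (2 * m2 + 1) * sign * np.trace(prod)
    return total
```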
For some 10-tuples of spins, if all the spins are multiplied by $\lambda$, the time required will scale at a lower power than $\lambda^5$. Multiplying all the spins by a factor will increase the upper and lower bounds of all the sums linearly, but in cases where the two bounds are equal, the sum will consist of a single term, regardless of the scaling factor. When many of the upper and lower bounds coincide, the first variant of the algorithm, with worst case running time $O(j^7)$, in fact becomes faster than the $O(j^5)$ version. Thus one may wish to use the first variant for certain 10j symbols.

For large spins, the memory usage can be a problem. For example, with spins of around 180, storing each matrix $M^{m_1,m_2}_k$ requires about 1 gigabyte. In this case, one can recalculate the matrix entries as needed, resulting in $O(j^6)$ time and $O(j^0)$ space ($O(j^1)$ if factorials are cached).
The formulas in [9] for the network evaluations are unnormalized. To normalize all the SU(2) intertwiners according to the convention that any $\theta$ network has a value of 1 (which is the convention used in the formula for the Barrett-Crane intertwiner), it is simpler to divide the matrix elements by the appropriate $\theta$ networks than to take the existing $\theta$ networks in Equation (7) to be unity and normalize the tetrahedral networks. Including this normalization, the matrices become:
$$\big(N^{m_1, m_2}_k\big)^{l_{k+1}}_{l_k} = \frac{\big(M^{m_1, m_2}_k\big)^{l_{k+1}}_{l_k}}{\theta(j_{2,k-1}, l_{k+1}, j_{1,k})\, \theta(j_{2,k+1}, l_{k+1}, j_{1,k+1})}$$
A subroutine written in C++ that implements this algorithm is available on the web [8], along with some sample computations.
We have not dealt explicitly with the q-deformed case, where the representations of SU(2) are replaced with representations of SU(2)$_q$, but it is straightforward to adapt each stage of the development above, using the formulas in [9] for the q-deformed twist, loop, $\theta$ and tetrahedral networks.
Footnote: Each of the $O(j)$ terms in the formula for the tetrahedral network contains factorials, which themselves require $O(j)$ operations. However, with some care, the formula can be evaluated with a total of $O(j)$ operations. In practice, we precalculate the factorials, using $O(j)$ space.
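For illustration, a cached factorial table of the kind this footnote describes can be as simple as the following sketch (ours; the original implementation is in C++ and uses plain factorials, whereas we store log-factorials to avoid overflow for large spins).

```python
# Sketch: precompute factorials once so each lookup is O(1).
import math

MAX_ARG = 4000                       # placeholder bound, enough for large spins
_LOG_FACT = [0.0] * (MAX_ARG + 1)
for n in range(2, MAX_ARG + 1):
    _LOG_FACT[n] = _LOG_FACT[n - 1] + math.log(n)

def log_factorial(n):
    """log(n!) read from the precomputed table."""
    return _LOG_FACT[n]
```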
[1] J.C. Baez, Spin foam models, Class. Quantum Grav. 15 (1998), 1827-1858.
[2] J.C. Baez, An introduction to spin foam models of quantum gravity and BF theory, in Geometry and Quantum Physics, edited by Helmut Gausterer and Harald Grosse, Springer, Berlin, 2000. Preprint available as gr-qc/9905087.
[3] J.C. Baez and J.D. Christensen, Positivity of spin foam amplitudes, to appear in Class. Quantum Grav. Preprint available as gr-qc/0110044.
[4] J.C. Baez, J.D. Christensen and G. Egan, Asymptotics of 10j symbols. In preparation.
[5] J.C. Baez, J.D. Christensen, T. Halford and D. Tsang, Spin foam models of Riemannian quantum gravity. In preparation.
[6] J. Barrett and L. Crane, Relativistic spin networks and quantum gravity, J. Math. Phys. 39 (1998), 3296-3302. Preprint available as gr-qc/9709028.
[7] J.W. Barrett and L. Crane, A Lorentzian signature model for quantum general relativity, Class. Quantum Grav. 17 (2000), 3101-3118.
[8] J.D. Christensen, Spin foams page, http://jdc.math.uwo.ca/spin-foams/.
[9] L. Kauffman and S. Lins, Temperley-Lieb recoupling theory and invariants of 3-manifolds, Princeton University Press, Princeton, 1994.
[10] D. Oriti, Spacetime geometry from algebra: spin foam models for non-perturbative quantum gravity, available as gr-qc/0106091.
[11] A. Perez, Finiteness of a spin foam model for Euclidean quantum general relativity, Nucl. Phys. B599 (2001), 427-434.
[12] A. Perez and C. Rovelli, A spin foam model without bubble divergences, Nucl. Phys. B599 (2001), 255-282.
[13] M.P. Reisenberger, On relativistic spin network vertices, J. Math. Phys. 40 (1999), 2046-2054. Preprint available as gr-qc/9809067.
[14] D.N. Yetter, Generalized Barrett-Crane vertices and invariants of embedded graphs, J. Knot Theory Ramifications 8 (1999), 815-829. Preprint available as math.QA/9801131.
| []
|
[
"From English to Code-Switching: Transfer Learning with Strong Morphological Clues",
"From English to Code-Switching: Transfer Learning with Strong Morphological Clues"
]
| [
"Gustavo Aguilar [email protected] \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n",
"Thamar Solorio [email protected] \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n"
]
| [
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX",
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX"
]
| [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
]
| Linguistic Code-switching (CS) is still an understudied phenomenon in natural language processing. The NLP community has mostly focused on monolingual and multi-lingual scenarios, but little attention has been given to CS in particular. This is partly because of the lack of resources and annotated data, despite its increasing occurrence in social media platforms. In this paper, we aim at adapting monolingual models to code-switched text in various tasks. Specifically, we transfer English knowledge from a pre-trained ELMo model to different code-switched language pairs (i.e., Nepali-English, Spanish-English, and Hindi-English) using the task of language identification. Our method, CS-ELMo, is an extension of ELMo with a simple yet effective position-aware attention mechanism inside its character convolutions. We show the effectiveness of this transfer learning step by outperforming multilingual BERT and homologous CS-unaware ELMo models and establishing a new state of the art in CS tasks, such as NER and POS tagging. Our technique can be expanded to more English-paired code-switched languages, providing more resources to the CS community. | 10.18653/v1/2020.acl-main.716 | [
"https://www.aclweb.org/anthology/2020.acl-main.716.pdf"
]
| 202,558,708 | 1909.05158 | c1e54abcdcbb17668cc6da7cda093d85d230e804 |
From English to Code-Switching: Transfer Learning with Strong Morphological Clues
Association for Computational Linguistics. Copyright Association for Computational Linguistics. July 5-10, 2020.
Gustavo Aguilar [email protected]
Department of Computer Science
University of Houston Houston
77204-3010TX
Thamar Solorio [email protected]
Department of Computer Science
University of Houston Houston
77204-3010TX
From English to Code-Switching: Transfer Learning with Strong Morphological Clues
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 8033, July 5-10, 2020
Linguistic Code-switching (CS) is still an understudied phenomenon in natural language processing. The NLP community has mostly focused on monolingual and multi-lingual scenarios, but little attention has been given to CS in particular. This is partly because of the lack of resources and annotated data, despite its increasing occurrence in social media platforms. In this paper, we aim at adapting monolingual models to code-switched text in various tasks. Specifically, we transfer English knowledge from a pre-trained ELMo model to different code-switched language pairs (i.e., Nepali-English, Spanish-English, and Hindi-English) using the task of language identification. Our method, CS-ELMo, is an extension of ELMo with a simple yet effective position-aware attention mechanism inside its character convolutions. We show the effectiveness of this transfer learning step by outperforming multilingual BERT and homologous CS-unaware ELMo models and establishing a new state of the art in CS tasks, such as NER and POS tagging. Our technique can be expanded to more English-paired code-switched languages, providing more resources to the CS community.
Introduction
Although linguistic code-switching (CS) is a common phenomenon among multilingual speakers, it is still considered an understudied area in natural language processing. The lack of annotated data combined with the high diversity of languages in which this phenomenon can occur makes it difficult to strive for progress in CS-related tasks. Even though CS is largely captured in social media platforms, it is still expensive to annotate a sufficient amount of data for many tasks and languages. Additionally, not all the languages have the same incidence and predominance, making annotations impractical and expensive for every combination of languages. Nevertheless, code-switching often occurs in language pairs that include English (see examples in Figure 1). These aspects lead us to explore approaches where English pre-trained models can be leveraged and tailored to perform well on code-switching settings.

Figure 1: Examples of code-switched tweets and their translations from the CS LID corpora for Hindi-English, Nepali-English and Spanish-English. The LID labels ne and other in subscripts refer to named entities and punctuation, emojis or usernames, respectively (they are part of the LID tagset). English text appears in italics and other languages are underlined.
Hindi-English Tweet. Original: Keep calm and keep kaam se kaam !!! other #office #tgif #nametag #buddha ne #SouvenirFromManali #keepcalm English: Keep calm and mind your own business !!!
Nepali-English Tweet. Original: Youtube ne ma live re , other chalcha ki vanni aash garam ! other Optimistic . other English: They said Youtube live, let's hope it works! Optimistic.
Spanish-English Tweet. Original: @MROlvera06 other @T11gRe other go too cavenders ne y tambien ve a @ElToroBoots ne other English: @MROlvera06 @T11gRe go to cavenders and also go to @ElToroBoots
In this paper, we study the CS phenomenon using English as a starting language to adapt our models to multiple code-switched languages, such as Nepali-English, Hindi-English and Spanish-English. In the first part, we focus on the task of language identification (LID) at the token level using ELMo (Peters et al., 2018) as our reference for English knowledge. Our hypothesis is that English pre-trained models should be able to recognize whether a word belongs to English or not when such models are fine-tuned with code-switched text. To accomplish that, we introduce CS-ELMo, an extended version of ELMo that contains a position-aware hierarchical attention mechanism over ELMo's character n-gram representations. These enhanced representations allow the model to see the location where particular n-grams occur within a word (e.g., affixes or lemmas) and to associate such behaviors with one language or another. With the help of this mechanism, our models consistently outperform the state of the art on LID for Nepali-English (Solorio et al., 2014), Spanish-English (Molina et al., 2016), and Hindi-English (Mave et al., 2018). Moreover, we conduct experiments that emphasize the importance of the position-aware hierarchical attention and the different effects that it can have based on the similarities of the code-switched languages. In the second part, we demonstrate the effectiveness of our CS-ELMo models by further fine-tuning them on tasks such as NER and POS tagging. Specifically, we show that the resulting models significantly outperform multilingual BERT and their homologous ELMo models directly trained for NER and POS tagging. Our models establish a new state of the art for Hindi-English POS tagging and Spanish-English NER (Aguilar et al., 2018).

Our contributions can be summarized as follows: 1) we use transfer learning from models trained on a high-resource language (i.e., English) and effectively adapt them to the code-switching setting for multiple language pairs on the task of language identification; 2) we show the effectiveness of transferring a model trained for LID to downstream code-switching NLP tasks, such as NER and POS tagging, by establishing a new state of the art; 3) we provide empirical evidence on the importance of the enhanced character n-gram mechanism, which aligns with the intuition of strong morphological clues in the core of ELMo (i.e., its convolutional layers); and 4) our CS-ELMo model is self-contained, which allows us to release it for other researchers to explore and replicate this technique on other code-switched languages.
Related Work
Transfer learning has become more practical in the last years, making possible to apply very large neural networks to tasks where annotated data is limited (Howard and Ruder, 2018;Peters et al., 2018;Devlin et al., 2019). CS-related tasks are good candidates for such applications, since they are usually framed as low-resource problems. However, previous research on sequence labeling for code-switching mainly focused on traditional ML techniques because they performed better than deep learning models trained from scratch on limited data (Yirmibeşoglu and Eryigit, 2018;Al-Badrashiny and Diab, 2016). Nonetheless, some researchers have recently shown promising results by using pre-trained monolingual embeddings for tasks such as NER (Trivedi et al., 2018;Winata et al., 2018) and POS tagging Ball and Garrette, 2018). Other efforts include the use of multilingual sub-word embeddings like fastText (Bojanowski et al., 2017) for LID (Mave et al., 2018), and cross-lingual sentence embeddings for text classification like LASER (Schwenk, 2018;Schwenk and Li, 2018;Schwenk and Douze, 2017), which is capable of handling code-switched sentences. These results show the potential of pre-trained knowledge and they motivate our efforts to further explore transfer learning in code-switching settings.
Our work is based on ELMo (Peters et al., 2018), a large pre-trained language model that has not been applied to CS tasks before. We also use attention (Bahdanau et al., 2015) within ELMo's convolutions to adapt it to code-switched text. Even though attention is an effective and successful mechanism in other NLP tasks, the code-switching literature barely covers such techniques (Sitaram et al., 2019). Wang et al. (2018) use a different attention method for NER, which is based on a gated cell that learns to choose appropriate monolingual embeddings according to the input text. Recently, Winata et al. (2019) proposed multilingual meta embeddings (MME) combined with self-attention (Vaswani et al., 2017). Their method establishes a state of the art on Spanish-English NER by heavily relying on monolingual embeddings for every language in the code-switched text. Our model outperforms theirs by only fine-tuning a generic CS-aware model, without relying on task-specific designs. Another contribution of our work is position embeddings, which have not been considered for code-switching either. These embeddings, combined with CNNs, have proved useful in computer vision (Gehring et al., 2017); they help to localize non-spatial features extracted by convolutional networks within an image. We apply the same principle to code-switching: we argue that character n-grams without position information may not be enough for a model to learn the actual morphological aspects of the languages (e.g., affixes or lemmas). We empirically validate those aspects and discuss the incidence of such a mechanism in our experiments.
Methodology
ELMo is a character-based language model that provides deep contextualized word representations (Peters et al., 2018). We choose ELMo for this study for the following reasons: 1) it has been trained on a large amount of English data as a general-purpose language model and this aligns with the idea of having English knowledge as starting point; 2) it extracts morphological information out of character sequences, which is essential for our case since certain character n-grams can reveal whether a word belongs to one language or another; and 3) it generates powerful word representations that account for multiple meanings depending on the context. Nevertheless, some aspects of the standard ELMo architecture could be improved to take into account more linguistic properties. In Section 3.1, we discuss these aspects and propose the position-aware hierarchical attention mechanism inside ELMo. In Section 3.2 and Section 3.3, we describe our overall sequence labeling model and the training details, respectively.
Position-Aware Hierarchical Attention
ELMo convolves character embeddings in its first layers and uses the resulting convolutions to represent words. During this process, the convolutional layers are applied in parallel using different kernel sizes, which can be seen as character n-gram feature extractors of different orders. The feature maps per n-gram order are max-pooled to reduce the dimensionality, and the resulting single vectors per n-gram order are concatenated to form a word representation. While this process has proven effective in practice, we notice the following shortcomings:
1. Convolutional networks do not account for the positions of the character n-grams (i.e., convolutions do not preserve the sequential order), losing linguistic properties such as affixes.
2. ELMo down-samples the outputs of its convolutional layers by max-pooling over the feature maps. However, this operation is not ideal to adapt to new morphological patterns from other languages as the model tends to discard patterns from languages other than English.
To address these aspects, we introduce CS-ELMo, an extension of ELMo that incorporates a position-aware hierarchical attention mechanism that enhances ELMo's character n-gram representations. This mechanism is composed of three elements: position embeddings, position-aware attention, and hierarchical attention. Figure 2A describes the overall model architecture, and Figure 2B details the components of the enhanced character n-gram mechanism.
Position embeddings. Consider the word $x$ of character length $l$, whose character n-gram vectors are $(x_1, x_2, \dots, x_{l-j+1})$ for an n-gram order $j \in \{1, 2, \dots, n\}$. The n-gram vector $x_i \in \mathbb{R}^c$ is the output of a character convolutional layer, where $c$ is the number of output channels for that layer. Also, consider $n$ position embedding matrices, one per n-gram order, $\{E_1, E_2, \dots, E_n\}$ defined as $E_j \in \mathbb{R}^{(k-j+1) \times e}$, where $k$ is the maximum length of characters in a word (note that $l \le k$), $e$ is the dimension of the embeddings and $j$ is the specific n-gram order. Then, the position vectors for the sequence $x$ are defined by $p = (p_1, p_2, \dots, p_{l-j+1})$ where $p_i \in \mathbb{R}^e$ is the $i$-th vector from the position embedding matrix $E_j$. We use $e = c$ to facilitate the addition of the position embeddings and the n-gram vectors. Figure 2B illustrates the position embeddings for bi-grams and tri-grams.
Position-aware attention. Instead of downsampling with the max-pooling operation, we use an attention mechanism similar to the one introduced by Bahdanau et al. (2015). The idea is to concentrate mass probability over the feature maps that capture the most relevant n-gram information along the word, while also considering positional information. At every individual n-gram order, our attention mechanism uses the following equations:

$$u_i = v^\top \tanh(W_x x_i + p_i + b_x) \qquad (1)$$

$$\alpha_i = \frac{\exp(u_i)}{\sum_{j=1}^{N} \exp(u_j)}, \quad \text{s.t.} \; \sum_{i=1} \alpha_i = 1 \qquad (2)$$

$$z = \sum_{i=1} \alpha_i x_i \qquad (3)$$

where $W_x \in \mathbb{R}^{a \times c}$ is a projection matrix, $a$ is the dimension of the attention space, $c$ is the number of channels for the n-gram order $j$, and $p_i$ is the position embedding associated to the $x_i$ n-gram vector. $v \in \mathbb{R}^a$ is the vector that projects from the attention space to the unnormalized scores, and $\alpha_i$ is a scalar that describes the attention probability associated to the $x_i$ n-gram vector. $z$ is the weighted sum of the input character n-gram vectors and the attention probabilities, which is our down-sampled word representation for the n-gram order $j$. Note that this mechanism is used independently for every order of n-grams, resulting in a set of $n$ vectors $\{z_1, z_2, \dots, z_n\}$ from Equation 3. This allows the model to capture relevant information across individual n-grams before they are combined (i.e., processing independently all bi-grams, all tri-grams, etc.).
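A compact PyTorch sketch of this position-aware attention for a single n-gram order is given below. It is our rendering of Equations 1-3, not the released implementation; in particular, to keep the addition of the position embedding well-formed, the sketch takes the attention dimension equal to the channel dimension, matching e = c in the text.

```python
# Sketch of position-aware attention for one n-gram order (Equations 1-3).
import torch
import torch.nn as nn

class PositionAwareAttention(nn.Module):
    def __init__(self, channels, max_positions):
        super().__init__()
        self.positions = nn.Embedding(max_positions, channels)  # E_j, one row per position
        self.w_x = nn.Linear(channels, channels)                 # W_x and b_x
        self.v = nn.Linear(channels, 1, bias=False)               # v

    def forward(self, x):
        # x: (batch, num_ngrams, channels), the n-gram vectors of one order
        idx = torch.arange(x.size(1), device=x.device)
        p = self.positions(idx).unsqueeze(0)                      # (1, num_ngrams, channels)
        u = self.v(torch.tanh(self.w_x(x) + p)).squeeze(-1)       # Eq. 1
        alpha = torch.softmax(u, dim=-1)                          # Eq. 2
        z = (alpha.unsqueeze(-1) * x).sum(dim=1)                  # Eq. 3: (batch, channels)
        return z, alpha
```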
Hierarchical attention. With the previous mechanisms we handle the problems aforementioned. That is, we have considered positional information as well as the attention mechanism to down-sample the dimensionality. These components retrieve one vector representation per n-gram order per word. While ELMo simply concatenates the n-gram vectors of a word, we decide to experiment with another layer of attention that can prioritize n-gram vectors across all the orders. We use a similar formulation to Equations 1 and 3, except that we do not have $p_i$, and instead of doing the weighted sum, we concatenate the weighted inputs. This concatenation keeps the original dimensionality expected in the upper layers of ELMo, while it also emphasizes which n-gram order should receive more attention.
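Under the same assumptions, the hierarchical step can be sketched as scoring the per-order vectors z_1, ..., z_n and concatenating the weighted inputs, which preserves the dimensionality expected by the upper layers (the per-order scorers are our choice, since the orders may have different channel sizes).

```python
# Sketch of hierarchical attention across n-gram orders: score each per-order
# vector, then concatenate the weighted vectors instead of summing them.
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, channels_per_order):
        super().__init__()
        self.w = nn.ModuleList([nn.Linear(c, c) for c in channels_per_order])
        self.v = nn.ModuleList([nn.Linear(c, 1, bias=False) for c in channels_per_order])

    def forward(self, zs):
        # zs: list of per-order vectors, each of shape (batch, channels_k)
        scores = torch.cat(
            [v(torch.tanh(w(z))) for z, w, v in zip(zs, self.w, self.v)], dim=-1)
        alpha = torch.softmax(scores, dim=-1)                # (batch, n_orders)
        weighted = [alpha[:, k:k + 1] * z for k, z in enumerate(zs)]
        return torch.cat(weighted, dim=-1)                   # concatenation, not a sum
```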
Sequence Tagging
We follow Peters et al. (2018) to use ELMo for sequence labeling. They reported state-of-the-art performance on NER by using ELMo followed by a bidirectional LSTM layer and a linear-chain conditional random field (CRF). We use this architecture as a backbone for our model (see Figure 2A), but we add some modifications. The first modification is the concatenation of static English word embeddings to ELMo's word representation, such as Twitter (Pennington et al., 2014) and fastText (Bojanowski et al., 2017) embeddings, similar to Howard and Ruder (2018) and Mave et al. (2018). The idea is to enrich the context of the words by providing domain-specific embeddings and subword-level embeddings. The second modification is the concatenation of the enhanced character n-gram representation with the input to the CRF layer. This emphasizes even further the extracted morphological patterns, so that they are present during inference time for the task at hand (i.e., not only LID, but also NER and POS tagging). The last modification is the addition of a secondary task on a simplified language identification label scheme (see Section 4 for more details), which only uses the output of the enhanced character n-gram mechanism. Intuitively, this explicitly forces the model to associate morphological patterns (e.g., affixes, lemmas, etc.) to one or the other language.
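Schematically, these three modifications amount to a few concatenations in the forward pass. The sketch below is ours, built from injected placeholder modules with made-up names; it is not the released implementation.

```python
# Schematic forward pass of the tagger; module interfaces are assumptions.
import torch
import torch.nn as nn

class CSTagger(nn.Module):
    def __init__(self, cs_elmo, static_embs, blstm, crf, ngram_dim, num_simple_lid=3):
        super().__init__()
        self.cs_elmo = cs_elmo          # returns contextual word reps and enhanced n-gram reps
        self.static_embs = static_embs  # Twitter / fastText lookup
        self.blstm = blstm              # assumed to return token-level hidden states
        self.crf = crf                  # linear-chain CRF head over the primary tagset
        self.simple_lid = nn.Linear(ngram_dim, num_simple_lid)  # secondary task head

    def forward(self, tokens):
        words, ngrams = self.cs_elmo(tokens)
        context = torch.cat([words, self.static_embs(tokens)], dim=-1)
        hidden = self.blstm(context)
        features = torch.cat([hidden, ngrams], dim=-1)       # re-inject morphology before CRF
        return self.crf(features), self.simple_lid(ngrams)   # primary + secondary outputs
```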
Multi-Task Training
We train the model by minimizing the negative log-likelihood loss of the CRF classifier. Additionally, we force the model to minimize a secondary loss over the simplified LID label set by only using the morphological features from the enhanced character n-gram mechanism (see the softmax layer in Figure 2A). The overall loss $\mathcal{L}$ of our model is defined as follows:

$$\mathcal{L}_{task_t} = -\frac{1}{N} \sum_{i}^{N} y_i \log p(y_i \mid \Theta) \qquad (4)$$

$$\mathcal{L} = \mathcal{L}_{task_1} + \beta \mathcal{L}_{task_2} + \lambda \sum_{k}^{|\Theta|} w_k^2 \qquad (5)$$

where $\mathcal{L}_{task_1}$ and $\mathcal{L}_{task_2}$ are the negative log-likelihood losses conditioned by the model parameters $\Theta$ as defined in Equation 4. $\mathcal{L}_{task_1}$ is the loss of the primary task (i.e., LID, NER, or POS tagging), whereas $\mathcal{L}_{task_2}$ is the loss for the simplified LID task weighted by $\beta$ to smooth its impact on the model performance. Both losses are the average over $N$ tokens (while Equation 4 is formulated for a given sentence, in practice $N$ is the number of tokens in a batch of sentences). The third term provides $\ell_2$ regularization, and $\lambda$ is the penalty weight (we exclude the CRF parameters in this term).
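In code, the combined objective is just a weighted sum; a minimal sketch following Equations 4 and 5 is shown below (the default values of beta and the weight-decay coefficient are placeholders, not the paper's settings).

```python
# Sketch of Equation 5: primary CRF loss, beta-weighted secondary LID loss,
# and an L2 penalty that excludes the CRF parameters.
import torch

def total_loss(primary_nll, secondary_nll, model, beta=0.1, weight_decay=1e-5):
    # model is assumed to be a torch.nn.Module whose CRF parameter names contain "crf"
    l2 = sum((w ** 2).sum() for name, w in model.named_parameters() if "crf" not in name)
    return primary_nll + beta * secondary_nll + weight_decay * l2
```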
Datasets
Language identification. We experiment with code-switched data for Nepali-English, Spanish-English, and Hindi-English. The first two datasets were collected from Twitter, and they were introduced at the Computational Approaches to Linguistic Code-Switching (CALCS) workshops in 2014 and 2016 (Solorio et al., 2014; Molina et al., 2016). The Hindi-English dataset contains Twitter and Facebook posts, and it was introduced by Mave et al. (2018). These datasets follow the CALCS label scheme, which has eight labels: lang1 (English), lang2 (Nepali, Spanish, or Hindi), mixed, ambiguous, fw, ne, other, and unk. We show the distribution of lang1 and lang2 in Table 1. Moreover, we add a second set of labels using a simplified LID version of the original CALCS label set. The simplified label set uses lang1, lang2, and other. We use these 3-way token-level labels in the secondary loss of our model where only morphology, without any context, is being exploited. This is because we are interested in predicting whether a word's morphology is associated to English more than to another language (or vice versa), instead of whether, for example, its morphology describes a named entity (ne).
Part-of-speech tagging. provide 1,489 tweets (33,010 tokens) annotated with POS tags. The labels are annotated using the universal POS tagset proposed by Petrov et al. (2012) with the addition of two labels: PART NEG and PRON WH. This dataset does not provide training, development, or test splits due to the small number of samples. Therefore, we run 5-fold cross validations and report the average scores.
Named entity recognition. We use the Spanish-English NER corpus introduced in the 2018 CALCS competition (Aguilar et al., 2018), which contains a total of 67,223 tweets with 808,663 tokens. The entity types are person, organization, location, group, title, product, event, time, and other, and the labels follow the BIO scheme. We used the fixed training, development, and testing splits provided with the datasets to benchmark our models. Importantly, Hindi and Nepali texts in these datasets appear transliterated using the English alphabet (see Figure 1). The lack of a standardized transliteration process leads code-switchers to employ mostly ad-hoc phonological rules that conveniently use the English alphabet when they write in social media. This behavior makes the automated processing of these datasets more challenging because it excludes potentially available resources in the original scripts of the languages.
Experiments
We describe our experiments for LID in Section 5.1, including insights of the optimized models. In Section 5.2, the optimized LID models are further fine-tuned on downstream NLP tasks, such as NER and POS tagging, to show the effectiveness of our preliminary CS adaptation step. We test for statistical significance across our incremental experiments following Dror et al. (2018), and we report p-values below 0.02 for LID. We discuss hyperparameters and fine-tuning details in Appendix D.
Language Identification
Approach 1. We establish three strong baselines using a vanilla ELMo (Exp 1.1), ELMo combined with BLSTM and CRF (Exp 1.2) as suggested by Peters et al. (2018), and a multilingual BERT (Exp 1.3) provided by Devlin et al. (2019). We experiment with frozen weights for the core parameters of ELMo and BERT, but we find the best results when the full models are fine-tuned, which we report in Table 2.
Approach 2. In the second set of experiments, we add the components of our mechanism upon ELMo combined with BLSTM and CRF (Exp 1.2). We start by replacing the max-pooling operation with the attention layer at every individual n-gram order in Exp 2.1. In Exp 2.2, we incorporate the position information. The third experiment, Exp 2.3, adds the hierarchical attention across all n-gram order vectors. It is worth noting that we experiment by accumulating consecutive n-gram orders, and we find that the performance stops increasing when n > 3. Intuitively, this can be caused by the small size of the datasets since n-gram features of greater order are infrequent and would require more data to be trained properly. We apply our mechanism for n-gram orders in the set {1, 2, 3}, which we report in Table 2.
Approach 3. For the third set of experiments, we focus on emphasizing the morphological clues extracted by our mechanism (Exp 2.3). First, in Exp 3.1, we concatenate the enhanced character n-grams with their corresponding word representation before feeding the input to the CRF layer. In
Exp 3.2, we add the secondary task over the previous experiment to force the model to predict the simplified LID labels by only using the morphological clues (i.e., no context is provided). Finally, in Exp 3.3, we add static word embeddings that help the model to handle social media style and domain-specific words. We achieve the best results on Exp 3.3, which outperforms both the baselines and the previous state of the art on the full LID label scheme (see Table 2). However, to compare with other work, we also calculate the average of the weighted F1 scores over the labels lang1 and lang2. Table 3 shows a comparison of our results and the previous state of the art. Note that, for Spanish-English and Hindi-English, the gap of improvement is reasonable, considering that similar gaps in the validation experiments are statistically significant. In contrast, in the case of Nepali-English, we cannot determine whether our improvement is marginal or substantial since the authors only provide one decimal in their scores. Nevertheless, Al-Badrashiny and Diab (2016) use a CRF with hand-crafted features (Al-Badrashiny and Diab, 2016), while our approach does not require any feature engineering.
POS Tagging and NER
We use LID to adapt the English pre-trained knowledge of ELMo to the code-switching setting, effectively generating CS-ELMo. Once this is achieved, we fine-tune the model on downstream NLP tasks such as POS tagging and NER. In this section, our goal is to validate whether the CS-ELMo model can improve over vanilla ELMo, multilingual BERT, and the previous state of the art for both tasks. More specifically, we use our best architecture (Exp 3.3) from the LID experiments 1) without the codeswitching adaptation, 2) with the code-switching Table 5: The F1 scores on the Spanish-English NER dataset. CS knowledge means that the CS-ELMo architecture (see Figure 2A) has been adapted to codeswitching by using the LID task.
adaptation and only retraining the inference layer, and 3) with the code-switching adaptation and retraining the entire model. Table 4 shows our experiments on POS tagging using the Hindi-English dataset. When we compare our CS-ELMO + BLSTM + CRF model without CS adaptation (Exp 4.1) against the baseline (ELMo + BLSTM + CRF), the performance remains similar. This suggests that our enhanced n-gram mechanism can be added to ELMo without impacting the performance even if the model has not been adapted to CS. Slightly better performance is achieved when the CS-ELMo has been adapted to code-switching, and only the BLSTM and CRF layers are retrained (Exp 4.2). This result shows the convenience of our model since small improvements can be achieved faster by leveraging the already-learned CS knowledge while avoiding to retrain the entire model. Nevertheless, the best performance is achieved by the adapted CS-ELMO + BLSTM + CRF when retraining the entire model (Exp 4.3). Our results are better than the baselines and the previous state of the art. Interestingly, our model improves over multilingual BERT, which is a powerful and significantly bigger model in terms of parameters. Our intuition is that this is partly due to the word-piece tokenization process combined with the transliteration of Hindi. The fact that we use the multilingual version of BERT does not necessarily help to handle transliterated Hindi, since Hindi is only present in BERT's vocabulary with the Devanagari script. Indeed, we notice that in some tweets, the original number of tokens was almost doubled by the greedy tokenization process in BERT. This behavior tends to degrade the syntactic and semantic Figure 3: Visualization of the tri-gram attention weights for the 2016 Spanish-English LID dataset. The boxes contain the tri-grams of the word below them along with the right () or wrong () predictions by the model. information captured in the original sequence of tokens. In contrast, ELMo generates contextualized word representations out of character sequences, which makes the model more suitable to adapt to the transliteration of Hindi.
POS tagging experiments.
NER experiments. Table 5 contains our experiments on NER using the 2018 CALCS Spanish-English dataset. Exp 5.1 shows that the enhanced n-gram mechanism can bring improvements over the ELMo + BLSTM + CRF baseline, even though the CS-ELMo has not been adapted to the code-switching setting. However, better results are achieved when the CS-ELMo model incorporates the code-switching knowledge in both Exp 5.2 and 5.3. Unlike the POS experiments 4.2 and 4.3, fixing the parameters of the CS-ELMo model yields better results than updating them during training. Our intuition is that, in the NER task, the model needs the context of both languages to recognize entities within the sentences, and having the code-switching knowledge fixed becomes beneficial. Also, by freezing the CS-ELMo model, we can accelerate training because there is no backpropagation for the CS-ELMo parameters, which makes our code-switching adaptation very practical for downstream tasks.
Analysis
Position embeddings. Localizing n-grams within a word is an important contribution of our method. We explore this mechanism by using our fine-tuned CS-ELMo to predict the simplified LID labels on the validation set from the secondary task (i.e., the predictions solely rely on morphology) in two scenarios. The first one uses the position embeddings corresponding to the actual place of the character n-gram, whereas the second one chooses position embeddings randomly. We notice a consistent decay in performance across the language pairs, and a variation in the confidence of the predicted classes. The most affected language pair is Spanish-English, with an average difference of 0.18 based on the class probability gaps between both scenarios. In contrast, the probability gaps in Hindi-English and Nepali-English are substantially smaller; their average differences are 0.11 and 0.09, respectively.
Position distribution. Considering the previous analysis and the variations in the results, we gather insights of the attention distribution according to their n-gram positions (see position-aware attention in Section 3.1). Although the distribution of the attention weights across n-gram orders mostly remain similar along the positions for all language pairs, Spanish-English has a distinctive concentration of attention at the beginning and end of the words. This behavior can be caused by the differences and similarities between the language pairs. For Spanish-English, the model may rely on inflections of similar words between the languages, such as affixes. On the other hand, transliterated Hindi and Nepali tend to have much less overlap with English words (i.e., words with few characters can overlap with English words), making the distinction more spread across affixes and lemmas.
Attention analysis. Figure 3 shows the tri-gram attention weights in the Spanish-English LID dataset. The model is able to pick up affixes that belong to one or the other language. For instance, the tri-gram -ing is commonly found in English at the end of verbs in present progressive, like in the word coming from the figure, but it also appears in Spanish at different places (e.g., ingeniero) making the position information relevant. On the contrary, the tri-grams aha and hah from the figure do not seem to rely on position information because the attention distribution varies along the words. See more examples in Appendix E.
Error analysis. Morphology is very useful for LID, but it is not enough when words have similar spellings between the languages. We inspect the predictions of the model, and find cases where, for example, miserable is gold-labeled as ambiguous but the model predicts a language (see the top-right tweet in Figure 3). Although we find similar cases for Nepali-English and Hindi-English, it mostly happens for words with few characters (e.g., me, to, use). The model often gets such cases mislabeled due to the common spellings in both languages. Although this should be handled by context, our contribution relies more on morphology than contextualization, which we leave for future work.
Conclusion and Future Work
We present a transfer learning method from English to code-switched languages using the LID task. Our method enables large pre-trained models, such as ELMo, to be adapted to code-switching settings while taking advantage of the pre-trained knowledge. We establish new state of the art on LID for Nepali-English, Spanish-English, and Hindi-English. Additionally, we show the effectiveness of our CS-ELMo model by further fine-tuning it for NER and POS tagging. We outperform multilingual BERT and homologous ELMo models on Spanish-English NER and Hindi-English POS tagging. In our ongoing research, we are investigating the expansion of this technique to language pairs where English may not be involved.

We notice that the CALCS datasets have monolingual tweets, which we detail at the utterance-level in Table 7. We use the information in this table to measure the rate of code-switching by using the Code-Mixed Index (CMI) (Gambäck and Das, 2014). The higher the score of the CMI, the more code-switched the text is. We show the CMI scores in Table 8.
B Parts-of-Speech Label Distribution
C Named Entity Recognition Label Distribution
D Hyperparameters and Fine-tuning
We experiment with our LID models using the Adam optimizer with a learning rate of 0.001 and a plateau learning rate scheduler with a patience of 5 epochs based on the validation loss. We train our LID models with this setting for 50 epochs. For the last block of experiments in Table 2, we use the progressive fine-tuning process described below.

Figure 4: Visualization of the attention weights at the tri-gram level for the Hindi-English 2018 dataset on the LID task. The boxes contain the tri-grams of the word below them. We also provide the label predicted by the model, and whether it was correct or wrong.
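As an illustration of this training configuration (our own sketch, not the released code), the optimizer and plateau scheduler can be set up as follows; the model is a placeholder.

```python
# Sketch of the stated LID training setup: Adam (lr 0.001) plus a plateau
# scheduler with patience of 5 epochs driven by the validation loss.
import torch

model = torch.nn.Linear(10, 3)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5)

for epoch in range(50):
    val_loss = 1.0 / (epoch + 1)   # placeholder validation loss
    scheduler.step(val_loss)       # reduce lr when the validation loss plateaus
```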
Fine-tuning. We fine-tune the model by progressively updating the parameters from the top to the bottom layers of the model. This avoids losing the pre-trained knowledge from ELMo and smoothly adapts the network to the new languages from the code-switched data. We use the slanted triangular learning rate scheduler with both gradual unfreezing and discriminative fine-tuning over the layers (i.e., different learning rates across layers) proposed by Howard and Ruder (2018). We group the non-ELMo parameters of our model apart from the ELMo parameters. We set the non-ELMo parameters (i.e., the parameters from the enhanced character n-grams, CRF, and BLSTM) to be the first group of parameters to be tuned. Then, we further group the ELMo parameters as follows (top to bottom):
1. the second bidirectional LSTM layer, 2. the first bidirectional LSTM layer, 3. the highway network, 4. the linear projection from the flattened convolutions to the token embedding space, 5. all the convolutional layers, and 6. the character embedding weights.
Once all the layers have been unfrozen, we update all the parameters together. This technique allows us to get the most out of our model when moving from English to a code-switching setting. We train our fine-tuned models for 200 epochs with an initial learning rate of 0.01 that is modified during training. Additionally, we use this fine-tuning process for the downstream NLP tasks presented in the paper (i.e., NER and POS tagging).
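The sketch below (ours, not the released implementation) illustrates the mechanics of gradual unfreezing with discriminative learning rates in PyTorch. The three-layer model, the group ordering, and the learning-rate schedule are stand-in assumptions; the real model is CS-ELMo plus the task layers.

```python
# Hedged sketch: parameter groups get their own learning rates, and deeper
# groups are unfrozen one at a time from top to bottom.
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for CS-ELMo + task layers
    nn.Embedding(100, 16),             # bottom group: character/word embeddings
    nn.Linear(16, 16),                 # middle group
    nn.Linear(16, 8),                  # top group: task-specific layers
)
groups = [list(model[2].parameters()),  # tuned first
          list(model[1].parameters()),
          list(model[0].parameters())]  # tuned last

# Discriminative learning rates: smaller for lower (more general) layers.
optimizer = torch.optim.Adam(
    [{"params": p, "lr": 0.01 / (2 ** i)} for i, p in enumerate(groups)])

# Gradual unfreezing: start fully frozen, release one group per stage.
for p in model.parameters():
    p.requires_grad = False
for stage, group in enumerate(groups):
    for p in group:
        p.requires_grad = True
    # ... train for a few epochs here before unfreezing the next group ...
```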
E Visualization of Attention Weights for Hindi-English

Figure 4 shows the attention behavior for tri-grams on the Hindi-English dataset. Similar to the cases discussed for Spanish-English in the main content, we observe that the model learns tri-grams like -ing and -ian for English and iye and isi for Hindi.
Figure 2: A) The left figure shows the overall model architecture, which contains CS-ELMo followed by BLSTM and CRF, and a secondary task with a softmax layer using a simplified LID label set. The largest box describes the components of CS-ELMo, including the enhanced character n-gram module proposed in this paper. B) The right figure describes in detail the enhanced character n-gram mechanism inside CS-ELMo. The figure shows the convolutions of a word as input and a single vector representation as output.
Table 1: The distribution of the LID datasets according to the CALCS LID label set. The label lang1 refers to English and lang2 is either Nepali, Spanish or Hindi depending on the corpus. The full label distribution is in Appendix A.
Table 2: The results of incremental experiments on each LID dataset. The scores are calculated using the weighted F-1 metric across the eight LID labels from CALCS. Within each column, the best score in each block is in bold, and the best score for the whole column is underlined. Note that development scores from subsequent experiments
Table 3: Comparison of our best models with the best published scores for language identification. Scores are calculated with the F1 metric, and WA F1 is the weighted average F1 between both languages.
Table 4: The F1 scores on POS tagging for the Hindi-English dataset. CS knowledge means that the CS-ELMo architecture (see
Appendix for "From English to Code-Switching: Transfer Learning with Strong Morphological Clues"
A Language Identification Distributions
Table 6 shows the distribution of the language identification labels across the CALCS datasets.

Labels       Nep-Eng   Spa-Eng   Hin-Eng
lang1         71,148   112,579    84,752
lang2         64,534   119,408    29,958
other         45,286    55,768    21,725
ne             5,053     5,693     9,657
ambiguous        126       404        13
mixed            177        54        58
fw                 0        30       542
unk                0       325        17

Table 6: Label distribution for LID datasets.
Table 7: Utterance-level language distribution for the language identification datasets.
Corpus                  CMI-all   CMI-mixed
Nepali-English 2014      19.708      25.697
Spanish-English 2016      7.685      22.114
Hindi-English 2018       10.094      23.141

Table 8: Code-Mixing Index (CMI) for the language identification datasets. CMI-all: average over all utterances in the corpus. CMI-mixed: average over only code-switched instances.

Table 9 shows the distribution of the POS tags for Hindi-English. This dataset corresponds to the POS tagging experiments in Section 5.2.

POS Labels   Train   Dev   Test
X             5296   790   1495
VERB          4035   669   1280
NOUN          3511   516   1016
ADP           2037   346    599
PROPN         1996   271    470
ADJ           1070   170    308
PART          1045   145     23
PRON          1013   159    284
DET            799   116    226
ADV            717   100    204
CONJ           571    77    161
PART NEG       333    43     92
PRON WH        294    39     88
NUM            276    35     80

Table 9: The POS tag distribution for Hindi-English.
Table 10 shows the distribution of the NER labels for Spanish-English. This dataset corresponds to the NER experiments in Section 5.2.

NER Classes     Train     Dev     Test
person          6,226      95    1,888
location        4,323      16      803
organization    1,381      10      307
group           1,024       5      153
title           1,980      50      542
product         1,885      21      481
event             557       6       99
time              786       9      197
other             382       7       62
NE Tokens      18,544     219    4,532
O Tokens      614,013   9,364  178,479
Tweets         50,757     832   15,634

Table 10: The distribution of labels for the Spanish-English NER dataset from CALCS 2018.
Footnotes: (1) Note that there are more than two labels in the LID tagset, as explained in Section 4. (2) http://github.com/RiTUAL-UH/cs_elmo (3) ELMo has seven character convolutional layers, each layer with a kernel size from one to seven characters (n = 7). (4) ELMo varies the output channels per convolutional layer, so the dimensionality of E_j varies as well. (5) The LID label set uses eight labels (lang1, lang2, ne, mixed, ambiguous, fw, other, and unk), but for the simplified LID label set, we only consider three labels (lang1, lang2 and other) to predict only based on characters.
Acknowledgements

This work was supported by the National Science Foundation (NSF) under grant #1910192. We thank Deepthi Mave for providing general statistics of the code-switching datasets and Mona Diab for insightful discussions on the topic.
References

Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 138-147, Melbourne, Australia. Association for Computational Linguistics.

Mohamed Al-Badrashiny and Mona Diab. 2016. LILI: A Simple Language Independent Approach for Language Identification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1211-1219, Osaka, Japan. The COLING 2016 Organizing Committee.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Kelsey Ball and Dan Garrette. 2018. Part-of-Speech Tagging for Code-Switched, Transliterated Texts without Explicit Language Identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3084-3089, Brussels, Belgium. Association for Computational Linguistics.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5:135-146.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392. Association for Computational Linguistics.

Björn Gambäck and Amitava Das. 2014. On Measuring the Complexity of Code-Mixing. In Proceedings of the 11th International Conference on Natural Language Processing, Goa, India, pages 1-7.

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional Sequence to Sequence Learning. CoRR, abs/1705.03122.

Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.

Naman Jain and Riyaz Ahmad Bhat. 2014. Language Identification in Code-Switching Scenario. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 87-93, Doha, Qatar. Association for Computational Linguistics.

Deepthi Mave, Suraj Maharjan, and Thamar Solorio. 2018. Language Identification and Analysis of Code-Switched Social Media Text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 51-61, Melbourne, Australia. Association for Computational Linguistics.

Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas Rey-Villamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the Second Shared Task on Language Identification in Code-Switched Data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40-49, Austin, Texas. Association for Computational Linguistics.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A Universal Part-of-Speech Tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2089-2096, Istanbul, Turkey. European Language Resources Association (ELRA).

Holger Schwenk. 2018. Filtering and Mining Parallel Data in a Joint Multilingual Space. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228-234, Melbourne, Australia. Association for Computational Linguistics.

Holger Schwenk and Matthijs Douze. 2017. Learning Joint Multilingual Sentence Representations with Neural Machine Translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157-167, Vancouver, Canada. Association for Computational Linguistics.

Holger Schwenk and Xian Li. 2018. A Corpus for Multilingual Document Classification in Eight Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

Kushagra Singh, Indira Sen, and Ponnurangam Kumaraguru. 2018. A Twitter Corpus for Hindi-English Code Mixed POS Tagging. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 12-17, Melbourne, Australia. Association for Computational Linguistics.

Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W. Black. 2019. A Survey of Code-switched Speech and Language Processing. CoRR, abs/1904.00784.

Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the First Shared Task on Language Identification in Code-Switched Data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72, Doha, Qatar. Association for Computational Linguistics.

Victor Soto and Julia Hirschberg. 2018. Joint Part-of-Speech and Language ID Tagging for Code-Switched Data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 1-10, Melbourne, Australia. Association for Computational Linguistics.

Shashwat Trivedi, Harsh Rangwani, and Anil Kumar Singh. 2018. IIT (BHU) Submission for the ACL Shared Task on Named Entity Recognition on Code-switched Data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 148-153, Melbourne, Australia. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Changhan Wang, Kyunghyun Cho, and Douwe Kiela. 2018. Code-Switched Named Entity Recognition with Embedding Attention. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 154-158, Melbourne, Australia. Association for Computational Linguistics.

Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019. Learning Multilingual Meta-Embeddings for Code-Switching Named Entity Recognition. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181-186, Florence, Italy. Association for Computational Linguistics.

Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, and Pascale Fung. 2018. Bilingual Character Representation for Efficiently Addressing Out-of-Vocabulary Words in Code-Switching Named Entity Recognition. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 110-114, Melbourne, Australia. Association for Computational Linguistics.

Zeynep Yirmibeşoglu and Gülşen Eryigit. 2018. Detecting Code-Switching between Turkish-English Language Pair. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 110-115, Brussels, Belgium. Association for Computational Linguistics.
Augmented Outcome-weighted Learning for Optimal Treatment Regimes

Xin Zhou and Michael R. Kosorok

Departments of Biostatistics and Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts 02115, U.S.A.
Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, U.S.A.

November 30, 2017

Keywords: Optimal Treatment Regime; RKHS; Universal consistency; Residuals; Double robustness

Abstract

Precision medicine is of considerable interest to clinical, academic and regulatory parties. The key to precision medicine is the optimal treatment regime. Recently, Zhou et al. (2017) developed residual weighted learning (RWL) to construct optimal regimes that directly optimize the clinical outcome. However, this method involves computationally intensive non-convex optimization, which cannot guarantee a global solution. Furthermore, this method does not possess full semiparametric efficiency. In this article, we propose augmented outcome-weighted learning (AOL). The method is built on a doubly robust augmented inverse probability weighted estimator (AIPWE), and hence constructs semiparametrically efficient regimes. Our proposed AOL is closely related to RWL. The weights are obtained from counterfactual residuals, where negative residuals are reflected to positive and accordingly their treatment assignments are switched to opposites. Convex loss functions are thus applied to guarantee a global solution and to reduce computations. We show that AOL is universally consistent, i.e., the estimated regime of AOL converges to the Bayes regime as the sample size approaches infinity, without knowing any specifics of the distribution of the data. We also propose variable selection methods for linear and nonlinear regimes, respectively, to further improve performance. The performance of the proposed AOL methods is illustrated in simulation studies and in an analysis of the Nefazodone-CBASP clinical trial data.
Introduction
Most medical treatments are designed for the "average patient". Such a "one-size-fits-all" approach is successful for some patients but not always for others. Precision medicine, also known as personalized medicine, is an innovative approach to disease prevention and treatment that takes into account individual variability in clinical information, genes, environments and lifestyles. Currently, precision medicine is of considerable interest to clinical, academic, and regulatory parties. There are already several FDA-approved treatments that are tailored to specific characteristics of individuals. For example, ceritinib, a recently FDA-approved drug for the treatment of lung cancer, is highly active in patients with advanced, ALK-rearranged non-small-cell lung cancer (Shaw et al. 2014).
The key to precision medicine is the optimal treatment regime. Let $X = (X_1, \cdots, X_p)^T \in \mathcal{X}$ be a patient's clinical covariates, $A \in \mathcal{A} = \{+1, -1\}$ be the treatment assignment, and $R$ be the observed clinical outcome. Assume without loss of generality that larger values of $R$ are more desirable. A treatment regime $d$ is a function from $\mathcal{X}$ to $\mathcal{A}$. An optimal treatment regime is a regime that maximizes the outcome under this regime. Assuming that the data generating mechanism is known, the optimal treatment regime is related to the contrast
$$\delta(x) = \mu_{+1}(x) - \mu_{-1}(x),$$
where $\mu_{+1}(x) = E(R \mid X = x, A = +1)$ and $\mu_{-1}(x) = E(R \mid X = x, A = -1)$. The Bayes optimal regime is $d^*(x) = 1$ if $\delta(x) > 0$ and $-1$ otherwise.
Most published optimal treatment strategies estimate the contrast $\delta(x)$ by modelling either the conditional mean outcomes or the contrast directly, based on data from randomized clinical trials or observational studies (see Moodie et al. (2014); Murphy (2003); Robins (2004); Taylor et al. (2015) and references therein). They obtain treatment regimes indirectly by inverting the regression estimates; these are regression-based approaches for treatment regimes. For instance, Qian and Murphy (2011) proposed a two-step procedure that first estimates a conditional mean for the outcome and then determines the treatment regime by comparing conditional mean outcomes across treatments. The success of these regression-based approaches depends on the correct specification of the models and on the high precision of the model estimates. However, in practice, the heterogeneity of the population makes regression modelling complicated.
Alternatively, Zhao et al. (2012) proposed a classification-based approach, called outcome weighted learning (OWL), which utilizes weighted support vector machines (Vapnik 1995) to estimate the optimal treatment regime directly. Zhang et al. (2012a) also proposed a general framework that applies classification methods to the optimal treatment regime problem.
Indeed, the classification-based approaches follow Vapnik's main principle (Vapnik 1995): "When solving a given problem, try to avoid solving a more general problem as an intermediate step." As in Figure 1, the aim of optimal treatment regimes is to estimate the form of the decision boundary δ(x) = 0. Regression-based approaches find the decision boundary by solving a general problem that estimates δ(x) for any x ∈ X . For the optimal treatment regime, it is sufficient to find an accurate estimate of δ(x) only near the zeros of δ(x). In general, finding the optimal regime is an easier problem than regression function estimation. Classification-based approaches, which seek the decision boundary directly, provide a flexible framework from a different perspective.
Recently, Zhou et al. (2017) proposed residual weighted learning (RWL), which uses the residuals from a regression fit of the outcome as the pseudo-outcome, to improve the finite sample performance of OWL. However, this method involves a non-convex loss function and presents numerous computational challenges, which hinders its practical use. For non-convex optimization, a global solution is not guaranteed, and the computation is generally intensive. Athey and Wager (2017) also pointed out that RWL does not possess full semiparametric efficiency. In this article, we propose augmented outcome-weighted learning (AOL). The method is built on a doubly robust augmented inverse probability weighted estimator (AIPWE), and hence constructs semiparametrically efficient regimes. Although this article focuses on randomized clinical trials, the double robustness is particularly useful for observational studies. Our proposed AOL is closely related to RWL. The weights are obtained from counterfactual residuals, where negative residuals are reflected to positive and accordingly their treatment assignments are switched to opposites. Convex loss functions can thus be applied, which reduces computations. AOL inherits almost all desirable properties of RWL. Similar to RWL, AOL is universally consistent, i.e., the estimated regime of AOL converges to the Bayes regime as the sample size approaches infinity, without knowing any specifics of the distribution of the data. The finite sample performance of AOL is demonstrated in numerical simulations.
The remainder of the article is organized as follows. In Section 2.1, we review outcome weighted learning and residual weighted learning. In Section 2.2 and 2.3, we propose augmented outcome-weighted learning. We discover the connection between augmented outcome-weighted learning and residual weighted learning in Section 2.4. We establish universal consistency for the proposed AOL in Section 2.5. The variable selection techniques for AOL are discussed in Section 2.6. We present simulation studies to evaluate finite sample performance of the proposed methods in Section 3. The method is then illustrated on the Nefazodone-CBASP clinical trial in Section 4. We conclude the article with a discussion in Section 5. All the technical proofs are provided in Appendix.
Method
Review of outcome weighted learning and residual weighted learning
In this article, random variables are denoted by uppercase letters, while their realizations are denoted by lowercase letters. Consider a two-arm randomized trial. Let π(a, x) := P (A = a|X = x) be the probability of being assigned treatment a for patients with clinical covariates x. It is predefined in the trial design. We assume π(a, x) > 0 for all a ∈ A and x ∈ X .
We use the potential outcomes framework (Rubin 1974) to precisely define the optimal treatment regime. Let $R^*(+1)$ and $R^*(-1)$ denote the potential outcomes that would be observed had a subject received treatment +1 or -1, respectively. There are two assumptions in the framework. The actually observed outcomes and potential outcomes are connected by the consistency assumption, i.e., $R = R^*(A)$. We further assume that, conditional on covariates $X$, the potential outcomes $\{R^*(+1), R^*(-1)\}$ are independent of $A$, the treatment that has actually been received. This is the assumption of no unmeasured confounders (NUC). This assumption automatically holds in a randomized clinical trial.
For an arbitrary treatment regime d, we can thus define its potential outcome R * (d(X)) = R * (+1)I(d(X) = +1)+ R * (−1)I(d(X) = −1), where I(·) is the indicator function. It would be the observed outcome if a subject from the population were to be assigned treatment according to regime d. The expected potential outcome under any regime d, defined as V(d) = E(R * (d)), is called the value function associated with regime d. Thus, an optimal regime d * is a regime that maximizes
$V(d)$. Let $m(x, d) = \mu_{+1}(x) I(d(x) = +1) + \mu_{-1}(x) I(d(x) = -1)$.
Under the consistency and NUC assumptions, it is straightforward to show that
$$V(d) = E\big[m(X, d)\big] = E\left[\frac{R}{\pi(A, X)}\, I\big(A = d(X)\big)\right]. \quad (1)$$
Thus finding $d^*$ is equivalent to the following minimization problem:
$$d^* \in \arg\min_d E\left[\frac{R}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right]. \quad (2)$$
Zhao et al. (2012) viewed this as a weighted classification problem, and proposed outcome weighted learning (OWL) to apply statistical learning techniques to optimal treatment regimes. However, as discussed in Zhou et al. (2017), this method is not perfect. Firstly, the estimated regime of OWL is affected by a simple shift of the outcome $R$; hence estimates from OWL are unstable, especially when the sample size is small. Secondly, since OWL needs the outcome to be nonnegative to gain computational efficiency from convex programming, OWL works similarly to weighted classification in that it reduces the difference between the estimated and true treatment assignments. Thus the regime estimated by OWL tends to retain the treatments that subjects actually received. This behavior is not ideal for data from a randomized clinical trial, since treatments are randomly assigned to patients.
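For illustration, the following minimal sketch (ours, not part of the paper) computes the inverse-probability-weighted value estimate corresponding to (1); the example data, regime, and propensity function are hypothetical.

```python
# IPW value estimate of a regime d: average of R/pi(A,X) over subjects whose
# assigned treatment agrees with the regime.
import numpy as np

def value_ipw(r, a, x, d, pi):
    """r: outcomes, a: treatments in {-1,+1}, x: covariate matrix,
    d: regime mapping x -> {-1,+1}, pi: function (a, x) -> propensity."""
    agree = (a == d(x)).astype(float)
    return np.mean(r * agree / pi(a, x))

# Example with a 1:1 randomized trial (pi = 0.5 for both arms).
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
a = rng.choice([-1, 1], size=100)
r = 1.0 + 0.5 * a * np.sign(x[:, 0]) + rng.normal(scale=0.1, size=100)
d = lambda x: np.sign(x[:, 0])
print(value_ipw(r, a, x, d, pi=lambda a, x: np.full(len(a), 0.5)))
```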
To alleviate these problems, Zhou et al. (2017) proposed residual weighted learning (RWL), in which the misclassification errors are weighted by residuals of the outcome $R$ from a regression fit on the clinical covariates $X$. The residuals are calculated as $R_g = R - g(X)$. Zhou et al. (2017) used $g_1(X) = E\left(\frac{R}{2\pi(A,X)} \,\middle|\, X\right)$ as a choice of $g(X)$. Unlike OWL in (2), RWL targets the following optimization problem,
$$d^* \in \arg\min_d E\left[\frac{R_g}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right].$$
Suppose that the realization data $\{(x_i, a_i, r_i) : i = 1, \cdots, n\}$ are collected independently. For any decision function $f(x)$, let $d_f(x) = \mathrm{sign}\big(f(x)\big)$ be the associated regime. RWL aims to minimize the following regularized empirical risk,
$$\frac{1}{n}\sum_{i=1}^n \frac{r_{g,i}}{\pi(a_i, x_i)}\, T\big(a_i f(x_i)\big) + \lambda \|f\|^2, \quad (3)$$
where $r_{g,i} = r_i - g(x_i)$, $T(\cdot)$ is a continuous surrogate loss function, $\|f\|$ is some norm for $f$, and $\lambda$ is a tuning parameter. Since some residuals are negative, convex surrogate loss functions are not appropriate in (3). Zhou et al. (2017) considered a non-convex loss, the smoothed ramp loss function. However, the non-convexity presents significant challenges for solving the optimization problem (3). Unlike convex functions, non-convex functions may possess local optima that are not global optima, and most efficient optimization algorithms, such as gradient descent and coordinate descent, are only guaranteed to converge to a local optimum. The theoretical properties of RWL are established at the global optimum. Although Zhou et al. (2017) applied a difference of convex (d.c.) algorithm to address the non-convex optimization problem by solving a sequence of convex subproblems to increase the likelihood of reaching a global minimum, global optimality is not guaranteed (Sriperumbudur and Lanckriet 2009). The d.c. algorithm is still computationally intensive. In addition, RWL can be connected with the AIPWE as discussed in Zhou et al. (2017), but it does not have full semiparametric efficiency (Athey and Wager 2017).
Augmented outcome-weighted learning (AOL)
Let us come back to equation (1). The first equality is the foundation of regression-based approaches, while the second inspired outcome weighted learning (Zhao et al. 2012). Zhang et al. (2012a) combined these two perspectives through a doubly robust augmented inverse probability weighted estimator (AIPWE) (Bang and Robins 2005) of the value function.
Recall that $\mu_{+1}(x) = E(R \mid X = x, A = +1)$, $\mu_{-1}(x) = E(R \mid X = x, A = -1)$, and $m(x, d) = \mu_{+1}(x) I(d(x) = +1) + \mu_{-1}(x) I(d(x) = -1)$. Following Zhang et al. (2012b), we start from the doubly robust AIPWE:
$$\mathrm{AIPWE}(d) = \frac{1}{n}\sum_{i=1}^n \left[\frac{r_i - \hat{m}(x_i, d)}{\pi(a_i, x_i)}\, I\big(a_i = d(x_i)\big) + \hat{m}(x_i, d)\right],$$
where $\hat{m}(x, d)$ is an estimator of $m(x, d)$; the AIPWE is an estimator of
$$V(d) = E\left[\frac{R - m(X, d)}{\pi(A, X)}\, I\big(A = d(X)\big) + m(X, d)\right].$$
For an observational study, we are also required to estimate $\pi(a, x)$ from the data. $\mathrm{AIPWE}(d)$ is a consistent estimator of $V(d)$ if either $\hat{\pi}(a, x)$ or $\hat{m}(x, d)$ is correctly specified. This is the so-called double robustness. In a randomized clinical trial $\pi(a, x)$ is known, hence even if $\hat{m}(x, d)$ is inconsistent, $\mathrm{AIPWE}(d)$ is still consistent. Noting that
$$\frac{R - m(X, d)}{\pi(A, X)}\, I\big(A = d(X)\big) + m(X, d) = \frac{R - \tilde{g}(X)}{\pi(A, X)}\, I\big(A = d(X)\big) + \mu_{-A}(X),$$
where
$$\tilde{g}(x) := \pi(-1, x)\,\mu_{+1}(x) + \pi(+1, x)\,\mu_{-1}(x), \quad (4)$$
maximizing $\mathrm{AIPWE}(d)$ is asymptotically equivalent to the following minimization problem
$$\arg\min_d E\left[\frac{R - \tilde{g}(X)}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right]. \quad (5)$$
Let $\tilde{R} = R - \tilde{g}(X)$. As explained later in Section 2.4, $\tilde{R}$ is a form of residual. At this point, we could apply a similar non-convex surrogate loss in the regularization framework as RWL in (3). However, it would still suffer from local optima and intensive computation.
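As an aside, the doubly robust value estimate itself can be computed along the lines below; this is our own sketch of the display above, and the outcome-model estimates mu_hat_pos and mu_hat_neg are user-supplied functions whose form is not specified here.

```python
# Doubly robust AIPWE of the value of a regime d.
import numpy as np

def value_aipwe(r, a, x, d, pi, mu_hat_pos, mu_hat_neg):
    """mu_hat_pos/mu_hat_neg: estimates of mu_{+1}(x) and mu_{-1}(x)."""
    dx = d(x)
    m_hat = np.where(dx == 1, mu_hat_pos(x), mu_hat_neg(x))   # m_hat(x, d)
    agree = (a == dx).astype(float)
    return np.mean((r - m_hat) / pi(a, x) * agree + m_hat)
```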
To seek the optimal regime, we apply a finding in Liu et al. (2016) to take advantage of efficient convex optimization. Note that
$$E\left[\frac{|\tilde{R}|}{\pi(A, X)}\, I\big(A \cdot \mathrm{sign}(\tilde{R}) \neq d(X)\big)\right] = E\left[\frac{\tilde{R}}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right] + E\left[\frac{\tilde{R}^-}{\pi(A, X)}\right],$$
where $\tilde{R}^- = \max(-\tilde{R}, 0)$. Therefore finding $d^*$ in (5) is equivalent to the following optimization problem,
$$d^* \in \arg\min_d E\left[\frac{|\tilde{R}|}{\pi(A, X)}\, I\big(A \cdot \mathrm{sign}(\tilde{R}) \neq d(X)\big)\right],$$
where negative weights are reflected to positive, and accordingly their treatment assignments are switched to opposites. Similar with OWL and RWL, we seek the decision function f by minimizing a regularized surrogate risk,
$$\frac{1}{n}\sum_{i=1}^n \frac{|\tilde{r}_i|}{\pi(a_i, x_i)}\, \phi\big(a_i \cdot \mathrm{sign}(\tilde{r}_i) f(x_i)\big) + \frac{\lambda}{2}\|f\|^2, \quad (6)$$
where $\tilde{r}_i = r_i - \hat{g}(x_i)$ is the estimated counterfactual residual, $\phi(\cdot)$ is a continuous surrogate loss function, $\|f\|$ is some norm for $f$, and $\lambda$ is a tuning parameter controlling the trade-off between the empirical risk and the complexity of the decision function $f$. This method is called augmented outcome-weighted learning (AOL) in this article, since the weights are derived from augmented outcomes. As the weights $|\tilde{r}_i| / \pi(a_i, x_i)$ are all nonnegative, convex surrogates can be employed for efficient computation. In this article, we apply the Huberized hinge loss function (Wang et al. 2008),
$$\phi(u) = \begin{cases} 0 & \text{if } u \geq 1, \\ \tfrac{1}{4}(1 - u)^2 & \text{if } -1 \leq u < 1, \\ -u & \text{if } u < -1. \end{cases} \quad (7)$$
Other convex loss functions, such as the hinge loss, can also be applied in AOL. Although the Huberized hinge loss has a shape similar to the hinge loss, it is smooth everywhere and hence has computational advantages in optimization.
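A small sketch (ours) of the Huberized hinge loss in (7) and its derivative; both are used by the gradient-based solvers described next.

```python
import numpy as np

def huberized_hinge(u):
    u = np.asarray(u, dtype=float)
    return np.where(u >= 1, 0.0,
           np.where(u >= -1, 0.25 * (1.0 - u) ** 2, -u))

def huberized_hinge_grad(u):
    u = np.asarray(u, dtype=float)
    return np.where(u >= 1, 0.0,
           np.where(u >= -1, -0.5 * (1.0 - u), -1.0))
```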
Implementation of AOL
We derive an algorithm for the linear AOL in Section 2.3.1, and then generalize it to the case of nonlinear learning through kernel mapping in Section 2.3.2. Both algorithms solve convex optimization problems, and global solutions are guaranteed.
Linear Decision Rule for AOL
Consider a linear decision function
$$f(x) = w^T x + b.$$
The associated regime $d_f$ assigns a subject with clinical covariates $x$ to treatment +1 if $w^T x + b > 0$ and to $-1$ otherwise. In (6), we define $\|f\|$ as the Euclidean norm of $w$. Then the minimization problem (6) can be rewritten as
$$\min_{w, b}\ \frac{1}{n}\sum_{i=1}^n \frac{|\tilde{r}_i|}{\pi(a_i, x_i)}\, \phi\big(a_i \cdot \mathrm{sign}(\tilde{r}_i)(w^T x_i + b)\big) + \frac{\lambda}{2} w^T w. \quad (8)$$
There are many efficient numerical methods for solving this smooth unconstrained convex optimization problem. One example is the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Nocedal 1980), a quasi-Newton method that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory. When we obtain the solution $(\hat{w}, \hat{b})$, the decision function is $\hat{f}(x) = \hat{w}^T x + \hat{b}$.
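A minimal sketch (ours, not the authors' implementation) of problem (8) solved with L-BFGS is given below; the function name and arguments are illustrative, and a numerical gradient is used for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def fit_linear_aol(x, a, w_res, pi, lam=1.0):
    """x: covariates, a: treatments in {-1,+1}, w_res: residuals r~_i,
    pi: vector of randomization probabilities pi(a_i, x_i)."""
    n, p = x.shape
    wgt = np.abs(w_res) / pi                 # nonnegative weights |r~_i| / pi(a_i, x_i)
    lab = a * np.sign(w_res)                 # flipped labels a_i * sign(r~_i)

    def objective(theta):
        w, b = theta[:p], theta[p]
        u = lab * (x @ w + b)
        phi = np.where(u >= 1, 0.0, np.where(u >= -1, 0.25 * (1 - u) ** 2, -u))
        return np.mean(wgt * phi) + 0.5 * lam * w @ w

    res = minimize(objective, np.zeros(p + 1), method="L-BFGS-B")
    return res.x[:p], res.x[p]               # (w_hat, b_hat); regime: sign(x @ w_hat + b_hat)
```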
Nonlinear Decision rule for AOL
The nonlinear decision function f (x) can be represented by h(x) + b with h(x) ∈ H K and b ∈ R, where H K is a reproducing kernel Hilbert space (RKHS) associated with a Mercer kernel function K. The kernel function K(·, ·) is a positive definite function mapping from X × X to R. The norm in H K , denoted by || · || K , is induced by the following inner product,
$$\langle f, g \rangle_K = \sum_{i=1}^n \sum_{j=1}^m \alpha_i \beta_j K(x_i, x_j),$$
for $f(\cdot) = \sum_{i=1}^n \alpha_i K(\cdot, x_i)$ and $g(\cdot) = \sum_{j=1}^m \beta_j K(\cdot, x_j)$. The most widely used nonlinear kernel in practice is the Gaussian Radial Basis Function (RBF) kernel, that is,
$$K_\sigma(x, z) = \exp\left(-\frac{\sigma}{2}\|x - z\|^2\right),$$
where $\sigma > 0$ is a free parameter whose inverse $1/\sigma$ is called the width of $K_\sigma$.
Then minimizing (6) can be rewritten as
$$\min_{h, b}\ \frac{1}{n}\sum_{i=1}^n \frac{|\tilde{r}_i|}{\pi(a_i, x_i)}\, \phi\big(a_i \cdot \mathrm{sign}(\tilde{r}_i)(h(x_i) + b)\big) + \frac{\lambda}{2}\|h\|_K^2. \quad (9)$$
Due to the representer theorem (Kimeldorf and Wahba 1971), the nonlinear problem can be reduced to finding finite-dimensional coefficients $v_i$, and $h(x)$ can be represented as $\sum_{j=1}^n v_j K(x, x_j)$. So the problem (9) becomes
$$\min_{v, b}\ \frac{1}{n}\sum_{i=1}^n \frac{|\tilde{r}_i|}{\pi(a_i, x_i)}\, \phi\Big(a_i \cdot \mathrm{sign}(\tilde{r}_i)\Big(\sum_{j=1}^n v_j K(x_i, x_j) + b\Big)\Big) + \frac{\lambda}{2}\sum_{i,j=1}^n v_i v_j K(x_i, x_j). \quad (10)$$
Again, this is a smooth unconstrained convex optimization problem. We apply the L-BFGS algorithm to solve (10). When we obtain the solution $(\hat{v}, \hat{b})$, the decision function is $\hat{f}(x) = \sum_{j=1}^n \hat{v}_j K(x, x_j) + \hat{b}$.
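A hedged sketch of the kernelized problem (10) with a Gaussian RBF kernel, again solved by L-BFGS; it reuses the weight and label construction from the linear sketch above, and the function names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(x, z, sigma=1.0):
    d2 = ((x[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sigma * d2)         # K_sigma(x, z) = exp(-(sigma/2)||x-z||^2)

def fit_kernel_aol(x, a, w_res, pi, lam=1.0, sigma=1.0):
    n = x.shape[0]
    K = rbf_kernel(x, x, sigma)
    wgt = np.abs(w_res) / pi
    lab = a * np.sign(w_res)

    def objective(theta):
        v, b = theta[:n], theta[n]
        u = lab * (K @ v + b)
        phi = np.where(u >= 1, 0.0, np.where(u >= -1, 0.25 * (1 - u) ** 2, -u))
        return np.mean(wgt * phi) + 0.5 * lam * v @ K @ v

    res = minimize(objective, np.zeros(n + 1), method="L-BFGS-B")
    v_hat, b_hat = res.x[:n], res.x[n]
    return lambda x_new: np.sign(rbf_kernel(x_new, x, sigma) @ v_hat + b_hat)
```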
Connection to residual weighted learning
Note that $\tilde{g}(x)$ in (4) is a weighted average of $\mu_{+1}(x)$ and $\mu_{-1}(x)$. Hence $\tilde{R} = R - \tilde{g}(X)$ is a form of residual. The use of residuals in optimal treatment regimes is justified in Zhou et al. (2017) as follows: for any measurable function $g$,
$$E\left[\frac{R - g(X)}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right] = E\left[\frac{R}{\pi(A, X)} - g(X)\right] - V(d).$$
For residual weighted learning in Zhou et al. (2017), the corresponding $g(\cdot)$ is
$$g_1(x) = E\left(\frac{R}{2\pi(A, X)} \,\middle|\, X = x\right) = \frac{1}{2}\mu_{+1}(x) + \frac{1}{2}\mu_{-1}(x). \quad (11)$$
Similarly, Liu et al. (2016) applied unweighted regression to calculate residuals, where the corresponding $g(\cdot)$ is
$$g_2(x) = E(R \mid X = x) = \pi(+1, x)\mu_{+1}(x) + \pi(-1, x)\mu_{-1}(x). \quad (12)$$
It is interesting to understand the implication of $\tilde{g}(x)$ in (4). Under the consistency and NUC assumptions, we can check that
$$E\big(R^*(-A) \mid X = x\big) = \pi(-1, x)\mu_{+1}(x) + \pi(+1, x)\mu_{-1}(x) = \tilde{g}(x).$$
$\tilde{g}(x)$ is the expected outcome for subjects with covariates $x$ had they received the opposite treatments to the ones they actually received. $\tilde{g}(x)$ is counterfactual, and cannot be observed. It can be estimated by $\hat{g}(x) = \pi(-1, x)\hat{\mu}_{+1}(x) + \pi(+1, x)\hat{\mu}_{-1}(x)$, where $\hat{\mu}_{+1}(x)$ and $\hat{\mu}_{-1}(x)$ are estimates of $\mu_{+1}(x)$ and $\mu_{-1}(x)$, respectively. Noting that
$$\tilde{g}(x) = E\left(\frac{\pi(-A, X)}{\pi(A, X)}\, R \,\middle|\, X = x\right) = \pi(-1, x)\mu_{+1}(x) + \pi(+1, x)\mu_{-1}(x), \quad (13)$$
$\tilde{g}(x)$ can also be estimated by weighted regression directly, where the weights are $\pi(-A, x)/\pi(A, x)$. In a randomized clinical trial with the usual equal allocation ratio 1:1, $g_1(x)$, $g_2(x)$ and $\tilde{g}(x)$ coincide. If the allocation ratio is unequal, they differ. Compared with the regression weights of $g_1(x)$ in (11) and of $g_2(x)$ in (12), $\tilde{g}(x)$ in (13) utilizes a more extreme set of weights. For example, in a randomized clinical trial with allocation ratio 3:1, i.e., the number of subjects in arm +1 is three times that in arm $-1$, the weights in (12) for the two arms are both 1 (unweighted), the weights in (11) are 2/3 and 2, and the weights in (13) are 1/3 and 3.
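The sketch below (ours) illustrates one way to form the counterfactual residuals: estimate $\mu_{+1}$ and $\mu_{-1}$ by arm-specific regressions (plain linear fits here, purely for illustration), plug them into the estimate of $\tilde{g}$, and build the residual weights and flipped labels that feed the linear and kernel AOL sketches above.

```python
import numpy as np

def counterfactual_residuals(x, a, r, pi):
    """pi(a, x) returns the randomization probability of arm a given x (vectorized)."""
    xd = np.hstack([np.ones((x.shape[0], 1)), x])
    beta_pos, *_ = np.linalg.lstsq(xd[a == 1], r[a == 1], rcond=None)
    beta_neg, *_ = np.linalg.lstsq(xd[a == -1], r[a == -1], rcond=None)
    mu_pos, mu_neg = xd @ beta_pos, xd @ beta_neg          # mu_hat_{+1}(x), mu_hat_{-1}(x)
    ones = np.ones_like(a)
    g_hat = pi(-ones, x) * mu_pos + pi(ones, x) * mu_neg   # pi(-1,x) mu_+1 + pi(+1,x) mu_-1
    res = r - g_hat                                        # counterfactual residuals
    return np.abs(res) / pi(a, x), a * np.sign(res)        # AOL weights and flipped labels
```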
Our proposed AOL is closely related to RWL, as we just discussed that AOL uses counterfactual residuals. AOL possesses almost all desirable properties of RWL. First, by using residuals, AOL stabilizes the variability introduced from the original outcome. Second, to minimize the empirical risk in (6), for subjects with positive residuals, AOL tends to recommend the same treatment assignments that subjects have actually received; for subjects with negative residuals, AOL is apt to give the opposite treatment assignments to what they have received. Third, AOL is location-scale invariant with respect to the original outcomes. Specifically, the estimated regime from AOL is invariant to a shift of the outcome; it is invariant to a scaling of the outcome with a positive number; the regime from AOL that maximizes the outcome is opposite to the one that minimizes the outcome. These are intuitively sensible. The only nice property of RWL that is not inherited by AOL is the robustness to outliers because of the unbounded convex loss in AOL. However, we may apply an appropriate method or model estimating residuals to reduce the probability of outliers.
Theoretical properties
In this section, we establish theoretical properties for AOL. Recall that for any treatment regime $d : \mathcal{X} \to \mathcal{A}$, the value function is defined as
$$V(d) = E\left[\frac{R}{\pi(A, X)}\, I\big(A = d(X)\big)\right].$$
Similarly, we define the risk function of a treatment regime $d$ as
$$R(d) = E\left[\frac{R}{\pi(A, X)}\, I\big(A \neq d(X)\big)\right].$$
The regime that minimizes the risk is the Bayes regime $d^* = \arg\min_d R(d)$, and the corresponding risk $R^* = R(d^*)$ is the Bayes risk. Recall that the Bayes regime is $d^*(x) = 1$ if $\delta(x) > 0$ and $-1$ otherwise. Let $\phi : \mathbb{R} \to \mathbb{R}^+$, where $\mathbb{R}^+ = [0, +\infty)$, be a convex function. In this section, we investigate a general result, and do not limit $\phi$ to the Huberized hinge loss. A few popular convex surrogate examples are listed as follows:
• Hinge loss: $\phi(u) = (1 - u)_+$, where $(v)_+ = \max(0, v)$,
• Squared hinge loss: $\phi(u) = [(1 - u)_+]^2$,
• Least squares loss: $\phi(u) = (1 - u)^2$,
• Huberized hinge loss as shown in (7),
• Logistic loss: $\phi(u) = \log(1 + \exp(-u))$,
• Distance weighted discrimination (DWD) loss: $\phi(u) = \frac{1}{u}$ if $u \geq 1$, and $2 - u$ if $u < 1$,
• Exponential loss: $\phi(u) = \exp(-u)$.
The hinge loss and squared hinge loss are widely used in support vector machines (Vapnik 1995). The least squares loss is applied to regularization networks (Evgeniou et al. 2000). The loss function in the logistic regression is just the logistic loss. The DWD loss is the loss function in the distance-weighted discrimination (Marron et al. 2007). The exponential loss is used in AdaBoost (Freund and Schapire 1997).
For any measurable function $g : \mathcal{X} \to \mathbb{R}$, recall that $R_g = R - g(X)$. In this section, we do not require $g$ to be a regression fit of $R$, and $g$ can be any arbitrary function. For a decision function $f : \mathcal{X} \to \mathbb{R}$, we proceed to define a surrogate $\phi$-risk function:
$$R_{\phi,g}(f) = E\left[\frac{|R_g|}{\pi(A, X)}\, \phi\big(A \cdot \mathrm{sign}(R_g) f(X)\big)\right]. \quad (14)$$
Similarly, we define the minimal $\phi$-risk as $R^*_{\phi,g} = \inf_f R_{\phi,g}(f)$ and $f^*_{\phi,g} = \arg\min_f R_{\phi,g}(f)$. The performance of the associated regime $d_f = \mathrm{sign}(f)$ is measured by the excess risk $\Delta R(f) = R(d_f) - R^*$. Similarly, we define the excess $\phi$-risk as $\Delta R_{\phi,g}(f) = R_{\phi,g}(f) - R^*_{\phi,g}$.

Suppose that a sample $D_n = \{X_i, A_i, R_i\}_{i=1}^n$ is independently drawn from a probability measure $P$ on $\mathcal{X} \times \mathcal{A} \times \mathbb{R}$, where $\mathcal{X} \subset \mathbb{R}^p$ is compact. Let $f_{D_n,\lambda_n} \in \mathcal{H}_K + \{1\}$, i.e. $f_{D_n,\lambda_n} = h_{D_n,\lambda_n} + b_{D_n,\lambda_n}$ with $h_{D_n,\lambda_n} \in \mathcal{H}_K$ and $b_{D_n,\lambda_n} \in \mathbb{R}$, be a global minimizer of the following optimization problem:
$$\min_{f = h + b \in \mathcal{H}_K + \{1\}}\ \frac{1}{n}\sum_{i=1}^n \frac{|R_{g,i}|}{\pi(A_i, X_i)}\, \phi\big(A_i \cdot \mathrm{sign}(R_{g,i}) f(X_i)\big) + \frac{\lambda_n}{2}\|h\|_K^2, \quad (15)$$
where $R_{g,i} = R_i - g(X_i)$. Here we suppress $\phi$ and $g$ from the notations of $f_{D_n,\lambda_n}$, $h_{D_n,\lambda_n}$ and $b_{D_n,\lambda_n}$. The purpose of the theoretical analysis is to investigate universal consistency of the associated regime of $f_{D_n,\lambda_n}$. The concept of universal consistency is given in Zhou et al. (2017). A universally consistent regime method eventually learns the Bayes regime, without knowing any specifics of the distribution of the data, as the sample size approaches infinity. Mathematically, a regime $d$ is universally consistent when $\lim_{n\to\infty} R(d) = R^*$ in probability.
Fisher consistency
The first question is whether the loss function used is Fisher consistent. The concept of Fisher consistency is brought from pattern classification (Lin 2002). For optimal treatment regimes, a loss function is Fisher consistent if the loss function alone can be used to identify the Bayes regime when the sample size approaches infinity, i.e., $R(\mathrm{sign}(f^*_{\phi,g})) = R(d^*)$. We define
$$\eta_1(x) = E(R_g^+ \mid X = x, A = +1) + E(R_g^- \mid X = x, A = -1), \qquad \eta_2(x) = E(R_g^+ \mid X = x, A = -1) + E(R_g^- \mid X = x, A = +1), \quad (16)$$
where $R_g^+ = \max(R_g, 0)$ and $R_g^- = \max(-R_g, 0)$. We suppress the dependence on $g$ from the notations. Note that
$$\eta_1(x) - \eta_2(x) = E(R \mid X = x, A = +1) - E(R \mid X = x, A = -1) = \mu_{+1}(x) - \mu_{-1}(x).$$
The sign of $\eta_1(x) - \eta_2(x)$ is just the Bayes regime at $x$. After some simple algebra, the $\phi$-risk in (14) can be written as
$$R_{\phi,g}(f) = E\big[\eta_1(X)\,\phi\big(f(X)\big) + \eta_2(X)\,\phi\big(-f(X)\big)\big].$$
Now we introduce the generic conditional $\phi$-risk,
$$Q_{\eta_1,\eta_2}(\alpha) = \eta_1\phi(\alpha) + \eta_2\phi(-\alpha),$$
where η 1 , η 2 ∈ R + and α ∈ R. The notation suppresses the dependence on φ and g. We define the optimal conditional φ-risk,
H(η 1 , η 2 ) = Q η 1 ,η 2 (α * ) = min α∈R Q η 1 ,η 2 (α),
and furthermore define,
H − (η 1 , η 2 ) = min α:α(η 1 −η 2 )≤0 Q η 1 ,η 2 (α).
H − (η 1 , η 2 ) is the optimal value of the conditional φ-risk, under the constraint that the sign of the argument α disagrees with the Bayes regime. Fisher consistency is equivalent to
$H^-(\eta_1, \eta_2) > H(\eta_1, \eta_2)$ for any $\eta_1, \eta_2 \in [0, \infty)$ with $\eta_1 \neq \eta_2$.
The condition is similar with that of classification calibration in Bartlett et al. (2006). When φ is convex, this condition is equivalent to a simpler condition on the derivative of φ at 0.
Theorem 2.1. Assume that φ is convex. Then φ is Fisher consistent if and only if φ ′ (0) exists and φ ′ (0) < 0.
It is interesting to note that the necessary and sufficient condition for a convex surrogate loss function $\phi$ to yield a Fisher consistent regime concerns only its local property at 0. All surrogate loss functions listed above are Fisher consistent.
Relating excess risk to excess φ-risk
We now turn to the excess risk and show how it can be bounded through the excess φ-risk. It is easy to verify that the excess φ-risk can be expressed as
$$\Delta R_{\phi,g}(f) = E\Big[Q_{\eta_1(X),\eta_2(X)}\big(f(X)\big) - \min_{\alpha\in\mathbb{R}} Q_{\eta_1(X),\eta_2(X)}(\alpha)\Big].$$
Let $\Delta Q_{\eta_1,\eta_2}(f) = Q_{\eta_1,\eta_2}(f) - \min_{\alpha\in\mathbb{R}} Q_{\eta_1,\eta_2}(\alpha) = Q_{\eta_1,\eta_2}(f) - H_{\eta_1,\eta_2}$.
Theorem 2.2. Assume $\phi$ is convex, $\phi'(0)$ exists and $\phi'(0) < 0$. In addition, suppose that there exist constants $C > 0$ and $s \geq 1$ such that
$$|\eta_1 - \eta_2|^s \leq C^s\, \Delta Q_{\eta_1,\eta_2}(0).$$
Then $\Delta R(f) \leq C\,\big(\Delta R_{\phi,g}(f)\big)^{1/s}$.
As shown in the examples below, ∆Q η 1 ,η 2 (0) is often related to η 1 + η 2 . The following theorem handles this situation.
Theorem 2.3. Assume $\phi$ is convex, $\phi'(0)$ exists, and $\phi'(0) < 0$. Suppose $E\left[\frac{|R_g|}{\pi(A,X)}\right] \leq M_g$. In addition, suppose that there exist a constant $s \geq 2$ and a concave increasing function $h : \mathbb{R}^+ \to \mathbb{R}^+$ such that
$$|\eta_1 - \eta_2|^s \leq h(\eta_1 + \eta_2)\, \Delta Q_{\eta_1,\eta_2}(0).$$
Then $\Delta R(f) \leq \big(h(M_g)\big)^{1/s}\big(\Delta R_{\phi,g}(f)\big)^{1/s}$.
We now examine the consequences of these theorems on the examples of loss functions. Here we only present results briefly, and show details in Appendix A. Except for Examples 1 and 6, we assume that $E\left[\frac{|R_g|}{\pi(A,X)}\right]$ is bounded by $M_g$ in all other examples.

Example 1 (hinge loss). As shown in Appendix A, $H_{\eta_1,\eta_2} = 2\min(\eta_1, \eta_2)$, and $\Delta Q_{\eta_1,\eta_2}(0) = |\eta_1 - \eta_2|$. By Theorem 2.2, $\Delta R(f) \leq \Delta R_{\phi,g}(f)$.
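For concreteness, here is a short derivation of the hinge-loss quantities quoted in Example 1 (our own working from the definitions above, not the Appendix A argument). Since $Q_{\eta_1,\eta_2}$ is piecewise linear in $\alpha$ on $[-1,1]$ and nondecreasing in $|\alpha|$ outside $[-1,1]$, its minimum is attained at $\alpha = \pm 1$:

```latex
\begin{align*}
Q_{\eta_1,\eta_2}(\alpha) &= \eta_1 (1-\alpha)_+ + \eta_2 (1+\alpha)_+ ,\\
H(\eta_1,\eta_2) &= \min\{Q_{\eta_1,\eta_2}(-1),\, Q_{\eta_1,\eta_2}(1)\} = \min\{2\eta_1,\, 2\eta_2\} = 2\min(\eta_1,\eta_2),\\
\Delta Q_{\eta_1,\eta_2}(0) &= Q_{\eta_1,\eta_2}(0) - H(\eta_1,\eta_2) = (\eta_1+\eta_2) - 2\min(\eta_1,\eta_2) = |\eta_1-\eta_2| .
\end{align*}
```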
Example 2 (squared hinge loss). Consider the loss function $\phi(\alpha) = [(1 - \alpha)_+]^2$. We have $(\eta_1 - \eta_2)^2 = (\eta_1 + \eta_2)\,\Delta Q_{\eta_1,\eta_2}(0)$. By Theorem 2.3, $\Delta R(f) \leq \sqrt{M_g}\,\big(\Delta R_{\phi,g}(f)\big)^{1/2}$.
Example 3 (least squares loss). Now consider the loss function φ(α) = (1 − α) 2 . Both H η 1 ,η 2 and ∆Q η 1 ,η 2 (0) are the same as those in the previous example. Hence the bound in the previous example also applies to the least squares loss.
Example 4 (Huberized hinge loss). We can simply obtain that $(\eta_1 - \eta_2)^2 = 4(\eta_1 + \eta_2)\,\Delta Q_{\eta_1,\eta_2}(0)$. By Theorem 2.3,
$$\Delta R(f) \leq 2\sqrt{M_g}\,\big(\Delta R_{\phi,g}(f)\big)^{1/2}. \quad (17)$$
Example 5 (logistic loss). We consider the loss function $\phi(\alpha) = \log(1 + \exp(-\alpha))$. This is a slightly more complicated case. As shown in Appendix A, $(\eta_1 - \eta_2)^2 \leq 8(\eta_1 + \eta_2)\,\Delta Q_{\eta_1,\eta_2}(0)$. Then by Theorem 2.3, we have $\Delta R(f) \leq \sqrt{8 M_g}\,\big(\Delta R_{\phi,g}(f)\big)^{1/2}$.
Example 6 (DWD loss). As shown in Appendix A, we obtain $\Delta Q_{\eta_1,\eta_2}(0) \geq |\eta_1 - \eta_2|$. Then by Theorem 2.2, $\Delta R(f) \leq \Delta R_{\phi,g}(f)$.
Example 7 (exponential loss). Consider the loss function $\phi(\alpha) = \exp(-\alpha)$. We have $H_{\eta_1,\eta_2} = 2\sqrt{\eta_1\eta_2}$, and $\Delta Q_{\eta_1,\eta_2}(0) = (\sqrt{\eta_1} - \sqrt{\eta_2})^2$. Then $(\eta_1 - \eta_2)^2 \leq 2(\eta_1 + \eta_2)\,\Delta Q_{\eta_1,\eta_2}(0)$. By Theorem 2.3, $\Delta R(f) \leq \sqrt{2 M_g}\,\big(\Delta R_{\phi,g}(f)\big)^{1/2}$.
Universal consistency
We will establish universal consistency of the regime d f Dn,λn = sign(f Dn,λn ). The following theorem shows the convergence of φ-risk on the sample dependent function f Dn,λn . We apply empirical process techniques to show consistency.
Theorem 2.4. Suppose φ is a Lipschitz continuous function. Assume that we choose a sequence λ n > 0 such that λ n → 0 and nλ n → ∞. For any distribution P for (X, A, R) satisfying
|Rg| π(A,X) ≤ M g < ∞ and | √ λ n b Dn,λn | ≤ M b < ∞ almost everywhere, we have that in probability, lim n→∞ R φ,g (f Dn,λn ) = inf f ∈H K +{1} R φ,g (f ).
When the loss function φ satisfies Theorem 2.2 or 2.3, starting from Theorem 2.4, universally consistent follows if inf f ∈H K +{1} R φ,g (f ) = R * φ,g . This condition requires the concept of universal kernels (Steinwart and Christmann 2008). A continuous kernel K on a compact metric space X is called universal if its associated RKHS H K is dense in C(X ), the space of all continuous functions f : X → R on the compact metric space X endowed with the usual supremum norm. The next Lemma shows that the RKHS H K of a universal kernel K is rich enough to approximate arbitrary decision functions.
Lemma 2.5. Let K be a universal kernel, and H K be the associated RKHS. Suppose that φ is a Lipschitz continuous function, and f * φ,g is measurable and bounded,
|f * φ,g | ≤ M f . For any distribution P for (X, A, R) satisfying |Rg| π(A,X) ≤ M g < ∞ almost everywhere with regular marginal distribution on X, we have inf f ∈H K +{1} R φ,g (f ) = R * φ,g .
Our proposed AOL uses the Huberized hinge loss. Combining all the theoretical results and the excess risk bound in (17) together, the following proposition shows universal consistency of AOL with the Huberized hinge loss.
Proposition 2.6. Let K be a universal kernel, and H K be the associated RKHS. Let φ be the Huberized hinge loss function. Assume that we choose a sequence λ n > 0 such that λ n → 0 and nλ n → ∞. For any distribution P for (X, A, R) satisfying |Rg| π(A,X) ≤ M g < ∞ almost everywhere with regular marginal distribution on X, we have that in probability,
lim n→∞ R(sign(f Dn,λn )) = R * .
In the proof of Proposition 2.6, we provide a bound on b_{D_n,λ_n}. A similar trick can be applied to the hinge loss, squared hinge loss, and least squares loss; thus for these three loss functions it is not hard to derive universal consistency. The exponential loss function is not Lipschitz continuous, so the learning regime with this loss is probably not universally consistent.
For the logistic loss and DWD loss, they do not satisfy Lemma 2.5 since f * φ,g is not bounded. We require stronger conditions for consistency. Firstly, we may assume that both η 1 (x) and η 2 (x) in (16) are continuous. This assumption is plausible in practice. Secondly, we still need an assumption on bounded b Dn,λn as in Theorem 2.4 to exclude some trivial situations, for example, where A · sign(R g ) = 1 almost everywhere. We present the result in the following proposition. The proof is simple and we omit it.
Proposition 2.7. Let K be a universal kernel, and H_K be the associated RKHS. Let φ be the logistic loss or the DWD loss. Assume that we choose a sequence λ_n > 0 such that λ_n → 0 and nλ_n → ∞. For any distribution P for (X, A, R) satisfying that (1) both η1(x) and η2(x) are continuous, (2) |√λ_n b_{D_n,λ_n}| ≤ M_b < ∞ almost everywhere, and (3) |R_g|/π(A,X) ≤ M_g < ∞ almost everywhere with regular marginal distribution on X, we have that in probability, lim_{n→∞} R(sign(f_{D_n,λ_n})) = R*.
Variable selection for AOL
As demonstrated in Zhou et al. (2017), variable selection is critical for optimal treatment regimes when the dimension of clinical covariates is moderate or high. In this section, we apply the variable selection techniques of Zhou et al. (2017) to AOL.
Variable selection for linear AOL
As in Zhou et al. (2017), we apply the elastic-net penalty (Zou and Hastie 2005),
λ 1 ||w|| 1 + λ 2 2 w T w,
where ||w|| 1 = p j=1 |w j | is the ℓ 1 -norm, to replace the ℓ 2 -norm penalty in (8) for variable selection. The elastic-net penalty selects informative covariates through the ℓ 1 -norm penalty, and tends to identify or remove highly correlated variables together, the so-called grouping property, as the ℓ 2 -norm penalty does.
The elastic-net penalized linear AOL minimizes
1 n n i=1 |r i | π(a i , x i ) φ a i · sign(r i ) w T x i + b + λ 1 ||w|| 1 + λ 2 2 w T w,
where λ1 (> 0) and λ2 (≥ 0) are regularization parameters. We use projected scaled subgradient (PSS) algorithms (Schmidt 2010), which are extensions of L-BFGS to the case of optimizing a smooth function with an ℓ1-norm penalty. The obtained decision function is f̂(x) = ŵ^T x + b̂, and thus the estimated optimal treatment regime is the sign of f̂(x).
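For illustration, the elastic-net penalized linear AOL objective can be written down directly as below. This is a minimal sketch assuming a common Huberized hinge parameterization and hypothetical array inputs; it only evaluates the objective and does not implement the PSS optimizer used here.

```python
import numpy as np

def huberized_hinge(u):
    # One common Huberized hinge (an assumption; the paper's exact form may differ).
    return np.where(u < 0, 0.5 - u, np.where(u < 1, 0.5 * (1 - u) ** 2, 0.0))

def linear_aol_objective(w, b, X, A, r, prop, lam1, lam2):
    """Elastic-net penalized linear AOL objective.
    X: (n, p) covariates; A: (n,) treatments in {-1, +1};
    r: (n,) residuals r_i = R_i - g_hat(x_i); prop: (n,) propensities pi(a_i, x_i)."""
    margin = A * np.sign(r) * (X @ w + b)
    weighted_loss = np.mean(np.abs(r) / prop * huberized_hinge(margin))
    return weighted_loss + lam1 * np.abs(w).sum() + 0.5 * lam2 * (w @ w)
```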
Variable selection for AOL with nonlinear kernels
Similarly to Zhou et al. (2017), taking the Gaussian RBF kernel as an example, we define the covariate-scaled Gaussian RBF kernel,
K η (x, z) = exp − p j=1 η j (x j − z j ) 2 ,
where η = (η 1 , · · · , η p ) T ≥ 0. The covariate x j is scaled by √ η j . Setting η j = 0 is equivalent to discarding the j'th covariate. The hyperparameter σ in the original Gaussian RBF kernel is discarded as it is absorbed to the scaling factors. We seek (v,b,η) to minimize the following optimization problem:
min v,b,η 1 n n i=1 |r i | π(a i , x i ) φ a i · sign(r i ) n j=1 v j K η (x i , x j ) + b +λ 1 ||η|| 1 + λ 2 2 n i,j=1 v i v j K η (x i , x j ),(18)
subject to η ≥ 0,
where λ1 (> 0) and λ2 (> 0) are regularization parameters. The optimization problem has n + p + 1 variables and an ℓ1-norm penalty on the scaling factors. It can yield zero solutions for some of the η_j due to the singularity at η = 0, and hence performs variable selection. Note that the optimization problem (18) is no longer convex, even if the loss function is convex. We apply the L-BFGS-B algorithm (Byrd et al. 1995; Morales and Nocedal 2011), an extension of L-BFGS that handles simple box constraints on the variables, to solve (18). The obtained decision function is f̂(x) = Σ_{i=1}^n v̂_i K_η̂(x, x_i) + b̂.
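A small sketch of the covariate-scaled Gaussian RBF kernel and the resulting decision function is given below. The joint optimization of (v, b, η) by L-BFGS-B is not shown; function and variable names are illustrative.

```python
import numpy as np

def scaled_rbf_kernel(X, Z, eta):
    """K_eta(x, z) = exp(-sum_j eta_j (x_j - z_j)^2), with eta >= 0 elementwise."""
    diff = X[:, None, :] - Z[None, :, :]             # shape (n, m, p)
    return np.exp(-(diff ** 2 * eta).sum(axis=-1))   # shape (n, m)

def decision_function(x_new, X_train, v, b, eta):
    """f(x) = sum_i v_i K_eta(x, x_i) + b; the recommended treatment is sign(f)."""
    K = scaled_rbf_kernel(np.atleast_2d(x_new), X_train, eta)
    return K @ v + b
```

Setting a component of eta exactly to zero removes the corresponding covariate from the kernel, which is how the ℓ1 penalty on η performs variable selection.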
Simulation studies
We carried out extensive simulations to investigate empirical performance of the proposed AOL methods. We first evaluated performance of different residuals in the framework of AOL. In the simulations, p-dimensional vectors of clinical covariates x 1 , · · · , x p were generated from independent uniform random variables U (−1, 1). The response R was normally distributed with mean Q 0 (x, a) and standard deviation 1. We considered two scenarios with linear treatment regimes:
(1) Q 0 (x, a) = (0.5 + 0.5x 1 + 0.8x 2 + 0.3x 3 − 0.5x 4 + 0.7x 5 ) + a(0.2 − 0.6x 1 − 0.8x 2 );
(2) Q 0 (x, a) = exp [(0.5 + 0.5x 1 + 0.8x 2 + 0.3x 3 − 0.5x 4 + 0.7x 5 ) + a(0.2 − 0.6x 1 − 0.8x 2 )].
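As an illustration of this design, the sketch below generates data from Scenario 1; the allocation ratio is controlled by pi_pos (e.g., 0.75 corresponds to 3:1). It is a hypothetical reconstruction of the data-generating process described above, not the authors' simulation code.

```python
import numpy as np

def simulate_scenario1(n, p=5, pi_pos=0.5, rng=None):
    """Generate (X, A, R) under Scenario 1 with P(A = +1 | X) = pi_pos."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1, 1, size=(n, p))
    A = np.where(rng.random(n) < pi_pos, 1, -1)
    main = 0.5 + 0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] - 0.5 * X[:, 3] + 0.7 * X[:, 4]
    contrast = 0.2 - 0.6 * X[:, 0] - 0.8 * X[:, 1]
    Q0 = main + A * contrast
    R = Q0 + rng.normal(size=n)   # outcome with standard deviation 1, as in the text
    return X, A, R
```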
We evaluated three types of residuals, as discussed in Section 2.4, with respect to the following g(x)'s:
• g̃(x) = E[ π(−A,X)/π(A,X) · R | X = x ] = π(−1, x)μ+1(x) + π(+1, x)μ−1(x);
• g1(x) = E[ R/(2π(A,X)) | X = x ] = (1/2)μ+1(x) + (1/2)μ−1(x);
• g2(x) = E(R | X = x) = π(+1, x)μ+1(x) + π(−1, x)μ−1(x).
g̃(x) is used in the proposed AOL. However, g1(x) in Zhou et al. (2017) and g2(x) in Liu et al. (2016) can also be applied in AOL to replace g̃(x). In a randomized clinical trial with the usual equal allocation ratio 1:1, i.e. π(+1, x) = π(−1, x) = 0.5, these g(x)'s are the same. To compare the performance of these residuals, we considered unequal allocation ratios: (1) 3:1, i.e., π(+1, x) = 3π(−1, x), and (2) 1:3, i.e., 3π(+1, x) = π(−1, x).
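For reference, the three baselines can be computed directly from μ+1(x), μ−1(x) and the propensity, as in the sketch below (an illustration assuming the true conditional means are available, as in this simulation).

```python
import numpy as np

def residual_baselines(mu_pos, mu_neg, pi_pos):
    """The three g(x) candidates above, given mu_{+1}(x), mu_{-1}(x) and the
    propensity pi(+1, x); pi(-1, x) = 1 - pi(+1, x). All inputs are arrays over x."""
    pi_neg = 1.0 - pi_pos
    g_tilde = pi_neg * mu_pos + pi_pos * mu_neg   # counterfactual baseline (proposed AOL)
    g1 = 0.5 * mu_pos + 0.5 * mu_neg              # equal-weight baseline
    g2 = pi_pos * mu_pos + pi_neg * mu_neg        # E(R | X = x)
    return g_tilde, g1, g2
```

Under a 1:1 allocation (pi_pos = 0.5) the three baselines coincide, which is why unequal allocation ratios are needed to compare them.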
The sample sizes were n = 100 and n = 400 for each scenario. We repeated the simulation 500 times. Parameters were tuned through 10-fold cross-validation. A large independent test set with 10,000 subjects was used to evaluate performance. The evaluation criterion was the value function of the estimated regime on the test set. For simplicity, we ran the first set of simulations using only linear AOL on low dimensional data (p = 5). g̃(x), g1(x) and g2(x) were obtained from the underlying true distributions of the data, instead of being estimated from the observed data, in order to eliminate the impact of regression estimates on the evaluation.

Table 1: Mean (std) of empirical value functions evaluated on independent test data for AOL with three types of residuals in two simulation scenarios with 5 covariates. The best value function for each scenario and sample size combination is in bold. Columns: allocation ratio = 3:1 and allocation ratio = 1:3, each with n = 100 and n = 400.
The simulation results on low dimensional data (p = 5) are presented in Table 1. From the table, the residuals fromg(x) yielded the best performance for each combination of scenario, allocation ratio and sample size, especially when the sample size is small. The simulation results confirm finite sample performance of our proposed counterfactual residual.
We then compared performance of AOL with other existing methods on usual equal allocation ratio data. The treatment A ∈ A = {−1, 1} was independent of X with π(+1, X) = π(−1, X) = 0.5. The covariate X and the outcome R were generated as previously. We considered two additional scenarios with non-linear treatment regimes:
(3) Q 0 (x, a) = (0.5 + 0.6x 1 + 0.8x 2 + 0.3x 3 − 0.5x 4 + 0.7x 5 ) + a(0.6 − x 2 1 − x 2 2 );
(4) Q0(x, a) = exp[ (0.5 + 0.6x1 + 0.8x2 + 0.3x3 − 0.5x4 + 0.7x5) + a(0.6 − x1² − x2²) ].
We ran simulations for two different dimensions of covariates: low dimensional data (p = 5) and moderate dimensional data (p = 25). On low dimensional data (p = 5), we compared the empirical performance of the following seven methods: (1) ℓ1-PLS proposed by Qian and Murphy (2011); (2) Q-learning using random forests as described in Taylor et al. (2015) (Q-RF); (3) the doubly robust augmented inverse probability weighted estimator (AIPWE) with CART proposed by Zhang et al. (2012a) (AIPWE-CART); (4) RWL proposed in Zhou et al. (2017) using the linear kernel (RWL-Linear); (5) RWL using the Gaussian RBF kernel (RWL-Gaussian); (6) the proposed AOL using the linear kernel (AOL-Linear); (7) the proposed AOL using the Gaussian RBF kernel (AOL-Gaussian). When the dimension was moderate (p = 25), the RWL methods were replaced with their variable selection counterparts (RWL-VS-Linear and RWL-VS-Gaussian), and similarly the AOL methods were replaced with AOL-VS-Linear and AOL-VS-Gaussian. ℓ1-PLS is a parametric regression-based method. In the simulation studies, ℓ1-PLS estimated the conditional outcomes E(R|X, A) by a linear model on (1, X, A, XA), and used the LASSO penalty for variable selection. The obtained regime recommends the treatment arm in which the conditional mean outcome is larger. Q-RF is a nonparametric regression-based method. The conditional outcomes E(R|X, A) were approximated using (X, A) as input covariates in the random forests. The number of trees was set to 1000 as suggested in Taylor et al. (2015). For AIPWE-CART, we first obtained the AIPWE version of the contrast function through linear regression, and then we let the propensity score be 0.5 and searched for the optimal treatment regime using a CART. The residuals in RWL and AOL were the same, and they were estimated by a linear regression model on X. This differs from the previous simulation: here we pretended that we do not know the underlying distribution of the data, and the residuals were estimated purely from the simulated data. There are tuning parameters for the ℓ1-PLS, RWL and AOL methods. Parameters were tuned through 10-fold cross-validation.
Again, the sample sizes were n = 100 and n = 400 for each scenario. We repeated the simulation 500 times. A large independent test set with 10,000 subjects was used to evaluate performance.
The simulation results on the low dimensional data (p = 5) are presented in Table 2. For Scenario 1, the optimal regime d*(x) is 1 if 0.6x1 + 0.8x2 < 0.2, and −1 otherwise. Both the decision boundary and the conditional outcome were linear, so ℓ1-PLS performed very well since its model was correctly specified. The RWL and AOL methods performed similarly, and they were close to ℓ1-PLS, especially when the sample size was large. Q-RF and AIPWE-CART, as tree-based methods, were not comparable with the other methods, perhaps because trees do not detect linear boundaries well. For Scenario 2, the optimal treatment regime was the same as in Scenario 1. Although the boundary was linear, both the conditional outcome and the contrast function were non-linear. ℓ1-PLS did not yield the best performance due to model mis-specification. Instead, RWL-Linear showed the best performance, and AOL-Linear was slightly worse than RWL-Linear. We used a linear model to estimate the residuals, and in this scenario the linear model for the residuals was mis-specified. RWL is robust to outliers in the residuals. As discussed in Section 2.4, this robustness to outliers is the only nice property that AOL does not inherit from RWL, due to its unbounded convex loss function; this is perhaps the reason why AOL-Linear was slightly worse than RWL. RWL-Gaussian and AOL-Gaussian performed similarly. For Scenarios 3 and 4, the decision boundaries were both nonlinear. We show results for Q-RF, AIPWE-CART, RWL-Gaussian, and AOL-Gaussian, since the other three methods, ℓ1-PLS, RWL-Linear and AOL-Linear, can only detect linear regimes. In Scenario 3, the model for estimating residuals was correctly specified, while in Scenario 4 this model was mis-specified. For both scenarios, AOL-Gaussian yielded the best performance, and was slightly better than RWL-Gaussian. The non-convex optimization in RWL-Gaussian has a complicated objective function with many local minima or stationary points, so it is very challenging to find a global minimum. The convex AOL-Gaussian does not have this problem, and hence achieved better performance than RWL-Gaussian. We also compared running times of the RWL and AOL methods. As shown in Table 5 in Appendix C, AOL is about 5-10 times faster than RWL. The convex optimization is much more computationally efficient than the non-convex optimization.
We moved to moderate dimension cases (p = 25). The simulation results are shown in Table 3. In Scenario 1, ℓ 1 -PLS outperformed other methods because of correct model specification. When the sample size was large, RWL and AOL methods were all close to ℓ 1 -PLS. In Scenario 2, RWL-VS-Linear presented the best performance, and were slightly better than AOL-VS-Linear. We think the reason is that RWL methods are robust on mis-specified regression models for estimating residuals. In Scenarios 3 and 4, our proposed AOL-VS-Gaussian ranked the first, and was slightly better than RWL-VS-Gaussian. Even though both RWL-VS-Gaussian and AOL-VS-Gaussian involve non-convex optimization, the objective function of AOL-VS-Gaussian is simpler, and is perhaps easier to find a global minimum than RWL-VS-Gaussian. We also compared the computational costs of RWL and AOL, as shown in Table 6 in Appendix C. The cost of AOL was again about 5-10 times cheaper than that of RWL.
Data analysis
We applied the proposed methods to analyze the Nefazodone-CBASP clinical trial data (Keller et al. 2000). The Nefazodone-CBASP trial randomly assigned patients with nonpsychotic chronic major depressive disorder (MDD) in a 1:1:1 allocation ratio to either Nefazodone (NFZ), cognitive behavioral-analysis system of psychotherapy (CBASP), or the combination of Nefazodone and CBASP (COMB). The outcome was the score on the 24-item Hamilton Rating Scale for Depression (HRSD). Lower HRSD is better. We used 50 pre-treatment covariates as in Zhao et al. (2012), and excluded patients with missing covariate values. The data used here consisted of 647 patients, with 216, 220, and 211 patients in three treatment arms.
We performed pairwise comparisons between any two treatment arms. We compared the performance of AOL-VS-Linear and AOL-VS-Gaussian with ℓ1-PLS, Q-RF, AIPWE-CART, RWL-VS-Linear and RWL-VS-Gaussian, as in the simulation studies. The outcomes used in the analyses were the negatives of the HRSD scores. We used a nested 10-fold cross-validation procedure for an unbiased comparison (Ambroise and McLachlan 2002). Specifically, the data were randomly partitioned into 10 roughly equal-sized parts. We used nine parts as training data to predict optimal treatments for patients in the part left out. The parameter tuning was based on inner 10-fold cross-validation on the training data. We repeated the procedure 10 times, and obtained the predicted treatment for each patient. We then computed the estimated value function as P_n[R I(A = Pred)/π_A(X)] / P_n[I(A = Pred)/π_A(X)], where P_n denotes the empirical average over the data and Pred is the predicted treatment in the cross-validation procedure. To obtain reliable estimates, we repeated the nested cross-validation procedure 100 times with different fold partitions. The analysis results are presented in Table 4. For the comparison between NFZ and CBASP, RWL-VS-Linear and AOL-VS-Linear performed better than the other methods. For the comparisons between NFZ and COMB and between CBASP and COMB, all methods produced similar performance. AOL-VS-Linear was among the top two methods for all comparisons. As shown in Table 7 in Appendix C, AOL was at least 10 times faster than RWL.
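The value estimator used above is a simple inverse-probability-weighted ratio; a minimal sketch is given below, with array inputs assumed (R outcomes, A observed treatments, pred cross-validated recommendations, prop propensities of the observed treatments).

```python
import numpy as np

def ipw_value(R, A, pred, prop):
    """P_n[R 1(A = pred)/pi_A(X)] / P_n[1(A = pred)/pi_A(X)]."""
    w = (A == pred).astype(float) / prop   # weight is 1/pi for matched subjects, else 0
    return np.sum(w * R) / np.sum(w)
```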
Discussion
In this article, we have proposed augmented outcome-weighted learning (AOL) to estimate optimal treatment regimes. As a close relative of residual weighted learning (RWL), AOL possesses almost all nice properties of RWL. AOL utilizes a convex loss function to guarantee a global solution. By contrast, the nice theoretical properties, for example, universal consistency, of RWL rely on a global solution, but the non-convex optimization associated with RWL cannot guarantee the global optimization. Furthermore, the convex optimization associated with AOL make it computationally efficient. In the simulation studies and data analysis, AOL is at least 5-10 times faster than RWL. There are two main approaches to estimating optimal treatment regimes. Regressionbased approaches posit regression models for either conditional mean outcomes, µ +1 (x) and µ −1 (x), or the contrast function, δ(x) = µ +1 (x) − µ −1 (x), then the optimal treatment regime is estimated by setting δ(x) = 0. Classification-based approaches directly estimate the optimal regime δ(x) = 0 in a semiparametric or nonparametric model. Compared with regression-based approaches, classification-based approaches are more robust to model misspecification. For example, in Figure 1, the optimal treatment regime δ(x) = 0 is almost linear, but the contrast function δ(x) is a complicated non-linear function. We may use the linear AOL to estimate the regime. However, any regression-based approach with a linear model would be misspecified. Another example is Scenario 2 in the simulation studies, where the optimal regime is linear, but neither the conditional mean outcome nor the contrast function is linear. The linear AOL yielded better performance than ℓ 1 -PLS, although both methods posit a linear model. AOL uses a different form of residuals, as compared to RWL. The residuals are estimated with respect to a counterfactual average outcome where all subjects would receive the opposite treatments to what they have actually received. Apparently, both AOL and RWL can apply with any form of residuals. In this article, we focus on the randomized clinical trial data where π(a, x) is known. According to the theory in Section 2.5, for a randomized clinical trial, AOL with any form of residuals is universally consistent. That is, it would eventually yield the Bayes regime when the sample size approaches infinity. When the sample size is finite, the simulations in Section 3 confirm the better performance of the counterfactual residual over others. The counterfactual residual is derived from a doubly robust AIPWE. The double robustness is quite useful for the observational study. In Appendix D, we develop the double robustness on universal consistency, i.e., the estimated regime of AOL is universal consistent if eitherμ a (x) orπ(a, x) is consistent on the observational data. We pave the way in theory for AOL in observational studies. It is of great interest to apply AOL in observational studies in our future work.
In this article, the outcome R is continuous. Zhou et al. (2017) proposed RWL as a general framework to deal with continuous, binary, count and rate outcomes. Similarly, AOL can handle all these types of outcomes by calculating residuals from a weighted regression model. The only difference is that the weights of AOL are π(−A,X)/π(A,X), while those of RWL are 1/(2π(A,X)). Variable selection is critical for optimal treatment regimes. As with RWL, we have provided variable selection algorithms for both linear and Gaussian RBF kernels. The variable selection with the linear kernel is an important extension of AOL: it seeks the optimal treatment regime semiparametrically, which is suitable for high dimensional data. Unlike the linear kernel, variable selection with the Gaussian RBF kernel involves computationally intensive non-convex optimization, which cannot guarantee a global solution. A convex extension is still needed in future work, perhaps by applying a similar adaptive metric selection approach with the Gaussian RBF kernel.
APPENDIX
We investigate loss functions in Appendix A. The proofs of theorems in the main paper are given in Appendix B. We present additional simulation results in Appendix C. The doubly robustness of AOL on observational data is proved in Appendix D.
APPENDIX A. Loss functions
Example 1 (hinge loss). Consider the loss function φ(α) = (1 − α)+. This is the surrogate loss function used in Zhao et al. (2012) and Liu et al. (2016). Q_{η1,η2}(α) is piecewise-linear. For η1 = 0, any α ≤ −1 makes Q_{η1,η2}(α) vanish. The same holds for α ≥ 1 when η2 = 0. For η1, η2 ∈ (0, ∞), any minima lie in [−1, 1]. Since Q_{η1,η2}(α) is linear on [−1, 1], the minimum must be attained at 1 for η1 > η2, at −1 for η1 < η2, and anywhere in [−1, 1] for η1 = η2. We have argued that α* = sign(η1 − η2). It is easy to verify that H(η1, η2) = 2 min(η1, η2). A similar argument gives H^−(η1, η2) = η1 + η2. H^−(η1, η2) is strictly greater than H(η1, η2) when η1 ≠ η2, so the hinge loss is Fisher consistent. Since H_{η1,η2} = 2 min(η1, η2), we have ∆Q_{η1,η2}(0) = |η1 − η2|. From Theorem 2.2, ∆R(f) ≤ ∆R_{φ,g}(f).
Example 2 (squared hinge loss). Consider the loss function φ(α) = [(1 − α) + ] 2 . This function is convex, differentiable, and decreasing at zero, and thus is Fisher consistent. If η 1 = 0, any α ≤ −1 makes Q η 1 ,η 2 (α) vanish. Similarly, any α ≥ 1 makes the conditional φ-risk vanish when η 2 = 0. For η 1 , η 2 ∈ (0, ∞), Q η 1 ,η 2 (α) is strictly convex with a unique minimum, and solving for it yields α * = (η 1 − η 2 )/(η 1 + η 2 ). Simple calculation gives that H η 1 ,η 2 is 0 when η 1 = η 2 = 0, and otherwise 4η 1 η 2 /(η 1 + η 2 ). Then for either case, we have (η 1 − η 2 ) 2 = (η 1 + η 2 )∆Q η 1 ,η 2 (0). If we further assume that E |Rg| π(A,X) is bounded by M g , by Theorem 2.3, ∆R(f ) ≤ M g (∆R φ,g (f )) 1/2 .
Example 3 (least squares loss). Now consider the loss function φ(α) = (1 − α) 2 . From Theorem 2.1, it is Fisher consistent. Simple algebraic manipulations show that H η 1 ,η 2 and ∆Q η 1 ,η 2 (0) are the same as those in the previous example. Hence the bound in the previous example also applies to the least squares loss.
Example 4 (Huberized hinge loss). If η 1 = 0, any α ≤ −1 makes Q η 1 ,η 2 (α) vanish.
Similarly, when η 2 = 0 any α ≥ 1 makes the conditional φ-risk vanish. For η 1 > 0 and η 2 > 0, Q η 1 ,η 2 (α) is strictly convex with a unique minimum. Solving by differentiation, the minimum is obtained at α * = (η 1 −η 2 )/(η 1 +η 2 ). Then we have H η 1 ,η 2 is 0 when η 1 = η 2 = 0, and otherwise η 1 η 2 /(η 1 +η 2 ). Then for either case, we have (η 1 −η 2 ) 2 = 4(η 1 +η 2 )∆Q η 1 ,η 2 (0).
If we further assume that E |Rg| π(A,X) is bounded by M g , by Theorem 2.3,
∆R(f ) ≤ 2 M g (∆R φ,g (f )) 1/2 .
Example 5 (logistic loss). We consider the loss function φ(α) = log(1 + exp(−α)). This loss function is convex, differentiable, and decreasing at zero, and thus is Fisher consistent. We first consider the case that η1 ≠ 0 and η2 ≠ 0. A simple calculation gives that Q_{η1,η2}(α) attains its minimum at α* = log(η1/η2), and
∆Q_{η1,η2}(0) = η1 log( 2η1/(η1 + η2) ) + η2 log( 2η2/(η1 + η2) ).
We fix η2 and view ∆Q_{η1,η2}(0) as a function of η1. Using a Taylor expansion around η1 = η2, we have
∆Q_{η1,η2}(0) = (1/2) · η2 / ( η̃1 (η̃1 + η2) ) · (η1 − η2)²,
where η̃1 is between η1 and η2. Similarly, fixing η1 and again using a Taylor expansion around η2 = η1,
∆Q_{η1,η2}(0) = (1/2) · η1 / ( η̃2 (η1 + η̃2) ) · (η1 − η2)²,
where η̃2 is between η1 and η2. By summing these two equations, we obtain
2∆Q_{η1,η2}(0) = (1/2) η2/(η̃1(η̃1 + η2)) (η1 − η2)² + (1/2) η1/(η̃2(η1 + η̃2)) (η1 − η2)²
  ≥ (1/2) η2/(2(η1 + η2)²) (η1 − η2)² + (1/2) η1/(2(η1 + η2)²) (η1 − η2)²
  = (η1 − η2)² / ( 4(η1 + η2) ).
So, we have (η1 − η2)² ≤ 8(η1 + η2)∆Q_{η1,η2}(0).
It is easy to verify that when η 1 = 0 or η 2 = 0, the above bound holds. If we further assume that E |Rg| π(A,X) is bounded by M g , by Theorem 2.3,
∆R(f ) ≤ 8M g (∆R φ,g (f )) 1/2 .
Example 6 (DWD loss). The DWD loss is convex, differentiable and decreasing at zero. Hence this loss function is Fisher consistent. When η 1 , η 2 ∈ (0, ∞), consider three cases,
(1) η 1 > η 2 ; (2) η 2 > η 1 ; (3) η 1 = η 2 . Simple differentiation yields that the minimizer is
α * = η 1 η 2 if η 1 > η 2 > 0, any point ∈ [−1, 1] if η 1 = η 2 > 0, − η 2 η 1 if η 2 > η 1 > 0.
When η 1 > η 2 > 0, we have H η 1 ,η 2 = 2η 2 + 2 √ η 1 η 2 . Then,
∆Q η 1 ,η 2 (0) = 2η 1 − 2 √ η 1 η 2 = 2 √ η 1 √ η 1 + √ η 2 (η 1 − η 2 ) ≥ |η 1 − η 2 |.(19)
Similarly, when η 2 > η 1 > 0, (19) holds. It is easy to verify that when η 1 = η 2 or at least one of η 1 and η 2 is zero, the inequality (19) holds too. By Theorem 2.2, ∆R(f ) ≤ ∆R φ,g (f ). Example 7 (exponential loss). Consider the loss function φ(α) = exp(−α). Again, this function is convex, differentiable, and decreasing at zero, and thus is Fisher consistent. For η 1 , η 2 ∈ (0, ∞), solving for the stationary point yields the unique minimizer α * = argmin α∈R Q η 1 ,η 2 (α) = 1 2 log (η 1 /η 2 ). Then H η 1 ,η 2 = 2 √ η 1 η 2 , and ∆Q η 1 ,η 2 (0) = (
√ η 1 − √ η 2 ) 2 . Note that ( √ η 1 + √ η 2 ) 2 ≤ 2(η 1 + η 2 ), then we have, (η 1 − η 2 ) 2 ≤ 2(η 1 + η 2 )∆Q η 1 ,η 2 (0).
It is easy to verify that when η 1 = 0 or η 2 = 0, the above inequality holds. With an additional assumption that E |Rg| π(A,X) is bounded by M g , from Theorem 2.3,
∆R(f ) ≤ 2M g (∆R φ,g (f )) 1/2 .
APPENDIX B. Proofs
Proof of Theorem 2.1
Proof. Recall that Q η 1 ,η 2 (α) = η 1 φ(α) + η 2 φ(−α). It is easy to check that Q η 1 ,η 2 is convex. We consider the 'if ' part of the proof first. Suppose that φ is differentiable at 0 and has φ ′ (0) < 0. Assume without loss of generality that η 1 > η 2 . We need to prove that Q η 1 ,η 2 (α) is not minimized by any α ∈ (−∞, 0], i.e. α * > 0. Because φ is convex, it follows that for any h > 0
φ(0) + hφ ′ (0) ≤ φ(h) φ(0) − hφ ′ (0) ≤ φ(−h).
Therefore, noting that Q η 1 ,η 2 (0) = φ(0)(η 1 + η 2 ), it is derived that
Q η 1 ,η 2 (−h) − Q η 1 ,η 2 (0) = η 1 (φ(−h) − φ(0)) + η 2 (φ(h) − φ(0)) ≥ −(η 1 − η 2 )φ ′ (0)h,
that is, given φ ′ (0) < 0, for any h > 0, Q η 1 ,η 2 (−h)−Q η 1 ,η 2 (0) > 0. Consequently, α * ≥ 0 because it is a minimum. To prove the strict inequality, note that given that φ is differentiable at zero, by definition, for any ǫ > 0 there exists a δ(ǫ) > 0 such that
δ −1 (φ(δ) − φ(0)) ≤ φ ′ (0) + ǫ δ −1 (φ(0) − φ(−δ)) ≥ φ ′ (0) − ǫ.
This implies that
Q η 1 ,η 2 (δ) − Q η 1 ,η 2 (0) = η 1 (φ(δ) − φ(0)) + η 2 (φ(−δ) − φ(0)) ≤ η 1 δ φ ′ (0) + ǫ + η 2 δ ǫ − φ ′ (0) = δ φ ′ (0)(η 1 − η 2 ) + ǫ(η 1 + η 2 ) ,
thus, making ǫ small enough, φ ′ (0)(η 1 − η 2 ) + ǫ(η 1 + η 2 ) < 0. It follows that α * > 0.
We proceed now with the 'only if' part of the proof. Suppose φ is Fisher consistent. Note that if α* minimizes Q_{η1,η2}(α), it follows that Q_{η1,η2}(α*) − Q_{η1,η2}(0) < 0 when η1 ≠ η2. Note that
Q η 1 ,η 2 (α * ) − Q η 1 ,η 2 (0) = η 1 (φ(α * ) − φ(0)) + η 2 (φ(−α * ) − φ(0))(20)
We need to prove that φ ′ (0) < 0. Let [a, b] be the subderivative of φ at zero. By definition, if h > 0,
φ(h) − φ(0) ≥ bh φ(−h) − φ(0) ≥ −ah.(21)
First we are going to prove that b < 0. Suppose by contradiction that b ≥ 0. If η 1 > η 2 then α * > 0 from the definition of Fisher consistency. By (21), φ(α * ) ≥ φ(0), and replacing in (20), it is necessary that φ(−α * ) < φ(0) in order to keep the optimality property of α * . By (21) again, we have that 0 < a ≤ b. Consequently, by replacing (21)
with h = α * into (20), Q η 1 ,η 2 (α * ) − Q η 1 ,η 2 (0) ≥ α * (b(η 1 − η 2 ) − η 2 (a − b)) > 0,
which contradicts that α * is the minimum. Therefore, it is concluded that b < 0. It remains to prove that a = b. To do so, suppose by contradiction that a < b < 0. This implies that it is possible to have a distribution such that η 1 > η 2 and η 1 b > η 2 a, and therefore α * > 0. By replacing (21) with h = α * into (20) again,
Q η 1 ,η 2 (α * ) − Q η 1 ,η 2 (0) ≥ α * (η 1 b − η 2 a) > 0,
which contradicts the fact that α * minimizes Q η 1 ,η 2 . It follows that φ is differentiable at zero and φ ′ (0) < 0.
Proof of Theorem 2.2
Proof.
E R π(A, X) I(A = sign(f (X))) X − E R π(A, X) I(A = d * (X)) X = E(R|X, A = 1) − E(R|X, A = −1) I d * (X) = 1 − I sign(f (X)) = 1 ≤ η 1 (X) − η 2 (X) I sign(f (X)) η 1 (X) − η 2 (X) < 0 .
Then taking expectation on both sides we have,
∆R(f ) ≤ E η 1 (X) − η 2 (X) I sign(f (X)) η 1 (X) − η 2 (X) < 0 ≤ E η 1 (X) − η 2 (X) s I sign(f (X)) η 1 (X) − η 2 (X) < 0 1/s ≤ C E ∆Q η 1 (X),η 2 (X) (0)I sign(f (X)) η 1 (X) − η 2 (X) < 0 1/s .
The second inequality follows from the Jensen's inequality. By the conditions of φ, φ is Fisher consistent. Let α * minimizes Q η 1 ,η 2 (α). When sign(f )·(η 1 −η 2 ) < 0, by the definition of Fisher consistent, 0 is between f and α * . The convexity of φ, and hence of Q η 1 ,η 2 , implies that
Q η 1 ,η 2 (0) ≤ max(Q η 1 ,η 2 (f ), Q η 1 ,η 2 (α * )) = Q η 1 ,η 2 (f ).
So we have,
∆R(f ) ≤ C E ∆Q η 1 (X),η 2 (X) (f )I sign(f (X)) η 1 (X) − η 2 (X) < 0 1/s ≤ C(∆R φ,g (f )) 1/s .
Proof of Theorem 2.3
Proof. Following the similar arguments in the proof of Theorem 2.2, we have
∆R(f ) = E η 1 (X) − η 2 (X) I sign(f (X)) η 1 (X) − η 2 (X) < 0 ≤ E η 1 (X) − η 2 (X) s/2 I sign(f (X)) η 1 (X) − η 2 (X) < 0 2/s ≤ E h(η 1 (X) + η 2 (X)) 1/2 ∆Q η 1 (X),η 2 (X) (0) 1/2 I sign(f (X)) η 1 (X) − η 2 (X) < 0 2/s ≤ E h(η 1 (X) + η 2 (X)) 1/2 ∆Q η 1 (X),η 2 (X) (f (X)) 1/2 2/s ≤ E h(η 1 (X) + η 2 (X)) E ∆Q η 1 (X),η 2 (X) (f (X)) 1/s ≤ h(E(η 1 (X) + η 2 (X))) 1/s (∆R φ,g (f )) 1/s .
The second and sixth inequalities follows from the Jensen's inequality, and the fifth follows Cauchy-Schwarz inequality. Notice that
E |R g | π(A, X) X = E(|R g ||X, A = 1) + E(|R g ||X, A = −1) = E(R + g + R − g |X, A = 1) + E(R + g + R − g |X, A = −1) = η 1 (X) + η 2 (X).
So E |Rg| π(A,X) = E(η 1 (X) + η 2 (X)). The desired result follows through the monotonicity of h.
Proof of Theorem 2.4
Proof. Let L(h, b) = |R g |φ(Asign(R g )(h(X) + b))/π(A, X). For simplicity, we denote f Dn,λn , h Dn,λn and b Dn,λn by f n , h n and b n , respectively. By the definition of h Dn,λn and b Dn,λn , we have, for any h ∈ H K and b ∈ R,
P n (L(h n , b n )) ≤ P n (L(h n , b n )) + λ n 2 ||h n || 2 K ≤ P n (L(h, b)) + λ n 2 ||h|| 2 K ,
where P n denotes the empirical measure of the observed data. Then, lim sup n P n (L(h n , b n )) ≤ P(L(h, b)) = R φ,g (h + b) with probability 1. This implies lim sup
n P n (L(h n , b n )) ≤ inf h∈H K ,b∈R R φ,g (h + b) ≤ P(L(h n , b n ))
with probability 1. It suffices to show P n (L(h n , b n )) − P(L(h n , b n )) → 0 in probability. We have a bound for |b n |, | √ λ n b n | ≤ M b , as a condition. We next obtain a bound for ||h n || K . Since P n (L(h n , b n )) + λ n ||h n || 2 K /2 ≤ P n (L(h, b)) + λ n ||h|| 2 K /2, for any h ∈ H K and b ∈ R, we can choose h = 0 and b = 0 to obtain, P n (L(h n , b n )) + λ n ||h n || 2 K /2 ≤ φ(0)P n (|R g |/π(A, X)). We thus have, λ n ||h n || 2 K ≤ 2φ(0)P n (|R g |/π(A, X)) ≤ 2φ(0)M g .
Let M h = 2φ(0)M g . Then the H K norm of √ λ n h n is bounded by M h . Note that the class { √ λ n h : || √ λ n h|| K ≤ M h } is a Donsker class. So { √ λ n (h + b) : || √ λ n h|| K ≤ M h , | √ λ n b| ≤ M b } is also P-Donsker. Consider a function φ λn (u) = √ λ n φ(u/ √ λ n ). φ λn (u)
is a Lipschitz continuous function with the same Lipschitz constant as φ(u). Note that
λ n L(h, b) = |R g | π(A, X) φ λn (A λ n · sign(R g )(h(X) + b)).
Since φ λn (u) is Lipschitz continuous and
|Rg| π(A,X) is bounded, the class { √ λ n L(h, b) : || √ λ n h|| K ≤ M h , | √ λ n b| ≤ M b } is also P-Donsker. Therefore nλ n (P n − P)L(h n , b n ) = O p (1).
Consequently, from nλ n → ∞, P n (L(h n , b n )) − P(L(h n , b n )) → 0 in probability.
Proof of Lemma 2.5
Proof. Fix any 0 < ǫ < 1. Suppose φ is Lipschitz continuous with Lipschitz constant C. Let µ be the marginal distribution of X. Since µ is regular and f * φ,g is measurable, using Lusin's theorem in measure theory, we know that f * φ,g can be approximated by a continuous function
f ′ (x) ∈ C(X ) such that µ(f ′ (x) = f * φ,g (x)) ≤ ǫ 4CM f Mg . Since f * φ,g is between [−M f , M f ], we may limit f ′ (x) ∈ [−M f , M f ] (otherwise, truncate f ′ (x) with upper bound M f and lower bound −M f ). Thus E |R g | π(A, X) φ A · sign(R g )f ′ (X) X = x − E |R g | π(A, X) φ Asign(R g )f * φ,g (X) X = x = η 1 (x) φ(f ′ (x)) − φ(f * φ,g (x)) + η 2 (x) φ(−f ′ (x)) − φ(−f * φ,g (x)) ≤ C η 1 (x) + η 2 (x) |f ′ (x) − f * φ,g (x)|.
The last inequality is due to the fact that φ is Lipschitz continuous. Then, we have,
R φ,g (f ′ ) − R * φ,g = |R φ,g (f ′ ) − R * φ,g | ≤ C η 1 (x) + η 2 (x) |f ′ (x) − f * φ,g (x)|µ(dx) = C E |R g | π(A, X) X = x |f ′ (x) − f * φ,g (x)|I(f ′ (x) = f * φ,g (x))µ(dx) Since |Rg| π(A,X) ≤ M g and both f ′ (x) and f * φ,g (x) are between [−M f , M f ], we have R φ,g (f ′ ) − R * φ,g < ǫ/2.
Since K is universal, there exists a function f ′′ ∈ H K such that ||f ′′ − f ′ || ∞ < ǫ 2CMg . Similarly,
|R φ,g (f ′′ ) − R φ,g (f ′ )| ≤ C η 1 (x) + η 2 (x) |f ′′ (x) − f ′ (x)|µ(dx) = C E |R g | π(A, X) X = x |f ′′ (x) − f ′ (x)|µ(dx) < ǫ/2.
By combining the two inequalities, we have
R T,g (f ′′ ) − R * φ,g < ǫ.
Noting that f ′′ ∈ H K and letting ǫ → 0, we obtain the desired result.
Proof of Proposition 2.6
Proof. When |Rg| π(A,X) is bounded, the excess risk is bounded as argued in Section 2.4.2,
∆R(f ) ≤ 2 M g (∆R φ,g (f )) 1/2 .(22)
Next, we obtain a bound for b f Dn,λn . We use the notations in the proof of Theorem 2.4.
We claim that there is a solution (h n , b n ) such that h n (x i )+b n ∈ [−1, 1] for some i. Suppose that there is another solution (h ′ n , b ′ n ) such that |h n (x i ) + b n | > 1 for all i. Let D 1 = {i :
A_i sign(R_{g,i}) = 1, h′_n(X_i) + b′_n < −1} and D_2 = {i : A_i sign(R_{g,i}) = −1, h′_n(X_i) + b′_n > 1}. Denote α_1 = Σ_{i∈D_1} |R_{g,i}|/π(A_i, X_i) and α_2 = Σ_{i∈D_2} |R_{g,i}|/π(A_i, X_i).
We show that α 1 = α 2 . Otherwise, when α 1 > α 2 , let
δ = min i:h ′ n (X i )+b ′ n <−1 |h ′ n (X i ) + b ′ n |.
Then set h n = h ′ n and b n = b ′ n + (δ − 1). It is easy to check that (h n , b n ) is a better solution than (h ′ n , b ′ n ), which is contradicted with the fact that (h ′ n , b ′ n ) is a solution. Similarly, when α 1 < α 2 , let δ = min
i:h ′ n (X i )+b ′ n >1 |h ′ n (X i ) + b ′ n |.(23)
Then set h n = h ′ n and b n = b ′ n − (δ − 1). Thus (h n , b n ) is a better solution than (h ′ n , b ′ n ). It is a contradiction again. So we have α 1 = α 2 . However, when we set δ as in (23)
, h n = h ′ n and b n = b ′ n − (δ − 1). (h n , b n )
is a solution and satisfies our claim. Now if a solution (h n , b n ) satisfies our claim for subject i 0 , we then have,
|b n | ≤ 1 + |h n (X i 0 )| ≤ 1 + ||h n || ∞ .
Note that ||h|| ∞ ≤ C K ||h|| K . We have, | λ n b n | ≤ λ n + C K λ n ||h n || K .
As in the proof of Theorem 2.4, √ λ n ||h n || K is bounded. Since λ n → 0, and C K is bounded, we have | √ λ n b n | is bounded too. So by Theorem 2.4,
lim n→∞ R φ,g (f Dn,λn ) = inf f ∈H K +{1} R φ,g (f ).(24)
By the argument in Appendix A, the optimal function is
f*_{φ,g}(x) = 0 if η1(x) = η2(x) = 0, and f*_{φ,g}(x) = (η1(x) − η2(x)) / (η1(x) + η2(x)) otherwise.
Clearly, f * φ,g is measurable, and |f * φ,g (x)| ≤ 1. By Lemma 2.5
inf f ∈H K +{1} R φ,g (f ) = R * φ,g .(25)
Combining (22), (24) and (25), we have the desired result.
APPENDIX C. Additional results in the simulation study and data analysis
In this section, we compare the computational cost of AOL with RWL. In the simulation studies, the average running times of the first 10 runs of AOL and RWL with tuned parameters are listed in Table 5 for low dimensional data (p = 5) and in Table 6 for moderate dimensional data (p = 25). AOL is about 5-10 times faster than RWL.
For the real data analysis in Section 4, we averaged the running times in the 10 runs of the first fold partition of cross-validation with tuned parameters. AOL is at least 10 times faster than RWL.
APPENDIX D. Double robustness of AOL on observational data
In the main paper, AOL is mainly applied to randomized clinical trial data. The method can also be used on observational data. In an observational study, we first estimate π(a, x) by, for example, π̂(a, x). Then we estimate g̃(x) by
ĝ(x) = π̂(−1, x) μ̂+1(x) + π̂(+1, x) μ̂−1(x),
where μ̂+1(x) and μ̂−1(x) are estimators of μ+1(x) and μ−1(x), respectively. g̃(x) can also be estimated by weighted regression with weights π̂(−a, x)/π̂(a, x).
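As an illustration of this plug-in construction on observational data, the sketch below fits simple working models (logistic regression for the propensity and separate linear regressions for the conditional means); these model choices are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def estimate_g_hat(X, A, R):
    """Plug-in estimate g_hat(x) = pi_hat(-1,x) mu_hat_{+1}(x) + pi_hat(+1,x) mu_hat_{-1}(x).
    X: (n, p) covariates; A: (n,) treatments in {-1, +1}; R: (n,) outcomes."""
    ps = LogisticRegression().fit(X, (A == 1).astype(int))
    pi_pos = ps.predict_proba(X)[:, 1]                        # pi_hat(+1, x)
    mu_pos = LinearRegression().fit(X[A == 1], R[A == 1]).predict(X)    # mu_hat_{+1}(x)
    mu_neg = LinearRegression().fit(X[A == -1], R[A == -1]).predict(X)  # mu_hat_{-1}(x)
    g_hat = (1 - pi_pos) * mu_pos + pi_pos * mu_neg
    return g_hat, pi_pos
```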
Here we suppress the dependence on the observed data D_n in the notation π̂(a, x), μ̂+1(x), and μ̂−1(x). Suppose that as n approaches infinity, π̂(a, x) →p π̄(a, x), μ̂+1(x) →p μ̄+1(x), and μ̂−1(x) →p μ̄−1(x). When μ̂+1(x) and μ̂−1(x) are consistent, μ̄+1(x) = μ+1(x) and μ̄−1(x) = μ−1(x). When π̂(a, x) is consistent, π̄(a, x) = π(a, x).
Again, for finite sample observational data D n , let f Dn,λn ∈ H K + {1}, i.e. f Dn,λn = h Dn,λn + b Dn,λn , where h Dn,λn ∈ H K and b Dn,λn ∈ R, be a global minimizer of the following optimization problem:
min f =h+b∈H K +{1} 1 n n i=1 |Rĝ ,i | π(A i , X i ) φ A i · sign(Rĝ ,i )f (X i ) + λ n 2 ||h|| 2 K ,(26)
where Rĝ ,i = R i −ĝ(X i ). Here we suppress φ and g from the notations of f Dn,λn , h Dn,λn and b Dn,λn . Note that |r|φ(sign(r)f ) is continuous with respect to r. By the law of large numbers and the continuous mapping theorem,
1 n n i=1 |Rĝ ,i | π(A i , X i ) φ A i · sign(Rĝ ,i )f (X i ) p → E |R g | π(A, X) φ A · sign(R g )f (X) ,
where R_ḡ = R − ḡ(X) and ḡ(x) = π̄(−1, x)μ̄+1(x) + π̄(+1, x)μ̄−1(x) in this section. As in the case of the randomized clinical trial in the main paper, we define the risk function of a treatment regime d as R(d) = E[ R/π(A, X) · I(A ≠ d(X)) ]. For observational data, we define the surrogate φ-risk function:
R_{φ,ḡ}(f) = E[ |R_ḡ|/π̄(A, X) · φ(A · sign(R_ḡ) f(X)) ].   (27)
Similarly, the minimal φ-risk as R * φ,g = inf f R φ,g (f ) and f * φ,g = arg min f R φ,g (f ). The purpose of this section is to investigate the universal consistency of the associated regime of f Dn,λn on observational data. We check the theoretical properties in Section 2.5 for observational data in parallel.
First, let us investigate the conditions for R(sign(f*_{φ,ḡ})) = R(d*). Define
η̃1(x) = E(R+_ḡ | X = x, A = +1) · π(+1, x)/π̄(+1, x) + E(R−_ḡ | X = x, A = −1) · π(−1, x)/π̄(−1, x),
η̃2(x) = E(R+_ḡ | X = x, A = −1) · π(−1, x)/π̄(−1, x) + E(R−_ḡ | X = x, A = +1) · π(+1, x)/π̄(+1, x).   (28)
The φ-risk can be expressed as
R φ,g (f ) = E η 1 (X)φ f (X) +η 2 (X)φ − f (X) .
The condition in Theorem 2.1, i.e., φ ′ (0) exists and φ ′ (0) < 0, only guarantees that f * φ,g (x) has the same sign asη 1 (x) −η 2 (x).
When π̂(a, x) is consistent, it is obvious that η̃1(x) − η̃2(x) = μ+1(x) − μ−1(x). When μ̂+1(x) and μ̂−1(x) are consistent, we have ḡ(x) = π̄(−1, x)μ+1(x) + π̄(+1, x)μ−1(x), and
η̃1(x) − η̃2(x) = [μ+1(x) − ḡ(x)] · π(+1, x)/π̄(+1, x) − [μ−1(x) − ḡ(x)] · π(−1, x)/π̄(−1, x) = μ+1(x) − μ−1(x).
The conditions for R(sign(f * φ,g )) = R(d * ) are (i) eitherμ a (x) orπ(a, x) is consistent; and (ii) φ ′ (0) exists and φ ′ (0) < 0.
For the excess risk bound, we can easily verify that Theorem 2.2 holds for observational data with an additional condition as follows.
Theorem D.1. Assume eitherμ a (x) orπ(a, x) is consistent, φ is convex, φ ′ (0) exists and φ ′ (0) < 0. In addition, suppose that there exist constants C > 0 and s ≥ 1 such that |η 1 −η 2 | s ≤ C s ∆Qη 1 ,η 2 (0),
Then ∆R(f ) ≤ C (∆R φ,g (f )) 1/s . Theorem 2.3 can be modified for observational data as follows by noticing that η 1 (x) +η 2 (x) = E |R g | π(A, X) X = x .
Theorem D.2. Assume eitherμ a (x) orπ(a, x) is consistent, φ is convex, φ ′ (0) exists, and φ ′ (0) < 0. Suppose E |Rg| π(A,X) ≤ M g . In addition, suppose that there exist a constant s ≥ 2 and a concave increasing function h : R + → R + such that |η 1 −η 2 | s ≤ h(η 1 +η 2 )∆Qη 1 ,η 2 (0),
Then ∆R(f ) ≤ (h(M g )) 1/s (∆R φ,g (f )) 1/s .
Thus, all excess risk bounds provided in Section 2.5.2 apply to observational data when eitherμ a (x) orπ(a, x) is consistent.
For universal consistency, the following theorem is just the counterpart of Theorem 2.4 on observational data.
Theorem D.3. Suppose φ is a Lipschitz continuous function. Assume that we choose a sequence λ_n > 0 such that λ_n → 0 and nλ_n → ∞. For any distribution P for (X, A, R) satisfying |R_ĝ|/π̂(A,X) ≤ M < ∞ and |√λ_n b_{D_n,λ_n}| ≤ M_b < ∞ almost everywhere, we have that in probability,
lim_{n→∞} R_{φ,ḡ}(f_{D_n,λ_n}) = inf_{f∈H_K+{1}} R_{φ,ḡ}(f).
The proof follows the idea in the proof of Theorem 2.4. We only show a sketch.
Proof. Let L_n(h, b) = |R_ĝ| φ(A sign(R_ĝ)(h(X) + b)) / π̂(A, X) and L(h, b) = |R_ḡ| φ(A sign(R_ḡ)(h(X) + b)) / π̄(A, X). By the continuous mapping theorem, L_n(h, b) →p L(h, b), so P L_n(h, b) → P L(h, b). For any h ∈ H_K and b ∈ R, we have
P_n(L_n(h_n, b_n)) ≤ P_n(L_n(h_n, b_n)) + (λ_n/2)||h_n||²_K ≤ P_n(L_n(h, b)) + (λ_n/2)||h||²_K.
Then lim sup_n P_n(L(h_n, b_n)) ≤ P(L(h, b)) = R_{φ,ḡ}(h + b) with probability 1. This implies
lim sup_n P_n(L_n(h_n, b_n)) ≤ inf_{h∈H_K, b∈R} R_{φ,ḡ}(h + b)
with probability 1. It suffices to show P_n(L_n(h_n, b_n)) − P(L_n(h_n, b_n)) → 0 in probability, since P L_n(h, b) → P L(h, b). Similarly to the proof of Theorem 2.4, we derive a bound for h_n as
λ_n ||h_n||²_K ≤ 2φ(0) P_n(|R_ĝ|/π̂(A, X)) ≤ 2φ(0) M.
Since φ_{λ_n}(u) is Lipschitz continuous and |R_ĝ|/π̂(A, X) is bounded, the corresponding class of functions {√λ_n L_n(h, b) : ||√λ_n h||_K ≤ M_h, |√λ_n b| ≤ M_b} is also P-Donsker. Therefore √(nλ_n)(P_n − P)L_n(h_n, b_n) = O_p(1). Consequently, from nλ_n → ∞, P_n(L_n(h_n, b_n)) − P(L_n(h_n, b_n)) → 0 in probability.

Following arguments similar to those in Section 2.5.3, for the Huberized hinge loss we have universal consistency of AOL on observational data.
Proposition D.4. Assume eitherμ a (x) orπ(a, x) is consistent. Let K be a universal kernel, and H K be the associated RKHS. Let φ be the Huberized hinge loss function. Assume that we choose a sequence λ n > 0 such that λ n → 0 and nλ n → ∞. For any distribution P for (X, A, R) satisfying |Rĝ| π(A,X) ≤ M < ∞ almost everywhere with regular marginal distribution on X, we have that in probability, lim n→∞ R(sign(f Dn,λn )) = R * .
We may achieve universal consistency for other loss functions as we did in Section 2.5.3. AOL is doubly robust on universal consistency for observational data.
Figure 1: Example of contour of δ(x) on x = (x1, x2)^T. The decision boundary is δ(x) = 0. The optimal treatment is 1 if δ(x) > 0, and −1 otherwise. The decision boundary can be approximated by a linear function, although the contrast function δ(x) is not linear in x.
Table 2: Mean (std) of empirical value functions evaluated on independent test data for 4 simulation scenarios with 5 covariates. The best value function for each scenario and sample size combination is in bold.

                  Scenario 1 (Optimal value 1.001)     Scenario 2 (Optimal value 3.659)
Method            n = 100         n = 400              n = 100         n = 400
ℓ1-PLS            0.974 (0.021)   0.993 (0.006)        3.537 (0.069)   3.549 (0.041)
Q-RF              0.889 (0.053)   0.952 (0.014)        3.459 (0.137)   3.588 (0.023)
AIPWE-CART        0.855 (0.078)   0.917 (0.034)        3.307 (0.211)   3.503 (0.061)
RWL-Linear        0.930 (0.068)   0.978 (0.018)        3.565 (0.109)   3.640 (0.027)
RWL-Gaussian      0.909 (0.077)   0.973 (0.023)        3.516 (0.126)   3.621 (0.042)
AOL-Linear        0.946 (0.051)   0.985 (0.014)        3.546 (0.125)   3.620 (0.030)
AOL-Gaussian      0.907 (0.082)   0.977 (0.023)        3.517 (0.121)   3.621 (0.037)

                  Scenario 3 (Optimal value 0.848)     Scenario 4 (Optimal value 3.237)
Method            n = 100         n = 400              n = 100         n = 400
Q-RF              0.619 (0.053)   0.730 (0.028)        2.898 (0.151)   3.127 (0.039)
AIPWE-CART        0.620 (0.086)   0.740 (0.047)        2.909 (0.171)   3.118 (0.056)
RWL-Gaussian      0.638 (0.070)   0.763 (0.041)        2.894 (0.130)   3.125 (0.061)
AOL-Gaussian      0.650 (0.070)   0.784 (0.041)        2.918 (0.135)   3.152 (0.054)
Table 3: Mean (std) of empirical value functions evaluated on independent test data for 4 simulation scenarios with 25 covariates. The best value function for each scenario and sample size combination is in bold.

                  Scenario 1 (Optimal value 1.001)     Scenario 2 (Optimal value 3.659)
Method            n = 100         n = 400              n = 100         n = 400
ℓ1-PLS            0.960 (0.036)   0.992 (0.007)        3.423 (0.057)   3.531 (0.036)
Q-RF              0.785 (0.081)   0.926 (0.027)        3.070 (0.375)   3.529 (0.047)
AIPWE-CART        0.794 (0.108)   0.904 (0.038)        3.307 (0.211)   3.503 (0.061)
RWL-VS-Linear     0.869 (0.085)   0.973 (0.020)        3.450 (0.177)   3.632 (0.033)
RWL-VS-Gaussian   0.846 (0.110)   0.963 (0.038)        3.399 (0.234)   3.611 (0.049)
AOL-VS-Linear     0.878 (0.082)   0.976 (0.018)        3.434 (0.153)   3.591 (0.044)
AOL-VS-Gaussian   0.861 (0.106)   0.975 (0.039)        3.421 (0.209)   3.616 (0.037)

                  Scenario 3 (Optimal value 0.848)     Scenario 4 (Optimal value 3.237)
Method            n = 100         n = 400              n = 100         n = 400
Q-RF              0.541 (0.041)   0.646 (0.038)        2.678 (0.185)   3.006 (0.074)
AIPWE-CART        0.542 (0.062)   0.705 (0.068)        2.724 (0.198)   3.094 (0.066)
RWL-VS-Gaussian   0.559 (0.067)   0.766 (0.061)        2.716 (0.205)   3.166 (0.057)
AOL-VS-Gaussian   0.560 (0.067)   0.774 (0.070)        2.735 (0.209)   3.168 (0.078)
Table 4: Mean score (standard deviation) on HRSD from the cross-validation procedure using different methods. Lower HRSD score is better. The two best scores for each comparison are in bold.

                  NFZ vs CBASP    NFZ vs COMB     CBASP vs COMB
ℓ1-PLS            16.30 (0.39)    11.20 (0.16)    10.95 (0.09)
Q-RF              16.27 (0.44)    11.05 (0.18)    10.93 (0.09)
AIPWE             16.45 (0.41)    10.97 (0.15)    10.96 (0.14)
RWL-VS-Linear     15.45 (0.37)    11.09 (0.29)    10.88 (0.05)
RWL-VS-Gaussian   16.29 (0.44)    11.33 (0.25)    11.07 (0.28)
AOL-VS-Linear     15.77 (0.37)    11.03 (0.18)    10.90 (0.06)
AOL-VS-Gaussian   16.32 (0.36)    11.21 (0.23)    11.02 (0.16)
Table 5: Running times (in seconds) of AOL and RWL for 4 simulation scenarios on 5-covariate data.

                    n = 100            n = 400
                    AOL      RWL       AOL      RWL
Linear kernel
  Scenario 1        0.010    0.046     0.012    0.068
  Scenario 2        0.011    0.062     0.013    0.073
Gaussian kernel
  Scenario 1        0.040    0.479     0.123    0.731
  Scenario 2        0.078    0.657     0.191    1.012
  Scenario 3        0.104    0.440     0.098    0.888
  Scenario 4        0.099    0.432     0.410    3.557
Table 6: Running times (in seconds) of AOL and RWL with variable selection for 4 simulation scenarios on 25-covariate data.

                    n = 100            n = 400
                    AOL      RWL       AOL       RWL
Linear kernel
  Scenario 1        0.006    0.042     0.010     0.055
  Scenario 2        0.008    0.073     0.012     0.111
Gaussian kernel
  Scenario 1        0.298    1.169     5.660     35.651
  Scenario 2        0.422    2.045     5.561     23.265
  Scenario 3        0.752    2.544     9.487     24.264
  Scenario 4        0.365    1.896     11.831    50.391
Table 7: Running times (in seconds) of AOL and RWL with variable selection on the Nefazodone-CBASP clinical trial data.

                    NFZ vs CBASP         NFZ vs COMB          CBASP vs COMB
                    AOL      RWL         AOL      RWL         AOL      RWL
Linear kernel       0.008    0.638       0.011    0.373       0.007    0.084
Gaussian kernel     25.334   390.773     21.598   283.986     10.445   96.627
References

Ambroise, C. and McLachlan, G. J. "Selection bias in gene extraction on the basis of microarray gene-expression data." Proc. Natl. Acad. Sci., 99(10):6562-6566 (2002).

Athey, S. and Wager, S. "Efficient policy learning." Arxiv, 1702.02896 (2017).

Bang, H. and Robins, J. M. "Doubly Robust Estimation in Missing Data and Causal Inference Models." Biometrics, 61(4):962-973 (2005).

Bartlett, P. L., Jordan, M. I., and McAuliffe, J. D. "Convexity, classification, and risk bounds." Journal of the American Statistical Association, 101(473):138-156 (2006).

Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. "A Limited Memory Algorithm for Bound Constrained Optimization." SIAM Journal on Scientific and Statistical Computing, 16(5):1190-1208 (1995).

Evgeniou, T., Pontil, M., and Poggio, T. "Regularization Networks and Support Vector Machines." Advances in Computational Mathematics, 13(1):1-50 (2000).

Freund, Y. and Schapire, R. E. "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting." Journal of Computer and System Sciences, 55(1):119-139 (1997).

Keller, M., McCullough, J., Klein, D., Arnow, B., Dunner, D., Gelenberg, A., Markowitz, J., Nemeroff, C., Russell, J., Thase, M., Trivedi, M., and Zajecka, J. "A comparison of Nefazodone, the cognitive behavioral-analysis system of psychotherapy, and their combination for the treatment of chronic depression." The New England Journal of Medicine, 342(20):1462-1470 (2000).

Kimeldorf, G. and Wahba, G. "Some results on Tchebycheffian spline functions." Journal of Mathematical Analysis and Applications, 33:82-95 (1971).

Lin, Y. "Support Vector Machines and the Bayes Rule in Classification." Data Mining and Knowledge Discovery, 6:259-275 (2002).

Liu, Y., Wang, Y., Kosorok, M., Zhao, Y., and Zeng, D. "Robust Hybrid Learning for Estimating Personalized Dynamic Treatment Regimens." Arxiv, 1611.02314 (2016). Manuscript.

Marron, J. S., Todd, M. J., and Ahn, J. "Distance-Weighted Discrimination." Journal of the American Statistical Association, 102(480):1267-1271 (2007).

Moodie, E. E. M., Dean, N., and Sun, Y. R. "Q-Learning: Flexible Learning About Useful Utilities." Statistics in Biosciences, 6:223-243 (2014).

Morales, J. L. and Nocedal, J. "Remark on 'Algorithm 778: L-BFGS-B: Fortran Subroutines for Large-scale Bound Constrained Optimization'." ACM Transactions on Mathematical Software, 38(1):7:1-7:4 (2011).

Murphy, S. A. "Optimal dynamic treatment regimes." Journal of the Royal Statistical Society: Series B, 65(2):331-355 (2003).

Nocedal, J. "Updating Quasi-Newton Matrices with Limited Storage." Mathematics of Computation, 35(151):773-782 (1980).

Qian, M. and Murphy, S. A. "Performance guarantees for individualized treatment rules." The Annals of Statistics, 39(2):1180-1210 (2011).

Robins, J. M. "Optimal Structural Nested Models for Optimal Sequential Decisions." In Lin, D. and Heagerty, P. (eds.), Proceedings of the Second Seattle Symposium in Biostatistics, volume 179 of Lecture Notes in Statistics, 189-326. Springer New York (2004).

Rubin, D. B. "Estimating causal effects of treatments in randomized and nonrandomized studies." Journal of Educational Psychology, 66(5):688-701 (1974).

Schmidt, M. "Graphical model structure learning with l1-regularization." Ph.D. thesis, The University of British Columbia (2010).

Shaw, A. T., Kim, D.-W., Mehra, R., Tan, D. S., Felip, E., Chow, L. Q., Camidge, D. R., Vansteenkiste, J., Sharma, S., De Pas, T., Riely, G. J., Solomon, B. J., Wolf, J., Thomas, M., Schuler, M., Liu, G., Santoro, A., Lau, Y. Y., Goldwasser, M., Boral, A. L., and Engelman, J. A. "Ceritinib in ALK-Rearranged Non-Small-Cell Lung Cancer." New England Journal of Medicine, 370(13):1189-1197 (2014).

Sriperumbudur, B. K. and Lanckriet, G. R. "On the Convergence of the Concave-Convex Procedure." In Bengio, Y., Schuurmans, D., Lafferty, J. D., Williams, C. K. I., and Culotta, A. (eds.), Advances in Neural Information Processing Systems 22, 1759-1767. Curran Associates, Inc. (2009).

Steinwart, I. and Christmann, A. Support Vector Machines. Springer (2008).

Taylor, J. M., Cheng, W., and Foster, J. C. "Reader reaction to 'A robust method for estimating optimal treatment regimes' by Zhang et al. (2012)." Biometrics, 71(1):267-271 (2015).

Vapnik, V. N. The Nature of Statistical Learning Theory. New York: Springer (1995).

Wang, L., Zhu, J., and Zou, H. "Hybrid huberized support vector machines for microarray classification and gene selection." Bioinformatics, 24(3):412-419 (2008).

Zhang, B., Tsiatis, A. A., Davidian, M., Zhang, M., and Laber, E. "Estimating optimal treatment regimes from a classification perspective." Stat, 1:103-114 (2012a).

Zhang, B., Tsiatis, A. A., Laber, E. B., and Davidian, M. "A Robust Method for Estimating Optimal Treatment Regimes." Biometrics, 68(4):1010-1018 (2012b).

Zhao, Y., Zeng, D., Rush, A. J., and Kosorok, M. R. "Estimating Individualized Treatment Rules Using Outcome Weighted Learning." Journal of the American Statistical Association, 107(499):1106-1118 (2012).

Zhou, X. and Kosorok, M. R. "Causal nearest neighbor rules for optimal treatment regimes." ArXiv:1711.08451 (2017).

Zhou, X., Mayer-Hamblett, N., Khan, U., and Kosorok, M. R. "Residual Weighted Learning for Estimating Individualized Treatment Rules." Journal of the American Statistical Association, 112(517):169-187 (2017).

Zou, H. and Hastie, T. "Regularization and variable selection via the Elastic Net." Journal of the Royal Statistical Society, Series B, 67:301-320 (2005).
| []
|
[
"Algebraic Solution of the Harmonic Oscillator With Minimal Length Uncertainty Relations",
"Algebraic Solution of the Harmonic Oscillator With Minimal Length Uncertainty Relations"
]
| [
"K Gemba \nDepartment of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia\n",
"Z T Hlousek \nDepartment of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia\n",
"Z Papp \nDepartment of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia\n"
]
| [
"Department of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia",
"Department of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia",
"Department of Physics and Astronomy\nCalifornia State University\n90840Long BeachCalifornia"
]
| []
| In quantum mechanics with minimal length uncertainty relations the Heisenberg-Weyl algebra of the onedimensional harmonic oscillator is a deformed SU (1, 1) algebra. The eigenvalues and eigenstates are constructed algebraically and they form the infinite-dimensional representation of the deformed SU (1, 1) algebra. Our construction is independent of prior knowledge of the exact solution of the Schrödinger equation of the model. The approach can be generalized to the D-dimensional oscillator with non-commuting coordinates. | null | [
"https://arxiv.org/pdf/0712.2078v1.pdf"
]
| 117,822,478 | 0712.2078 | 954cca2eba0682289362aeeb9abc5acb6cecee56 |
Algebraic Solution of the Harmonic Oscillator With Minimal Length Uncertainty Relations
13 Dec 2007 (Dated: February 2, 2008)
K Gemba
Department of Physics and Astronomy
California State University
90840Long BeachCalifornia
Z T Hlousek
Department of Physics and Astronomy
California State University
90840Long BeachCalifornia
Z Papp
Department of Physics and Astronomy
California State University
90840Long BeachCalifornia
Algebraic Solution of the Harmonic Oscillator With Minimal Length Uncertainty Relations
13 Dec 2007 (Dated: February 2, 2008). PACS numbers: 02.20.Uw, 02.40.Gh, 03.65.Ca, 04.40.-m, 04.60.Ds
In quantum mechanics with minimal length uncertainty relations the Heisenberg-Weyl algebra of the onedimensional harmonic oscillator is a deformed SU (1, 1) algebra. The eigenvalues and eigenstates are constructed algebraically and they form the infinite-dimensional representation of the deformed SU (1, 1) algebra. Our construction is independent of prior knowledge of the exact solution of the Schrödinger equation of the model. The approach can be generalized to the D-dimensional oscillator with non-commuting coordinates.
I. INTRODUCTION
Uncertainty relations are one of the pillars of quantum physics. They are directly related to the basic commutator relations and to quantum equations of motion. In ordinary quantum mechanics the basic commutator between the position and momentum operators in one dimension is given by (we use units such that = 1),
[x, p] = i .(1)
In this paper we shall consider the quantum mechanics where Eq. (1) is modified or, in modern language, deformed. Modified uncertainty relations appear in many different areas of physics, sometimes directly and sometimes in disguise. For example, in a system such as a complex molecule, there are length scales below which the physics is complicated and some effective description is sufficient. It is possible to capture some of the effective physics by a modification of the uncertainty relations. Rotational and vibrational states of molecules and deformed nuclei can be described using models with deformed basic commutators. Similar applications also appear in the physics of deformed heavy nuclei. Another area where modified basic commutators play some role is quantum optics. Various entangled and squeezed coherent states are modeled successfully using this approach. Even more, such states can be experimentally realized. Entangled states are also of importance in quantum computing. From our perspective it is most exciting that quantum theory of gravity requires that basic commutators of the quantum mechanics be altered. This seems to be the case in both, the loop quantum gravity and in the string theory. Very energetic test particles for probing very small scales on the order of the Planck length disturb gravitationally the very space-time they are probing. The effect is captured as a modification of the position-momentum uncertainty relation. Modified uncertainty relations require modified basic commutators and imply the existence of minimal length and minimal momentum. There are also examples of modified special relativity theory with invariant minimal length or minimal momentum, or both. Uncertainty relations have a profound consequence on physics. For example, position-momentum uncertainty relations in the ordinary quantum mechanics,
$$\Delta x\,\Delta p \ge \frac{1}{2} \tag{2}$$
reflect directly the basic commutator relation, Eq. (1). In the ordinary quantum mechanics it is possible to construct states with zero uncertainty in position or momentum (of course, not simultaneously). In other words, the space-time is a sharp continuum. Within the framework of the ordinary quantum mechanics the usual uncertainty relations imply that it should be possible to measure, at least in principle, the position and the momentum with absolute certainty, of course not at the same time! On the other hand if the theory is endowed, for example, with minimal length by modifying uncertainty relations, then the position is no longer a viable observable and is called fuzzy. We loose the Schrödinger equation as a differential or integral equation in spatial coordinates. Coordinate representation of ordinary quantum mechanics becomes some kind of effective description valid at sufficiently large scale.
There is a profound effect on the spectrum of states and on the scattering properties of systems in the modified quantum theory. If the theory is endowed with both, minimal length and minimal momentum we loose both, coordinate and the momentum representations, so we are left with representationfree operator methods. Both spectral and scattering properties of systems deviate greatly from that described by the ordinary quantum mechanics. Perhaps some experiments can be devised to look for and to measure such discrepancies an to determine the size of deformation parameters. The purpose of this paper is to study the energy eigenvalues and eigenvectors of the one-dimensional and D-dimensional isotropic harmonic oscillator model in the quantum mechanics with minimal length uncertainty relations. In Ref. [1], the energy eigenvalues of the one-dimensional harmonic oscillator with minimal length uncertainty relations were calculated by solving the Schrödinger equation in momentum space. In Ref. [2] it was shown, again by solving the Schrödinger equation in momentum space, that the wave-functions are given by Gegenabuer polynomials [3]. In Ref. [4] ladder operators for the model were constructed by using the knowledge of the exact wave functions and energy eigenvalues and the recursion relations of the Gegenbauer polynomials.
In this paper we present a complete solution of the onedimensional harmonic oscillator in quantum theory with minimal length uncertainties. We make no use of the knowledge of exact energy eigenvalues and wave-functions. We show that the Heisenberg-Weyl algebra of the model is a deformed SU (1, 1) algebra. We arrive at this algebra by showing that the model is equivalent to a symmetric Pöschl-Teller model.
Operators that realize this deformed SU (1, 1) serve as the ladder operators for the harmonic oscillator model in the minimal length quantum theory. We can repeat the construction for the isotropic oscillator in D-dimensions. It is worth mentioning that the D-dimensional quantum mechanics with minimal length also features non-commuting coordinates.
The paper is organized as follows. In Section II, following [1], we give a brief overview of the modified uncertainty relations with minimal length. We also define the harmonic oscillator model in this framework. We show that a straightforward factorization method, well familiar from quantum mechanics textbooks, does not work because the Heisenberg-Weyl algebra of the model is not closed (it requires an infinite number of operators for closure). In Section III we are inspired by the transformation found in Ref. [1] that maps the particle momentum into the particle wave-vector and demonstrates explicitly that plane waves have minimal wavelength. Using this transformation we calculate the Green's operator of the harmonic oscillator in the minimal length quantum mechanics and find that it exactly equals to the Green's operator of the Pöschl-Teller model. This demonstrates that two systems are equivalent. Next, as explained in Ref. [5], we transform the Pöschl-Teller model into its natural coordinates. Described in natural coordinates, the particle moving in the symmetric Pöschl-Teller potential appears as if it is exhibiting harmonic oscillation of the ordinary theory but with energy dependent frequency. It is essential that natural coordinates are such that the Heisenberg-Weyl algebra is closed. In Section IV we construct a pair of mutually adjoint ladder operators for the symmetric Pöschl-Teller model in its natural coordinates. We use no knowledge of the exact solution of the model. The ladder operators for the Pöschl-Teller model were constructed previously in Ref. [6] but the knowledge of the exact solution of the Schrödinger equation was an essential part of the construction. Construction of operators presented in Refs. [6] and [4] are essentially identical. The ladder operators of the symmetric Pöschl-Teller model satisfy a deformed version of the Heisenberg-Weyl algebra that also happens to be some particular deformation of SU (1, 1) algebra. We use this deformed algebra to calculate energy eigenstates and eigenvalues exactly. The energy spectrum and wave-functions agree with previous results. In Section V we discuss the physics behind the construction of ladder operators. In particular we explain where the prominent features such as minimal length of the deformed quantum mechanics wind up in the framework of the Pöschl-Teller model. In Section VI we compare our results with prior works. We show that the algebra of ladder operators we constructed can be thought as a deformation of a simple ordinary Bose oscillator algebra or as a deformation of some SU (1, 1), that itself is a deformation of an ordinary Bose oscillator algebra. We also point out that the dynamical symmetry group of the system is just the dynamical SU (1, 1) algebra of the ordinary Bose oscillator constructed from the quadratic combination of Bose oscillators. The reason that the dynamical algebra is unchanged is related to the fact that the deformations do not mix states of different parity. In Section VII we consider the D-dimensional isotropic harmonic oscillator model [2]. We show that it can be ana-lyzed the same way as the one-dimensional model and that the D-dimensional isotropic harmonic oscillator model in a noncommutative quantum mechanics with minimal length uncertainty relations is in fact equivalent to a generalized Pöschl-Teller model. The appearance of the Pöschl-Teller potential is related to the quadratic form of the non-relativistic kinetic energy operator. 
Finally, in Section VIII we summarize our results, describe possible generalizations and consider some future directions.
II. THE MINIMAL LENGTH UNCERTAINTY RELATIONS IN ONE-DIMENSION AND THE HARMONIC OSCILLATOR MODEL
As described in the introduction, there are number of reasons to consider modified uncertainty relations in quantum mechanics. Following [1], we consider a simple deformation of the basic commutator (1), that implies the existence of minimal length uncertainty. Let x and p be the position and momentum operators, respectively, and let us assume that they obey the basic commutator
$$[x, p] = i\left(1 + \beta p^2\right). \tag{3}$$
On dimensional grounds, √ β is measured in units of length. The ordinary quantum mechanics can be considered as a limit of the deformed theory where β tends to zero. Formally, operators x and p are hermitian but, as shown in [1], x is not self-adjoint. The operator x cannot be diagonalized but it does have real expectation values. It also has a one-parameter class of self-adjoint extensions. To obtain information on the position, the best thing we can do is to construct states such that for these states the uncertainty of the operator x is minimal. A symmetric operator with minimal uncertainty states and real expectation values is called a fuzzy observable and the corresponding minimal uncertainty states are some coherent states.
The modified commutator (3) implies a modification of the uncertainty relations. They are given by
$$\Delta x\,\Delta p \ge \frac{1}{2}\,\bigl|\langle [x, p] \rangle\bigr|\,, \tag{4}$$
where the angle bracket denotes the expectation value. Uncertainties are defined by the usual expressions. For an operator O we have ⟨O⟩ = ⟨ψ|O|ψ⟩ and (ΔO)² = ⟨ψ|(O − ⟨O⟩)²|ψ⟩ = ⟨O²⟩ − ⟨O⟩². With the modified basic commutator it follows
$$\Delta x\,\Delta p \ge \frac{1}{2}\left(1 + \beta\,\Delta p^2 + \beta\,\langle p\rangle^2\right). \tag{5}$$
The uncertainty relation (5) is saturated when the two sides are equal. The quadratic term present on the right hand side implies that there exists a minimal uncertainty in the position. The smallest possible uncertainty in the position occurs for sates that have zero average momentum, p = 0. Then,
$$\Delta x_{\min} = \sqrt{\beta}\,. \tag{6}$$
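As a quick illustration, the minimum in Eq. (6) can be checked symbolically by minimizing the saturated bound (5) at ⟨p⟩ = 0 over Δp. This is only a sketch; the variable names are chosen for convenience and are not from the paper.

```python
# Minimize the saturated uncertainty bound (5) at <p> = 0 over the momentum uncertainty.
import sympy as sp

dp, beta = sp.symbols('dp beta', positive=True)
dx = (1 + beta * dp**2) / (2 * dp)                 # saturated Eq. (5) with <p> = 0
crit = [s for s in sp.solve(sp.diff(dx, dp), dp) if s.is_positive]
print(crit)                                        # [1/sqrt(beta)]
print(sp.simplify(dx.subs(dp, crit[0])))           # sqrt(beta), i.e. Eq. (6)
```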
We will explore the implications of the presence of absolutely the smallest possible resolution of distance within the context of a harmonic oscillator model. The harmonic oscillator is probably the most widely studied and used example in all of physics, quantum and classical. It is a feature-rich model, still simple enough that it can be solved exactly by a variety of methods. The fact that it is also a model of the 0 + 1 dimensional field theory makes it even more attractive in the present context. It is defined by the Hamiltonian
$$H = \frac{1}{2}\,p^2 + \frac{\omega^2}{2}\,x^2\,. \tag{7}$$
In the ordinary quantum mechanics, the spectrum and states can be constructed, for example, by representing the Heisenberg-Weyl algebra using a pair of operators, a and a†, that satisfy the commutator relation [a, a†] = 1. This commutator relation is equivalent to the basic commutator [x, p] = i. Then, [H, a] = −ωa and [H, a†] = ωa†. The spectral energies are given by E_n = ω(n + 1/2) where n = 0, 1, . . ., and states are constructed as |n⟩ ∝ (a†)^n|0⟩. The ground state is defined by a|0⟩ = 0. Furthermore, the states are classified by the dynamical SU(1,1) symmetry algebra constructed as follows. Let S_+ = (1/2)a†², S_- = S_+† and S_0 = (1/4)(aa† + a†a). Operators S_± and S_0 satisfy the algebra [S_+, S_-] = −2S_0 and [S_0, S_±] = ±S_±. The Hamiltonian is given by H = 2ωS_0. The spectrum is characterized by a Bargman index k which determines the quadratic Casimir operator C of the dynamical SU(1,1) algebra. We have C = S_0² − (1/2)(S_+S_- + S_-S_+) = k(k − 1). For the ordinary oscillator C = −3/16 and it corresponds to k = 1/4 and k = 3/4. The space of states of the oscillator forms a reducible representation of the dynamical symmetry group and splits into two subspaces, each forming an infinite-dimensional, irreducible representation of SU(1,1). Even parity oscillator states correspond to Bargman index k = 1/4 and odd parity oscillator states correspond to Bargman index k = 3/4. This construction and characterization of the oscillator states is possible due to three facts:
1) The states |n ∝ (a † ) n |0 form a basis of the Hilbert space on which the commutator [a, a † ] is diagonal.
2) The Heisenberg-Weyl algebra of the oscillator, given by commutators [H, x] = −ip and [H, p] = iω 2 x is closed. Closure of the algebra means that multiple commutators involving the Hamiltonian do not involve any new operators in addition to x and p.
3) Operators x and p and operators a and a † are related by a linear transformation.
A few comments are in order here. In the ordinary quantum mechanics the basic commutator [x, p] = i is diagonal so it is always possible to choose [a, a†] to be diagonal. This means that 1) can be trivially satisfied. It is sufficient that operators a and a† in 1) be related to operators a and a† in 3) by a SU(1,1) transformation. The SU(1,1) here is the dynamical symmetry of the model and it is realized linearly. In the case of the harmonic oscillator the commutator in 1) takes the simplest possible form, because it is a unit operator. This is achieved by using the simple factorization based on the transformation
$$x = \frac{1}{\sqrt{2\omega}}\,(a + a^\dagger)\,, \qquad p = -i\sqrt{\frac{\omega}{2}}\,(a - a^\dagger)\,.$$
We would like to arrive at the description of the oscillator in the quantum theory with minimal uncertainty relations that is parallel to that of the oscillator in ordinary quantum mechanics. As we show in this paper, this is possible, but highly nontrivial.
The quantization of the system is essentially equivalent to finding a basis set in the Hilbert space that simultaneously diagonalizes the commutator and the Hamiltonian. In the ordinary quantum mechanics the basic commutator is diagonal in any basis because it is proportional to a unit operator, Eq. (1). Hence, any basis that diagonalizes the Hamiltonian will do. In the deformed quantum mechanics this is no longer the case.
The deformation of the basic commutator has a profound effect on the Heisenberg-Weyl algebra. Consider the two commutators
$$[H, x] = -ip - i\beta p^3\,, \qquad [H, p] = i\omega^2 x + \omega^2\beta p + i\omega^2\beta\, x p^2 + \omega^2\beta^2 p^3\,,$$
valid for the oscillator in the deformed quantum mechanics. Clearly, this algebra is not a closed algebra. In addition to operators x and p, new operators that are quadratic and cubic in x and p appear. These higher powers of operators appear in the algebra precisely because of the modification of the basic commutator in Eq. (3). Computing additional commutators such as [H, [H, x]] and [H, [H, p]] we find that higher and higher powers of the basic operators enter. This simply means that the Heisenberg-Weyl algebra of the model is not closed. This makes the application of the operator factorization method highly nontrivial.
III. EQUIVALENCE OF THE ONE-DIMENSIONAL HARMONIC OSCILLATOR WITH MINIMAL LENGTH UNCERTAINTY RELATION AND THE PÖSCHL-TELLER MODEL
The modified basic commutator (3) implies that there is a minimal length below which it is not possible to resolve distances. The free particle states are still plane waves but they exhibit a minimal wavelength, λ_min = 4√β. The dispersion relation of the free particle of mass m found in Ref. [1] is given by
$$E_{\text{free particle}} = \frac{1}{2m\beta}\,\tan^2\!\left(\frac{2\pi\sqrt{\beta}}{\lambda}\right). \tag{8}$$
The transformation that maps the free particle energy eigenvalue E = p²/2m into the wave-vector representation is given by
$$p = \frac{1}{\sqrt{\beta}}\,\tan\!\sqrt{\beta}\,\rho\,, \tag{9}$$
where ρ = 2π/λ is the particle wave-vector. It was also shown in Ref. [1] that the commutator relation (3) can be realized in the momentum representation by taking the position operator in the momentum representation as
$$x = i\left(1 + \beta p^2\right)\frac{d}{dp} + i\gamma p\,. \tag{10}$$
The momentum operator p acts simply as multiplication, and γ is some parameter that can be chosen freely [2]. Operators x and p are formally hermitian with respect to a measure
$$d\mu(p) = \frac{dp}{\left(1 + \beta p^2\right)^{1 - \gamma/\beta}} \tag{11}$$
on the (−∞ ≤ p ≤ ∞) interval. In Refs. [1] and [2] the representation given by Eq. (10) was used to formulate and solve the Schrödinger equation for the problem,
Hψ(p) = Eψ(p) .(12)
They found that the states are labeled by a single quantum number n = 0, 1, . . . , . The wave functions ψ(p) are essentially given by Gegenbauer polynomials and E is a quadratic function of n.
In momentum space representation the kinetic energy in the Schrödinger equation appears as some kind of potential energy term. In fact, Eqs. (9) and (8) are very suggestive. The kinetic energy term in the Schrödinger equation has an appearance of the potential of the symmetric Pöschl-Teller model. To exploit this relationship we calculate the Green's function G(z) of the oscillator and show that it equals exactly to the Green's function of the Pöschl-Teller model. This establishes the the equivalence of the two models.
Let z be a complex number. The Green's function is given formally by
$$G(z) = (z - H)^{-1}. \tag{13}$$
Let Ψ(p) and Φ(p) be two arbitrary state functions of the oscillator in the deformed quantum mechanics. We now calculate the matrix element of G⁻¹(z)
$$\langle\Phi|G^{-1}(z)|\Psi\rangle = \int_{-\infty}^{\infty} d\mu(p)\, \Phi^*(p)\left[z - \frac{1}{2}\left(p^2 + \omega^2 x^2\right)\right]\Psi(p)\,. \tag{14}$$
Making first the variable change given by Eq. (9) and then performing a similarity transformation that removes the parameter γ, Ψ = Jψ with J = (cos√β ρ)^{γ/β}, we obtain
$$\langle\Phi|G^{-1}(z)|\Psi\rangle = \int_{-\pi/2\sqrt{\beta}}^{\pi/2\sqrt{\beta}} d\rho\; \phi^*(\rho)\left(z - H'\right)\psi(\rho)\,, \tag{15}$$
where
$$H'\!\left(i\frac{d}{d\rho},\, \rho\right) = J^{-1}\, H\!\left(i\frac{d}{d\rho},\, \rho\right) J = -\frac{\omega^2}{2}\,\frac{d^2}{d\rho^2} + \frac{1}{2\beta}\,\tan^2\!\sqrt{\beta}\,\rho\,. \tag{16}$$
The Hamiltonian H ′ is the Hamiltonian of the symmetric Pöschl-Teller model. We note that the deformed commutator (3) becomes simply the commutator of the ordinary quantum mechanics.
$$\left[\,i\frac{d}{d\rho}\,,\; \rho\,\right] = i\,. \tag{17}$$
We can bring the Hamiltonian H′ to a standard form of the Pöschl-Teller model by rescaling the variable and defining new constants. Define
$$\sqrt{\beta}\,\rho = \alpha x\,, \qquad \alpha^2 = \omega^2\beta\,, \qquad \frac{1}{\beta} = \frac{\omega^2}{\alpha^2} = \alpha^2\,\nu(\nu - 1) \tag{18}$$
and the measure becomes $\int_{-\pi/2\sqrt{\beta}}^{\pi/2\sqrt{\beta}} d\rho = \frac{\alpha}{\sqrt{\beta}}\int_{-\pi/2\alpha}^{\pi/2\alpha} dx$. We also introduce p = −i d/dx such that [x, p] = i; this is compatible with Eq. (17). Then the Hamiltonian reads
$$H'(p, x) = \frac{1}{2}\,p^2 + \frac{\alpha^2}{2}\,\frac{\nu(\nu-1)}{\cos^2\alpha x} - \frac{1}{2\beta} = H_{SPT}(x, p) - \frac{1}{2\beta}\,, \tag{19}$$
where H SPT is the Hamiltonian of the symmetric Pöschl-Teller model in standard form.
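The rewriting in Eqs. (18)-(19) can be spot-checked numerically. The sketch below uses arbitrary sample values for β and ω (they are not from the paper) and verifies that ν from Eq. (33) satisfies ν(ν−1) = 1/(β²ω²) and that the kinetic term of the deformed theory equals the shifted Pöschl-Teller potential.

```python
# Numerical spot check of Eqs. (18)-(19): the tangent kinetic term equals the
# Poschl-Teller potential shifted by -1/(2*beta).
import numpy as np

beta, omega = 0.37, 1.9                                   # arbitrary sample values
alpha = omega * np.sqrt(beta)                              # Eq. (18)
nu = 0.5 * (1 + np.sqrt(1 + 4 / (beta**2 * omega**2)))     # Eq. (33)

assert abs(nu * (nu - 1) - 1 / (beta**2 * omega**2)) < 1e-12
for x in np.linspace(-1.4, 1.4, 7) / alpha:                # stay inside the well
    lhs = np.tan(alpha * x) ** 2 / (2 * beta)
    rhs = 0.5 * alpha**2 * nu * (nu - 1) / np.cos(alpha * x) ** 2 - 1 / (2 * beta)
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
print("Eq. (19) identity holds at the sampled points")
```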
The symmetric Pöschl-Teller model has been extensively studied before. It is exactly solvable by a variety of methods. In Ref. [6] it was shown that its Heisenberg-Weyl algebra is a deformed SU (1, 1); see also [7].
In Ref. [5], it was shown that a very general potential that supports bound states can always be reformulated in terms of some so-called natural variables with the property that motion looks like the motion in the harmonic oscillator potential with energy dependent frequency. In these natural coordinates coherent states considered in [5] obey essentially a classical equation of motion of the harmonic oscillator. These natural coordinates no longer satisfy canonical commutator relations of the ordinary quantum mechanics. However, the basic commutator between the natural coordinate and the conjugate momenta does not imply any limit on uncertainties because its character is different from deformation given by Eq. (3). The Heisenberg-Weyl algebra of the model expressed in natural coordinates is essentially the Heisenberg-Weyl algebra of oscillator in ordinary quantum mechanics and it is closed.
The natural coordinates for the Pöschel-Teller model used in [5] are defined as follows:
$$y = \sin\alpha x\,, \qquad k = \frac{\alpha}{2}\,\{\cos\alpha x,\, p\} = \alpha\cos\alpha x\; p + i\,\frac{\alpha^2}{2}\,\sin\alpha x\,. \tag{20}$$
In natural coordinates the matrix elements of the operator G⁻¹(z) are given by
$$\langle\Phi|G^{-1}(z)|\Psi\rangle = \frac{1}{\sqrt{\beta}}\int_{-1}^{1} d\mu(y)\, \phi^*(y)\left[z' - H_{SPT}(k, y)\right]\psi(y)\,, \tag{21}$$
where the new measure is dμ(y) = dy/√(1 − y²). We have also made a shift of the energy variable, z′ = z + 1/(2β). In natural variables the Hamiltonian reads
$$H_{SPT}(k, y) = \frac{1}{2\alpha^2}\,\frac{1}{1 - y^2}\left(k - i\,\frac{\alpha^2}{2}\right)^2 + \frac{\alpha^2}{2}\,\frac{\nu(\nu-1)}{1 - y^2}\,. \tag{22}$$
In the next section we will show that the Heisenberg-Weyl algebra of the symmetric Pöschl-Teller model in natural coordinates is a closed algebra and that it can be used to construct states and spectral energies algebraically.
IV. THE HEISENBERG-WEYL ALGEBRA OF THE SYMMETRIC PÖSCHL-TELLER MODEL
In this section we construct the spectral algebra for the Hamiltonian H_SPT(k, y), Eq. (22). A straightforward calculation yields (for the ease of writing we drop the subscript on H_SPT in what follows):
$$[y, k] = i\alpha^2\left(1 - y^2\right), \qquad [H, y] = -ik\,, \qquad [H, k] = i\alpha^2\left(2yH - \frac{\alpha^2}{4}\,y - ik\right). \tag{23}$$
Note that the right-hand side of Eq. (23) depends on the Hamiltonian H, reflecting the energy dependence of the oscillating frequency. This also means that the resulting algebra is deformed. To find the correct combination of operators y and k that serve as spectral operators is nontrivial. The essence of the structure of ladder operators can be guessed from the work in Ref. [5]. We can formalize the calculation as follows. Note that the last two equations of (23) can be written in a matrix form,
$$H\,(y,\; k) = (y,\; k)\begin{pmatrix} H & i\alpha^2\left(2H - \alpha^2/4\right) \\ -i & H + \alpha^2 \end{pmatrix}. \tag{24}$$
The matrix on the right hand side can be diagonalized by a simple similarity transformation,
$$M = J\, M_d\, J^{-1}\,, \qquad \text{where} \qquad J = \begin{pmatrix} -i\alpha\left(\sqrt{2H} + \alpha/2\right) & i\alpha\left(\sqrt{2H} - \alpha/2\right) \\ 1 & 1 \end{pmatrix}. \tag{25}$$
The diagonal matrix M_d reads
$$M_d = \begin{pmatrix} H + \alpha^2/2 - \alpha\sqrt{2H} & 0 \\ 0 & H + \alpha^2/2 + \alpha\sqrt{2H} \end{pmatrix}. \tag{26}$$
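The eigenvalues quoted in Eq. (26) are easy to confirm. In the minimal check below H is treated as a positive number rather than an operator (which is legitimate since H commutes with itself in the matrix M); the sample values are arbitrary.

```python
# Verify that the matrix of Eq. (24) has the eigenvalues listed in Eq. (26).
import numpy as np

H, alpha = 2.7, 1.3                                       # arbitrary sample values
M = np.array([[H, 1j * alpha**2 * (2 * H - alpha**2 / 4)],
              [-1j, H + alpha**2]])
eig = np.sort(np.linalg.eigvals(M).real)
expected = np.sort([H + alpha**2 / 2 - alpha * np.sqrt(2 * H),
                    H + alpha**2 / 2 + alpha * np.sqrt(2 * H)])
assert np.allclose(eig, expected)
print(eig, expected)
```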
The spectral operators are essentially the two combinations given by (y, k)J. It follows immediately that the following two operators can serve as spectral ladder operators:
$$a = \frac{1}{\alpha^2}\left[\,y\left(\alpha\sqrt{2H} + \frac{\alpha^2}{2}\right) + ik\,\right], \qquad
a^\dagger = \frac{1}{\alpha^2}\left[\left(\alpha\sqrt{2H} + \frac{\alpha^2}{2}\right)y - ik\,\right]. \tag{27}$$
The operators a and a† obey an algebra that can be used to construct the spectrum of the system,
$$[H, a] = -a\left(\alpha\sqrt{2H} - \frac{\alpha^2}{2}\right) = -\left(\alpha\sqrt{2H} + \frac{\alpha^2}{2}\right)a\,, \qquad
[H, a^\dagger] = \left(\alpha\sqrt{2H} - \frac{\alpha^2}{2}\right)a^\dagger = a^\dagger\left(\alpha\sqrt{2H} + \frac{\alpha^2}{2}\right). \tag{28}$$
An important side benefit of this construction is that we can also evaluate any commutator of the form [F(H), y] and [F(H), k] where F(H) is an analytic function of the Hamiltonian. We simply read off the needed relations from the matrix equation F(H)(y, k)J = (y, k)J F(M_d). Alternatively, we can also use this result to evaluate any commutators of the form [F(H), a] and [F(H), a†]. In fact, the second set of equalities in (28) was determined this way.
The calculation of the commutator and the anti-commutator of a and a† is straightforward but lengthy. It is best to calculate the following two products first
$$aa^\dagger = -\,\nu(\nu-1)\,\frac{\frac{\sqrt{2H}}{\alpha} + 1}{\frac{\sqrt{2H}}{\alpha}} + \left(\frac{\sqrt{2H}}{\alpha} + 1\right)^2, \qquad
a^\dagger a = -\,\nu(\nu-1)\,\frac{\frac{\sqrt{2H}}{\alpha}}{\frac{\sqrt{2H}}{\alpha} - 1} + \left(\frac{\sqrt{2H}}{\alpha}\right)^2. \tag{29}$$
Then we obtain
$$[a, a^\dagger] = f(H)\,, \qquad f(H) = 1 + 2\,\frac{\sqrt{2H}}{\alpha} + \frac{\nu(\nu-1)}{\frac{\sqrt{2H}}{\alpha}\left(\frac{\sqrt{2H}}{\alpha} - 1\right)}\,,$$
$$\{a, a^\dagger\} = -\,\nu(\nu-1)\,\frac{2\left(\frac{\sqrt{2H}}{\alpha}\right)^2 - 1}{\frac{\sqrt{2H}}{\alpha}\left(\frac{\sqrt{2H}}{\alpha} - 1\right)} + \frac{1}{2}\left[\left(2\,\frac{\sqrt{2H}}{\alpha} + 1\right)^2 + 1\right]. \tag{30}$$
The algebra also has a quadratic invariant Casimir operator C, [8] and [9]. It equals C = ν(ν − 1) and it codes the strength of the potential.
The spectral algebra of the model is given by Eqs. (28) and (30). It is a two-function deformed SU (1, 1) algebra. If we make the identifications
a † ↔ S + , a ↔ S − , H ↔ S 0 ,(31)
we have
[S + , S − ] =f (S 0 ) [S 0 , S + ] =g(S 0 )S + [S 0 , S − ] = − S − g(S 0 ) ,(32)
where g(S₀) = α√(2S₀) − α²/2. Now we show that the complete spectrum of the system can be constructed from the algebra alone. A representation is characterized by a parameter, see Eq. (18),
$$\nu = \frac{1}{2}\left(1 + \sqrt{1 + \frac{4}{\beta^2\omega^2}}\right). \tag{33}$$
The ground state is defined by
$$a\,|\psi_0;\nu\rangle = 0\,, \qquad H\,|\psi_0;\nu\rangle = E_0\,|\psi_0;\nu\rangle\,. \tag{34}$$
Using Eq. (27) we can convert this relation into a first order differential equation and we can obtain the ground state wavefunction in the y-space representation
$$\psi_0(y) = \langle y|\psi_0;\nu\rangle = \left[\frac{\alpha\,\Gamma(\nu+1)}{\sqrt{\pi}\,\Gamma\!\left(\nu+\tfrac{1}{2}\right)}\right]^{1/2}(1-y^2)^{\nu/2}\,. \tag{35}$$
The easiest way to determine the ground state energy is to evaluate the expectation value of a † a in the ground state. Then, we get
$$E_0 = \frac{\alpha^2\nu^2}{2}\,. \tag{36}$$
From the spectral algebra (28) it is clear that operators a and a† act as energy-state lowering and raising operators, respectively. Excited states are obtained by applying powers of the creation operator a† on the ground state,
$$|\psi_n;\nu\rangle = N_n\,\bigl(a^\dagger\bigr)^n\,|\psi_0;\nu\rangle\,, \tag{37}$$
where N_n is a normalization constant. Let the state |ψ_n;ν⟩ be an eigenstate of the Hamiltonian H with energy E_n. Then the state a†|ψ_n;ν⟩ is also an eigenstate of H but with energy E_{n+1}, and the state a|ψ_n;ν⟩ is an eigenstate of H with energy E_{n−1}:
$$\begin{aligned}
H\,|\psi_n;\nu\rangle &= E_n\,|\psi_n;\nu\rangle\,,\\
H\,a^\dagger|\psi_n;\nu\rangle &= a^\dagger\left(H + \alpha\sqrt{2H} + \alpha^2/2\right)|\psi_n;\nu\rangle = \left(E_n + \alpha\sqrt{2E_n} + \alpha^2/2\right)a^\dagger|\psi_n;\nu\rangle = E_{n+1}\,a^\dagger|\psi_n;\nu\rangle\,,\\
H\,a\,|\psi_n;\nu\rangle &= a\left(H - \alpha\sqrt{2H} + \alpha^2/2\right)|\psi_n;\nu\rangle = \left(E_n - \alpha\sqrt{2E_n} + \alpha^2/2\right)a\,|\psi_n;\nu\rangle = E_{n-1}\,a\,|\psi_n;\nu\rangle\,.
\end{aligned} \tag{38}$$
Equation (38) can be rearranged to read
$$\sqrt{2E_{n+1}} = \sqrt{2E_n} + \alpha\,. \tag{39}$$
By iterating this relation, starting from the ground-state energy, we get the energy spectrum. The result for the Pöschl-Teller model reads
$$E^{SPT}_n = \frac{\alpha^2}{2}\,(n + \nu)^2\,, \qquad n = 0, 1, 2, \ldots\,, \tag{40}$$
and it is in agreement with the well known result [10].
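The spectrum (40) can also be cross-checked against a direct numerical diagonalization of the Pöschl-Teller Hamiltonian. The sketch below uses a simple finite-difference grid with Dirichlet walls; the parameter values are arbitrary and the agreement is only to a few significant figures because of the singular potential at the walls.

```python
# Finite-difference check of Eq. (40) for the symmetric Poschl-Teller well.
import numpy as np
from scipy.linalg import eigh_tridiagonal

alpha, nu = 1.0, 2.3                      # arbitrary sample parameters
L = np.pi / (2 * alpha)                   # the well occupies (-L, L)
N = 6000
x = np.linspace(-L, L, N + 2)[1:-1]       # interior grid points (Dirichlet walls)
h = x[1] - x[0]

V = 0.5 * alpha**2 * nu * (nu - 1) / np.cos(alpha * x) ** 2
diag = 1.0 / h**2 + V                     # -(1/2) d^2/dx^2 via central differences
off = -0.5 / h**2 * np.ones(N - 1)
E = eigh_tridiagonal(diag, off, eigvals_only=True, select='i', select_range=(0, 4))

E_exact = 0.5 * alpha**2 * (np.arange(5) + nu) ** 2
print(np.c_[E, E_exact])                  # columns should agree to a few digits
```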
Recall that the energy of the oscillator with minimal length uncertainty relations is shifted relative to that of the symmetric Pöschl-Teller model. Therefore
$$E^{OSC}_n = \frac{\alpha^2}{2}\,(n + \nu)^2 - \frac{1}{2\beta}
= \omega\left(n + \frac{1}{2}\right)\sqrt{1 + \frac{\omega^2\beta^2}{4}} + \frac{\omega^2\beta}{2}\left[\left(n + \frac{1}{2}\right)^2 + \frac{1}{4}\right]. \tag{41}$$
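The algebraic identity behind the second equality in Eq. (41) can be spot-checked numerically; the β and ω values below are arbitrary.

```python
# Numerical spot check of the two forms of the oscillator spectrum in Eq. (41).
import numpy as np

beta, omega = 0.23, 1.7                                    # arbitrary sample values
alpha2 = omega**2 * beta                                   # Eq. (18)
nu = 0.5 * (1 + np.sqrt(1 + 4 / (beta**2 * omega**2)))     # Eq. (33)
for n in range(6):
    lhs = 0.5 * alpha2 * (n + nu) ** 2 - 1 / (2 * beta)
    rhs = (omega * (n + 0.5) * np.sqrt(1 + omega**2 * beta**2 / 4)
           + 0.5 * omega**2 * beta * ((n + 0.5) ** 2 + 0.25))
    assert abs(lhs - rhs) < 1e-10 * (1 + abs(lhs))
print("Eq. (41): the two forms of the spectrum agree")
```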
This is also in agreement with the previous results in [1] and [2]. It is not difficult to derive the explicit relations between the states |ψ_n;ν⟩ and the states |ψ_{n±1};ν⟩. We have
$$a^\dagger|\psi_n\rangle = \kappa_{n+1}\,|\psi_{n+1}\rangle\,, \qquad a\,|\psi_n\rangle = \kappa_n\,|\psi_{n-1}\rangle\,. \tag{42}$$
Taking a diagonal matrix element of [a, a†] = f(H) and using the explicit expression for the energy eigenvalue we obtain the recursion relation for the coefficients κ_n
$$\langle\psi_n;\nu|\,[a, a^\dagger]\,|\psi_n;\nu\rangle = |\kappa_{n+1}|^2 - |\kappa_n|^2 = f(E_n) = 1 + 2(n+\nu) + \frac{\nu(\nu-1)}{(n+\nu)(n+\nu-1)}\,.$$
It is easy to find the solution (we take κ_n to be real)
$$\kappa_n = \sqrt{(n+\nu)^2 - \nu(\nu-1)\,\frac{n+\nu}{n+\nu-1}} = \sqrt{\frac{n+\nu}{n+\nu-1}}\,\sqrt{n\,(n+2\nu-1)}\,. \tag{43}$$
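That the closed form (43) indeed satisfies the recursion relation can be verified numerically; ν below is an arbitrary sample value.

```python
# Check that kappa_{n+1}^2 - kappa_n^2 = f(E_n) for the closed form of Eq. (43).
import numpy as np

nu = 2.6                                                   # arbitrary sample value
kappa = lambda n: np.sqrt((n + nu) / (n + nu - 1) * n * (n + 2 * nu - 1))
f = lambda n: 1 + 2 * (n + nu) + nu * (nu - 1) / ((n + nu) * (n + nu - 1))
for n in range(0, 8):
    assert abs(kappa(n + 1) ** 2 - kappa(n) ** 2 - f(n)) < 1e-10
print("recursion for kappa_n holds")
```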
The normalization constant N_n can also be computed. From Eq. (37), we have
$$|\psi_n;\nu\rangle = N_n\,\bigl(a^\dagger\bigr)^n|0;\nu\rangle = \frac{N_n}{N_{n-1}}\,a^\dagger|\psi_{n-1};\nu\rangle = \frac{N_n}{N_{n-1}}\,\kappa_n\,|\psi_n;\nu\rangle\,. \tag{44}$$
Then, using ⟨0;ν|0;ν⟩ = 1, and iterating, we calculate
$$N_n = \frac{1}{\kappa_n\,\kappa_{n-1}\cdots\kappa_1} = \sqrt{\frac{\nu\,\Gamma(2\nu)}{(\nu+n)\,n!\,\Gamma(2\nu+n)}}\,. \tag{45}$$
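A quick numerical comparison of the product of κ's with the closed gamma-function form of Eq. (45); ν is again an arbitrary sample value.

```python
# Check Eq. (45): the running product of kappa's matches the closed form.
import numpy as np
from math import gamma, factorial

nu = 2.6                                                   # arbitrary sample value
kappa = lambda n: np.sqrt((n + nu) / (n + nu - 1) * n * (n + 2 * nu - 1))
for n in range(1, 8):
    prod = np.prod([kappa(j) for j in range(1, n + 1)])
    closed = np.sqrt(nu * gamma(2 * nu) / ((nu + n) * factorial(n) * gamma(2 * nu + n)))
    assert abs(1 / prod - closed) < 1e-12 * closed
print("N_n = 1/(kappa_n ... kappa_1) matches the gamma-function form")
```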
It is not too difficult to calculate the wave function in the yspace. Let ψ ν n (y) = y|ψ n ; ν .
From Eq. (44), using the creation operator written in terms of k and y operators, Eq. (27), we find
ψ ν n (y) = 1 α 2 κ n yg(E n−1 ) − α 2 (1 − y 2 ) d dy + α 2 2 y 2E n−1 + α 2E n−1 ψ ν n−1 (y) .(47)
Starting from the explicit wave function for the ground state and making use of the differential equation satisfied by the Gegenbauer polynomials, see Ref. [3], we find
$$\psi^\nu_n(y) = 2^\nu\,\Gamma(\nu)\sqrt{\frac{\alpha\,n!\,(n+\nu)}{2\pi\,\Gamma(n+2\nu)}}\;(1-y^2)^{\nu/2}\,C^\nu_n(y)\,, \tag{48}$$
where C^ν_n(y) is a Gegenbauer polynomial. From this we can obtain immediately the wave function of an oscillator in the minimal length quantum mechanics [2]
$$\Psi^{OSC}_n(y) = 2^\nu\,\Gamma(\nu)\sqrt{\frac{\alpha\,n!\,(n+\nu)}{2\pi\,\Gamma(n+2\nu)}}\;(1-y^2)^{\frac{\nu + \gamma/\beta}{2}}\,C^\nu_n(y)\,. \tag{49}$$
In this section we have constructed, by using algebraic factorization methods, and without the explicit knowledge of the exact solution, a pair of creation and annihilation operators for the model. The two operators obey a deformed SU(1,1) algebra. Then we have calculated the exact energy eigenvalues and the energy eigenstates. The states are characterized by a single parameter ν that is determined by the strength of the potential in the symmetric Pöschl-Teller model case or by the parameter β that measures the deformation of the uncertainty relation of the quantum mechanics and gives the fundamental, minimal possible resolution of length.
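The normalization of the wave functions (48) is easy to verify with scipy's Gegenbauer polynomials. In the sketch below the overall factor √α is dropped (equivalently, α = 1 is assumed), since it only rescales the norm uniformly; ν is an arbitrary sample value.

```python
# Orthonormality of the Poschl-Teller wave functions (48) w.r.t. the measure dy/sqrt(1-y^2).
import numpy as np
from math import gamma, factorial, pi
from scipy.special import gegenbauer
from scipy.integrate import quad

nu = 1.7                                                   # arbitrary sample value, nu > 1

def psi(n):
    pref = 2**nu * gamma(nu) * np.sqrt(factorial(n) * (n + nu) / (2 * pi * gamma(n + 2 * nu)))
    C = gegenbauer(n, nu)
    return lambda y: pref * (1 - y**2) ** (nu / 2) * C(y)

for m in range(4):
    for n in range(4):
        val, _ = quad(lambda y: psi(m)(y) * psi(n)(y) / np.sqrt(1 - y**2), -1, 1)
        assert abs(val - (1.0 if m == n else 0.0)) < 1e-8
print("wave functions (48) are orthonormal (with alpha = 1)")
```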
V. THE PHYSICS BEHIND THE CONSTRUCTION
In this section we offer some insight into the problem of quantization in the deformed quantum mechanics.
The deformed commutator (3) can be written as
$$[x, p] = i\left(1 + 2m\beta H_0\right), \qquad H_0 = \frac{1}{2m}\,p^2\,, \tag{50}$$
where H₀ is the Hamiltonian of a free particle of mass m. The Heisenberg-Weyl algebra of the oscillator reads
$$[H, x] = -ip\left(1 + 2m\beta H_0\right), \qquad [H, p] = i\omega^2 x\left(1 + 2m\beta H_0\right) + \omega^2 m\beta\, p\left(1 + 2m\beta H_0\right). \tag{51}$$
However, these equations are incomplete because there are, in fact, four relevant operators, x, p, H₀ and H, at the start. We need the additional commutators
$$[H_0, p] = 0\,, \qquad [H_0, x] = -ip\left(1 + 2\beta H_0\right), \qquad [H, H_0] = i\,\frac{\omega^2}{2}\,(xp + px)\left(1 + 2\beta H_0\right) + 2\beta\omega^2 H_0\left(1 + 2\beta H_0\right). \tag{52}$$
Eqs. (50), (51) and (52) clearly highlight the source of the problem. In the deformed theory, the algebra contains two Hamilton operators. Together, they fail to close the algebra. Their commutator generates new operators such as xp + px and H 2 0 . We can try to modify the algebra from the start by adding all these new operators but this is of no help. New operators when added will generate more new operators and cycle will never end.
Note however, that H 0 , p and x form a closed subalgebra. This offers a possibility to find the quantum theory of some H ′ 0 ∝ H 0 in H 0 deformed quantum theory. This will select a basis in the Hilbert space such that the basic commutator [x, p] is diagonal. This basis can then be used to compute the matrix elements of H. Of course, in this basis H is not diagonal. The remaining problem then is to find the unitary transformation U in the Hilbert space such that H ′ = U † HU is diagonal. In general, this can be difficult. Final states that diagonalize H are linear combinations of states that diagonalize H 0 . Possibly, these are some coherent states. As a result, solving the quantum problem in the deformed theory doubles the work as we have to solve more than just one quantization problem. The reason why the ordinary quantum theory is much easier is simply a consequence of the fact that in the ordinary quantum theory the basic commutator is already diagonal and requires no additional work.
In fact, the two transformations we used in Sections III and IV to quantize the oscillator,
$$p = \frac{1}{\sqrt{\beta}}\,\tan\!\sqrt{\beta}\,\rho \qquad \text{and} \qquad y = \sin\!\sqrt{\beta}\,\rho\,, \tag{53}$$
carry out the program we just described. For another possibility see [11] and [15]. In the present case there are also some lucky circumstances related to the fact that the potential energy of the harmonic oscillator is simply a quadratic operator. Any other potential energy function would be more complicated. The situation then reminds us of the case of the Klein-Gordon relativistic equation where the Hamiltonian is given by H = c 2 p 2 + m 2 c 4 and c stands for the speed of the light. The solution is to formulate the equation based on H 2 , or to linearize with the price of introducing multi-component wave-functions. It is not clear at present how far one can carry out such enterprize for interesting potentials. Another possibility would be to expand the more complicated potential energy V (x) around the oscillator but this is likely to run into convergence issues. Perhaps the answer is to study more closely the natural coordinates of [5] for other potentials.
VI. COMPARISON WITH PRIOR WORK
In this section we show the connection of our results with prior works.
We begin by taking a closer look at the Heisenberg-Weyl algebra realized in terms of operators a and a † in Eq. (28). We observe that it can be written in a simpler form that closely resembles the algebra of the harmonic oscillator in the ordinary quantum mechanics. Define a new operator
$$N = \frac{1}{\alpha}\sqrt{2H} - c\,, \tag{54}$$
where c is some constant we will fix shortly. The part of the algebra involving the Hamiltonian takes on the form satisfied by the number operator,
[N, a † ] = a † , [N, a] = −a .(55)
However, this is still a deformed algebra and not the algebra of the oscillator in the ordinary quantum mechanics because the commutator of a and a † is deformed
$$[a, a^\dagger] = 1 + 2(N + c) + \frac{\nu(\nu-1)}{(N+c)(N+c-1)} = \phi(N+1) - \phi(N)\,, \tag{56}$$
where
$$a^\dagger a = \phi(N) = (N+c)^2 - \frac{\nu(\nu-1)}{N+c-1} - \nu(\nu-1)\,, \qquad a\,a^\dagger = \phi(N+1)\,. \tag{57}$$
Using Eq. (57) we can understand the spectrum a bit better. Let |0⟩ be a normalized ground state defined by
$$a\,|0\rangle = 0\,. \tag{58}$$
We want to interpret the operator N as a number operator. This means that we expect N|0⟩ = 0. It then follows φ(N)|0⟩ = φ(0)|0⟩ = 0, or
$$\phi(0) = 0\,. \tag{59}$$
This equation determines the constant c. There are three possible solutions, c = 0, ν and 1 − ν. The solution c = 0 is not acceptable because it implies φ_{c=0}(1) = ∞. The solution c = 1 − ν, at least on the surface, appears to be equivalent to the solution c = ν, because it amounts to a redefinition of the parameterization of the strength of the Pöschl-Teller potential, see Eq. (18). We choose the solution
$$c = \nu\,. \tag{60}$$
With this choice the function
$$\phi(n) = (n+\nu)^2 - \frac{\nu(\nu-1)}{n+\nu-1} - \nu(\nu-1) \tag{61}$$
has no zeros for any positive integer [16]. There is an infinite tower of states of the form |n ∝ a † n |0 . These are precisely the states we have constructed in Section IV.
We can now establish the relation to prior art. We can relate operators a and a † to a pair of ordinary Bose operators b and b † as follows. Let us define
$$N = b^\dagger b\,, \qquad [b, b^\dagger] = 1\,, \qquad [N, b] = -b\,, \qquad [N, b^\dagger] = b^\dagger\,. \tag{62}$$
The function φ(N ) can be written in a factorized form
$$\phi(N) = \frac{N+\nu}{N+\nu-1}\;N\,(N + 2\nu - 1)\,. \tag{63}$$
The mapping to Bose operators is given as follows, [9],
$$a^\dagger = \sqrt{\frac{\phi(N)}{N}}\;b^\dagger = b^\dagger\sqrt{\frac{\phi(N+1)}{N+1}}\,, \qquad
a = b\,\sqrt{\frac{\phi(N)}{N}} = \sqrt{\frac{\phi(N+1)}{N+1}}\;b\,. \tag{64}$$
Then the Hamiltonian takes the form
$$H_{SPT} = \frac{\alpha^2}{2}\left(b^\dagger b + \nu\right)^2. \tag{65}$$
This is precisely the results found in Ref. [6] for the Pöschl-Teller model. For the oscillator in the minimal uncertainty length quantum mechanics we must include the additive constant 1/2β
$$H_{OSC} = \frac{\beta\omega^2}{2}\left(b^\dagger b + \nu\right)^2 - \frac{1}{2\beta} = \frac{\beta\omega^2}{2}\,N^2 + \beta\omega^2\nu\left(N + \frac{1}{2}\right). \tag{66}$$
This is precisely the result obtained in Ref. [4]. We can also view our result as a deformation of SU (1, 1) algebra. Let operators K ± and K 0 be the generators of an ordinary undeformed SU (1, 1) algebra. They satisfy commutator relations [K 0 , K ± ] = ±K ± and [K − , K + ] = 2K 0 . It is well known that the SU (1, 1) algebra can be constructed by a suitable deformation of the ordinary single boson oscillator algebra. For example,
$$K_+ = \sqrt{N + 2\nu - 1}\;b^\dagger = b^\dagger\sqrt{N + 2\nu}\,, \qquad
K_- = b\,\sqrt{N + 2\nu - 1} = \sqrt{N + 2\nu}\;b\,, \qquad K_0 = N + \nu\,. \tag{67}$$
We note that the algebra of K operators is characterized uniquely by the parameter ν that determines the strength of the Pöschl-Teller potential, because the Casimir operator is given by C K = ν(ν − 1). We can write Eq. (64) in terms of the SU (1, 1) generators
$$a^\dagger = \sqrt{\frac{N+\nu}{N+\nu-1}}\;K_+ = K_+\sqrt{\frac{N+\nu+1}{N+\nu}}\,, \qquad
a = K_-\sqrt{\frac{N+\nu}{N+\nu-1}} = \sqrt{\frac{N+\nu+1}{N+\nu}}\;K_-\,, \qquad
H = \frac{\alpha^2}{2}\,K_0^2\,. \tag{68}$$
This the result obtained in [6]. We must stress again that SU (1, 1) appearing here is not a dynamical symmetry group! The dynamical symmetry group is simply the dynamical SU (1, 1) of the Bose oscillator b and b † . We have S + = 1 2 b † 2 , S − = 1 2 b 2 and S 0 = 1 4 (bb † + b † b). States are divided into two infinite dimensional representations of this SU (1, 1), as explained earlier. Even n states belong to the Bargman index k = 1/4 representation and odd n states belong to the k = 3/4 representation. Deformation does not mix these two representations. We can also define a deformed algebra quadratic in operators a and a † as follows. Let T − = 1 2 a 2 , T + = 1 2 a † 2 and T 0 = S 0 . Operators T ± and T 0 satisfy commutators of the form [T 0 , T ± ] = ±T ± and [T − , T + ] = G(T 0 ). It is not hard to work out the explicit form of T ± and G(T 0 ). However, the corresponding quadratic Casimir operator is zero and offers no new information.
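The Bose-operator realization (62)-(66) lends itself to a simple finite-matrix sanity check: truncating the Fock space at a modest size, the deformed algebra (56) and the spectral relations (28) and (65) can be verified numerically. The truncation size, ν and α below are arbitrary choices for the test.

```python
# Finite-matrix check of the deformed oscillator realization (62)-(66).
import numpy as np

nu, alpha, D = 2.3, 1.0, 40                         # arbitrary truncation and parameters
n = np.arange(D)
b = np.diag(np.sqrt(n[1:]), k=1)                    # ordinary Bose annihilation operator
N = np.diag(n.astype(float))

phi = lambda m: (m + nu) / (m + nu - 1) * m * (m + 2 * nu - 1)          # Eq. (63)
a = b @ np.diag(np.sqrt(phi(n) / np.where(n > 0, n, 1)))                # a = b sqrt(phi(N)/N), Eq. (64)
H = 0.5 * alpha**2 * np.diag((n + nu) ** 2.0)                           # Eq. (65)
adag = a.conj().T

inner = slice(0, D - 1)                             # stay away from the truncation edge
assert np.allclose((N @ a - a @ N)[inner, inner], -a[inner, inner])     # [N, a] = -a
assert np.allclose((adag @ a)[inner, inner], np.diag(phi(n))[inner, inner])
comm = a @ adag - adag @ a
expect = 1 + 2 * (n + nu) + nu * (nu - 1) / ((n + nu) * (n + nu - 1))   # Eq. (56), c = nu
assert np.allclose(np.diag(comm)[:D - 1], expect[:D - 1])
lhs = H @ adag - adag @ H                           # [H, a^dagger]
rhs = adag @ (alpha * np.sqrt(2 * H) + 0.5 * alpha**2 * np.eye(D))      # Eq. (28)
assert np.allclose(lhs[inner, inner], rhs[inner, inner])
print("truncated-matrix realization reproduces the deformed algebra")
```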
In a way this completes the story of how to quantize the system in the deformed quantum mechanics. Simply search for an undeformed system that is acceptable both to the deformed commutator [a, a † ] = 1 + 2H 0 and to the Hamiltonian H of the model studied. This task however may not be easy to carry out. The moral of the story is that the deformed theory should probably considered as a constrained system and the quantization must then be carried according to the rules for quantization of constrained systems explained by Dirac [14].
VII. THE D-DIMENSIONAL ISOTROPIC HARMONIC OSCILLATOR
In this section we consider the D-dimensional extension of the minimal length uncertainty quantum mechanics. The problem is quite interesting because the extension to higher dimensions implies that spatial coordinates do not commute [12]. It is a lucky circumstance that rotational symmetry is preserved. This means that isotropic systems can be reduced to a quantization of some effective one-dimensional model on the positive real line.
In D-spatial dimensions the deformed basic commutators are given by
$$[x_i, p_j] = i\left(1 + \beta p^2\right)\delta_{ij} + i\beta' p_i p_j\,, \qquad [p_i, p_j] = 0\,, \qquad
[x_i, x_j] = -i\left[(2\beta - \beta') + (2\beta + \beta')\,\beta p^2\right]L_{ij}\,, \tag{69}$$
where L ij = 1 1+βp 2 (x i p j − x j p i ) are the components of the angular momentum tensor [1,2] . The third commutator in Eq. (69) implies that the spatial coordinates are noncommutative. Here we follow the notation and conventions of [2] where the the Schrödinger equation for the oscillator model was solved.
The momentum space representation is available. In the momentum space the momentum operators can be represented as simple multiplication. The rotational symmetry implies that the radial variable $p = \sqrt{\sum_{i=1}^{D} p_i^2}$ is a good variable. The position operator is represented by
$$x_i = i\left(1 + \beta p^2\right)\frac{\partial}{\partial p_i} + i\beta' p_i p_j\frac{\partial}{\partial p_j} + i\gamma p_i\,. \tag{70}$$
Variables p i , i = 1, 2, . . . , D, run from −∞ to ∞. The measure is given by
$$d\mu = V_{D-1}\left(1 + (\beta + \beta')p^2\right)^{\alpha - 1} p^{D-1}\,dp\,, \tag{71}$$
where V_{D−1} is the volume of the (D − 1)-dimensional sphere and 0 ≤ p ≤ ∞. The constant α in the measure is given by
$$\alpha = \frac{\gamma}{\beta + \beta'} - \frac{\beta'}{\beta + \beta'}\,\frac{D-1}{2}\,.$$
Because of the rotational symmetry the square of the operator $\sum_{i=1}^{D} x_i^2$ will involve the D-dimensional Laplace operator, which can be expressed in spherically symmetric coordinates as
$$\nabla_p^2 = \frac{\partial^2}{\partial p^2} + \frac{D-1}{p}\,\frac{\partial}{\partial p} - \frac{l(l+D-2)}{p^2}\,,$$
where the angular momentum quantum number is an integer, l = 0, 1, . . .. There is the usual degeneracy in the magnetic quantum number. The wave function is factorized into a radial and an angular part according to Ψ(p_i) = Ψ(p)Y(Ω), where Y(Ω) is a D-dimensional generalization of the spherical harmonics. In writing the decomposition of the wave function we have suppressed the angular quantum number l. The Hamiltonian of the isotropic D-dimensional oscillator is given by
$$H = \sum_{i=1}^{D}\left(\frac{1}{2m}\,p_i^2 + \frac{m\omega^2}{2}\,x_i^2\right) = \frac{1}{2m}\,p^2 + \frac{m\omega^2}{2}\,x^2\,, \tag{72}$$
where the expression after the second equality sign is given in radial coordinates. The operator x 2 reads explicitly (we use the shorthand notation L 2 = l(l + D − 2)),
−x 2 = " (1 + (β + β ′ )p 2 ) d dp « 2 − L 2 p 2 + (γD − 2βL 2 ) + " D − 1 p + ((D − 1)β + 2γ)p «`1 + (β + β ′ )p 2´d dp +`γ(βD + β ′ + γ) − β 2 L 2´p2 .(73)
As in the one-dimensional case we work with the Green's function G(z). The transformation from the momentum space to wave-vector is given by
$$p = \frac{1}{\sqrt{\beta + \beta'}}\,\tan\!\sqrt{\beta + \beta'}\,\rho\,, \tag{74}$$
see Ref. [2]. The wave-vector now runs over positive values, 0 ≤ ρ ≤ π/(2√(β+β′)). The similarity transformation Ψ(ρ) = Jψ(ρ) with J = (cos√(β+β′)ρ)^{γ/(β+β′)} removes all dependence on the parameter γ. The Green's operator matrix element reads
$$\langle\Phi|G^{-1}(z)|\Psi\rangle = \int_0^{\frac{\pi}{2\sqrt{\beta+\beta'}}} d\mu(\rho)\,\phi^*(\rho)\left[z - \frac{\tan^2\!\sqrt{\beta+\beta'}\,\rho}{2m(\beta+\beta')} - \frac{m\omega^2}{2}\,x'^2\right]\psi(\rho)\,, \tag{75}$$
where the measure is
$$d\mu(\rho) = \left(\tan\!\sqrt{\beta+\beta'}\,\rho\right)^{D-1}(\beta+\beta')^{(1-D)/2}\,\frac{d\rho}{\left(\cos\!\sqrt{\beta+\beta'}\,\rho\right)^{2\delta}} \tag{76}$$
and δ = −[β′/(β+β′)]·(D−1)/2.
The operator x ′ 2 is given by
$$-x'^2 = \frac{d^2}{d\rho^2} - 2\beta L^2 - \frac{L^2(\beta+\beta')}{\tan^2\!\sqrt{\beta+\beta'}\,\rho} - \frac{\beta^2 L^2}{\beta+\beta'}\,\tan^2\!\sqrt{\beta+\beta'}\,\rho
+ \frac{(D-1)\sqrt{\beta+\beta'}}{\tan\!\sqrt{\beta+\beta'}\,\rho}\left(1 + \frac{\beta}{\beta+\beta'}\,\tan^2\!\sqrt{\beta+\beta'}\,\rho\right)\frac{d}{d\rho}\,. \tag{77}$$
The operator x′² involves a term with the first derivative. This is typical for higher dimensional theories. Such a term can be eliminated by another similarity transformation with J = (cos√(β+β′)ρ)^{β/(β+β′)} (sin√(β+β′)ρ)^{(D−1)/2}. Finally, we arrive at the Green's function written in terms of the Hamiltonian of the equivalent Pöschl-Teller model
$$G^{-1}(z) = \text{constant} \times \int_0^{\pi/2\alpha} dx\, \phi^*(x)\left(z' - H_{PT}\right)\psi(x)\,, \tag{78}$$
where
$$H_{PT} = \frac{1}{2}\,p^2 + \frac{(2\alpha)^2}{8}\,\frac{\nu(\nu-1)}{\cos^2\alpha x} + \frac{(2\alpha)^2}{8}\,\frac{\mu(\mu-1)}{\sin^2\alpha x}\,. \tag{79}$$
In writing the Hamiltonian in equation (79) we have rescaled the variable ρ and have defined several new constants, and have also performed a shift in the energy variable z. The following definitions apply:
$$\begin{aligned}
\sqrt{\beta+\beta'}\,\rho &= \alpha x\,, \qquad (\beta+\beta')\,m\omega^2 = \alpha^2\,, \qquad j = l + \frac{D-3}{2}\,, \quad l = 0, 1, \ldots\,, \qquad p = i\,\frac{d}{d\rho}\,,\\
\nu(\nu-1) &= \frac{\beta^2}{(\beta+\beta')^2}\,j(j+1) - \frac{\beta\beta'}{(\beta+\beta')^2}\,\frac{D-1}{2} + \frac{1}{m^2\omega^2(\beta+\beta')^2}\,,\\
\mu(\mu-1) &= j(j+1)\,,\\
z' &= z + \frac{1}{2m(\beta+\beta')} - \frac{m\omega^2\beta'(\beta'+2\beta)}{2(\beta+\beta')}\left[j(j+1) + \frac{D-1}{2}\right].
\end{aligned} \tag{80}$$
We should note the following about Eq. (80). In the definition of the parameter ν, the first two terms originate from the oscillator potential energy. The third term originates from the oscillator kinetic energy. The parameter µ also originates completely from the oscillator potential energy term. In the limit, D = 1, l = 0 and β ′ = 0, the result reduces to the onedimensional case; in the same limit the parameter µ becomes zero and the energy parameter z ′ become the same as in the one-dimension. The natural coordinates for the Pöschl-Teller model are
$$y = \cos 2\alpha x\,, \qquad k = -\alpha\,\{p,\; \sin 2\alpha x\}\,. \tag{81}$$
After some calculation we arrive at the Heisenberg-Weyl algebra in natural coordinates
$$[y, k] = i(2\alpha)^2\left(1 - y^2\right), \qquad [H, y] = -ik\,, \qquad
[H, k] = i(2\alpha)^2\left[2yH - ik + \frac{(2\alpha)^2}{4}\,y + \frac{(2\alpha)^2}{4}\bigl(\nu(\nu-1) - \mu(\mu-1)\bigr)\right]. \tag{82}$$
This algebra is closed and similar to that of the symmetric Pöschl-Teller model of Section IV. However, there is an important difference, the central term in the [H, k] commutator. This means that ladder operators will contain an extra term in addition to a linear combination of y and k operators. Following the construction outlined in Section IV we obtain
$$a = \frac{1}{4\alpha^2}\left[y\left(2\alpha\sqrt{2H} + 2\alpha^2\right) + ik - \frac{4\alpha^4 C}{2\alpha\sqrt{2H} - 2\alpha^2}\right], \qquad
a^\dagger = \frac{1}{4\alpha^2}\left[\left(2\alpha\sqrt{2H} + 2\alpha^2\right)y - ik - \frac{4\alpha^4 C}{2\alpha\sqrt{2H} - 2\alpha^2}\right], \tag{83}$$
where
$$C = \nu(\nu-1) - \mu(\mu-1) = \left(\nu - \tfrac{1}{2}\right)^2 - \left(\mu - \tfrac{1}{2}\right)^2. \tag{84}$$
Ladder operators satisfy the commutator relations
$$[H, a] = -a\left(2\alpha\sqrt{2H} - 2\alpha^2\right) = -\left(2\alpha\sqrt{2H} + 2\alpha^2\right)a\,, \qquad
[H, a^\dagger] = \left(2\alpha\sqrt{2H} - 2\alpha^2\right)a^\dagger = a^\dagger\left(2\alpha\sqrt{2H} + 2\alpha^2\right). \tag{85}$$
The second set of equalities follows from commutator relations also satisfied by ladder operators
$$[\sqrt{2H}, a] = -2\alpha\,a\,, \qquad [\sqrt{2H}, a^\dagger] = 2\alpha\,a^\dagger\,, \tag{86}$$
which indicates that the square root of the Hamiltonian behaves as a natural number operator for a and a † . The central term makes appearance in operator products a † a and aa † , or in the commutator [a, a † ] and the anticommutator {a, a † }. After tedious calculations we find
$$a^\dagger a = \phi(\sqrt{2H}) = \left(\frac{\sqrt{2H}}{2\alpha}\right)^2 - \frac{Q}{2}\,\frac{\frac{\sqrt{2H}}{2\alpha}}{\frac{\sqrt{2H}}{2\alpha} - 1} + \frac{C^2}{4}\,\frac{\frac{\sqrt{2H}}{2\alpha}}{\left(\frac{\sqrt{2H}}{2\alpha} - 1\right)\left(2\,\frac{\sqrt{2H}}{2\alpha} - 1\right)^2}\,, \qquad a\,a^\dagger = \phi(\sqrt{2H} + 2\alpha)\,, \tag{87}$$
where
$$Q = \nu(\nu-1) + \mu(\mu-1) = \left(\nu - \tfrac{1}{2}\right)^2 + \left(\mu - \tfrac{1}{2}\right)^2 - \tfrac{1}{2}\,. \tag{88}$$
The ground state of the model is defined by
a|0; ν, µ = 0 .(89)
The existence of the ground state solution to equation (89) implies some restrictions on identification of the square root of the Hamiltonian with the number operator. Using Eq. (86) we define
√ 2H 2α = N + d ,(90)
where N is an ordinary number operator for the operators a and a†, defined by [N, a] = −a and [N, a†] = a†, and d is some constant. The existence of the ground state solution determines the possible values of the parameter d. We have
$$\phi(N=0) = 0 = -\frac{Q}{2}\,\frac{d}{d-1} + d^2 + \frac{C^2}{4}\,\frac{d}{(d-1)(2d-1)^2}\,. \tag{91}$$
With d = 0, the first excited state is infinite. The third, the fourth and the fifth solutions appear to correspond to different parameterizations of the potential of the model. We work with d = (ν+μ)/2. In general, if the combination (ν+μ)/2 is not a positive integer, then there will be an infinite tower of states. The function φ(√2H) factorizes (with √2H = 2α(N + (ν+μ)/2)):
$$\phi(N) = N\,(N + \nu + \mu - 1)\;\frac{\left(N + \frac{2\nu-1}{2}\right)\left(N + \frac{2\mu-1}{2}\right)\left(N + \frac{\nu+\mu}{2}\right)}{\left(N + \frac{\nu+\mu-1}{2}\right)^2\left(N + \frac{\nu+\mu-2}{2}\right)}\,. \tag{92}$$
The energy spectrum is given by
$$E_n = 2\alpha^2\left(n + \frac{\nu+\mu}{2}\right)^2, \qquad n = 0, 1, 2, \ldots\,, \qquad |n;\nu,\mu\rangle \propto \bigl(a^\dagger\bigr)^n|0;\nu,\mu\rangle\,, \tag{93}$$
where we assumed that the ground state is normalized to unity. One can show, using the explicit expression for the ladder operators in terms of the natural operators y and k, that the energy eigenfunctions are in fact Jacobi polynomials $P^{(\nu-1/2,\,\mu-1/2)}_{(n-l)/2}(y)$ [3]. We also reproduce the energy formula given in Ref. [2] for the D-dimensional isotropic oscillator in the minimal length uncertainty quantum mechanics
$$E_{n,l} = \omega\left(n + \frac{D}{2}\right)\sqrt{1 + m^2\omega^2\left[\beta^2 j(j+1) + \frac{\beta^2 + \beta'^2 - 2\beta\beta'(D-3)}{4}\right]}
+ \frac{m\omega^2(\beta+\beta')}{2}\left(n + \frac{D}{2}\right)^2 + \frac{m\omega^2\beta' D}{4} + \frac{m\omega^2(\beta-\beta')}{2}\left[j(j+1) - \frac{(D-1)(D-3)}{4}\right]. \tag{94}$$
Just like in one dimension, the model can be described by a deformation of a single constrained boson of ordinary one-dimensional quantum mechanics. The constraint comes in the form of the boson coordinate space being restricted to only a segment of the real line. It is also interesting that the D-dimensional model appears to allow finite dimensional representations, see [16]. These deserve to be explored in more detail.
VIII. SUMMARY AND OUTLOOK
In this paper we have studied harmonic oscillator models in the quantum theory with minimal length uncertainty relations. Such models may be relevant to quantum gravity at the Planck scale or may appear as an effective theory where modified uncertainty relations are introduced to capture certain features of physics below some scale. Examples of the second kind are models of rotational and vibrational spectra of molecules in molecular and chemical physics and in heavy deformed nuclei in nuclear physics.
We have focused on operator techniques because the modification of the basic commutator of the quantum theory implies the appearance of a minimal length and, more generally, a minimal momentum. This then means that the position and the momentum operators cannot be diagonalized any more. Consequently, the Schrödinger equation as a differential or integral equation becomes unavailable. We have shown that operator techniques work and that complete knowledge of the system can be obtained. The next obvious step would be to make generalizations and to consider models more complicated than the oscillator. The Coulomb problem is the first important candidate. Also, application to the field theory is desired too.
It is also of interest to learn how operator technique extensions work when both the position and the momentum are limited by minimal uncertainties. This problem is in part related to q-oscillators. There exists a great deal of literature on q-deformed oscillators. However, the present problem is more general than the typical q-oscillators. First, the symmetry principles place constraints on the form that the basic commutator can take, and this selects the applicable q-oscillators. For example, in Ref. [13], an extension of special relativity that incorporates a minimal invariant length and a minimal invariant momentum was constructed, and in this particular extension the basic commutator of quantum mechanics is deformed roughly as αx² + βp² + γ(xp + px). The second part of the present problem then involves the diagonalization of an arbitrary system in the basis defined by a q-oscillator. This is a difficult problem.
Another interesting fact that follows from the present work is that in both cases we can understand the quantization in the deformed theory as a deformed ordinary Bose oscillator. Perhaps this means that the deformed quantum mechanics is in fact an ordinary quantum mechanics with very complicated constraints. In that case the Dirac's theory of quantization with constraints, [14] may be an answer.
In closing let us also mention that the problem of the oscillator in a constant external field can also be incorporated in the formalism. The constant external field simply adds a term of the form H int = gx to the Hamiltonian. Once the models are transformed to Pöschl-Teller form there will be an extra term with the first derivative present. Such term can always be removed by an appropriate similarity transformation. Once this transformation is carried out, the analysis goes through as described in this paper.
Acknowledgments
This work has been supported in part by the Research Corporation.

[1] A. Kempf, G. Mangano and R. B. Mann, Phys. Rev. D 52, 1108 (1995).
[2] L. N. Chang, D. Minic, N. Okamura and T. Takeuchi, Phys. Rev. D 65, 125027 (2002).
[3] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, 7th ed.; A. Jeffery and D. Zwillinger, editors, Elsevier, Academic Press, 2007.
[4] I. Dadic, L. Jonke and S. Meljanac, Phys. Rev. D 67, 087701 (2003).
[5] M. M. Nieto and L. M. Simmons, Jr., Phys. Rev. D 20, 1332 (1979).
[6] S.-H. Dong and R. Lemus, Int. J. Quantum Chem. 86, 265 (2002).
[7] C. Daskaloyannis, J. Phys. A: Math. Gen. 25, 2261 (1991).
[8] A. P. Polychronakos, Mod. Phys. Lett. A 5, 2325 (1990).
[9] S. Meljanac, M. Milekovic and S. Pallua, Phys. Lett. B 328, 55 (1994).
[10] D. ter Haar, Problems in Quantum Mechanics, 3rd ed., Pion, 1975.
[11] D. Spector, Minimal length uncertainty relations and new shape invariant models, arXiv:0707.1028v1.
[12] A. Kempf, J. Phys. A: Math. Gen. 30, 2093 (1997); A. Kempf, J. Math. Phys. 38, 1347 (1997).
[13] J. Kowalski-Glikman and L. Smolin, Phys. Rev. D 70, 065020 (2004).
[14] P. A. M. Dirac, Lectures on Quantum Mechanics, Belfer Graduate School of Science, Yeshiva University, New York, 1964; reprinted by Dover Publications, 2001.
[15] We should note that in principle we can start with H in place of H₀. Of course, this changes the physics drastically because the basic commutator is now given as [x, p] = i(1 + 2γH) (γ is some scale constant). The basic commutator with H implies the existence of a minimal momentum in addition to the minimal length. However, it may be possible that this new problem is easier to solve. In the new theory we first diagonalize H with itself as a deformation. Then we quantize H again but with altered parameters. Perhaps the shape invariance of the supersymmetric quantum mechanics can be useful in this case [11]. Then, we can carry out the limit that removes the minimal momentum uncertainty to recover the minimal length uncertainty theory.
[16] In fact it is possible for the function φ(n) to have a zero for a special value ν = (1 − k)/2 where k is a fixed integer. In that case the state |k⟩ would be a zero norm state and we would have a finite dimensional representation. It is interesting to note that this possibility is not incompatible with the minimal length uncertainty assumption. Using Eq. (18) we find that in that case β and ω must satisfy a specific relation. This is an intriguing possibility that implies quantization of the parameters β or ω, or both. This situation deserves more study.
| []
|
[
"CROSS-CULTURAL POLARITY AND EMOTION DETECTION USING SENTIMENT ANALYSIS AND DEEP LEARNING -A CASE STUDY ON COVID-19 A PREPRINT",
"CROSS-CULTURAL POLARITY AND EMOTION DETECTION USING SENTIMENT ANALYSIS AND DEEP LEARNING -A CASE STUDY ON COVID-19 A PREPRINT"
]
| [
"Ali Shariq Imran [email protected] \nDept. of Computer Science\nDept. of Computer Science\nNorwegian University of Science & Technology (NTNU)\nSukkurNorway\n",
"Sher Muhammad Doudpota \nDept. of Computer Science and Media Technology\nIBA University\nPakistan\n",
"Zenun Kastrati [email protected] \nDept. of Computer Science\nLinnaeus University\nSukkurSweden\n",
"Rakhi Bhatra [email protected] \nIBA University\nPakistan\n"
]
| [
"Dept. of Computer Science\nDept. of Computer Science\nNorwegian University of Science & Technology (NTNU)\nSukkurNorway",
"Dept. of Computer Science and Media Technology\nIBA University\nPakistan",
"Dept. of Computer Science\nLinnaeus University\nSukkurSweden",
"IBA University\nPakistan"
]
| []
| How different cultures react and respond given a crisis is predominant in a society's norms and political will to combat the situation. Often the decisions made are necessitated by events, social pressure, or the need of the hour, which may not represent the will of the nation. While some are pleased with it, others might show resentment. Coronavirus (COVID-19) brought a mix of similar emotions from the nations towards the decisions taken by their respective governments. Social media was bombarded with posts containing both positive and negative sentiments on the COVID-19, pandemic, lockdown, hashtags past couple of months. Despite geographically close, many neighboring countries reacted differently to one another. For instance, Denmark and Sweden, which share many similarities, stood poles apart on the decision taken by their respective governments. Yet, their nation's support was mostly unanimous, unlike the South Asian neighboring countries where people showed a lot of anxiety and resentment. This study tends to detect and analyze sentiment polarity and emotions demonstrated during the initial phase of the pandemic and the lockdown period employing natural language processing (NLP) and deep learning techniques on Twitter posts. Deep long short-term memory (LSTM) models used for estimating the sentiment polarity and emotions from extracted tweets have been trained to achieve state-of-the-art accuracy on the sentiment140 dataset. The use of emoticons showed a unique and novel way of validating the supervised deep learning models on tweets extracted from Twitter. A PREPRINT -AUGUST 25, 2020 Figure 1: Abstract Model of the Proposed Tweets' Sentiment and Emotion Analyser. | null | [
"https://arxiv.org/pdf/2008.10031v1.pdf"
]
| 221,266,089 | 2008.10031 | 2b7bf7d309d5b7e2c16feb83d0ebdcb0d3363d76 |
CROSS-CULTURAL POLARITY AND EMOTION DETECTION USING SENTIMENT ANALYSIS AND DEEP LEARNING -A CASE STUDY ON COVID-19 A PREPRINT
August 25, 2020 23 Aug 2020
Ali Shariq Imran [email protected]
Dept. of Computer Science
Dept. of Computer Science
Norwegian University of Science & Technology (NTNU)
SukkurNorway
Sher Muhammad Doudpota
Dept. of Computer Science and Media Technology
IBA University
Pakistan
Zenun Kastrati [email protected]
Dept. of Computer Science
Linnaeus University
SukkurSweden
Rakhi Bhatra [email protected]
IBA University
Pakistan
CROSS-CULTURAL POLARITY AND EMOTION DETECTION USING SENTIMENT ANALYSIS AND DEEP LEARNING -A CASE STUDY ON COVID-19 A PREPRINT
August 25, 2020 23 Aug 2020
Behaviour Analysis · COVID-19 · Crisis · Deep Learning · Emotion Detection · LSTM · Natural Language Processing · Neural Network · Outbreak · Opinion mining · Pandemic · Polarity Assessment · Sentiment Analysis · Tweets · Twitter · Virus
How different cultures react and respond to a crisis is largely shaped by a society's norms and the political will to combat the situation. Often the decisions made are necessitated by events, social pressure, or the need of the hour, which may not represent the will of the nation. While some are pleased with them, others might show resentment. Coronavirus (COVID-19) brought a mix of similar emotions from the nations towards the decisions taken by their respective governments. Social media was bombarded with posts containing both positive and negative sentiments on the COVID-19, pandemic, and lockdown hashtags over the past couple of months. Despite being geographically close, many neighboring countries reacted differently to one another. For instance, Denmark and Sweden, which share many similarities, stood poles apart on the decisions taken by their respective governments. Yet, their nations' support was mostly unanimous, unlike the South Asian neighboring countries, where people showed a lot of anxiety and resentment. This study aims to detect and analyze the sentiment polarity and emotions demonstrated during the initial phase of the pandemic and the lockdown period, employing natural language processing (NLP) and deep learning techniques on Twitter posts. The deep long short-term memory (LSTM) models used for estimating the sentiment polarity and emotions from extracted tweets have been trained to achieve state-of-the-art accuracy on the Sentiment140 dataset. The use of emoticons showed a unique and novel way of validating the supervised deep learning models on tweets extracted from Twitter.
Figure 1: Abstract Model of the Proposed Tweets' Sentiment and Emotion Analyser.
Introduction
The world is seeing a paradigm shift in the way we conduct our daily activities amidst the ongoing coronavirus (COVID-19) pandemic -be it online learning, the way we socialize, interact, conduct business, or shop. Such global catastrophes have a direct effect on our social life; however, not all cultures react and respond in the same way given a crisis. Even under normal circumstances, research suggests that people across different cultures reason differently [1]. For instance, Nisbett, in his book "The geography of thought: How Asians and Westerners think differently... and why", stated that East Asians think dialectically and holistically on the basis of their experience, while Westerners think logically, abstractly, and analytically [2]. This cultural behavior and attitude are governed by many factors, including the socio-economic situation of a country, its faith and belief system, and lifestyle. In fact, the COVID-19 crisis revealed greater cultural differences between countries that seem alike with respect to language, shared history, and culture. For example, even though Denmark and Sweden are two neighboring countries that speak almost the same language and share a lot of culture and history, they stand at extreme ends of the spectrum when it comes to how they reacted to coronavirus [3]. Denmark and Norway imposed more robust lockdown measures, closing borders, schools, and restaurants, and restricting gatherings and social contact, while on the other side, Sweden took a relaxed approach to the corona outbreak, keeping its schools, restaurants, and borders open.
Social media platforms play an essential role during extreme crises, as individuals use these communication channels to share ideas, opinions, and reactions with others to cope with and react to crises. Therefore, in this study, we focus on exploring collective reactions to events expressed in social media. Particular emphasis is given to analyzing people's reactions to global health-related events, especially the COVID-19 pandemic, expressed on the Twitter social media platform because of its widespread popularity and ease of access using the API. To this end, tweets collected from thousands of Twitter users within the first four weeks after the corona crisis are analyzed to understand how different cultures were reacting and responding to coronavirus. Additionally, an extended version of a publicly available tweets dataset was also used. A new model for sentiment and emotion analysis is proposed. The model takes advantage of natural language processing (NLP) and deep neural networks and comprises two main stages. The first stage involves a sentiment polarity classifier that classifies tweets as positive or negative. The output of the first stage is then used as input to an emotion classifier that aims to assign a tweet to either one of the positive emotion classes (joy and surprise) or one of the negative emotion classes (sad, disgust, fear, anger). Figure 1 shows the abstract model of the proposed system for sentiment and emotion analysis on tweets' text.
Study Objective & Research Questions
Our primary objective in this study is to understand how different cultures behave and react given a global crisis. The questions addressed about cultural differences, viewed as a techno-social system, reveal the potential for predicting societal attitudes, behaviors, and emotions.
In the present investigation, to examine the behavioral and emotional factors that describe how societies react under different circumstances, the general objective is to analyze the potential of utilizing NLP-based sentiment and emotion analysis techniques in finding answers to the following research questions (RQ):

1. RQ1: To what extent can NLP assist in understanding cultural behavior?
2. RQ2: How reflective are the observations of the actual user sentiments analyzed from the tweets?
3. RQ3: To what extent are the sentiments the same within and across the region?
4. RQ4: How are lockdowns and other measures seen by different countries/cultures?
Contribution
The major contributions of this article are as follows:
• A supervised deep learning sentiment detection model for Twitter feeds concerning the COVID-19 pandemic.
• Proposed a multi-layer LSTM assessment model for classifying both sentiment polarity and emotions.
• Achieved state-of-the-art accuracy on Sentiment140 polarity assessment dataset.
• Validation of the model for emotions expressed via emoticons.
• Provide interesting insights into collective reactions to the coronavirus outbreak on social media.
The rest of the article is organized as follows. Section 2 presents the research design and study dimensions. Related work is presented in section 3. Data collection procedure and data preparation steps are described in section 4, whereas, sentiment and emotion analysis model is presented in section 5. Section 6 entails the results followed by discussion and analysis in section 7. Lastly, section 8 concludes the paper with potential future research directions.
Material & Methods
Research Design
The study is conducted using a quantitative (experimental) research methodology on users' tweets posted after the corona crisis. The investigation required collecting users' posts on Twitter from early February 2020 until the end of April 2020, covering the ten to twelve weeks after the first few cases were reported worldwide and in each respective country. The reason for using only the initial few weeks is that people usually get accustomed to the situation over time, and an initial phase is enough to grasp the general/overall behavior of the masses towards a crisis and the policies adopted by respective governments. Several measurements have been taken in this study during data collection that require cataloging for training deep learning models and for further analysis. These are discussed in the next subsection.
Study Dimensions
Following dimensions are used to facilitate the interpretation of the results:
• Demography-(d): country / region under study. This study focuses on two neighbouring countries from South Asia, two from the Nordic region, and two from North America.
• Timeline-(t): the day from the initial reported cases in the country up to 4-12 weeks.
• Culture-(c): East (South Asia) vs. West (Nordic/America)
• Polarity-(p): sentiment classified as either positive or negative.
• Emotions-(e): Feelings expressed as joy, surprise (astonished), sad, disgust, fear and anger.
• Emoticons-(et): emotions expressed through emoticon graphics corresponding to the emotions listed above (the glyphs are grouped in Table 7).
Tools & Instrument
Python scripts are used to query the Tweepy Twitter API 1 for fetching users' tweets and extracting the feature set for cataloging. NLTK 2 is used to preprocess the retrieved tweets. NLP-based deep learning models are developed to predict sentiment polarity and users' emotions using TensorFlow and Keras as the back-end deep learning engine. The Sentiment140 and Emotional Tweets datasets are used to train Classifier A and Classifiers B/C, respectively, as discussed in section 5. Visualization, LSTM model prediction, and correlation are used as instruments to analyze the results. The results of sentiment and emotion recognition are validated through an innovative approach exploiting emoticons extracted from the tweets, which are a widely accepted means of expressing one's feelings.
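As an illustration of this collection step, the sketch below shows how tweets could be fetched and cataloged with Tweepy; the credentials, hashtags, tweet count, and output fields are placeholders rather than the study's exact configuration, and the search call assumes a Tweepy 4.x-style API.

```python
# Minimal sketch of the tweet-collection step (illustrative only).
# Credentials, hashtags and field names are placeholders; API.search_tweets
# is assumed to be available (Tweepy 4.x; older versions expose API.search).
import csv
import tweepy

auth = tweepy.OAuth1UserHandler("API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

query = "#COVID19 OR #Lockdown OR #StayHome -filter:retweets"
with open("tweets_catalog.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["tweet_id", "time", "text", "user", "location"])
    for status in tweepy.Cursor(api.search_tweets, q=query, lang="en",
                                tweet_mode="extended").items(500):
        writer.writerow([status.id_str, status.created_at, status.full_text,
                         status.user.screen_name, status.user.location])
```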
Deep Learning Models
Deep learning models for sentiment detection are employed in this study. A deep neural network (DNN) consists of an input layer, an output layer, and a set of hidden layers with multiple nodes. The training process of a DNN consists of a pre-training step and a fine-tuning step.

The pre-training step consists of weight initialization in an unsupervised manner via a generative deep belief network (DBN) on the input data [4], followed by training the network in a greedy way by taking two layers at a time as a restricted Boltzmann machine (RBM), given as:
E(v, h) = -\sum_{k=1}^{K}\sum_{l=1}^{L} \frac{v_k}{\sigma_k} h_l w_{kl} - \sum_{k=1}^{K} \frac{(v_k - a_k)^2}{2\sigma_k^2} - \sum_{l=1}^{L} h_l b_l, \quad (1)
where \sigma_k is the standard deviation, w_{kl} is the weight connecting visible unit v_k and hidden unit h_l, and a_k and b_l are the biases for the visible and hidden units, respectively. Equation 1 represents the energy function of the Gaussian-Bernoulli RBM.
The joint probability of the visible and hidden units is defined as:
p(v, h) = \frac{e^{-E(v,h)}}{\sum_{v,h} e^{-E(v,h)}}. \quad (2)
A contrastive divergence algorithm is used to estimate the trainable parameters by maximizing the expected log probability [4], given as:
\theta = \arg\max_{\theta} \, \mathbb{E}\Big[\log \sum_{h} p(v, h)\Big], \quad (3)
where θ represents the weights, biases and standard deviation.
The network parameters are adjusted in a supervised manner using the back-propagation technique in the fine-tuning step. Back-propagation computes the partial derivative \partial C / \partial w of the cost function C with respect to any weight w (or bias b) in the network. The quadratic cost function can be defined as:
C = \frac{1}{2n} \sum_{x} \big\| y(x) - \alpha^{L}(x) \big\|^{2}, \quad (4)
where n is the total number of training examples, x ranges over the training samples, y = y(x) is the corresponding desired output, L denotes the number of layers in the network, and \alpha^{L} = \alpha^{L}(x) is the vector of activations output by the network when x is the input.
The proposed sentiment assessment model employs LSTM, a variant of the recurrent neural network (RNN). LSTMs help preserve the error that is back-propagated through time and layers. They allow an RNN to learn continuously over many time steps by maintaining a constant error flow. An RNN maintains a memory, which distinguishes it from feedforward networks. LSTMs hold information outside the normal flow of the RNN in a gated cell. The process of carrying memory forward can be expressed mathematically as:

h_t = \phi(W x_t + U h_{t-1}), \quad (5)

where h_t is the hidden state at time t, W is the weight matrix, U is the transition matrix, and \phi is the activation function.
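As a concrete illustration of the recurrence in Eq. (5), the following minimal sketch implements the basic hidden-state update in plain numpy; the dimensions and the tanh activation are illustrative assumptions, and a full LSTM additionally maintains gated cell states on top of this step.

```python
# Minimal numpy sketch of Eq. (5): h_t = phi(W x_t + U h_{t-1}).
# Dimensions and the tanh activation are illustrative assumptions.
import numpy as np

def rnn_forward(X, W, U, phi=np.tanh):
    """X: input sequence of shape (T, d_in); returns hidden states of shape (T, d_h)."""
    h = np.zeros(U.shape[0])
    states = []
    for x_t in X:
        h = phi(W @ x_t + U @ h)   # carry the memory h forward through time
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
H = rnn_forward(rng.normal(size=(5, 8)),          # 5 time steps, 8-dim inputs
                0.1 * rng.normal(size=(16, 8)),   # W: (d_h, d_in)
                0.1 * rng.normal(size=(16, 16)))  # U: (d_h, d_h)
print(H.shape)  # (5, 16)
```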
Related Work
Reactions to Events in Social Media
There is a large body of literature concerning people's reactions to events expressed in social media, which can generally be distinguished by the type of event the response is related to and by the aim of the study [5]. Types of events cover natural disasters, health-related events, criminal and terrorist events, and protests, to name a few. Studies have been conducted for various purposes, including examining the information spreading pattern on Twitter for Ebola [6] and for the coronavirus outbreak [7], tracking and understanding public reaction during pandemics on Twitter [8,9], investigating insights that Global Health can draw from social media [10], and conducting content and sentiment analysis of tweets [11].
Sentiment Polarity Assessment
Sentiment analysis on Twitter data has been an area of wide interest for more than a decade. Researchers have performed sentiment polarity assessment on Twitter data for various application domains such as for donations and charity [12], students' feedback [13], on stocks [14][15][16], predicting elections [17], and understanding various other situations [18]. Most approaches found in the literature have performed lexicon-based sentiment polarity detection via a standard NLP-pipeline (pre-processing steps) and POS tagging steps for SentiWordNet, MPQA, SenticNet or other lexicons. These approaches compute a score for finding polarity of the Tweet's text as the sum of the polarity conveyed by each of the micro-phrases m which compose it [19], given as:
Pol(m_i) = \frac{\sum_{j=1}^{k} score(term_j) \cdot w_{pos(term_j)}}{|m_i|}, \quad (6)
where w_{pos(term_j)} is greater than 1 if pos(term_j) is an adverb, verb, or adjective, and equal to 1 otherwise.
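A minimal sketch of this scoring rule is given below; the toy lexicon, POS weights, and micro-phrase representation are illustrative assumptions rather than any specific lexicon from the cited works.

```python
# Minimal sketch of the micro-phrase polarity score of Eq. (6).
# The lexicon entries and POS weights are toy assumptions.
POS_WEIGHTS = {"ADV": 1.5, "VERB": 1.5, "ADJ": 1.5}   # w_pos > 1 for adverbs/verbs/adjectives
LEXICON = {"good": 0.7, "bad": -0.6, "happy": 0.8}    # score(term_j)

def micro_phrase_polarity(terms):
    """terms: list of (token, pos_tag) pairs forming one micro-phrase m_i."""
    if not terms:
        return 0.0
    weighted = sum(LEXICON.get(tok, 0.0) * POS_WEIGHTS.get(pos, 1.0)
                   for tok, pos in terms)
    return weighted / len(terms)                      # normalize by |m_i|

print(micro_phrase_polarity([("happy", "ADJ"), ("day", "NOUN")]))  # 0.6
```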
The abundance of literature on the subject led Kharde et al. [20] and others [21][22][23] to present surveys of conventional machine learning- and lexicon-based methods as well as deep learning-based techniques for analyzing tweets for polarity assessment, i.e., positive, negative, and neutral.
The authors in [24] address the issue of spreading public concern about epidemics using Twitter data. A sentiment classification approach comprising two steps is used to measure people's concerns. The first step distinguishes personal tweets from the news, while the second step separates negative from non-negative tweets. To achieve this, two main types of methods were used: 1) an emotion-oriented, clue-based method to automatically generate training data, and 2) three different Machine Learning (ML) models to determine the one which gives the best accuracy.
Exploratory sentiment classification in the context of COVID-19 tweets is investigated in the study conducted by Samuel et al. [25]. Two machine learning techniques, namely Naïve Bayes and Logistic Regression, are used to classify positive and negative sentiment in tweets. Moreover, the performance of these two algorithms for sentiment classification is tested using two groups of data containing different lengths of tweets. The first group comprises shorter tweets with less than 77 characters, and the second one contains longer tweets with less than 120 characters. Naïve Bayes achieved an accuracy of 91.43% for shorter tweets and 57.14% for longer tweets, whereas a worse performance is obtained by Logistic Regression, with an accuracy of 74.29% for shorter tweets and 52% for longer tweets, respectively. Twitter sentiment classification of Indians after the lockdown imposed during the COVID-19 outbreak is explored by the authors in [26]. A total of 24,000 tweets collected from March 25th to March 28th, 2020 using the two prominent keywords #IndiaLockdown and #IndiafightsCorona are used for analysis. The results revealed that even though there were negative sentiments expressed about the lockdown, tweets containing positive sentiments were quite present.
Emotion Classification
Hassan et al. [27] utilized the Circumplex model that characterizes affective experience along two dimensions: valence and arousal for detecting emotions in Twitter messages. The authors build the lexicon dictionary of emotions from emotional words from LIWC 3 (Linguistic Inquiry & Word Count). They extracted uni-grams, emoticons, negations and punctuation as features to train conventional machine learning classifiers in a supervised manner. They achieved an accuracy of 90% on tweets. The study conducted by Fung et al. [28] examines how people reacted to the Ebola outbreak on Twitter and Google. A random sample of tweets are examined, and the results showed that many people expressed negative emotions, anxiety, anger, which were higher than those expressed for influenza. The findings also suggested that Twitter can provide valuable information on people's anxiety, anger, or negative emotions, which could be used by public authorities and health practitioners to provide relevant and accurate information related to the outbreak.
The authors in [29] investigate people's emotional response during the Middle East Respiratory Syndrome (MERS) outbreak in South Korea. They used eight emotions to analyze people's responses. Their findings revealed that 80% of the tweets were neutral, while anger and fear dominated the tweet concerning the disease. Moreover, the anger increased over time, mostly blaming the Korean government while there was a decline in fear and sadness responses over time. This observation, as per the authors, was understandable as the government was taking strict actions to prevent the infection, and the number of new MERS cases decreased as time went by. The important finding was that the surprise, disgust, and happiness were more or less constant. A similar study is conducted by the researchers in [7]. The study focuses on emotional reactions during the COVID-19 outbreak by exploring the tweets. A random sample of 18,000 tweets is examined for positive and negative sentiment along with eight emotions, including anger, anticipation, disgust, fear, joy, sadness, surprise, trust. The findings showed that there exists an almost equal number of positive and negative sentiments, as most of the tweets contained both panic and comforting words. The fear among the people was the number one emotion that dominated the tweets, followed by the trust of the authorities. Also, emotions such as sadness and anger of people were prevalent.
Dataset
We used two tweet datasets in this study for sentiment polarity detection and emotion recognition: the trending hashtag # data that we collected ourselves, explained in subsection 4.1, and the Kaggle dataset presented in subsection 4.2.

We additionally used the Sentiment140 [30] and Emotional Tweets [31] datasets to train our proposed deep learning models. The reasons for using these two particular datasets for training the models are: (i) the availability of manually labeled state-of-the-art datasets and (ii) the lack of labeled tweets extracted from Twitter. The focus of our study is six neighboring countries from three continents having similar cultures and circumstances. These include Pakistan, India, Norway, Sweden, USA, and Canada. We specifically opted for these six countries for cross-cultural analysis due to their size, the approach adopted by their respective governments, popularity, and cultural similarity.
Trending Hashtag Data
We retrieved and collected the trending hashtag # tweets ourselves due to the lack of publicly available datasets for the initial period of the COVID-19 outbreak. For instance, #lockdown was trending across the globe during February 2020; #StayHome was trending in Sweden, while COVID-19 was trending throughout the period February - April 2020. Figure 2 shows the total number of tweets per country for trending hashtags # between 3rd February and 29th February 2020. For this study, we only retrieved the trending hashtag # tweets from the six countries mentioned earlier for the initial phase of the pandemic.
Data collection procedure
A standard Twitter search API, accessed through Tweepy, is used to fetch users' tweets. Multiple queries are executed via Tweepy containing the trending keywords #Coronavirus, #COVID_19, #COVID19, #COVID19Pandamic, #Lockdown, #StayHomeSaveLives, and #StayHome for the period

T_p = \{S_d, E_d\},

where S_d is the starting date, i.e., when the first case of a corona patient is reported in a given country/region, and E_d is the end date. The keywords are chosen based upon the trending keywords during T_p. Only tweets in English for a given region are cataloged for further processing, containing Tweet ID, text, user name, time, and location.
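For illustration, a minimal sketch of how the query period T_p and the keyword set could be encoded is shown below; the per-country dates follow the first-case dates reported later in Table 8, but the exact periods used in the study are not restated here, so they should be read as assumptions.

```python
# Illustrative encoding of the keyword set and the query period T_p = {S_d, E_d}.
# The dates are assumptions keyed to the first reported case per country (cf. Table 8).
from datetime import date

KEYWORDS = ["#Coronavirus", "#COVID_19", "#COVID19", "#COVID19Pandamic",
            "#Lockdown", "#StayHomeSaveLives", "#StayHome"]

QUERY_PERIODS = {                         # country -> (S_d, E_d)
    "Pakistan": (date(2020, 2, 26), date(2020, 4, 30)),
    "Norway":   (date(2020, 2, 26), date(2020, 4, 30)),
    "Sweden":   (date(2020, 1, 31), date(2020, 4, 30)),
}

def in_period(tweet_date, country):
    """Return True if a tweet's date falls inside the country's query period."""
    start, end = QUERY_PERIODS[country]
    return start <= tweet_date <= end

print(in_period(date(2020, 3, 15), "Norway"))  # True
```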
Data preparation
The PRISMA 4 approach is adopted in this study to query COVID-19-related tweets and to filter out irrelevant ones. The following pre-processing steps are applied to clean the retrieved tweets (a minimal code sketch of these steps follows the list):
1. Removal of mentions and colons from tweet text.
2. Replacement of consecutive non-ASCII characters with space.
3. Tokenization of tweets.
4. Removal of stop-words and punctuation via NLTK library.
5. Tokens are appended to obtain cleaned tweets.
6. Extraction of emoticons from tweets.
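A minimal sketch of these cleaning steps is shown below; the regular expressions, the emoticon character range, and the choice to extract emoticons before removing non-ASCII characters are simplifying assumptions.

```python
# Minimal sketch of the pre-processing steps listed above (illustrative only).
# Regex patterns and the emoticon range are simplifying assumptions; emoticons
# are extracted first so that the non-ASCII removal (step 2) does not drop them.
import re
import string
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF]")        # rough emoticon range

def clean_tweet(text):
    emoticons = EMOJI_RE.findall(text)                   # step 6
    text = re.sub(r"@\w+|:", " ", text)                  # step 1: mentions and colons
    text = re.sub(r"[^\x00-\x7F]+", " ", text)           # step 2: non-ASCII runs
    tokens = word_tokenize(text.lower())                 # step 3: tokenization
    tokens = [t for t in tokens if t not in STOPWORDS
              and t not in string.punctuation]           # step 4: stop-words, punctuation
    return " ".join(tokens), emoticons                   # step 5: cleaned tweet

print(clean_tweet("@WHO Stay safe everyone!! #COVID19"))
```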
Kaggle Dataset
We further included tweets for the period of March to April 2020 from a publicly available dataset, since after data preparation we were left with only a small number of tweets from the Nordic countries. Table 1 shows the number of tweets per country under consideration for the Kaggle dataset 5 from the 12th of March to the 30th of April 2020. The total number of tweets is 460,286, out of which USA tweets contribute 73%. The hashtags # applied to retrieve the Kaggle dataset tweets include #coronavirus, #coronavirusoutbreak, #coronavirusPandemic, #covid19, and #covid_19. From the 17th of March till the end of April, two more hashtags were included, i.e., #epitwitter and #ihavecorona.
Sentiment140 Dataset
We used the Sentiment140 dataset from Stanford [30] for training our sentiment polarity assessment Classifier A, presented in section 5.1. This dataset contains an overwhelming number of positive and negative tweets: each category contains 0.8 million tweets, for a staggering total of 1.6 million tweets. We particularly opted for this dataset to train our deep learning models in a supervised manner due to the unavailability of labeled tweets related to COVID-19.
Emotional Tweets Dataset
The Emotional Tweets dataset is utilized in this study to train Classifier B and Classifier C for emotion recognition, described in sections 5.2 and 5.3, respectively. The tagging process of this dataset is reported by Saif et al. in [31]. The dataset contains six classes, as summarized in Table 2. The first two labels, joy and surprise, are positive emotions, whereas the remaining four, sadness, fear, anger, and disgust, are negative emotions. The dataset comprises 21,051 labeled tweets in total.
Model for Sentiment and Emotion Analysis
The literature suggests many attempts at tweet sentiment analysis, but very few attempts at emotion classification. Sentiment analysis on tweets refers to the classification of an input tweet text into sentiment polarities, including positive, negative, and neutral, whereas emotion classification refers to classifying the tweet text into emotion labels including joy, surprise, sadness, fear, anger, and disgust.

Sentiment polarity certainly conveys meaningful information about the subject of the text; however, emotion classification is the next level. It suggests, if the sentiment about the subject is negative, to what extent it is negative -being negative with anger is a different state of mind than being negative and disgusted. Therefore, it is important to extend the task of sentiment polarity classification to the next level and identify emotions within negative and positive sentiment polarities. The rest of this section explains the working of each of the components in the abstract model depicted in Figure 1. All the models and Jupyter Notebooks developed for this paper are available in the paper's GitHub repository 6.
Sentiment Assessment -Classifier A
The first-stage classifier in our model classifies an input tweet text into either positive or negative polarity. For this, we employed the Sentiment140 dataset explained in section 4.3 -the most popular dataset for such polarity classification tasks. For developing our first-stage model, we padded each input tweet to ensure a uniform size of 280 characters, which is the standard maximum tweet size.
To establish a baseline model, a simple deep neural network based on an embedding layer, a max-pooling layer, and three dense layers with 128, 64, and 32 outputs was developed. The last layer uses sigmoid as its activation function, as it performs better in binary classification, whereas all intermediate layers use ReLU as the activation function. This baseline model splits the 1.6 million tweets into training and test sets, with 10% of the tweets (160,000 tweets) spared for testing the model. The remaining 90% of the tweets were further divided in a 90/10 ratio for training and model validation, respectively. The model training and validation was set to ten epochs; however, the model overfits immediately after two epochs, therefore it was retrained for two epochs to avoid overfitting. The training and validation accuracy of the baseline model was 96% and 81%, respectively. Table 3 summarizes the training and validation accuracy for each of the five proposed models along with the model structures. Figure 3 shows the structure of the best performing model, i.e., the LSTM with FastText model. Table 4 shows the F1 and accuracy scores on the test set -10% of the dataset comprising 160,000 tweets equally divided into positive and negative polarities. The table also presents the previously best-reported accuracy and F1 score on the dataset, as reported in [32]. The model proposed in this paper based on FastText outperforms all other models, including the previously best-reported accuracy. Therefore, we choose this model as our first-stage classifier to classify tweets into positive and negative polarities.
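For concreteness, a minimal Keras sketch of the baseline architecture described above is given below; the vocabulary size, embedding width, and the final single-unit sigmoid output layer are assumptions, since the text only specifies the 128/64/32 dense widths and the activation functions.

```python
# Minimal sketch of the baseline dense model described above (illustrative only).
# Vocabulary size, embedding width and the single-unit sigmoid output are assumptions.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary
MAX_LEN = 280        # padded tweet length

baseline = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64, input_length=MAX_LEN),
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary polarity output
])
baseline.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
baseline.summary()
```

Under this sketch, the LSTM-based variants of Table 3 would replace the pooling layer with a recurrent LSTM layer.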
Emotion Recognition -Classifier B
Once the polarity from Classifier A is positive, the next step is to identify positive emotions in the tweet. In order to extract tweet emotions, we use the Emotional Tweets dataset presented in section 4.4. If the label from the first-stage Classifier A is positive, the text is applied to Classifier B to determine the exact positive emotion -joy or surprise.
In order to extract positive emotions from the positive tweets, the negative emotions' labels were removed at Classifier B, leaving only two positive labels -joy and surprise. Repeating the same experiments as in Classifier A, the performance of five models was tested for this classification task. The test accuracy for each of these models is reported in Table 5. The model based on Glove.twitter.27B.300d pre-trained embedding with LSTM outperforms the other four models; therefore, we use LSTM with GloVe embedding at this stage.
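As an illustration of how a pre-trained GloVe embedding can be plugged into such a classifier, the sketch below builds an embedding matrix from a GloVe vectors file; the file name, vector dimension, and toy word index are assumptions rather than the exact resources used in the study.

```python
# Minimal sketch of loading pre-trained GloVe vectors into a Keras Embedding layer.
# File name, dimension and the toy word index are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers

EMBED_DIM = 200
word_index = {"joy": 1, "surprise": 2}       # assumed tokenizer word index

vectors = {}
with open("glove.twitter.27B.200d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")

matrix = np.zeros((len(word_index) + 1, EMBED_DIM))
for word, idx in word_index.items():
    if word in vectors:
        matrix[idx] = vectors[word]          # rows for out-of-vocabulary words stay zero

embedding_layer = layers.Embedding(len(word_index) + 1, EMBED_DIM,
                                   weights=[matrix], trainable=False)
```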
Emotion Recognition -Classifier C
The final classifier at the second stage is Classifier C, which classifies negative-polarity tweets into negative emotions. As reported in Table 2, there are four labels in the negative emotion category; however, we drop the fourth category, disgust, as it has very few instances and causes performance degradation due to the dataset being imbalanced. We performed experiments on the remaining three labels with our five models. Table 6 summarises the models' performance on the 10% test data. Once again, the classifier based on LSTM with the pre-trained embedding Glove.twitter.27B.300d outperforms the other four models; therefore, we use it for classifying negative-polarity tweets into negative emotions -sadness, anger and fear. Figure 4 shows the structure of the model for Classifiers B and C.

The GloVe (Global Vectors for Word Representation) embedding used in Classifiers B and C is a model for word representation trained on five corpora: a 2010 Wikipedia dump with 1 billion tokens; a 2014 Wikipedia dump with 1.6 billion tokens; Gigaword 5, which has 4.3 billion tokens; the combination Gigaword 5 + Wikipedia 2014, which has 6 billion tokens; and 42 billion tokens of web data from Common Crawl. The process of learning GloVe word embeddings is explained in [33].
Similarly, the FastText word embedding used in our Classifier A is an extension of the word2vec model. FastText represents words as n-grams of characters. For example, to represent the word computer with n = 3, the FastText representation is <co, com, omp, mpu, put, ute, ter, er>. More detailed information on the integration of general-purpose word embeddings like GloVe and FastText with deep learning within a classification system can be found in [34].
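The following tiny sketch reproduces the character n-gram construction from the example above; it is illustrative only and not the internal implementation of the FastText library.

```python
# Minimal sketch of FastText-style character n-grams with boundary markers.
def char_ngrams(word, n=3):
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("computer"))
# ['<co', 'com', 'omp', 'mpu', 'put', 'ute', 'ter', 'er>']
```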
Validation Criteria
The lack of ground truth, i.e., labeled tweets, for the queried test dataset concerning COVID-19 required the use of emoticons as a mechanism to validate the detected results, both for positive and negative polarities and for emotions. We therefore propose the use of emoticons extracted from tweets to check whether a tweet's predicted polarity and emotions reflect the sentiments depicted via its emoticons. It may not be a perfect system, but it is a way to assess the accuracy of our proposed classifiers on more than a million tweets in a weakly supervised manner.
The use of emoticons in sentiment analysis is not something new. In fact, there is an abundance of literature that supports the notion of utilizing emoticons in sentiment analysis [35,36]. However, rather than using emoticons for sentiment detection, we use them for validating our model's performance. The emoticons were grouped into six categories, as described in Table 2. The type and description of emoticons used are depicted in Table 7 for each group category. We had a total number of 460,286 tweets from six selected countries in the English language. Out of these tweets, 443,670 tweets did not contain any emoticon, whereas 11,110 tweets used positive emoticon (joy, surprise), and 5,674 used negative emoticon (sad, disgust, anger, fear). The remaining tweets used a mix of emoticons like joy with disgust, anger with surprise, etc.; therefore, these usages of emoticons were considered sarcastic expressions of emotion, thus being excluded in the validation process.
We tested our Model #2 presented in Table 4 (based on LSTM + FastText trained on Sentiment140) on the remaining 16,784 positive and negative tweets. We used these 16,784 tweets as test data to assess model accuracy. The emoticons were considered actual labels and the model predicted the labels on the tweet text. The model achieved an accuracy of 76% and an F1 score of 78%. This indicates that our model is reasonably consistent with the users' sentiments expressed in terms of emoticons.
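A minimal sketch of this validation step is given below; the emoticon-to-polarity mapping abbreviates the groupings of Table 7, and the model predictions and extracted emoticons are assumed to be available from the earlier stages.

```python
# Minimal sketch of the emoticon-based validation (illustrative only).
# The mapping abbreviates the Table 7 groupings; y_pred is assumed to come
# from the polarity classifier and the emoticon lists from pre-processing.
from sklearn.metrics import accuracy_score, f1_score

POSITIVE = {"\U0001F600", "\U0001F602", "\U0001F603", "\U0001F60A", "\U0001F60D",
            "\U0001F632", "\U0001F62E", "\U0001F62F"}                 # joy, surprise
NEGATIVE = {"\U0001F613", "\U0001F614", "\U0001F61E", "\U0001F622",
            "\U0001F62D", "\U0001F620", "\U0001F621", "\U0001F628"}   # sad, anger, fear

def emoticon_label(emoticons):
    """1 = positive, 0 = negative, None = mixed or absent (excluded as sarcasm)."""
    pos = any(e in POSITIVE for e in emoticons)
    neg = any(e in NEGATIVE for e in emoticons)
    if pos and not neg:
        return 1
    if neg and not pos:
        return 0
    return None

def validate(emoticon_lists, y_pred):
    y_true, y_hat = [], []
    for emoticons, pred in zip(emoticon_lists, y_pred):
        label = emoticon_label(emoticons)
        if label is not None:          # drop mixed/absent emoticon tweets
            y_true.append(label)
            y_hat.append(pred)
    return accuracy_score(y_true, y_hat), f1_score(y_true, y_hat)
```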
The reason for the good accuracy achieved in the validation phase is that our validation process is essentially the same as the process used in preparing the Sentiment140 dataset -the dataset on which our model for sentiment polarity assessment is based.
Results on Trending Hashtag # Data
The proposed Model #2, which achieved state-of-the-art polarity assessment accuracy on the Sentiment140 dataset, was used to detect polarity and emotions on the trending hashtag # data. Figure 5 shows the side-by-side country-wise comparison of sentiment polarity detection for the initial period of four weeks. The sentiments are normalized to 0 - 1 as the number of tweets per day divided by the total number of tweets for a given country. As can be seen from the graphs illustrated in Figure 5, only a few tweets concerning the coronavirus outbreak were posted over almost all of February. There were also a few days on which no tweets were posted, especially in Pakistan and India. It is interesting to note that the number of tweets increased rapidly only in the last 2-3 days of February, and all six countries see this growing trend among Twitter users of sharing their attitudes, i.e., positive and negative, about coronavirus.
The graphs between neighboring Sweden and Norway (top-row) and that of Canada and USA (middle-row) have a similar pattern of tweets' emotions, unlike Pakistan and India (bottom-row). In India, people's reaction seems quite strong, as evident from the average number of positive and negative posts (yellow and blue horizontal line). The reason could be the early outbreak of COVID-19 in India, i.e., 30 th of January 2020. A similar pattern was observed for Canada probably because they had their first positive case reported during the same time as well.
Discussion & Analysis

Polarity Assessment Analysis between Neighbouring Countries

Figure 6 gives an overview of the side-by-side country-wise sentiment for both negative and positive polarity. The sentiments are normalized to 0 - 1. As can be seen in Figure 6 (top-left), the attitude of Swedes towards the coronavirus outbreak has changed over time. The peak of negative comments expressed on Twitter was registered on March 22, a day before the Prime Minister made a rare public appearance addressing the nation over the coronavirus outbreak. It is fascinating to note that on the day of the Prime Minister's speech there was an equal number of positive (top-right) and negative (top-left) sentiments, while a day after, positive emotions dominated the tweets, showing Swedes' trust in the Government with respect to the outbreak.

There is an equal number of negative sentiments for both Norway and Sweden over the entire period, whereas the average polarity for positive sentiments is higher in the case of Sweden compared to Norway. A gradual decline in the positive trend for Norway can be observed in the (top-right) plot of the figure. Till May 1st, 2020, the positive sentiments (blue line) for Norway were above the average (orange line), after which they started to decline. Figure 7 shows the actual number of persons tested positive in Norway during the same period (data source 7). The percentage of positive cases in the chart is based upon the total number of persons tested each day. The number of registered positive cases increased from the second week of March 2020 till the first week of April, after which it dropped, which is in line with the sentiments expressed by the users, which started to decline during the same week (Figure 6, top-left and -right).
The trends of positive and negative sentiments between Pakistan and India, and between the USA and Canada, are very similar, as evident from the middle and bottom charts in Figure 6. A closer look at the average sentiments between Pakistan and India reveals that Indians expressed more negative sentiments than Pakistanis (middle-left). Also, a significant number of positive posts appeared for Pakistan (middle-right), showing that people placed some trust in the Government's decisions. This is partially attributed to Pakistan's Prime Minister addressing the nation on coronavirus on multiple occasions (March 17th and March 22nd) before the lockdown. It is worth noting that the first case in India was reported on 30th January 2020 and in Pakistan on 26th February 2020; however, both countries went into lockdown around the same time, i.e., 21st of March for Pakistan and 24th of March for India. Table 8 shows when the first COVID-19 case was reported in each country and the day it went into lockdown. It is also worth mentioning that at the beginning of April the number of tweets declined, and so did the sentiment representation, which dropped below the average for all the countries except Sweden, where a significant number of positive sentiments can still be observed (top-right). Moreover, Pakistan had the least negative sentiments (i.e., avg = 0.201 - yellow line - (middle-left)), whereas Swedes were more positive (i.e., avg = 3.98 - yellow line - top-right). This could be attributed to the fact that most businesses ran as usual in Sweden. In the case of Pakistan, the number of cases during the initial period was still low, as anticipated by the Government. Additionally, people did not strictly observe the standard operating procedures enforced by the state, despite the country being in lockdown. A similar trend was observed in India; however, the Government there imposed a much stricter shutdown, though it came quite late since the first case was reported in late January, which may have triggered more negative posts than positive ones.
Emotion Assessment Analysis Between Neighbouring Countries
We observed that there was a visible difference between the sentiments expressed by the people of Norway and Sweden (Figure 6). We further analyze these two countries in detail in our study of emotional behavior. The results are depicted in Figure 8. Positive emotions are presented in the left figures and negative emotions on the right -the graph shows which emotions dominated over a period of time. The graph is scaled between 0 and 25 for better readability. It represents the cumulative emotions stacked on top of each other.
As we can see from Figure 8 (top-left), in both countries joy dominates the positive tweets, whereas sadness and fear are the most commonly shared negative emotions, with anger being shared less. The pattern for Norway, in particular, is in line with the actual statistics for positive cases reported by the Norwegian Institute of Public Health (NIPH) (Figure 7).
Correlation Analysis
Additionally, we analyze the Pearson correlation between neighboring countries to see the sentiment polarity and emotion trends during the COVID-19 lockdown. As can be seen in Table 9, there is a high correlation between the USA and Canada (US-CA), and between Pakistan and India (PK-IN), unlike between Norway and Sweden (NO-SW). The correlation for NO-SW is around 50% for negative and 40% for positive sentiments. This shows that the sentiments expressed in tweets on Twitter by the people of the two countries were different during the same period. A possible reason for this is the different approach that these two countries have taken to the outbreak. A similar trend can be observed for emotions, depicted in Table 10. Pakistan and India have the highest correlation across all five emotions, followed by the USA and Canada, while Norway and Sweden have the least number of tweets sharing common polarity, as evident from the emotions "surprise" and "anger". A possible explanation for this is the response of people to their respective Governments' decisions on COVID-19, especially to the lockdown restrictions. There were a few Swedes who felt surprised and angry towards the Swedish Government's decision not to impose any lockdown measures and its choice to go for herd immunity. For example, the tweet "Tweet No 416: A mad experiment 10 million people #Coronasverige #COVID19 #SWEDEN" expresses both feelings, surprise and anger, of the user about the decision of the Swedish Government. On the other side, users from Norway did not express these feelings, as their Government followed the approach adopted by most of the countries in the world by imposing lockdown measures from the very beginning of the outbreak. For instance, "Tweet No 103: Norway closing borders, airports, harbours from Monday 16th 08:00. The Norwegian government taking Corona #Covid_19 seriously I wish us best hope survive" shows people's faith in the Norwegian Government's decision.
Findings concerning RQ's
Following the sentiments and emotions detected by the proposed model and the analysis of results presented in the previous subsections, for (RQ1) it is safe to assume that NLP-based deep learning models can provide, if not complete, at least some cultural and emotional insight into cross-cultural trends. It is still difficult to say to what extent, as for non-native English-speaking countries the number of tweets was far smaller than that of the USA for any statistically significant observations. (RQ2) Nevertheless, the general observations of users' concerns and their response to respective Governments' decisions on COVID-19 resonate with the sentiments analyzed from the tweets. (RQ3) It was observed that there is a very high correlation between the sentiments expressed between neighbouring countries within a region (Tables 9 and 10). For instance, Pakistan and India, similarly to the USA and Canada, have similar polarity trends, unlike Norway and Sweden. (RQ4) Both positive and negative emotions were equally observed concerning #lockdown; however, in Pakistan, Norway, and Canada the average number of positive tweets was higher than the negative ones (Figures 5 and 6).
Conclusion
This paper aimed to find the correlation between the sentiments and emotions of people from neighboring countries amidst the coronavirus (COVID-19) outbreak, based on their tweets. Deep learning LSTM architectures utilizing pre-trained embedding models, which achieved state-of-the-art accuracy on the Sentiment140 dataset and the Emotional Tweets dataset, are used for detecting both sentiment polarity and emotions from users' tweets on Twitter. Initial tweets right after the pandemic outbreak were extracted by tracking the trending hashtags # during February 2020. The study also utilized the publicly available Kaggle tweet dataset for March - April 2020. Tweets from six neighboring countries are analyzed, employing NLP-based sentiment analysis techniques. The paper also presents a unique way of validating the proposed model's performance via emoticons extracted from users' tweets. We further cross-checked the detected sentiment polarity and emotions against various published sources on the number of positive cases reported by respective health ministries and published statistics.
Our findings showed a high correlation between the polarity of tweets originating from the USA and Canada, and from Pakistan and India. In contrast, despite many cultural similarities, the tweets posted following the corona outbreak in the two Nordic countries, i.e., Sweden and Norway, showed quite the opposite polarity trend. Although joy and fear dominated in both countries, the positive polarity dropped below the average for Norway much earlier than for the Swedes. This may be due to the lockdown imposed in Norway for a good month and a half before the Government decided to ease the restrictions, whereas the Swedish Government went for herd immunity, which was equally supported by the Swedes. Nevertheless, the average number of positive tweets was higher than the average number of negative tweets for Norway. The same trend was observed for Pakistan and Canada, where the positive tweets outnumbered the negative ones. We further observed that the number of negative and positive tweets started dropping below the average sentiments in the first and second weeks of April for all six countries.
This study also suggests that NLP-based sentiment and emotion detection can not only help identify cross-cultural trends but also make it plausible to link actual events to users' emotions expressed on social platforms with high certitude, and that despite socio-economic and cultural differences, there is a high correlation of sentiments expressed given a global crisis -such as in the case of the coronavirus pandemic. Deep learning models, on the other hand, can further be enriched with semantically rich representations using ontologies, as presented in [37,38], for effectively grasping one's opinion from tweets. Moreover, advanced seq2seq-type language models as word embeddings can be explored as future work.
Figure 2: Total No. of Tweets per Country for the Period 3rd to 29th Feb. 2020 for Trending Hashtags #.

Figure 3: LSTM + FastText Model Summary.

Figure 4: LSTM + GloVe Model Summary.

Figure 5: Side-by-Side Country-Wise Comparison of Sentiment Analysis on Trending Hashtag # Data for the Period Feb. 3 to Feb. 29, 2020. Positive and Negative Sentiment Graphs Along with the Averaged Tweets' Polarity for Sweden (top-left), Norway (top-right), Canada (middle-left), USA (middle-right), Pakistan (bottom-left), and India (bottom-right).

Figure 6: Side-by-side country-wise comparison of sentiment analysis: (top-left) negative polarity between NO - SW, (top-right) positive polarity between NO - SW; (middle-left) negative polarity between PK - IN, (middle-right) positive polarity between PK - IN; (bottom-left) negative polarity between US - CA, (bottom-right) positive polarity between US - CA.

Figure 7: No. of positive cases reported in Norway between 24th Feb. and 30th May, 2020.

Figure 8: Side-by-Side Country-Wise Comparison of Emotions Between Sweden and Norway: (left-side) +ve emotions, (right-side) -ve emotions.
The following items are cataloged for each tweet: Tweet ID, Time, Original Text, Cleaned Text, Polarity, Subjectivity, User Name, User Location, and Emoticons. A total of 27,357 tweets were extracted after pre-processing and filtering, as depicted in Table 1.

Table 1: No. of Tweets per Country for the Trending Hashtag # Dataset (After Filtering) and the Kaggle Dataset.

Sr.#  Country   Trending Hashtag # Dataset  Kaggle Dataset
1     Pakistan  2501                        9869
2     India     8455                        70392
3     Norway    168                         476
4     Sweden    571                         816
5     Canada    5367                        42127
6     USA       10295                       336606
      Total     27357                       460286
Table 2: Emotional Tweet Dataset Containing Six Class Labels for Positive and Negative Sentiment Polarity.

Sr.#  Class Label  Number of Instances  Sentiment Polarity
1     Joy          8240                 Positive
2     Surprise     3849                 Positive
3     Sad          3830                 Negative
4     Fear         2816                 Negative
5     Anger        1555                 Negative
6     Disgust      761                  Negative
Table 3: Training-Validation Accuracy on the Sentiment140 Dataset for the Five Proposed Deep Learning Models.
Table 5: F1 and Accuracy Scores of the Five Proposed Models on Positive Emotions (Joy and Surprise).
Table 6: F1 and Accuracy Scores of the Five Models on Negative Emotions (Sad, Anger, Fear).

Table 7: Grouping of the Emoticons Based on the Emotions.

Sr  Emotion   Unicode  Description
1   Joy       1F600    grinning face
              1F602    face with tears of joy
              1F603    smiling face with open mouth
              1F604    smiling face with open mouth and open eyes
              1F605    smiling face with open mouth and cold sweat
              1F606    smiling face with open mouth and tightly-closed eyes
              1F60A    smiling face with smiling eyes
              1F60D    smiling face with heart-shaped eyes
2   Surprise  1F632    astonished face
              1F62E    face with open mouth
              1F62F    hushed face
3   Sad       1F613    face with cold sweat
              1F614    pensive face
              1F61E    disappointed face
              1F622    crying face
              1F62D    loudly crying face
              1F623    persevering face
4   Anger     1F620    angry face
              1F621    pouting face
              1F624    face with look of triumph
5   Fear      1F628    fearful face
              1F632    face screaming in fear
6   Disgust   1F62C    grimacing face
Table 8: Initial COVID-19 Case and the Lockdown Dates. *Lockdown dates vary for different states/provinces.

Sr.#  Country   First COVID-19 Case  Lockdown Date
1     Pakistan  26th Feb.            21st Mar.*
2     India     30th Jan.            24th Mar.
3     Norway    26th Feb.            12th Mar.
4     Sweden    31st Jan.            No Lockdown
5     USA       21st Jan.            19th Mar.*
6     Canada    25th Jan.            26th Mar.
Table 9: Correlation for Sentiment Polarity Between Neighbouring Countries.

No.  Sentiments  US-CA  PK-IN  NO-SW
1    Positive    0.967  0.816  0.402
2    Negative    0.971  0.860  0.517
Table 10: Correlation for Emotions Between Neighbouring Countries.

No  Correlation b/w  Joy    Surprise  Sad    Fear   Anger
1   US-CA            0.795  0.740     0.877  0.718  0.673
2   PK-IN            0.962  0.959     0.953  0.945  0.913
3   NO-SW            0.229  0.161     0.343  0.375  0.190
1 https://www.tweepy.org
2 https://www.nltk.org
3 http://www.liwc.net/
4 http://www.prisma-statement.org
5 https://www.kaggle.com/smid80/coronavirus-covid19-tweets
6 https://github.com/sherkhalil/COVID19
7 https://www.fhi.no/en/id/infectious-diseases/coronavirus/daily-reports/daily-reports-COVID19/
To date (i.e., the first week of May 2020), the pandemic is still rising in other parts of the world, including Brazil and Russia. It would be interesting to observe more extended patterns of tweets across more countries to detect and assess people's behavior in dealing with such calamities. We hope and believe that this study will provide a new perspective to readers and the scientific community interested in exploring cultural similarities and differences from public opinions given a crisis, and that it could influence decision makers in transforming and developing efficient policies to better tackle the situation, safeguarding people's interests and the needs of the society.
Are there cross-cultural differences in reasoning. P Johnson-Laird, N Lee, Proceedings of the Annual Meeting of the Cognitive Science Society. the Annual Meeting of the Cognitive Science SocietyP. Johnson-Laird and N. Lee, "Are there cross-cultural differences in reasoning?" in Proceedings of the Annual Meeting of the Cognitive Science Society, 2006, pp. 459-464.
The geography of thought: How Asians and Westerners think differently. R Nisbett, Simon and SchusterR. Nisbett, The geography of thought: How Asians and Westerners think differently... and why. Simon and Schuster, 2004.
Why is denmark's coronavirus lockdown so much tougher than sweden's?" The Local. T L Dk, T. L. Dk, "Why is denmark's coronavirus lockdown so much tougher than sweden's?" The Local. [Online]. Avail- able: https://www.thelocal.dk/20200320/why-is-denmarks-lockdown-so-much-more-severe-than-swedens
A fast learning algorithm for deep belief nets. G E Hinton, S Osindero, Y.-W Teh, Neural computation. 187G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural computation, vol. 18, no. 7, pp. 1527-1554, 2006.
A conceptual framework for studying collective reactions to events in location-based social media. A Dunkel, G Andrienko, N Andrienko, D Burghardt, E Hauthal, R Purves, International Journal of Geographical Information Science. 334A. Dunkel, G. Andrienko, N. Andrienko, D. Burghardt, E. Hauthal, and R. Purves, "A conceptual framework for studying collective reactions to events in location-based social media," International Journal of Geographical Information Science, vol. 33, no. 4, pp. 780-804, 2019.
How did ebola information spread on twitter: broadcasting or viral spreading?. H Liang, I C Fung, Z T H Tse, J Yin, C.-H Chan, L E Pechta, B J Smith, R D Marquez-Lamed, M I Meltzer, K M Lubell, K.-W Fu, BMC Public Health. 19438H. Liang, I. C.-H. Fung, Z. T. H. Tse, J. Yin, C.-H. Chan, L. E. Pechta, B. J. Smith, R. D. Marquez-Lamed, M. I. Meltzer, K. M. Lubell, and K.-W. Fu, "How did ebola information spread on twitter: broadcasting or viral spreading?" BMC Public Health, vol. 19, no. 438, pp. 1-11, 2019.
Informational flow on twitter -corona virus outbreak -topic modelling approach. R P Kaila, A K Prasad, International Journal of Advanced Research in Engineering and Technology (IJARET). 113R. P. Kaila and A. K. Prasad, "Informational flow on twitter -corona virus outbreak -topic modelling approach," International Journal of Advanced Research in Engineering and Technology (IJARET), vol. 11, no. 3, pp. 128- 134, 2020.
Twitter informatics: Tracking and understanding public reaction during the 2009 swine flu pandemic. M Szomszor, P Kostkova, C S Louis, ACM International Conferences on Web Intelligence and Intelligent Agent Technology. 1M. Szomszor, P. Kostkova, and C. S. Louis, "Twitter informatics: Tracking and understanding public reaction during the 2009 swine flu pandemic," in Proceedings of the IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, vol. 1, 2011, pp. 320-323.
How people react to zika virus outbreaks on twitter? a computational content analysis. K.-W Fu, H Liang, N Saroha, Z T H Tse, P Ip, I. C.-H Fung, American Journal of Infection Control. 4412K.-W. Fu, H. Liang, N. Saroha, Z. T. H. Tse, P. Ip, and I. C.-H. Fung, "How people react to zika virus outbreaks on twitter? a computational content analysis," American Journal of Infection Control, vol. 44, no. 12, pp. 1700- 1702, 2016.
#ebola and twitter. what insights can global health draw from social media. T Vorovchenko, P Ariana, F Van Loggerenberg, P Amirian, Proceedings of the Big Data in Healthcare: Extracting Knowledge from Point-of-Care Machines. P. Amirian, T. Lang, and F. van Loggerenbergthe Big Data in Healthcare: Extracting Knowledge from Point-of-Care MachinesT. Vorovchenko, P. Ariana, F. van Loggerenberg, and P. Amirian, "#ebola and twitter. what insights can global health draw from social media?" in Proceedings of the Big Data in Healthcare: Extracting Knowledge from Point-of-Care Machines, P. Amirian, T. Lang, and F. van Loggerenberg, Eds., 2017, pp. 85-98.
Pandemics in the age of twitter: Content analysis of tweets during the 2009 h1n1 outbreak. C Chew, G Eysenbach, PLoS ONE. 511C. Chew and G. Eysenbach, "Pandemics in the age of twitter: Content analysis of tweets during the 2009 h1n1 outbreak," PLoS ONE, vol. 5, no. 11, pp. 1-13, 2010.
| [
"https://github.com/sherkhalil/COVID19"
]
|
[
"Web Pages Clustering: A New Approach",
"Web Pages Clustering: A New Approach"
]
| [
"Prashanth P P #2Jeevan H E #1 \nDept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia\n",
"Punith Kumar \nDept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia\n",
"S N #3 \nDept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia\n",
"Vinay Hegde \nDept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia\n"
]
| [
"Dept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia",
"Dept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia",
"Dept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia",
"Dept. of Computer Science and Engineering\nRV College of Engineering\nBangaloreKarnatakaIndia"
]
| [
"INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING"
]
| The rapid growth of web has resulted in vast volume of information. Information availability at a rapid speed to the user is vital. English language (or any for that matter) has lot of ambiguity in the usage of words. So there is no guarantee that a keyword based search engine will provide the required results. This paper introduces the use of dictionary (standardised) to obtain the context with which a keyword is used and in turn cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance the search efficiency. | null | [
"https://arxiv.org/pdf/1108.5703v1.pdf"
]
| 17,223,428 | 1108.5703 | 25d8b142d7c033ded55d0c660c507b09d13c952f |
Web Pages Clustering: A New Approach
APRIL 2011
Prashanth P P #2Jeevan H E #1
Dept. of Computer Science and Engineering
RV College of Engineering
BangaloreKarnatakaIndia
Punith Kumar
Dept. of Computer Science and Engineering
RV College of Engineering
BangaloreKarnatakaIndia
S N #3
Dept. of Computer Science and Engineering
RV College of Engineering
BangaloreKarnatakaIndia
Vinay Hegde
Dept. of Computer Science and Engineering
RV College of Engineering
BangaloreKarnatakaIndia
Web Pages Clustering: A New Approach
INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING
APRIL 2011. Keywords: Clustering, concept mining, information retrieval, metasearch engine
The rapid growth of web has resulted in vast volume of information. Information availability at a rapid speed to the user is vital. English language (or any for that matter) has lot of ambiguity in the usage of words. So there is no guarantee that a keyword based search engine will provide the required results. This paper introduces the use of dictionary (standardised) to obtain the context with which a keyword is used and in turn cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance the search efficiency.
INTRODUCTION
As information availability increases with the growth of the web, the number of users who want to retrieve that information also increases. This has led to the rise of search engines. A search engine typically takes a keyword as a query, uses it to search an indexed database holding data about different web sites and their content, and presents the results to the user. But users still find it fairly difficult to find the exact information they require, even though it may be present on the web. There are various reasons for this.
One reason is that many users search the Internet with keywords that are ambiguous to a certain degree.
For example, if one searches for "keyboard" in a search engine expecting sites containing information about the musical instrument, he gets a list that is a mix of links to pages about the typing keyboard and the musical instrument.
Today we have many sophisticated search engines like Google, Yahoo, Bing etc. But still we are not guaranteed accurate search results. Apart from the above-mentioned reason, it may also be because a single search engine cannot index the entire web, which has grown to such a large extent. Every day thousands of new web sites are created and millions of existing pages get updated. Keeping track of every such detail is impossible.
In order to solve this problem, many metasearch engines have emerged, such as Excite, WebCrawler and so on, which further process search results gathered from many existing search engines, as explained in [1]. For example, Excite issues queries to three other search engines, including Google, Yahoo, and Bing. The results from these search engines are combined to find the most relevant pages. The advantage is obvious: people can quickly identify the information they need.
In this paper we propose a simple and effective method to cluster web pages and extract concepts from a keyword. We also introduce an improved ranking algorithm for metasearch engines.
II. WEB PAGES CLUSTERING AND CONCEPT MINING
A. Web Pages Clustering
Clustering can be considered the most important unsupervised learning problem; so, like every other problem of this kind, it deals with finding a structure in a collection of unlabeled data.
A loose definition of clustering could be "the process of organizing objects into groups whose members are similar in some way".
A cluster is therefore a collection of objects which are "similar" between them and are "dissimilar" to the objects belonging to other clusters as defined in [2].
Web page clustering, in particular, means removing irrelevant links from the obtained results. The results from multiple search engines are processed to obtain the final search result page. A result that appears in the results of more search engines will be listed above the others.
B. Concept Mining
Concept mining is an activity that results in the extraction of concepts from artefacts. Solutions to the task typically involve aspects of artificial intelligence and statistics, such as data mining and text mining. Because artefacts are typically a loosely structured sequence of words and other symbols (rather than concepts), the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents.
The idea is to use the dictionary available in the Internet to determine the different contexts in which the keyword can appear, that is, the same keyword explaining different concepts.
III. USE OF DICTIONARY
Concept mining, as mentioned earlier, involves Artificial Intelligence. Extracting concepts from the short text snippets retrieved from the search results may not be accurate enough. To achieve a good amount of accuracy, we may require the entire text to be available. Hence it can be computationally intensive and consume high bandwidth to function at an acceptable speed [3]. For the Internet environment, a better solution can be to use a dictionary. A dictionary can be used for the queries that the user gives. Each ambiguous word will lead to multiple meanings obtained from the dictionary. Based on these multiple meanings, clusters can be formed for each type of result. This clustering can be done in two ways. One is to process the search results and compare the context of the results with the meanings retrieved from the dictionary. This is again not straightforward and requires considerable data mining techniques [4]. Hence we propose a simple but efficient alternative technique: submit the retrieved meanings themselves as queries to the search engine. This eliminates the need for any data mining algorithm. Each result retrieved already belongs to a particular cluster (the meaning used for searching), so this also eliminates the need for a clustering algorithm. Now consider a query such as "Bank". The dictionary can provide meanings such as financial institution, side of a water body, and rely upon. The search engine can resolve the ambiguity by forming three clusters of results, one for each meaning. The meaning itself is sent to the search engine as a query. Further, the results can be improved by concatenating the user query and the meaning into a single new query; in this case it can be "Bank financial institution". The module to retrieve meanings from a dictionary can be outlined as follows:
// Module to retrieve meanings from a dictionary
// Input: user query (string); Output: list of meanings
Dictionary(String query)
do
    meanings = getFromDictionary(query)
    for each meaning from the dictionary do
        AddToList(list, meaning)
    end
    if (list is NULL) // no meaning found; the query may be a noun
        AddToList(list, query)
    return list
end
When it comes to implementation, the dictionary can be maintained either online or offline. An online dictionary such as WordNet is a better choice, since it is updated regularly and is a widely accepted standard dictionary. On the other hand, an offline local dictionary is also possible, provided it is sophisticated enough to provide the results with minimum delay and can be updated regularly.
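As an illustration only, a minimal Python sketch of this dictionary-based query expansion might look as follows; the use of NLTK's WordNet interface and the function names are our own assumptions, not part of the original system:

# Minimal sketch of dictionary-based query expansion (assumes NLTK with the WordNet corpus installed).
from nltk.corpus import wordnet

def get_meanings(query):
    # Retrieve candidate meanings (sense definitions) for the query word from the dictionary.
    meanings = [synset.definition() for synset in wordnet.synsets(query)]
    if not meanings:  # no meaning found; the query may be a proper noun
        meanings = [query]
    return meanings

def build_cluster_queries(query):
    # One new query per meaning, e.g. "bank" expanded with each dictionary definition.
    return [query + " " + meaning for meaning in get_meanings(query)]

Each expanded query is then submitted to the search engine, and the results returned for a given expanded query form one cluster.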
One problem with this is the use of multi-word queries. In this case, it may still be possible to get the meaning of each word of the query from the dictionary, but constructing a new query from that will be a problem. Different solutions can be provided for this. The algorithm may be designed to select only one word for querying, based on the number of meanings retrieved for each word in the multi-word query; the word with the maximum number of different meanings can be used. Another solution is to perform a quick concept mining of the multi-word query and obtain a single-word query. For example, a query such as "Where is Bangalore" can be reduced to just "Bangalore".
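A minimal sketch of the word-selection heuristic just described, where get_meanings is assumed to be the dictionary lookup from the previous sketch:

def select_query_word(words, get_meanings):
    # Choose the word with the largest number of dictionary meanings for querying.
    return max(words, key=lambda w: len(get_meanings(w)))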
Another problem with the use of a dictionary arises for queries that involve proper nouns. The dictionary is not expected to provide results for these. Even proper nouns can be ambiguous to some extent. For example, consider "Sachin". This could refer to the cricket player Sachin Tendulkar or any other individual with the same name (e.g., music director Sachin Dev Burman); resolving such ambiguities is non-trivial and may require more input from the user itself. One approach to remove such ambiguities is to use the history of searches by the same user [5]. This can inherently point to a certain context. In this case, if the user had earlier searched for things about sports, then the probability is higher that the query "Sachin" meant "Sachin Tendulkar". This requires data mining and statistical analysis of previously available data.
IV. METASEARCH ENGINE
A metasearch engine is a search tool that sends user requests to several other search engines and/or databases and aggregates the results into a single list, providing it to the user in a way similar to any other search engine. The concept of a metasearch engine arises from the fact that the web is too large for one search engine to index it completely, and more comprehensive results can be obtained by combining the results of various search engines [6]. The obvious advantage of this technique is that the search space is larger, i.e., more web pages are covered. Since a metasearch engine has to deal with different search engines, it requires a parsing stage to convert the results from all the search engines into a uniform format. The implementation can typically involve XML and HTML parsing.
The usage of a metasearch engine must be done in an intelligent manner to extract the maximum benefit out of it. The ranking of results is very crucial to provide the user with the required information in minimum time. A straightforward algorithm that can be adopted to provide a well-refined search result is given below. The underlying assumption is that a few results will be the same across all the search engines. Here we consider the count of each result link over all the search engines used, and then rank the results in decreasing order of this count. This approach provides a far more efficient ranking than simply performing a union of all the results. Moreover, it is a simple approach and easily implementable. This ranking can also be done on the client side (using client-side scripting). Hence it provides a flexible approach for implementation. Experimental implementation of the same technique has been done, with a good amount of success.
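A minimal illustrative sketch of this count-based ranking, assuming results_per_engine is a list of result-URL lists, one per search engine (the function name and data layout are our own assumptions):

from collections import Counter

def rank_results(results_per_engine):
    counts = Counter()
    for results in results_per_engine:
        counts.update(set(results))  # count each URL at most once per engine
    # URLs returned by more engines are ranked higher.
    return [url for url, _ in counts.most_common()]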
V. CONCLUSION
The paper proposed a new basis for web page clustering and concept extraction from a keyword, based on the results of multiple search engines on the Internet. It will help users get the relevant information they need when querying. We also did an experimental implementation of the same ideas, which performed to meet our expectations of speed and efficiency.
It can be said that providing context-sensitive results increases the efficiency of the user, so that he can easily find the document he is searching for on the web.
Current keyword-based search engines rank web pages based on the frequency of the keywords, inbound link count, etc. Hence these results require the user to go through all the returned links to find the right one. With the use of a metasearch engine the relevance of results is also high, since it uses multiple search engines like Google, Yahoo and Bing. The links that appear in most of the search engines' results are given higher priority.
Further enhancements include support for queries from languages other than English, enabling a caching mechanism for recently queried keywords, and moving forward to implement the above idea for image searching as well as video searching.
A Performance Study of Data Mining Techniques: Multiple Linear Regression vs.
Factor Analysis. Abhishek Taneja, Assistant Professor, Dept. of Computer Sc. & Applications, IMT, Kurukshetra; R.K. Chauhan, Professor, Dept. of Computer Sc. & Applications, Kurukshetra University, Kurukshetra
Abstract: The growing volume of data usually creates an interesting challenge for the need of data analysis tools that discover regularities in these data. Data mining has emerged as a discipline that contributes tools for data analysis, discovery of hidden knowledge, and autonomous decision making in many application domains. The purpose of this study is to compare the performance of two data mining techniques, viz. factor analysis and multiple linear regression, for different sample sizes on three unique sets of data. The performance of the two data mining techniques is compared on the following parameters: mean square error (MSE), R-square, R-square adjusted, condition number, root mean square error (RMSE), number of variables included in the prediction model, modified coefficient of efficiency, F-value, and test of normality. These parameters have been computed using various data mining tools like SPSS, XLstat, Stata, and MS-Excel. It is seen that for all the given datasets, factor analysis outperforms multiple linear regression. But the absolute value of prediction accuracy varied between the three datasets, indicating that the data distribution and data characteristics play a major role in choosing the correct prediction technique.
Keywords: Data mining, Multiple Linear Regression, Factor Analysis, Principal Component Regression, Maximum Liklihood Regression, Generalized Least Square Regression
Data Introduction
A basic assumption of the general linear regression model is that there is no correlation (or no multicollinearity) between the explanatory variables. When this assumption is not satisfied, the least squares estimators have large variances, become unstable, and may have a wrong sign. Therefore, we resort to biased regression methods, which stabilize the parameter estimates [17]. The data sets we have chosen for this study have a combination of the following characteristics: few predictor variables, many predictor variables, highly collinear variables, very redundant variables, and presence of outliers.
The three data sets used in this paper, viz. the marketing, bank, and Parkinson's telemonitoring data sets, are taken from [8], [9], and [10] respectively.
From the foregoing, it can be observed that each of these three sets has unique properties. The marketing dataset consists of 14 demographic attributes. The dataset is a good mixture of categorical and continuous variables with a lot of missing data. This is characteristic of data mining applications. The bank dataset is synthetically generated from a simulation of how bank customers choose their banks.
Tasks are based on predicting the fraction of bank customers who leave the bank because of full queues. In the Parkinson's telemonitoring dataset, each row corresponds to one of 5,875 voice recordings from the monitored individuals. The main aim of the data is to predict the total UPDRS scores ('total_UPDRS') from the 16 voice measures. This is a multivariate dataset with 26 attributes and 5,875 instances. All the attributes are either integer or real, with many missing and outlier values. The box plots of the three datasets (Fig. 1 to Fig. 3) display the dispersion of these variables, compare the means of the different variables, and also show the outliers in the three datasets. In this regard, it becomes necessary to scale these three datasets to reduce the dispersion and bring all the variables of all datasets to the same unit of measure.
Prediction Techniques
There are many prediction techniques (association rule analysis, neural networks, regression analysis, decision tree, etc.) but in this study only two linear regression techniques have been compared.
Multiple Linear Regression
The multiple linear regression model maps a group of predictors x to a response variable y [4]. The multiple linear regression is defined by the following relationship, for i = 1, 2, ..., n:
y_i = a + b_1 x_i1 + b_2 x_i2 + ... + b_k x_ik + e_i
or, equivalently, in more compact matrix terms:
Y = Xb + E
where, for all the n considered observations, Y is a column vector with n rows containing the values of the response variable; X is a matrix with n rows and k + 1 columns containing for each column the values of the explanatory variables for the n observations, plus a column (to refer to the intercept) containing n values equal to 1; b is a vector with k + 1 rows containing all the model parameters to be estimated on the basis of the data: the intercept and the k slope coefficients relative to each explanatory variable. Finally E is a column vector of length n containing the error terms. In the bivariate case the regression model was represented by a line, now it corresponds to a (k + 1)-dimensional plane, called the regression plane. This plane is defined by the equation
ŷ_i = a + b_1 x_i1 + b_2 x_i2 + ... + b_k x_ik + µ_i
where ŷ_i is the dependent variable, the x_i's are independent variables, and µ_i is the stochastic error term. We have compared three basic methods under the multiple linear regression technique: the full method (which uses the least squares approach), the forward method, and the stepwise approach (which uses a discriminant approach or all possible subsets) [5].
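As a purely illustrative sketch (assuming scikit-learn, which is not one of the tools used in the study), the full and forward methods could be fitted as follows; the stepwise variant is omitted here:

from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

def fit_full_model(X_train, y_train):
    # Full model: ordinary least squares on all predictors.
    return LinearRegression().fit(X_train, y_train)

def fit_forward_model(X_train, y_train, n_features=5):
    # Forward selection: greedily add the predictor that most improves the fit.
    selector = SequentialFeatureSelector(LinearRegression(),
                                         n_features_to_select=n_features,
                                         direction="forward").fit(X_train, y_train)
    return LinearRegression().fit(selector.transform(X_train), y_train), selector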
Factor Analysis
Factor analysis attempts to represent a set of observed variables X_1, X_2, ..., X_n in terms of a number of 'common' factors plus a factor which is unique to each variable. The common factors (sometimes called latent variables) are hypothetical variables which explain why a number of variables are correlated with each other: they have one or more factors in common [7].
Factor analysis is basically a one-sample procedure [6]. We assume a random sample y_1, y_2, ..., y_n from a homogeneous population with mean vector µ and covariance matrix Σ. The factor analysis model expresses each variable as a linear combination of underlying common factors f_1, f_2, ..., f_m, with an accompanying error term to account for the part of the variable that is unique (not in common with the other variables). For y_1, y_2, ..., y_p in any observation vector y, the model is as follows:
y_1 − µ_1 = λ_11 f_1 + λ_12 f_2 + ... + λ_1m f_m + ε_1
y_2 − µ_2 = λ_21 f_1 + λ_22 f_2 + ... + λ_2m f_m + ε_2
...
y_p − µ_p = λ_p1 f_1 + λ_p2 f_2 + ... + λ_pm f_m + ε_p
Ideally, m should be substantially smaller than p; otherwise we have not achieved a parsimonious description of the variables as functions of a few underlying factors. We might regard the f's in the equations above as random variables that engender the y's. The coefficients λ_ij are called loadings and serve as weights, showing how each y_i individually depends on the f's. With appropriate assumptions, λ_ij indicates the importance of the jth factor f_j to the ith variable y_i and can be used in the interpretation of f_j. We describe or interpret f_2, for example, by examining its coefficients λ_12, λ_22, ..., λ_p2. The larger loadings relate f_2 to the corresponding y's; from these y's, we infer a meaning or description of f_2. After estimating the λ_ij's, it is hoped they will partition the variables into groups corresponding to factors. There is a superficial resemblance to multiple linear regression, but there are fundamental differences: firstly, the f's in the above equations are unobserved; secondly, the above equations represent one observation vector, whereas multiple linear regression depicts all n observations. There are a number of different varieties of factor analysis; the comparison here is limited to principal component analysis, generalized least squares, and maximum likelihood estimation.
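As an illustrative sketch only (again assuming scikit-learn rather than the tools used in the study), the principal-component variant can be expressed as a regression on component scores:

from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_pcr(X_train, y_train, n_components=5):
    # Standardize, project onto the leading components, then regress on the component scores.
    model = make_pipeline(StandardScaler(), PCA(n_components=n_components), LinearRegression())
    return model.fit(X_train, y_train)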
Related Work
There are many data mining techniques (decision tree, neural networks, regression, clustering etc.) but in this paper we have compared two linear techniques viz., multiple linear regression, and factor analysis. In this domain there have been many researchers and authors who compared various data mining techniques from varied aspects.
In 2004, Munoz et al. compared three data mining methods: linear statistical methods, a neural network method, and non-linear multivariate methods [11]. In 2008, Saikat and Jun Yan compared PCA and PLS on simulated data [12]. Munoz et al. compared logistic regression, principal component regression, and classification and regression trees with multivariate adaptive regression splines [16]. In 1999, Manel et al. compared discriminant analysis, neural networks, and logistic regression for predicting species distribution [13]. In 2005, Orsolya et al. compared ridge regression, the pairwise correlation method, forward selection, and best subset selection in a quantitative structure-retention relationship study based on multiple linear regression for predicting the retention indices of aliphatic alcohols [14]. In 2002, Huang et al. compared least squares regression, ridge regression, and partial least squares in the context of varying calibration data size, using squared prediction errors as the only model comparison criterion [15].
Preparation and Methodology
Both the techniques under study are linear in nature, and the choice of technique is vital for getting significant results. When nonlinear data are fitted with a linear technique, the results obtained are biased, and when linear data are fitted with a non-linear technique, the results have increased variance. As the techniques undertaken for this study are both linear, we need to apply them to linear data sets to get significant results. Both techniques are linear regression techniques; by this we mean that they are linear in the parameters [1][2], i.e., the β's (the parameters) are raised to the first power only. They may or may not be linear in the explanatory variables, the X's. To make our data sets linear, they are preprocessed by taking the natural log of all the instances of the data sets or normalized using z-score normalization [3]. After scaling and standardizing the three datasets, it is found that the skewness is reduced, as shown by the histograms of all three datasets. For proving linearity of these data sets, box plots, histograms, and the JB test (Jarque-Bera test) with p-value (the exact significance level, or probability of committing a type-I error) have been used. After scaling and standardizing, the data sets are divided into two parts, taking 70% of the observations as the "training set" and the remaining 30% of the observations as the "test validation set" [3]. For each data set, the training set is used to build the model and the various methods of each technique are employed. For example, in Multiple Linear Regression (MLR), three methods are considered in this study: the full model, the forward model, and the stepwise model. The model is validated using the test validation data set and the results are presented using ten goodness-of-fit criteria. The two techniques are intra- and inter-compared for their performance on the three unique datasets. Refer to Table 1 and Table 2 given below.
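A minimal, purely illustrative sketch of this preprocessing and split (assuming NumPy, SciPy and scikit-learn, which are not necessarily the tools used in the study) might be:

import numpy as np
from scipy import stats
from sklearn.model_selection import train_test_split

def preprocess_and_split(X, y, use_log=True, test_size=0.30, seed=0):
    # Natural-log transform (assumes positive values) or z-score normalization.
    X = np.log(X) if use_log else stats.zscore(X)
    jb_stat, p_value = stats.jarque_bera(y)  # Jarque-Bera normality check on the response
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
    return X_train, X_test, y_train, y_test, p_value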
Interpretation and Findings
Interpreting Marketing Dataset
In the marketing dataset, the R² and Adj. R² of the full model showed good explanatory power, i.e., 0.47, which is higher than that of both the stepwise and forward models.
On the basis of this explanatory power we can say that, among all the methods of multiple linear regression, the full model was found to be the best method for data mining purposes, since 47% of the variation in the dependent variable was explained by the independent variables (Table 1). But an explanatory-power value of 0.47 is not up to the mark, which calls for a regression model other than the multiple regression model for this data set, since 0.53 means 53% of the total variation was left unexplained. So, within the multiple regression techniques the full model was found best but not up to the mark; the value of R² suggests using another regression model.
The inclusion of other independent variables (either relevant or irrelevant) in a multiple regression model mostly generates a non-decreasing explanatory value, or R² value. In this case we can use another good measure derived from R², i.e., Adj. R², which accounts for the effect of new explanatory variables in the model, since it incorporates the degrees of freedom of the model in the denominators of the explained and unexplained variation [18]. The expression for the adjusted multiple determination is:
Adj. R² = 1 − (1 − R²) (n − 1)/(n − k)
equivalently,
Adj. R² = 1 − [Σ e_i² / (n − k)] / [Σ y_i² / (n − 1)]
If n is large, Adj. R² and R² will not differ much. But with small samples, if the number of regressors (X's) is large in relation to the number of sample observations, Adj. R² will be much smaller than R² and can even assume negative values, in which case Adj. R² should be interpreted as being equal to zero.
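A one-line illustrative implementation of the adjusted R² given above, with n the number of observations and k the number of estimated parameters:

def adjusted_r2(r2, n, k):
    # Adjusted coefficient of determination, penalizing added regressors.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k)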
For the marketing data set, under all methods of multiple linear regression the Adj. R² was found to be similar to the R² value, which means the sample size is sufficiently large for data mining purposes [19].
Table 2
The R² for factor analysis on the marketing dataset was found to be around 0.58, so all methods have equal explanatory power under factor analysis. Moreover, under all methods, viz. PCR, maximum likelihood, and GLS, the explained variation is 58% of the total variation in the dependent variable, which signifies that factor analysis extraction is better than multiple linear regression. The Adj. R², i.e., adjusted for the inclusion of new explanatory variables, was found to be 0.56, slightly less than R². The 58% of variation captured by the regression reflects the overall goodness of fit of the regression line to the marketing dataset when factor analysis is used.
So, on the basis of the first-order statistical test (R²), we can conclude that the factor analysis technique is better than the multiple regression technique in terms of explanatory power.
The Mean Square Error (MSE) criterion is a combination of the unbiasedness and minimum variance properties. An estimator is a minimum MSE estimator if it has the smallest MSE, defined as the expected value of the squared difference of the estimator from the true population parameter b.
MSE(b̂) = E(b̂ − b)². It can be proved that this equals MSE(b̂) = Var(b̂) + bias²(b̂).
The MSE criterion for unbiasedness and minimum variance was found to be increasing across the multiple linear regression models. This signifies that the full method's MSE is less than that of the other models, which further means that under the full model of multiple linear regression on the marketing dataset there is less bias and less variance.
The minimum variance also increases the probability of unbiasedness and gives better explanatory power, like the R², on the marketing dataset.
The inter-comparison of the two techniques, multiple linear regression and factor analysis, showed that under the factor analysis models the MSE is significantly different, which signifies that under factor analysis all the b's are unbiased but have large variance. Due to the large variance in the factor analysis techniques the probability of unbiasedness increases, which generates a contradictory result about the explanatory power of the factor analysis methods. Because the factor analysis methods may have questionable values of MSE, a further measure based on MSE, the RMSE (root mean square error), was used in the study.
The RMSE was found to be considerably similar across the methods of both techniques. Due to the small variation in RMSE between MLR and factor analysis on the marketing dataset, it can be stated that both techniques deserve equal consideration.
A common measure used to compare the prediction performance of different models is Mean Absolute Error (MAE).
If Y_p is the predicted dependent variable and Y is the actual dependent variable, then the MAE can be computed by
MAE = (1/n) Σ |(Y_p − Y) / Y|
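An illustrative NumPy sketch of the main error measures used in the comparison; the normalized MAE form follows the formula as reconstructed above, and the standard un-normalized form is also shown for reference:

import numpy as np

def error_metrics(y_true, y_pred):
    err = y_pred - y_true
    mse = np.mean(err ** 2)                           # mean square error
    rmse = np.sqrt(mse)                               # root mean square error
    mae = np.mean(np.abs(err))                        # standard mean absolute error
    rel_mae = np.mean(np.abs(err) / np.abs(y_true))   # normalized variant, as in the formula above
    return mse, rmse, mae, rel_mae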
On the marketing dataset, the MAE was found to be lowest under the full model, lower than under the stepwise and forward models. This signifies that the full model of the MLR technique gives better predictions than the other models. Under factor analysis on the marketing dataset, the MAE of all models was found to be considerably similar but higher than that of the multiple regression techniques; therefore we can say that for such kinds of datasets the factor analysis models give poorer prediction performance.
The diagnostic index of multicollinearity (the condition number) was found to be significantly below 100 under the MLR methods on the marketing dataset, which means there is no scope for high or severe multicollinearity. For the same dataset, the condition number was found to be lower than under the factor analysis technique, which suggests that factor analysis is the better technique for diagnosing the effect of multicollinearity. However, on the marketing dataset, both the factor analysis and MLR techniques showed multicollinearity among the regressors below the severe level.
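A small illustrative sketch of the condition-number diagnostic referred to here (assuming NumPy; per the threshold used in the text, values above roughly 100 would indicate high or severe multicollinearity):

import numpy as np

def condition_number(X):
    # Ratio of the largest to the smallest singular value of the standardized design matrix.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    s = np.linalg.svd(Xs, compute_uv=False)
    return s.max() / s.min()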
The F value for the marketing dataset was found to be greater than the critical value with respect to the degrees of freedom (dF) under both techniques, which signifies that the overall regression model is significantly estimated; the stepwise model of the MLR technique showed a high F value relative to its dF, which means that the overall significance of the regression model was up to the mark for the stepwise method. The prediction plots of the two techniques on the marketing dataset illustrate the above discussion visually (see Fig. 4 to Fig. 6 and Fig. 13 to Fig. 15).
Interpreting Bank Dataset
For the bank dataset, the explanatory power (R²) of the full model was found to be considerably low due to the residual, whereas the stepwise and forward models of MLR generated satisfactory explanatory power. Under the stepwise and forward models, 56% of the variation in the dependent variable was explained by the independent variables. The other measure of explanatory power was also found satisfactory for the stepwise and forward models, but not for the full model.
On the other hand, the factor analysis models on the bank dataset generated higher values of both R² and adjusted R², which signifies that the explanatory power of factor analysis on the bank dataset is greater than that of the MLR technique. Overall, one striking point is that, among all the models of factor analysis and MLR, the full model of MLR generated a very poor R² value, which means this dataset does not have a proper specification with respect to magnitude changes.
The MSE criterion for unbiasedness and minimum variance of all parameters is found to be increasing under both the factor analysis and MLR techniques, but all models of factor analysis show lower bias and variance than all models of MLR. It means the parameters of both techniques are significant, but the parameters of the MLR technique are significant with high variance.
The RMSE is also satisfactory and up to the mark in the case of factor analysis. Therefore, we can say that the factor analysis parameters have low variance and low bias.
The prediction power of the regression model is also found to be good in all the factor analysis models. For the bank dataset, MLR has a higher MAE due to the skewness of the test dataset.
The modified coefficient of efficiency was found to be low for the factor analysis models on the bank dataset, since this dataset does not satisfy the central limit theorem due to the constant number of variables; but under MLR the modified coefficient of efficiency was found considerably significant for all models. This may be due to the central limit theorem holding.
For the bank dataset, the diagnostic index of multicollinearity was found to be higher under factor analysis than under MLR, which suggests that factor analysis is the better technique for identifying the multicollinearity problem.
The F value for the bank dataset was found to be significant under the MLR models but rather low, whereas under factor analysis it was found to be 200 times greater than the critical value, which means the overall significance of all the factor analysis models is higher than that of the MLR models. The prediction plots of the two techniques (see Fig. 7 to Fig. 9 and Fig. 16 to Fig. 18) corroborate this discussion.
Interpreting Parkinson Dataset
For the Parkinson dataset, the forward model of MLR was found to have very low explanatory power, due to heteroscedasticity in the stochastic error term of the model, but the full and stepwise models were found to have 90% explanatory power. In all the factor analysis models, R² was found to be around 60%, which is sufficient for satisfactory explanatory power of the model. Moreover, the adjusted R² was found to be similar for both techniques, i.e., MLR and factor analysis, due to no interpolation.
For the MLR models on the Parkinson dataset, the MSE was found to be low and up to the mark, which signifies that MLR is the better technique for extracting structural parameters with low bias and low variance. On the other hand, factor analysis was found to have high bias and high variance when extracting the structural parameters of the model.
The RMSE was found to be similar across all models of MLR and factor analysis, which signifies the same consideration for bias and variance.
The prediction power (MAE) of two of the factor analysis models, viz. PCR and maximum likelihood, was found to be significant, but the MAE of the GLS model was found to be considerably higher than that of the PCR and maximum likelihood methods. On the other hand, the MLR prediction power was found to differ significantly across all three models; for the stepwise and forward models, the prediction power increased relative to the full model.
The central limit theorem condition for obtaining the efficiency of the model was found to be incompatible, but in the case of factor analysis it was found to be satisfied. Overall, for factor analysis the modified coefficient of efficiency was found to be increasing.
In the Parkinson dataset, the multicollinearity index was found to be higher under all models of the MLR technique except the forward model. Under factor analysis on the same dataset, this index was found to be lower than under the MLR models. This means MLR is the better technique for diagnosing multicollinearity, particularly with the full and stepwise methods.
The overall significance of the model was found to be higher for two of the MLR models, viz. the full and stepwise methods, but in the case of factor analysis the overall significance of the regression model was found to be similar across all methods. The forward method of MLR generated a considerably low F value, which means its overall significance is poorer than that of the other models of both techniques. The prediction plots of the two techniques on the Parkinson dataset are given in Figure 10 to Figure 12 and Figure 19 to Figure 21.
Conclusion and Future Work
The analysis of the linear techniques (MLR and factor analysis) suggests that factor analysis is a considerably better technique than MLR. The principal component model achieved good performance on all datasets of the study, where good performance is judged on the basis of higher explanatory power, higher goodness of fit, and higher prediction power.
In the diagnosis of multicollinearity, the PCR model of factor analysis was found to be the better model. However, the full model of MLR also produced satisfactory results. All other models of both techniques were found to have high explanatory power but moderate prediction power.
All models fit well from the point of view of linearity and unbiasedness, given the moderate variance, heteroscedasticity, and distribution of the residual term. Their prediction power was found to be moderate.
From the point of view of the structural parameters and the overall significance of the regression model, factor analysis was again found to be up to the mark.
From the overall analysis of the regression techniques, we can say that data with high skewness and a large number of observations should be estimated/treated with the principal component model of factor analysis. Datasets with high multicollinearity should also be treated through factors/components according to relevance. Small datasets, on the other hand, should be handled with the full model of multiple regression.
The compatibility of a technique with a particular dataset also depends on that dataset's distribution of the residual term of the model. In our study, the marketing and Parkinson datasets have a normal distribution of the residual term, whereas the bank dataset's residual term was found to be considerably non-normally distributed. The violation of this residual assumption affects the prediction power, calling for removal of the heteroscedastic variance of the residual term. The GLS method should be adopted to estimate the structural parameters with suitably specified forms of the regression model.
Techniques in which the estimators satisfy the BLUE (best, linear, unbiased, and efficient) properties for the structural parameter estimates and the stochastic random error term are considered better than others.
The skewness of the predictors and of the random term in the linear regression model creates obstacles to satisfying the BLUE properties. Reducing skewness with some advanced data mining tool and then comparing the performance of the said techniques can further enlighten us, which is an area that can be further explored.
Fig 1: Box Plot of Marketing Dataset. Fig 2: Box Plot of Parkinson Dataset. Fig 3: Box Plot of Bank Dataset. Fig 10: MLR-Full Model (Parkinson Dataset). Fig 11: MLR-Forward Model (Parkinson Dataset). Fig 12: MLR-Stepwise Model (Parkinson Dataset). Fig 13: Factor Analysis-GLS Model (Marketing Dataset).
In the rej prototasks, the object is to predict the rate of rejections, i.e., the fraction of customers that are turned away from the bank because all the open tellers have full queues. This dataset consists of 32 continuous attributes and 4,500 records. The Parkinson's telemonitoring data set is composed of a range of biomedical voice measurements from 42 people with early-stage Parkinson's disease recruited to a six-month trial of a telemonitoring device for remote symptom progression monitoring. The recordings were automatically captured in the patients' homes. Columns in the table contain subject number, subject age, subject gender, time interval from baseline recruitment date, motor UPDRS, total UPDRS, and 16 biomedical voice measures.
ACKNOWLEDGMENT We would like to thank Dr. T. M. Rangaswamy, Professor, IEM Department, R.V. College of Engineering, for providing support and guidance for the study and research regarding the subject.
Fang Li, Martin Mehlitz, Li Feng, and Huange Sheng, "Web Pages Clustering and Concept Mining - An Approach Towards Intelligent Information Retrieval," 2006.
Oren Zamir and Oren Etzioni, "Web Document Clustering: A Feasibility Demonstration," Department of Computer Science and Engineering, University of Washington.
David A. Grossman and Ophir Frieder, "Information Retrieval - Algorithms and Heuristics," 2004.
Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques," 2006.
Taher H. Haveliwala, Aristides Gionis, and Piotr Indyk, "Scalable Techniques for Clustering the Web."
Mike Perkowitz and Oren Etzioni, "Towards Adaptive Web Sites: Conceptual Framework and Case Study," Department of Computer Science and Engineering, University of Washington, Seattle.
Gujarati, N. Damodar, and Sangeetha, "Basic Econometrics," 4th edition, New York: McGraw Hill, 2007.
Walpole, R. E., S. L. Myers, and K. Ye, "Probability and Statistics for Engineers and Scientists," 7th edition, Englewood Cliffs, NJ: Prentice Hall, 2002.
Myatt, J. Glenn, "Making Sense of Data - A Practical Guide to Exploratory Data Analysis and Data Mining," New Jersey: Wiley-Interscience, 2007.
Giudici, Paolo, "Applied Data Mining - Statistical Methods for Business and Industry," Wiley, 2003.
Dash, M., and H. Liu, "Feature Selection for Classification," Intelligent Data Analysis, 1:3 (1997), pp. 131-156.
Rencher, C. Alvin, "Methods of Multivariate Analysis," 2nd edition, Wiley Interscience, 2002.
Kim, Jae-on, and Charles W. Mueller, "Introduction to Factor Analysis - What It Is and How To Do It," Sage Publications, Inc., 1978.
Munoz, Jesus, and Angel M. Felicisimo, "Comparison of Statistical Methods Commonly Used in Predictive Modeling," Journal for Vegetation Science, 15 (2004), pp. 285-292.
Maitra, Saikat, and Jun Yan, "Principal Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression," Casualty Actuarial Society, 2008 Discussion Paper Program, pp. 79-90.
Manel, S., J. M. Dias, and S. J. Ormerod, "Comparing Discriminant Analysis, Neural Networks and Logistic Regression for Predicting Species Distribution: A Case Study with a Himalayan River Bird," Ecol. Model., 120 (1999), pp. 337-347.
Farkas, Orsolya, and Karoly Heberger, "Comparison of Ridge Regression, PLS, Pairwise Correlation, Forward and Best Subset Selection Methods for Prediction of Retention Indices for Aliphatic Alcohols," Journal of Information and Modeling, 45:2 (2005), pp. 339-346.
Huang, J., et al., "A Comparison of Calibration Methods Based on Calibration Data Size and Robustness," Journal of Chemometrics and Intelligent Lab. Systems, 62:1 (2002), pp. 25-35.
Specht, D. F., "A General Regression Neural Network," IEEE Transactions on Neural Networks, 2:6 (1991), pp. 568-576.
Al-Kassab, M., "A Monte Carlo Comparison between Ridge and Principal Components Regression Methods," Applied Mathematical Sciences, Vol. 3, No. 42 (2009), pp. 2085-2098.
Larose, T. Daniel, "Data Mining - Methods and Models," Wiley Interscience, 2002, p. 114.
Han, Jiawei, and Micheline Kamber, "Data Mining: Concepts and Techniques," Morgan Kaufmann Publishers, 2006, p. 6.
| []
|
[
"Active Learning for Improved Semi-Supervised Semantic Segmentation in Satellite Images",
"Active Learning for Improved Semi-Supervised Semantic Segmentation in Satellite Images"
]
| [
"Shasvat Desai [email protected] \nOrbital Insight\nYale University\n\n",
"Debasmita Ghose [email protected] \nOrbital Insight\nYale University\n\n"
]
| [
"Orbital Insight\nYale University\n",
"Orbital Insight\nYale University\n"
]
| []
| Remote sensing data is crucial for applications ranging from monitoring forest fires and deforestation to tracking urbanization. Most of these tasks require dense pixel-level annotations for the model to parse visual information from limited labeled data available for these satellite images. Due to the dearth of high-quality labeled training data in this domain, there is a need to focus on semi-supervised techniques. These techniques generate pseudo-labels from a small set of labeled examples which are used to augment the labeled training set. This makes it necessary to have a highly representative and diverse labeled training set. Therefore, we propose to use an active learning-based sampling strategy to select a highly representative set of labeled training data. We demonstrate our proposed method's effectiveness on two existing semantic segmentation datasets containing satellite images: UC Merced Land Use Classification Dataset and DeepGlobe Land Cover Classification Dataset. We report a 27% improvement in mIoU with as little as 2% labeled data using active learning sampling strategies over randomly sampling the small set of labeled training data. | 10.1109/wacv51458.2022.00155 | [
"https://arxiv.org/pdf/2110.07782v1.pdf"
]
| 239,009,545 | 2110.07782 | a29225c3ae7c4f8e5352fdd03a266c0928a8da42 |
Active Learning for Improved Semi-Supervised Semantic Segmentation in Satellite Images
Shasvat Desai [email protected]
Orbital Insight
Yale University
Debasmita Ghose [email protected]
Orbital Insight
Yale University
Active Learning for Improved Semi-Supervised Semantic Segmentation in Satellite Images
Remote sensing data is crucial for applications ranging from monitoring forest fires and deforestation to tracking urbanization. Most of these tasks require dense pixel-level annotations for the model to parse visual information from limited labeled data available for these satellite images. Due to the dearth of high-quality labeled training data in this domain, there is a need to focus on semi-supervised techniques. These techniques generate pseudo-labels from a small set of labeled examples which are used to augment the labeled training set. This makes it necessary to have a highly representative and diverse labeled training set. Therefore, we propose to use an active learning-based sampling strategy to select a highly representative set of labeled training data. We demonstrate our proposed method's effectiveness on two existing semantic segmentation datasets containing satellite images: UC Merced Land Use Classification Dataset and DeepGlobe Land Cover Classification Dataset. We report a 27% improvement in mIoU with as little as 2% labeled data using active learning sampling strategies over randomly sampling the small set of labeled training data.
Introduction
Semantic segmentation has found vast applications in the domain of remote sensing, including but not limited to environmental monitoring [64,66], land use classification, and change detection [11,28,4,12,18,55]. The largest barrier to applying these segmentation techniques is the availability of representative labeled data across different geographies and terrains. Each pixel in a satellite image can represent a large area on the ground, thus requiring domain knowledge and experience to annotate pixel-level labels. This makes it significantly expensive in terms of cost and time to collect a large set of pixel-wise labels [23]. To alleviate this problem, recent work in the computer vision community has explored using fewer pixel-wise labels along with information from unlabeled images in a semi-supervised fashion [17,32,52]. However, these small sets of images that are labeled pixel-wise are chosen randomly from a dataset [17,32]. This might bias the semi-supervised network towards a particular set of classes, degrading its performance. Therefore, we propose to use active learning to select a representative set of labeled examples for semi-supervised semantic segmentation for land cover classification.

* Authors contributed equally

Figure 1: a) An image from the UC Merced Land Use Classification Dataset [63], b) Ground Truth of the same image provided by the DLSRD dataset [48], c) Baseline semi-supervised semantic segmentation model trained with 2% labeled data, d) Output of our active learning based semi-supervised semantic segmentation model trained with 2% labeled data.
To the best of our knowledge, this work is the first to explore a semi-supervised approach to semantic segmentation in satellite images. We use a conditional GAN [31] based on Mittal et al. [32] which takes in a small number of labeled examples and a large unlabeled pool of data. This conditional GAN generates pseudo-labels based on limited labeled examples to augment the labeled pool. This makes it essential to have a diverse set of labeled training data. Thus, we propose to use active learning to select a highly representative set of labeled training samples.
Active learning aims to select the most informative and representative data instances for labeling from an unlabeled data pool based on some information measure. First, we sample a subset of the images and their corresponding labels at random from a dataset, which serves as our labeled training set for the conditional GAN. We then repeat the sampling with an active learning-based sampling strategy, which provides a more diverse training set and improves performance even when only very few training samples are available. With as little as 2% labeled data, we report an improvement of up to 27% in mIoU over random sampling. We demonstrate our proposed method's efficacy on two existing semantic segmentation datasets containing satellite images: the UC Merced Land Use Classification Dataset [48,63] and the DeepGlobe Land Cover Classification Dataset [9].
Active learning for semantic segmentation [29,60] yields patches of the given input image that are most informative. However, in this work, we require an active learning-based sampling strategy that gives us the set of most informative images from the given dataset. To achieve this, we propose using active learning for image classification to select entire images from the given dataset, which are the most informative. We then query and obtain dense-pixel level annotations only for the actively selected samples, giving us our diverse labeled training data for semi-supervised semantic segmentation.
Finally, we propose this method for sample selection to act as a guiding process for large-scale dataset creation, requiring the collection of dense pixel-level annotations. It would require significantly less cost and effort to obtain coarse image-level labels for the images and then use our proposed methodology to sample informative images labeled at pixel-level using image-classification-based active learning. The code has been made publicly available * .
Our key contributions are summarized as follows:
• We use pool-based active learning sampling strategies to intelligently select labeled examples and improve performance for a GAN-based semi-supervised semantic segmentation network for satellite images.
• We demonstrate the applicability of the proposed method for selecting an optimal subset of data instances for which pixel-level annotations should be obtained.
Background and Related Work
Active Learning
* https://github.com/immuno121/ALS4GAN

Active Learning is a technique that uses a learning algorithm which learns to select samples from an unlabeled pool of data for which the labels should be queried.

Scenarios for Active Learning: Active Learning is typically employed in the following settings [44]: Membership Query Synthesis [45], where the learner generates an instance from an underlying distribution; Stream-based Selective Sampling [1], which queries each unlabeled instance one at a time based on some information measure; and Pool-based sampling, used in this paper, which assumes a large pool of unlabeled data and draws instances from the pool according to some information measure.

Query Strategies: There are several query strategies in the pool-based sampling setting to select samples for which we need to query labels.
The margin sampling strategy [2,40] selects the instance that has the smallest difference between the first and second most probable labels.
$$x^{*}_{M} = \arg\max_{x} \left[ P_{\theta}(\hat{y}_{2} \mid x) - P_{\theta}(\hat{y}_{1} \mid x) \right] \qquad (1)$$
Intuitively, instances with small margins are more ambiguous, and knowing the true label should help the model discriminate more effectively. Another common strategy is entropy-based sampling [24,25].
$$x^{*}_{H} = \arg\max_{x} \left( -\sum_{y} P_{\theta}(y \mid x) \log P_{\theta}(y \mid x) \right) \qquad (2)$$
where y ranges over all possible labels of x. Entropy is a measure of a variable's average information content, so intuitively this method selects samples by ranking them based on their information content.

Applications: Active learning techniques have found numerous applications in the medical imaging [16,19,30,47,51] and remote sensing [20,27,39,54,58,59] communities because obtaining labeled data in those domains has been particularly challenging [36,43]. Some recent works have used deep learning techniques for active learning-based image classification [38,47], semantic segmentation [29,60,62] and object detection [41]. However, to the best of our knowledge, this is the first work that uses a deep active learning-based image classifier to select labeled examples for a semi-supervised semantic segmentation network.
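To make these query scores concrete, the following Python sketch (our illustration, not code from the released repository) computes the margin score of Equation 1 and the entropy score of Equation 2 from a classifier's softmax probabilities; the function names and the toy probability matrix are ours.

import numpy as np

def margin_scores(probs):
    # probs: (n_samples, n_classes) softmax outputs. Eq. (1): the negated gap
    # between the two most probable classes, so the most ambiguous sample has
    # the largest score.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 0] - top2[:, 1]

def entropy_scores(probs):
    # Eq. (2): predictive entropy; higher means more informative.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.40, 0.35, 0.25]])  # ambiguous prediction
print(np.argmax(margin_scores(probs)), np.argmax(entropy_scores(probs)))  # both pick index 1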
Semi-Supervised Semantic Segmentation
Most of the existing semi-supervised semantic segmentation techniques either use consistency regularization strategies [7,13,33,34,35], generative models that augment the existing labeled data pool with pseudo-labels [17,52], or some combination of the two [32].

Consistency Regularization-based strategies: The core idea in consistency regularization is that predictions for unlabeled data should be invariant to perturbations. Some recent work has pointed out the difficulty in performing consistency regularization for semi-supervised semantic segmentation because it violates the cluster assumption [13,35]. Other works [6,13,21] use data augmentation techniques like CutMix [65] and ClassMix [34], which composite new images by mixing two original images. They hypothesize that this would enforce consistency over highly varied mixed samples while respecting the original images' semantic boundaries.

GAN-based strategies: Souly et al. [52] was the first work to perform semi-supervised semantic segmentation using a GAN. They employ the generator to generate realistic visual data, which forces the discriminator to learn better features for more accurate pixel classification. However, these generated images were not sufficiently close to the real images, since it is challenging to generate realistic-looking images from pixel-wise maps.
To overcome the drawback of poorly generated images, Hung et al. [17] propose a conditional GAN. The generator is a standard semantic segmentation network that takes in images and their ground truth maps. The discriminator is a fully convolutional network (FCN) that takes the ground truth, and the segmentation map predicted by the generator and aims to distinguish between the two. Thus, it is difficult for the discriminator to determine if the pixels belong to the real or the fake distribution by looking at one pixel at a time without context.
Mittal et al. [32] propose to replace the FCN-based discriminator with an image-wise discriminator that determines if the image belongs to the real or the fake distribution, which is a relatively easy task. Additionally, they propose to use a supervised multi-label classification branch [56] which decides on the classes present in the image and thus aids the segmentation network to make globally consistent decisions. During evaluation, they fuse the two branches to alleviate both low-level and high-level artifacts that often occur when working in a low-data regime. In this work, we use the s4GAN branch of the network presented by Mittal et al. [32] and propose to select a more optimal set of labeled examples to improve the performance of the network over a random selection of labeled examples.
Our Method
We propose to use active learning techniques to select a small informative subset of labeled data that would help the semi-supervised semantic segmentation model learn more effectively with a representative pool of labeled data. The proposed framework is demonstrated in Figure 2. It should be noted that, for any given image, our method assumes the ability to gain access to its corresponding image-level and pixel-level annotations.
Active Learning for Image Classification
Algorithm 1 describes how active learning was used to select the most informative and diverse set of labeled samples for semi-supervised semantic segmentation. We use active image classification and sample images using pool-based sampling strategies [44] as described in Section 2.1. Algorithm 1 is used as an offline process to sample informative samples, and it accepts two inputs: the labeled ratio R and an unlabeled pool of data X_N. The labeled ratio R determines the number of labeled samples used to train the semi-supervised model. The labeled ratio R used in this paper for each dataset can be found in Table 1.

Algorithm 1: Active Learning for Labeled Sample Selection
Input: Labeled ratio R and unlabeled pool of data X_N
Output: Informative samples and their image-level labels (X_L, Y_L^I)
Define: learner ← neural-network-based image classifier; oracle ← source of labels
1   Number of data points to sample: X_NL ← R * X_N
2   Size of initial labeled pool for learner: init_size ← ⌈α_init * X_NL⌉
3   Initial labeled pool for active learner: (X_init, Y_init)
4   Train the active learner: learner(X_init, Y_init)
5   Unlabeled pool: X_pool ← X_N − X_init
6   X_L ← {}, Y_L^I ← {}
7   Number of samples to query in each iteration: N_Q = β_Q * init_size
    while n(X_L) ≤ X_NL do
      Query Step: inference on unlabeled pool using learner:
8       prediction_scores = learner(X_pool)
        Select the top-Q most informative instances (N_Q):
9       X_Q = sampling_strategy(prediction_scores)
10      Y_Q = oracle(X_Q)
11      X_pool = X_pool − X_Q
12      X_L = X_L ∪ X_Q
13      Y_L^I = Y_L^I ∪ Y_Q
        Teach Step: retrain the learner with the updated labeled pool:
14      learner(X_L, Y_L^I)
15    end
16  return X_L, Y_L^I
Figure 2: Proposed Framework. 1) The active learning module expects an unlabeled pool of data as its input. It is an image classification network that returns samples X_L selected based on some information measure determined by the sampling strategy used for the active learner. 2) A get_label operation is performed to obtain pixel-level labels corresponding to the images returned by the active learning module. 3) A conditional GAN is then trained where the generator module is a semantic segmentation network and expects the labeled images returned by the active learning module, X_L, the corresponding pixel-wise labels for these samples, Y_L^P, along with the remaining unlabeled images X_U. It outputs a segmentation mask. 4) The discriminator expects the predicted segmentation masks from the generator along with the pixel-wise ground truth labels Y_L^P, and outputs a prediction confidence score. 5) Prediction masks with a score greater than the predefined confidence threshold τ are selected and treated as pseudo-labels to train the GAN and are augmented to the labeled pool, as shown by the "+" sign on the bottom left corner.

Initialization: The active learner is initialized with init_size image-level labels (line 3), which is a function of the number of labeled samples to be returned. We define the parameter α_init ∈ (0, 1] (line 2) to control the size of the initial labeled pool of the active learner. α_init helps in determining the optimal size of the labeled pool that the active learner should be initialized with for every labeled ratio R. A low value of α_init would result in the active learner being initialized with a tiny pool of data, not providing sufficient information about the data distribution. In contrast, a large value of α_init might bias the active learner toward a particular set of initial samples, which might lead to under-sampling of a particular class. Intuitively, setting α_init to 1 would make the outcome close to being equivalent to random sampling. We found this approach to perform better than having a fixed initial labeled pool size irrespective of the labeled ratio R. The learner is then trained with this initial labeled pool (line 4). Once the initial labeled samples are selected for the active learner, they are removed from the unlabeled pool (line 5). We sample data instances and their labels by performing the query and teach steps in an interleaved fashion.

Query Step: We run inference using the trained learner on the entire unlabeled pool (line 8) and obtain the model's confidence scores for each sample in that pool. Then, using some uncertainty measure based on the active learning strategy used, the oracle queries image-level labels for the top-Q uncertain samples, N_Q. For instance, if entropy-based sampling [24,25] is used, then the oracle will return labels for the samples with the highest entropy measure. The optimal number of data instances N_Q queried from the oracle in every iteration is a function of the initial labeled pool size init_size and another parameter, β_Q ∈ (0, 1] (line 7). We define β_Q to determine the number of iterations for which the active learner will be trained. It is crucial for the active learner's performance because a small value of β_Q will add only a small number of labels in each iteration, resulting in a negligible weight update of the active learner. In contrast, a large value of β_Q will cause a massive update in the learner's weights at every step. It will also reduce the total number of steps the learning algorithm will take to reach its target X_NL.
This will leave little room for the learner to learn from its mistakes in each iteration, directly impacting the quality of the labels produced. Once the labels for the N_Q samples are queried from the oracle, the images and their corresponding labels are added to the result set, X_L and Y_L^I (lines 12, 13).

Teach Step: In this step, the learner is trained with the updated labeled pool of samples obtained from the query step (line 14). The image classification network's capacity for the learner is also crucial in determining the quality of the selected samples. Any network with low capacity tends to underfit, while any network with a higher capacity than required could overfit and detrimentally affect the downstream task's performance.
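The query/teach loop of Algorithm 1 can be summarized in a few lines of Python. The sketch below is a simplified, self-contained illustration under stated assumptions: a scikit-learn logistic-regression classifier stands in for the CNN learner, 16-dimensional feature vectors stand in for images, the oracle is a lookup into known labels, and entropy sampling (Equation 2) is the query strategy; the small guard that ensures two classes in the initial pool exists only to keep the toy example runnable.

import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy_scores(probs):
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def active_sampler(X, oracle, labeled_ratio, alpha_init=0.1, beta_q=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_target = int(labeled_ratio * len(X))                    # X_NL (line 1)
    init_size = max(2, int(np.ceil(alpha_init * n_target)))   # line 2
    n_query = max(1, int(beta_q * init_size))                 # N_Q (line 7)

    pool = np.arange(len(X))
    labeled = list(rng.choice(pool, init_size, replace=False))  # line 3
    pool = np.setdiff1d(pool, labeled)

    y_init = np.asarray(oracle(labeled))
    while len(np.unique(y_init)) < 2 and len(pool) > 0:       # guard for the toy binary oracle
        labeled.append(int(pool[0]))
        pool = pool[1:]
        y_init = np.asarray(oracle(labeled))

    learner = LogisticRegression(max_iter=1000)
    learner.fit(X[labeled], y_init)                           # line 4 (initial training)

    while len(labeled) < n_target:                            # query/teach loop
        scores = entropy_scores(learner.predict_proba(X[pool]))   # line 8 (query step)
        picked = pool[np.argsort(-scores)[:n_query]]               # line 9
        labeled.extend(picked.tolist())                            # lines 10-13
        pool = np.setdiff1d(pool, picked)
        learner.fit(X[labeled], oracle(labeled))                   # line 14 (teach step)
    return np.array(labeled[:n_target])

# Toy usage: 500 synthetic "images" as 16-d features with binary labels.
X = np.random.default_rng(1).normal(size=(500, 16))
y = (X[:, 0] > 0).astype(int)
selected = active_sampler(X, oracle=lambda idx: y[idx], labeled_ratio=0.05)
print(len(selected), "samples chosen for pixel-wise annotation")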
Semi-Supervised Semantic Segmentation
We use the s4GAN network proposed by Mittal et al. [32] for performing semi-supervised semantic segmentation using a small number of pixel-wise labeled examples along with a pool of unlabeled examples. This is a conditional GAN-based technique where the generator G is a segmentation network. The generator takes in all the labeled and unlabeled images, along with the ground truth masks. The discriminator D takes the predicted segmentation map and the available ground truth masks concatenated with their respective images. The network attempts to match the real and the predicted segmentation maps' distribution through adversarial training.

Notation:
x_L^P, y_L^P: image with its pixel-wise labels
x_U^P: image with no pixel-wise ground-truth labels
Segmentation Network (Generator)
The segmentation network S is trained with loss L_S, which is a combination of three losses: the standard cross-entropy loss, the feature-matching loss, and the self-training loss.

Cross-Entropy Loss: The standard supervised pixel-wise cross-entropy loss term, evaluated only for the labeled samples x_L^P, is shown in Equation 3.

$$\mathcal{L}_{ce} = -\sum y^{P}_{L} \cdot \log\left(S(x^{P}_{L})\right) \qquad (3)$$
Feature-Matching Loss: The feature-matching loss L_fm [42] aims to minimize the mean discrepancy between the feature statistics of the predicted segmentation maps, S(x_U^P), and the ground-truth segmentation maps, y_L^P, as shown in Equation 4. This loss uses both labeled and unlabeled training examples.

$$\mathcal{L}_{fm} = \left\| \mathbb{E}_{(y^{P}_{L}, x^{P}_{L}) \sim \mathcal{D}_{l}}\, D(x^{P}_{L} \oplus y^{P}_{L}) \;-\; \mathbb{E}_{x^{P}_{U} \sim \mathcal{D}_{u}}\, D(x^{P}_{U} \oplus S(x^{P}_{U})) \right\| \qquad (4)$$
Self-Training Loss: This loss is used only for unlabeled data. It aims to pick the best outputs of the segmentation network (i.e., those outputs that could fool the discriminator) that do not have a corresponding ground truth mask and reuse them for supervised training. Intuitively, it pushes the segmentation network to produce predictions that the discriminator cannot distinguish from real. The discriminator's output is a score between 0 and 1, denoting the discriminator's confidence that the predicted segmentation mask is real. A predicted segmentation mask with a score greater than the predefined confidence threshold τ is selected and treated as a pseudo-label to train the GAN. Equation 5 describes the self-training loss.

$$\mathcal{L}_{st} = \begin{cases} -\sum y^{*} \cdot \log\left(S(x^{P}_{U})\right) & \text{if } D\!\left(x^{P}_{U} \oplus S(x^{P}_{U})\right) \geq \tau \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$
where y* denotes the pseudo pixel-wise labels, which are the predictions of the segmentation network. Finally, the objective function for the generator is given by Equation 6.
$$\mathcal{L}_{S} = \mathcal{L}_{ce} + \lambda_{fm}\,\mathcal{L}_{fm} + \lambda_{st}\,\mathcal{L}_{st} \qquad (6)$$

where λ_fm > 0 and λ_st > 0 are the weighting parameters for the feature-matching and the self-training losses.
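As a concrete illustration of Equation 6, the PyTorch sketch below assembles the three generator terms under several assumptions: S returns (B, C, H, W) logits, D is an image-wise discriminator returning one logit per image for an input of shape (B, 3 + C, H, W), and, for brevity, the feature-matching term is computed on the discriminator's output statistics rather than on its intermediate features as in the paper. It is a sketch of the idea, not the authors' implementation; the default weights mirror the values reported later in the implementation details.

import torch
import torch.nn.functional as F

def one_hot_mask(y, num_classes):
    # (B, H, W) integer labels -> (B, C, H, W) one-hot maps.
    return F.one_hot(y, num_classes).permute(0, 3, 1, 2).float()

def generator_loss(S, D, x_l, y_l, x_u, tau=0.6, lam_fm=0.1, lam_st=1.0):
    logits_l = S(x_l)
    l_ce = F.cross_entropy(logits_l, y_l)                      # Eq. (3)

    logits_u = S(x_u)
    probs_u = torch.softmax(logits_u, dim=1)
    y_l_1hot = one_hot_mask(y_l, logits_l.shape[1])

    d_real = D(torch.cat([x_l, y_l_1hot], dim=1)).view(-1)     # D(x ⊕ y)
    d_fake = D(torch.cat([x_u, probs_u], dim=1)).view(-1)      # D(x ⊕ S(x))
    l_fm = (d_real.mean() - d_fake.mean()).abs()               # Eq. (4), simplified

    conf = torch.sigmoid(d_fake)                               # per-image confidence in [0, 1]
    keep = conf >= tau                                         # Eq. (5): threshold tau
    if keep.any():
        pseudo = probs_u.argmax(dim=1)                         # y*: pseudo pixel-wise labels
        l_st = F.cross_entropy(logits_u[keep], pseudo[keep])
    else:
        l_st = logits_u.sum() * 0.0                            # zero loss, keeps the graph valid

    return l_ce + lam_fm * l_fm + lam_st * l_st                # Eq. (6)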
Discriminator
The discriminator is trained to distinguish between the real labeled examples and the fake segmentation masks generated by the network concatenated with the corresponding input images. It is trained using the original GAN loss proposed by Goodfellow et al. [14] as shown in Equation 7.
$$\mathcal{L}_{D} = \mathbb{E}_{(y^{P}_{L}, x^{P}_{L}) \sim \mathcal{D}_{l}}\,\log D(x^{P}_{L} \oplus y^{P}_{L}) \;+\; \mathbb{E}_{x^{P}_{U} \sim \mathcal{D}_{u}}\,\log\left(1 - D\!\left(x^{P}_{U} \oplus S(x^{P}_{U})\right)\right) \qquad (7)$$
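The corresponding discriminator update can be sketched in the same style, writing Equation 7 in the binary cross-entropy form that is minimized in practice, with the generator's predictions detached; the input conventions follow the generator sketch above and are assumptions rather than the released code.

import torch
import torch.nn.functional as F

def discriminator_loss(D, x_l, y_l_1hot, x_u, probs_u):
    # Real pairs: labeled images concatenated with their ground-truth maps.
    real_logit = D(torch.cat([x_l, y_l_1hot], dim=1)).view(-1)
    # Fake pairs: unlabeled images concatenated with (detached) predictions.
    fake_logit = D(torch.cat([x_u, probs_u.detach()], dim=1)).view(-1)
    loss_real = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
    return loss_real + loss_fake                               # Eq. (7), BCE form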
Labeled Example Selection for Semi-Supervised Semantic Segmentation using Active Learning
To obtain labeled samples for a semi-supervised semantic segmentation network, the proposed framework uses the active learning module from Algorithm 1 defined as Active Sampler in Algorithm 2 and shown in Figure 2. The active learning module expects an unlabeled pool of data (X N ) and the labeled ratio R as its input. It returns samples X N L (line 4), which are selected based on some information measure determined by the sampling strategy used for the active learner. The active learning module is called only once to select informative data instances for the semi-supervised model. In the get label stage, we obtain pixel-level labels corresponding to only those images returned by the Active Sampler to serve as the initial labeled training data for the semi-supervised segmentation module. This enables the semi-supervised semantic segmentation network to learn with a representative labeled set instead of a random subset of the data.
The conditional GAN described in Section 3.2 is then trained, where the generator module is a semantic segmentation network and expects the labeled images returned by the active learning module, X_NL, the corresponding pixel-wise labels for these samples, Y_NL^P, along with the remaining unlabeled images, X_NU. The output of the generator network is a segmentation mask. The discriminator expects the predicted masks along with the pixel-wise ground truth labels Y_NL^P, and outputs a probability score between 0 and 1, denoting its confidence in the predicted mask being real and belonging to the ground truth. If this confidence score is greater than the predefined confidence threshold τ, then it implies the generator has successfully predicted a mask that appears real to the discriminator. Hence, this prediction is augmented to the ground-truth, Y_NL^P (line 14), and used as a pseudo-label, and the GAN is trained with this updated dataset. These pseudo-labels contribute to the self-training loss detailed in Section 3.2.1. The generator and the discriminator are trained adversarially until a stopping criterion is satisfied.
Algorithm 2: Semi-supervised semantic segmentation using samples obtained from the Active Learner
Input: Labeled ratio R and unlabeled pool X_N
Define: Active_Sampler ← Active Learning module
1   τ ← confidence threshold
2   G ← generator network of the conditional GAN
3   D ← discriminator network of the conditional GAN
4   Active sampler from Algorithm 1: X_NL = Active_Sampler(R, X_N)
5   Get Label: obtain pixel-wise labels for the images returned by the active sampler: Y_NL^P = get_label(X_NL)
6   Images without pixel-wise labels: X_NU ← X_N − X_NL
    Semi-Supervised Semantic Segmentation:
    while i < iterations do
7     pseudo_label = {}
8     for (x_NL, x_NU, y_NL^P) in (X_NL, X_NU, Y_NL^P) do
9       mask = G(x_NL, x_NU, y_NL^P)
10      confidence = D(mask, y_NL^P)
11      if confidence > τ then
12        pseudo_label = pseudo_label ∪ mask
13    end
14    Augment pseudo-labels to the ground truth and train the generator model via the self-training loss: Y_NL^P = Y_NL^P ∪ pseudo_label
15  end
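Tying the pieces together, the loop below strings the generator_loss and discriminator_loss sketches above into the alternating training of Algorithm 2, using the optimizer settings reported later in the implementation details; the data loaders and network definitions are assumed to exist elsewhere, so this is an illustrative outline rather than the released training script.

import torch
import torch.nn.functional as F

def train_s4gan(S, D, loader_labeled, loader_unlabeled, iterations, tau=0.6):
    opt_g = torch.optim.SGD(S.parameters(), lr=2.5e-4, momentum=0.9, weight_decay=5e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    for it, ((x_l, y_l), x_u) in enumerate(zip(loader_labeled, loader_unlabeled)):
        if it >= iterations:
            break
        # Generator / segmentation-network update (Eq. 6), including the
        # tau-gated self-training term on confident unlabeled predictions.
        opt_g.zero_grad()
        generator_loss(S, D, x_l, y_l, x_u, tau=tau).backward()
        opt_g.step()
        # Discriminator update (Eq. 7) on detached predictions.
        with torch.no_grad():
            probs_u = torch.softmax(S(x_u), dim=1)
            y_l_1hot = F.one_hot(y_l, probs_u.shape[1]).permute(0, 3, 1, 2).float()
        opt_d.zero_grad()
        discriminator_loss(D, x_l, y_l_1hot, x_u, probs_u).backward()
        opt_d.step()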
Experiments and Results
Datasets and Evaluation Metric
UC Merced Land Use Classification Dataset: The UC Merced Land Use Classification dataset [63] has 2100 RGB images of size 256x256 pixels and 0.3 m spatial resolution, with image-level annotations for each of the 21 classes. We use the pixel-level annotations for the UC Merced dataset made publicly available by Shao et al. [48], which has 17 classes as proposed in [3]. The dataset was randomly split into training and validation sets, with 1680 training images (80%) and 420 validation images (20%).

DeepGlobe Land Cover Classification Dataset: The DeepGlobe land cover classification dataset is comprised of DigitalGlobe Vivid+ images of dimensions 2448x2448 pixels and a spatial resolution of 0.5 m. There are 803 pixel-wise annotated training images, each with pixel-wise labels covering seven land cover classes. Since there are no image-level annotations available for the DeepGlobe dataset, to generate image-level annotations we calculate which class contains the highest number of pixels in every image and assign that particular coarse class to the image. The dataset was randomly split into training and validation sets, with 642 training images (80%) and 161 validation images (20%).

Evaluation Metric: We use mean Intersection-over-Union (mIoU) as our evaluation metric.
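For reference, a minimal illustrative computation of mean Intersection-over-Union from predicted and ground-truth label maps is shown below (an assumption about the exact bookkeeping, not the official evaluation script).

import numpy as np

def mean_iou(pred, target, num_classes):
    # pred, target: integer label maps of the same shape.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    ious = inter / np.maximum(union, 1)
    return ious[union > 0].mean()   # average only over classes present

pred = np.array([[0, 0, 1], [1, 1, 2]])
gt   = np.array([[0, 1, 1], [1, 1, 2]])
print(round(mean_iou(pred, gt, num_classes=3), 3))  # 0.75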
Implementation Details
Active Learning for Image Classification: We used ResNet-101 and ResNet-50 [15] as our image classification networks for the UC Merced and the DeepGlobe datasets respectively, trained using the cross-entropy loss. The network was trained using the SGD optimizer with a base learning rate of 0.001 and momentum of 0.9. We used a step learning rate scheduler where the learning rate is dropped by a factor of 0.1 every 7 epochs. We used a batch size of 4 and trained for 50 epochs after each query. Through cross-validation, we found the optimal value of α_init = 0.1 and β_Q = 0.5. We implemented the network using the skorch [57] framework. The different active learning query strategies were implemented using the modAL toolbox [8] and trained on an NVIDIA GTX-2080ti GPU.
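Assuming skorch and modAL are wired together roughly as follows, the configuration above can be sketched like this; the synthetic arrays exist only to make the snippet self-contained, and the wrapper arguments are illustrative rather than the exact released setup.

import numpy as np
import torch, torchvision
from skorch import NeuralNetClassifier
from skorch.callbacks import LRScheduler
from modAL.models import ActiveLearner
from modAL.uncertainty import entropy_sampling   # margin_sampling is the other option used

classifier = NeuralNetClassifier(
    module=torchvision.models.resnet50,
    module__num_classes=21,                      # e.g., the 21 UC Merced image-level classes
    criterion=torch.nn.CrossEntropyLoss,
    optimizer=torch.optim.SGD,
    lr=0.001, optimizer__momentum=0.9,
    callbacks=[LRScheduler(policy='StepLR', step_size=7, gamma=0.1)],
    max_epochs=50, batch_size=4,
    train_split=None,                            # no internal validation split for this toy setup
    device='cuda' if torch.cuda.is_available() else 'cpu',
)

# Tiny synthetic stand-ins for image tensors and image-level labels.
X_init = np.random.rand(8, 3, 64, 64).astype('float32')
y_init = np.random.randint(0, 21, size=8).astype('int64')
X_pool = np.random.rand(32, 3, 64, 64).astype('float32')
y_pool = np.random.randint(0, 21, size=32).astype('int64')

learner = ActiveLearner(estimator=classifier, query_strategy=entropy_sampling,
                        X_training=X_init, y_training=y_init)
query_idx, _ = learner.query(X_pool, n_instances=4)        # query step
learner.teach(X_pool[query_idx], y_pool[query_idx])         # teach step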
Semi-Supervised Semantic Segmentation
We use a GAN-based semi-supervised semantic segmentation technique proposed by Mittal et al. [32]. The generator is comprised of a segmentation network, which in our case is DeepLabv2 [5] trained with a ResNet-101 [15] backbone pretrained on the ImageNet dataset [10]. The discriminator is a binary classifier with four convolutional layers with 4x4 kernels and 64, 128, 256, 512 channels, each followed by a Leaky ReLU activation [61] with a negative slope of 0.2 and a dropout layer [53] with a dropout probability of 0.5. The segmentation network in the generator is trained with the SGD optimizer with a base learning rate of 2.5e-4, momentum of 0.9, and a weight decay of 5e-4, as described in [17,32]. The image classification network in the discriminator is trained using the Adam optimizer [22] with a base learning rate of 1e-4. Through cross-validation, we found the optimal loss weights to be λ_fm = 0.1 and λ_st = 1.0, and the optimal value of τ to be 0.6. For the DeepGlobe dataset, we resize each image to 320x320 pixels to reduce the training time. We implemented the network using PyTorch [37] on NVIDIA Tesla V100 GPUs.
Results and Analysis
Our baseline is a vanilla s4GAN [32] network, where the labeled data is selected randomly from a given dataset. We report the mean and standard deviation of mIoUs across three experiments with different random seeds for a robust evaluation of our baseline method. We compare our approach of using active learning to select representative labeled examples with this baseline. We experiment with two pool-based query strategies, entropy and margin sampling, and demonstrate qualitative and quantitative performance improvements on two datasets, the DeepGlobe Land Cover Classification Dataset [9] and the UC Merced Land Use Classification Dataset [48,63], over the stated baseline. We evaluated our approach with labeled ratios of 2%, 5%, and 12.5%. The qualitative results in Figures 3 and 4 are shown for the better of the two sampling strategies for each labeled ratio. Table 1 shows the number of labeled images in each dataset for different labeled ratios.

UC Merced Land Use Classification Dataset: Table 2 shows a quantitative comparison of our method with the baseline for the UC Merced Land Use Classification Dataset [48,63]. We compare the performance of the entropy and margin sample selection strategies with the baseline and show significant and consistent performance improvements. Both active learning strategies outperform the baseline by a significant margin. We report a maximum mIoU improvement of close to 15% with as little as 2% labeled data, and a maximum improvement of about 18% over the baseline when training with 5% and 12.5% labeled data across the two active learning strategies. Figure 3 shows how our proposed method qualitatively improves over the UC Merced Dataset baseline for different labeled ratios. Our method predicts a finer coastline with no false positives, even with as few as 34 labeled images, which is 2% of labeled data (Row 1 of Figure 3). Similarly, even when using only 5% (85 images) of labeled data, our method predicts the green river that is camouflaging into the background, while the baseline method completely misses it (Row 2 of Figure 3). This shows the importance of having a representative pool of labeled data, especially in a low data regime, as is our case. With 12.5% (211 images) of labeled data (Row 3 of Figure 3), our method accurately predicts the complex shape of the airplane (Column d), as opposed to the baseline (Column c), which was confused between multiple unrelated classes.

DeepGlobe Land Cover Classification Dataset: Table 3 shows a quantitative comparison of our method with the baseline for the DeepGlobe Land Cover Classification Dataset [9]. We report significant performance improvements over the baseline using both the entropy and margin sampling strategies. We report a maximum mIoU improvement of close to 27% with as little as 2% labeled data, a maximum improvement of about 6% over the baseline when training with 5% labeled data, and an improvement of approximately 8% with 12.5% labeled data across the two active learning strategies. Figure 4 shows some visualizations from the DeepGlobe Dataset where it is seen that our method results in fewer false positives than the baseline.
Conclusion
This work proposes a method to leverage active learning-based sampling techniques to improve performance on the downstream task of semi-supervised semantic segmentation for land cover classification in satellite images. We do this by intelligently selecting samples for which pixel-wise labels should be obtained, using coarse image-classification-based active learning strategies. Our method helps the semi-supervised semantic segmentation network start with an optimal set of labeled examples, giving it the right amount of initial information to learn a suitable representation. We prototype this method for a GAN-based semi-supervised semantic segmentation network, where the labeled images were selected using pool-based active learning strategies. We demonstrate the efficacy of our method on two satellite image datasets, both quantitatively and qualitatively, and report sizable performance gains.

Appendices

A. Ablation Study

A.1 Active Learning Parameters

Tables 4 and 5 show the results of our experiments with different combinations of α_init and β_Q on the UC Merced Land Use Classification [63] and the DeepGlobe Land Cover Classification [9] datasets respectively. We vary both parameters between 0.1 and 0.9 for both the entropy and margin-based sampling strategies for three different labeled ratios. We found the best performing α_init and β_Q values to be 0.1 and 0.5 respectively. Overall, we noticed our method to be sensitive to changes in α_init and β_Q, as the average difference between the worst performing and best performing model across all labeled ratios and sampling techniques is 4 mIoU points for the UC Merced Land Use Classification Dataset and 2.4 mIoU points for the DeepGlobe Land Cover Classification Dataset.

A.2 Network Capacity of Active Learner

Tables 6 and 7 show the results of our experiments with different backbone networks on the UC Merced Land Use Classification [63] and the DeepGlobe Land Cover Classification [9] datasets respectively. We experiment with VGG-16 [49], ResNet-50 [15] and ResNet-101 [15], which have different network capacities. We found the best performing backbone network to be ResNet-101 for the UC Merced Land Use Classification dataset and ResNet-50 for the DeepGlobe Land Cover Classification dataset. As shown by the results, the image classification network's capacity for the learner is crucial in determining the quality of the selected samples. Any network with low capacity with respect to the size of the dataset and the number of classes tends to underfit, while any network with a higher capacity than required could overfit and detrimentally affect the downstream task's performance. We noticed our method to be sensitive to networks with different capacities, as the average difference between the worst performing and best performing model across all labeled ratios and sampling techniques is 3.3 mIoU points for the UC Merced Land Use Classification Dataset and 2.9 mIoU points for the DeepGlobe Land Cover Classification Dataset. Notably, we see that in most cases VGG-16 performed significantly worse across all labeled ratios in both datasets compared to the ResNet-50 and ResNet-101 models, reinforcing the hypothesis that models with insufficient network capacity underperform at the downstream task.
B. Quantitative Evaluation of Diversity
In this paper, we proposed a method which aims to select the most diverse and representative set of samples to serve as an initial labeled set of data for the semi-supervised network. We empirically showed the success of the proposed method on different datasets. In this section, we evaluate the robustness of our method using statistical indices which measure the diversity of the selected samples. To achieve this, we choose two diversity indices which are frequently used in ecological studies that measure species diversity, but the same analysis can also be applied to measure diversity of any set of random samples.
B.1 Shannon's Diversity Index
The Shannon index [46] was developed from information theory and is based on measuring uncertainty. Shannon's index accounts for both abundance and evenness of the samples present. The Shannon index is defined in Equation 8:

$$H = -\sum_{i=1}^{N} p_i \log p_i \qquad (8)$$

In our case, each sample is a pixel. Hence, p_i indicates the probability that a given pixel belongs to class i, and N indicates the total number of classes that a given pixel can belong to. Thus, we are measuring how diverse the samples selected by the active learning method are compared to samples selected randomly. Therefore, samples with a large number of pixels from different classes that are evenly distributed are the most diverse. On the other hand, samples that are dominated by pixels from one class are the least diverse. We report the value of the Shannon diversity index for our baseline method, averaged across our three experiments with different random seeds, and for samples selected by both active learning techniques. Intuitively, Shannon's index quantifies the uncertainty in predicting the class to which a given pixel belongs; hence, a higher value of the Shannon diversity index indicates a more diverse set of samples.
Our results for Shannon's diversity index are shown in Tables 8 and 9 for the UC Merced Land Use Classification [63] and DeepGlobe Land Cover Classification [9] datasets respectively. We notice a strong correlation between the mIoU values reported in the paper for the baseline and active learning strategies and the values of the Shannon's diversity index obtained for the respective experiments.
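As a small worked example of Equation 8 (our own illustrative computation, not the authors' script), the index can be obtained from the per-class pixel frequencies of a set of selected label masks:

import numpy as np

def shannon_index(label_masks, num_classes):
    # Pool the pixel-level class counts over all selected masks and use the
    # normalized frequencies as the probabilities p_i in Eq. (8).
    counts = np.bincount(np.concatenate([m.ravel() for m in label_masks]),
                         minlength=num_classes)
    p = counts / counts.sum()
    p = p[p > 0]                        # 0 * log(0) is treated as 0
    return float(-(p * np.log(p)).sum())

# Toy example: a balanced selection is more diverse than a one-class selection.
balanced = [np.random.randint(0, 6, (8, 8)) for _ in range(4)]
skewed   = [np.zeros((8, 8), dtype=int) for _ in range(4)]
print(shannon_index(balanced, 6), shannon_index(skewed, 6))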
B.2 Simpson's Diversity Index
Traditionally, Simpson's Diversity Index [50] measures the probability that two individuals randomly selected from a sample will belong to the same species (or some category other than species). We extend it to our use case to measure the diversity of the selected samples. To make the relevance of this index easier and more intuitive to understand, we use the inverse Simpson index. Thus, the greater the value, the greater the sample diversity. In this case, the index represents the probability that two individuals randomly selected from a sample will belong to different species. Thus, the inverse Simpson index is defined in Equation 9:
$$D = 1 - \frac{\sum_i n_i (n_i - 1)}{N(N-1)} \qquad (9)$$
where n_i is the number of pixels belonging to class i and N is the total number of classes that exist in the dataset. Similar to Shannon's index in Section B.1, we report results on the UC Merced Land Use Classification [63] and the DeepGlobe Land Cover Classification [9] datasets in Tables 10 and 11. We show that both active learning sampling strategies used in this paper yield a more diverse set of samples and show a strong correlation with the mIoU values reported on these datasets in the paper.
C. Discussion
C.1 Applicability of Our Method to Land Use Classification
The average number of semantic categories per scene in the UC Merced and DeepGlobe land use classification datasets used in this paper is 3.39 and 2.51 respectively, as depicted by Figure 6. This implies that a given scene from the UCM dataset with a given image-level label will have 3 or more different pixel-level labels (semantic categories). Similarly, for the DeepGlobe dataset, we have about 2 or more semantic categories per scene on average. The UCM dataset has a total of 18 semantic categories and DeepGlobe has 6 semantic categories. Thus, each satellite scene in the UCM dataset has about 18% of all pixel-level labels, and similarly each satellite scene in the DeepGlobe dataset has about 42% of all pixel-level labels on average. Figure 6 also shows us that about 90% of scenes in the UCM dataset have more than 1 semantic category, and similarly about 80% of scenes in the DeepGlobe dataset have more than 1 semantic category. This number is quite high when we compare this statistic with that of a generic standard dataset. For instance, consider the COCO dataset [26]: less than 30% of the images in the COCO dataset have more than 1 semantic category. This tells us that land use scenes in the domain of satellite imagery are inherently more diverse, and hence our method is highly applicable specifically for land use classification in satellite images. We will get a more diverse set of samples in the satellite domain compared to using our method on generic datasets like COCO.
C.2 Suitability of s4GAN as our baseline

Mittal et al. [32] propose to fuse the output of the s4GAN network with another image classification-based network called MLMT [56] during inference to reduce false positives. This MLMT branch uses an image classification network to output a confidence score for every category in the dataset. This output is combined with the pixel-level output of the s4GAN network to reduce the number of false positives in the segmentation network. Therefore, one major constraint for using MLMT is that there should be a one-to-one correspondence between the image-level and the pixel-level labels. This would mean that the number of image-level categories should equal the number of pixel-level categories in a dataset. However, this does not always hold in the case of land use classification. An image-level label for land use classification in a satellite scene indicates the predominant usage of land. However, the same scene can have multiple semantic categories. This prevents us from using MLMT as done by [32] as our baseline for the task of land use classification.
D. More Qualitative Evaluation
In this section, we provide more qualitative results from our best performing active learning strategies and compare them to our baseline for the UC Merced Land Use Classification Dataset [63]. Figure 7 compares the performance of our method with the baseline when trained with 2% labeled data. Row 1 shows how our method predicts the row of boats parked on the harbor better than the baseline method. Rows 2, 3, and 4 show that the baseline method gets confused between multiple unrelated classes, whereas our method reasonably predicts the correct classes.
Similarly, Figure 8 qualitatively compares the performance of our method with the baseline when trained with 5% labeled data. Rows 1 and 4 show an example of our method predicting the complex shape of airplanes better than the baseline method. Row 2 shows the baseline method being confused between cars in a parking lot and boats parked along a harbor, whereas our method predicts cars parked close together correctly. Row 3 shows how the baseline method completely misses the river and gets confused between multiple classes, while our method predicts the river reasonably well.
Finally, Figure 9 shows some qualitative examples of how our method outperforms the baseline when trained with 12.5% labeled data. Row 1 shows the baseline being confused between buildings and mobile homes, while our method predicts buildings in a dense residential setting more accurately. Rows 2 and 4 show our method predicting the baseball diamond structures accurately without being confused between other classes. Similarly, as shown by Row 3, our method predicts the contours of the airplane better than the baseline.
Figure 2: Proposed Framework.

Figure 3: Qualitative Results from the UC Merced Land Use Classification Dataset for different labeled ratios. a) Original Image, b) Ground Truth, c) Baseline, d) Our Results.

Figure 4: Qualitative Results from the DeepGlobe Land Cover Classification Dataset for different labeled ratios. a) Original Image, b) Ground Truth, c) Baseline, d) Our Results.

Figure 5: Visualization of quantitative results for different labeled ratios for the a) UC Merced Land Use Classification Dataset [63] and b) DeepGlobe Land Cover Classification Dataset [9].

Figure 6: (a) Average number of semantic categories per image-level category for the UC Merced Land Use Classification Dataset. (b) Percentage of images vs. number of pixel-level categories per image for the UC Merced Land Use Classification Dataset. (c) Average number of semantic categories per image-level category for the DeepGlobe Land Cover Classification Dataset. (d) Percentage of images vs. number of pixel-level categories per image for the DeepGlobe Land Cover Classification Dataset.

Figure 7: Qualitative Results from the UC Merced Land Use Classification Dataset for 2% labeled data. a) Original Image, b) Ground Truth, c) Baseline, d) Our Results.

Figure 8: Qualitative Results from the UC Merced Land Use Classification Dataset for 5% labeled data. a) Original Image, b) Ground Truth, c) Baseline, d) Our Results.

Figure 9: Qualitative Results from the UC Merced Land Use Classification Dataset for 12.5% labeled data.
Table 1: Number of Labeled Examples per Labeled Ratio in the UC Merced and DeepGlobe datasets

Labeled Ratio (R)    2%     5%     12.5%    100%
UC Merced [63]       34     85     211      1680
DeepGlobe [9]        12     32     80       642
Table 2: mIoU Scores for the UC Merced Land Use Classification Dataset [63]
Table 3: mIoU Scores for the DeepGlobe Land Cover Classification Dataset [9]
Table 4: Ablation Study for the different Active Learning parameters on the UC Merced Land Use Classification Dataset [63]

Active Learning Parameters        2%                  5%                  12.5%
α_init    β_Q                Entropy  Margin     Entropy  Margin     Entropy  Margin
0.1       0.1                0.464    0.497      0.507    0.502      0.549    0.513
0.1       0.5                0.469    0.511      0.513    0.513      0.554    0.529
0.9       0.9                0.449    0.462      0.495    0.498      0.527    0.512

Table 5: Ablation Study for the different Active Learning parameters on the DeepGlobe Land Cover Classification Dataset [9]
Table 6: Impact of different network architectures for the active learner in the UC Merced Land Use Classification Dataset [63] on mIoU values

                    2%                  5%                  12.5%
Backbone       Entropy  Margin     Entropy  Margin     Entropy  Margin
VGG-16         0.421    0.445      0.492    0.499      0.523    0.52
ResNet-50      0.469    0.511      0.513    0.513      0.554    0.529
ResNet-101     0.443    0.482      0.505    0.492      0.534    0.518
Table 7: Impact of different network architectures for the active learner in the DeepGlobe Land Cover Classification Dataset [9] on mIoU values

Table 8: Shannon's Diversity Index for the UC Merced Land Use Classification Dataset [63] (Higher the better)

Labeled Ratio (R)           2%             5%             12.5%
s4GAN [32] (Baseline)       1.96 ± 0.08    2.16 ± 0.02    2.14 ± 0.03
s4GAN + Entropy (Ours)      2.10           2.20           2.22
s4GAN + Margin (Ours)       2.08           2.22           2.25

Table 9: Shannon's Diversity Index for the DeepGlobe Land Cover Classification Dataset [9] (Higher the better)

Labeled Ratio (R)           2%             5%             12.5%
s4GAN [32] (Baseline)       1.01 ± 0.04    1.16 ± 0.05    1.19 ± 0.14
s4GAN + Entropy (Ours)      1.06           1.25           1.38
s4GAN + Margin (Ours)       1.09           1.24           1.36

Table 10: Simpson's Diversity Index for the UC Merced Land Use Classification Dataset [63] (Higher the better)

Labeled Ratio (R)           2%             5%             12.5%
s4GAN [32] (Baseline)       0.79 ± 0.03    0.83 ± 0.009   0.83 ± 0.008
s4GAN + Entropy (Ours)      0.85           0.84           0.85
s4GAN + Margin (Ours)       0.82           0.86           0.87

Table 11: Simpson's Diversity Index for the DeepGlobe Land Cover Classification Dataset [9] (Higher the better)

Labeled Ratio (R)           2%             5%             12.5%
s4GAN [32] (Baseline)       0.55 ± 0.04    0.64 ± 0.01    0.65 ± 0.02
s4GAN + Entropy (Ours)      0.58           0.73           0.71
s4GAN + Margin (Ours)       0.62           0.71           0.68
Acknowledgements

We would like to thank our peers who helped us improve our paper with their feedback, in no particular order: Joseph Weber, Wencheng Wu, Gowdhaman Sadhasivam, Surya Teja, Rheeya Uppal, Julius Simonelli, Jing Tian, Kaushik Patnaik.
Training connectionist networks with queries and selective sampling. E Les, Atlas, A David, Richard E Cohn, Ladner, Advances in neural information processing systems. Les E Atlas, David A Cohn, and Richard E Ladner. Training connectionist networks with queries and selective sampling. In Advances in neural information processing systems, pages 566-573. Citeseer, 1990. 2
Margin based active learning. Maria-Florina Balcan, Andrei Broder, Tong Zhang, International Conference on Computational Learning Theory. SpringerMaria-Florina Balcan, Andrei Broder, and Tong Zhang. Mar- gin based active learning. In International Conference on Computational Learning Theory, pages 35-50. Springer, 2007. 2
Multilabel remote sensing image retrieval using a semisupervised graph-theoretic method. Bindita Chaudhuri, Begüm Demir, Subhasis Chaudhuri, Lorenzo Bruzzone, IEEE Transactions on Geoscience and Remote Sensing. 562Bindita Chaudhuri, Begüm Demir, Subhasis Chaudhuri, and Lorenzo Bruzzone. Multilabel remote sensing im- age retrieval using a semisupervised graph-theoretic method. IEEE Transactions on Geoscience and Remote Sensing, 56(2):1144-1158, 2017. 6
A spatial-temporal attentionbased method and a new dataset for remote sensing image change detection. Hao Chen, Zhenwei Shi, Remote Sensing. 12101662Hao Chen and Zhenwei Shi. A spatial-temporal attention- based method and a new dataset for remote sensing image change detection. Remote Sensing, 12(10):1662, 2020. 1
Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L Yuille, IEEE transactions on pattern analysis and machine intelligence. 40Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolu- tion, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834-848, 2017. 6
Semi-supervised semantic segmentation with cross pseudo supervision. Xiaokang Chen, Yuhui Yuan, Gang Zeng, Jingdong Wang, Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision, 2021. 3
Mask-based data augmentation for semi-supervised semantic segmentation. Ying Chen, Xu Ouyang, Kaiyue Zhu, Gady Agam, arXiv:2101.10156arXiv preprintYing Chen, Xu Ouyang, Kaiyue Zhu, and Gady Agam. Mask-based data augmentation for semi-supervised seman- tic segmentation. arXiv preprint arXiv:2101.10156, 2021. 2
modAL: A modular active learning framework for Python. Tivadar Danka, Peter Horvath, Tivadar Danka and Peter Horvath. modAL: A modular ac- tive learning framework for Python. available on arXiv at https://arxiv.org/abs/1805.00979. 6
Deepglobe 2018: A challenge to parse the earth through satellite images. Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, Ramesh Raskar, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. the IEEE Conference on Computer Vision and Pattern Recognition Workshops1314Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. Deepglobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition Workshops, pages 172-181, 2018. 2, 5, 8, 12, 13, 14
Imagenet: A large-scale hierarchical image database. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei, 2009 IEEE conference on computer vision and pattern recognition. IeeeJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee, 2009. 6
Land use and land cover change detection by using principal component analysis and morphological operations in remote sensing applications. M Dharani, G Sreenivasulu, Journal of Computers and Applications. 1M Dharani and G Sreenivasulu. Land use and land cover change detection by using principal component analysis and morphological operations in remote sensing applications. In- ternational Journal of Computers and Applications, pages 1-10, 2019. 1
Self-supervised representation learning for remote sensing image change detection based on temporal prediction. Huihui Dong, Wenping Ma, Yue Wu, Jun Zhang, Licheng Jiao, Remote Sensing. 12111868Huihui Dong, Wenping Ma, Yue Wu, Jun Zhang, and Licheng Jiao. Self-supervised representation learning for re- mote sensing image change detection based on temporal pre- diction. Remote Sensing, 12(11):1868, 2020. 1
Semi-supervised semantic segmentation needs strong, varied perturbations. Geoffrey French, Samuli Laine, Timo Aila, Michal Mackiewicz, Graham Finlayson, British Machine Vision Conference. 313Geoffrey French, Samuli Laine, Timo Aila, Michal Mack- iewicz, and Graham Finlayson. Semi-supervised semantic segmentation needs strong, varied perturbations. In British Machine Vision Conference, number 31, 2020. 2, 3
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in neural information processing systems. 27Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27:2672-2680, 2014. 5
Identity mappings in deep residual networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, European conference on computer vision. Springer612Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016. 6, 12
Batch mode active learning and its application to medical image classification. C H Steven, Rong Hoi, Jianke Jin, Michael R Zhu, Lyu, Proceedings of the 23rd international conference on Machine learning. the 23rd international conference on Machine learningSteven CH Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Batch mode active learning and its application to medical im- age classification. In Proceedings of the 23rd international conference on Machine learning, pages 417-424, 2006. 2
Adversarial learning for semi-supervised semantic segmentation. Wei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, Ming-Hsuan Yang, arXiv:1802.07934arXiv preprintWei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, and Ming-Hsuan Yang. Adversarial learning for semi-supervised semantic segmentation. arXiv preprint arXiv:1802.07934, 2018. 1, 2, 3, 7
A review of multi-temporal remote sensing data change detection algorithms. The International Archives of the Photogrammetry. Gong Jianya, Sui Haigang, Ma Guorui, Zhou Qiming, Remote Sensing and Spatial Information Sciences. 37B7Gong Jianya, Sui Haigang, Ma Guorui, and Zhou Qiming. A review of multi-temporal remote sensing data change de- tection algorithms. The International Archives of the Pho- togrammetry, Remote Sensing and Spatial Information Sci- ences, 37(B7):757-762, 2008. 1
Scalable active learning for multiclass image classification. J Ajay, Fatih Joshi, Nikolaos P Porikli, Papanikolopoulos, IEEE transactions on pattern analysis and machine intelligence. 34Ajay J Joshi, Fatih Porikli, and Nikolaos P Papanikolopou- los. Scalable active learning for multiclass image classifi- cation. IEEE transactions on pattern analysis and machine intelligence, 34(11):2259-2273, 2012. 2
Half a percent of labels is enough: Efficient animal detection in uav imagery using deep cnns and active learning. Benjamin Kellenberger, Diego Marcos, Sylvain Lobry, Devis Tuia, IEEE Transactions on Geoscience and Remote Sensing. 5712Benjamin Kellenberger, Diego Marcos, Sylvain Lobry, and Devis Tuia. Half a percent of labels is enough: Efficient animal detection in uav imagery using deep cnns and ac- tive learning. IEEE Transactions on Geoscience and Remote Sensing, 57(12):9524-9533, 2019. 2
Structured consistency loss for semi-supervised semantic segmentation. Jongmok Kim, Jooyoung Jang, Hyunwoo Park, arXiv:2001.04647arXiv preprintJongmok Kim, Jooyoung Jang, and Hyunwoo Park. Struc- tured consistency loss for semi-supervised semantic segmen- tation. arXiv preprint arXiv:2001.04647, 2020. 3
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 7
Adriana Kovashka, Olga Russakovsky, Li Fei-Fei, Kristen Grauman, arXiv:1611.02145Crowdsourcing in computer vision. arXiv preprintAdriana Kovashka, Olga Russakovsky, Li Fei-Fei, and Kris- ten Grauman. Crowdsourcing in computer vision. arXiv preprint arXiv:1611.02145, 2016. 1
Heterogeneous uncertainty sampling for supervised learning. D David, Jason Lewis, Catlett, Machine learning proceedings 1994. Elsevier24David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine learning pro- ceedings 1994, pages 148-156. Elsevier, 1994. 2, 4
A sequential algorithm for training text classifiers. D David, William A Lewis, Gale, SIGIR'94. Springer24David D Lewis and William A Gale. A sequential algo- rithm for training text classifiers. In SIGIR'94, pages 3-12. Springer, 1994. 2, 4
Microsoft coco: Common objects in context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, European conference on computer vision. Springer14Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 14
Active and incremental learning for semantic als point cloud segmentation. Yaping Lin, George Vosselman, Yanpeng Cao, Michael Ying Yang, ISPRS Journal of Photogrammetry and Remote Sensing. 1692Yaping Lin, George Vosselman, Yanpeng Cao, and Michael Ying Yang. Active and incremental learning for semantic als point cloud segmentation. ISPRS Journal of Photogrammetry and Remote Sensing, 169:73-92, 2020. 2
| [
"https://github.com/immuno121/ALS4GAN"
]
|
[
"THE CAFFARELLI-KOHN-NIRENBERG INEQUALITY FOR SUBMANIFOLDS IN RIEMANNIAN MANIFOLDS",
"THE CAFFARELLI-KOHN-NIRENBERG INEQUALITY FOR SUBMANIFOLDS IN RIEMANNIAN MANIFOLDS"
]
| [
"M Batista ",
"H Mirandola ",
"F Vitório "
]
| []
| []
| After works by Michael and Simon [10], Hoffman and Spruck [9], and White [14], the celebrated Sobolev inequality could be extended to submanifolds in a huge class of Riemannian manifolds. The universal constant obtained depends only on the dimension of the submanifold. A number of applications to submanifold theory and geometric analysis have been obtained from that inequality. It is worthwhile to point out that, by a Nash theorem, every Riemannian manifold can be seen as a submanifold of some Euclidean space. In the same spirit, Carron obtained a Hardy inequality for submanifolds in Euclidean spaces. In this paper, we will prove the Hardy, weighted Sobolev and Caffarelli-Kohn-Nirenberg inequalities, as well as some of their derivatives, such as the Gagliardo-Nirenberg and Heisenberg-Pauli-Weyl inequalities, for submanifolds in a class of manifolds that includes the Cartan-Hadamard ones. | null | [
"https://arxiv.org/pdf/1509.03857v1.pdf"
]
| 119,587,382 | 1509.03857 | d6fc468a16748ee8f2a612dab20e5460a56f0be0 |
THE CAFFARELLI-KOHN-NIRENBERG INEQUALITY FOR SUBMANIFOLDS IN RIEMANNIAN MANIFOLDS
13 Sep 2015
M Batista
H Mirandola
F Vitório
Introduction
Over the years, geometers have been interested in understanding how integral inequalities imply geometric or topological obstructions on Riemannian manifolds. With this purpose in mind, such integral inequalities lead us to study positive solutions to critical singular quasilinear elliptic problems, sharp constants, and existence, non-existence and symmetry results for extremal functions on subsets of Euclidean space. About these subjects, one can read, for instance, [1], [4], [7], [5], [6], [9], [10], [11] and references therein.
In the literature, some of the best known integral inequalities are the Hardy inequality, the Gagliardo-Nirenberg inequality and, more generally, the Caffarelli-Kohn-Nirenberg inequality. These inequalities imply comparisons of volume growth, estimates of the essential spectrum of Schrödinger operators, parabolicity, among other properties (see, for instance, [12, 8, 15]).
In this paper, we propose to study the Caffarelli-Kohn-Nirenberg (CKN) inequality for submanifolds in a class of Riemannian manifolds that includes, for instance, the Cartan-Hadamard manifolds, using an elementary and very efficient approach. We recall that a Cartan-Hadamard manifold is a complete simply-connected Riemannian manifold with nonpositive sectional curvature. Euclidean and hyperbolic spaces are the simplest examples of Cartan-Hadamard manifolds.
Preliminaries
In this section, let us start recalling some concepts, notations and basic properties about submanifolds. First, let M = M k be a k-dimensional Riemannian manifold with (possibly nonempty) smooth boundary ∂M . Assume M is isometrically immersed in a complete Riemannian manifoldM . Henceforth, we will denote by f : M →M the isometric immersion. In this paper, no restriction on the codimension of f is required. By abuse of notation, sometimes we will identify f (x) = x, for all x ∈ M . Let ·, · denote the Euclidean metric onM and consider the same notation to the metric induced on M . Associated to these metrics, consider the Levi-Civita connections D and ∇ onM and M , respectively. It easy to see that ∇ Y Z = (D Y Z) ⊤ , where ⊤ means the orthogonal projection onto the tangent bundle T M . The Gauss equation says
D_Y Z = ∇_Y Z + II(Y, Z),
where II is a quadratic form called the second fundamental form. The mean curvature vector is defined by H = Tr_M II. Let K : [0, ∞) → [0, ∞) be a nonnegative continuous function and let h ∈ C²([0, +∞)) be the solution of the Cauchy problem:
(1) h'' + Kh = 0, h(0) = 0, h'(0) = 1.
Let 0 < \bar r_0 = \bar r_0(K) ≤ +∞ be the supremum value such that the restriction h|_{[0, \bar r_0)} is increasing, and let [0, \bar s_0) = h([0, \bar r_0)). Notice that h' is nonincreasing, since h'' = −Kh ≤ 0.
Example 2.1. If K = b², with b ≥ 0, then (i) if b = 0, it holds h(t) = t and \bar r_0 = \bar s_0 = +∞; (ii) if b > 0, it holds h(t) = (1/b) sin(bt), \bar r_0 = π/(2b) and \bar s_0 = h(\bar r_0) = 1/b.
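As a quick verification of case (ii), added here for the reader's convenience and not part of the original computation, one checks directly that h(t) = (1/b) sin(bt) solves the Cauchy problem (1) and locates \bar r_0 and \bar s_0:
\[
h''(t) = -b\sin(bt) = -b^{2}h(t), \qquad h(0)=0, \qquad h'(0)=\cos(0)=1,
\]
\[
h'(t)=\cos(bt)>0 \iff t<\tfrac{\pi}{2b}, \qquad\text{so}\qquad \bar r_0=\tfrac{\pi}{2b}, \quad \bar s_0=h(\bar r_0)=\tfrac1b\sin\tfrac{\pi}{2}=\tfrac1b .
\]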
For ξ ∈ \bar M, let r_ξ = d_{\bar M}(·, ξ) be the distance function on \bar M from ξ. In this paper, we will deal with complete ambient spaces \bar M whose radial sectional curvature satisfies (K_rad)_{ξ_0} ≤ K(r_{ξ_0}), for some fixed ξ_0 ∈ \bar M. Let us recall the definition of radial sectional curvature. Let x ∈ \bar M and, since \bar M is complete, let γ : [0, t_0 = r_ξ(x)] → \bar M be a minimizing geodesic in \bar M from ξ to x. For every orthonormal pair of vectors Y, Z ∈ T_x\bar M we define
(K_rad)_ξ(Y, Z) = ⟨\bar R(Y, γ'(t_0))γ'(t_0), Z⟩.
Example 2.2. Let (P, dσ²_P) be a complete manifold. Consider the manifold \bar M = [0, \bar r_0) × P / ∼, where (0, y_1) ∼ (0, y_2) for all y_1, y_2 ∈ P, with the following metric:
(2) ⟨·, ·⟩_{\bar M} = dr² + h(r)² dσ²_P.
Since h > 0 in (0, \bar r_0), h(0) = 0 and h'(0) = 1, it follows that \bar M is a Riemannian manifold. If P = S^{n−1} with the round metric, ⟨·, ·⟩_{\bar M} is called a rotationally invariant metric.
We fix the point ξ 0 = (0, y) ∈M . The distance dM ((r, y), ξ 0 ) = r, for all (r, y) ∈M . The curvatura tensorR ofM satisfies
(3) \bar R(Y, ∂_r)∂_r = −(h''(r)/h(r)) Y, if Y is tangent to P; and \bar R(Y, ∂_r)∂_r = 0, if Y = ∂_r.
Hence, the radial sectional curvature (K_rad)_{ξ_0}(·, ·) = ⟨\bar R(·, ∂_r)∂_r, ·⟩, with base point ξ_0, satisfies (K_rad)_{ξ_0} = −h''(r)/h(r) = K(r).
A huge class of metrics are rotationally symmetric:
(i) the Euclidean metric: ⟨·, ·⟩_{R^n} = dr² + r² dσ²_{S^{n−1}}, in [0, ∞) × S^{n−1};
(ii) the spherical metric: ⟨·, ·⟩_{S^n} = dr² + sin²(r) dσ²_{S^{n−1}}, in [0, π] × S^{n−1};
(iii) the hyperbolic metric: ⟨·, ·⟩_{H^n} = dr² + sinh²(r) dσ²_{S^{n−1}}, in [0, ∞) × S^{n−1};
(iv) some classical examples in general relativity: the Schwarzschild metric, the De Sitter-Schwarzschild metric, the Kottler-Schwarzschild metric, among others.
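As an illustration of how (3) recovers the model curvatures (a verification sketch added by the editor, not in the original text), the warping functions of the first three examples give
\[
h(r)=r:\ -\frac{h''}{h}=0,\qquad h(r)=\sin r:\ -\frac{h''}{h}=\frac{\sin r}{\sin r}=1,\qquad h(r)=\sinh r:\ -\frac{h''}{h}=-\frac{\sinh r}{\sinh r}=-1,
\]
so the radial sectional curvatures of the Euclidean, spherical and hyperbolic metrics are 0, +1 and −1, respectively.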
Assume the radial sectional curvature of \bar M satisfies (K_rad)_{ξ_0} ≤ K(r), where r = r_{ξ_0} = d_{\bar M}(·, ξ_0). We fix 0 < r_0 < min{\bar r_0(K), Inj_{\bar M}(ξ_0)} and consider the geodesic ball B = B_{r_0}(ξ_0) = {x ∈ \bar M | d_{\bar M}(x, ξ_0) < r_0}. It follows that r is differentiable at all points of B* = B \ {ξ_0} and, by the Hessian comparison theorem (see Theorem 2.3, page 29 of [13]), we have
(4) Hess r(v, v) ≥ (h'(r)/h(r)) (1 − ⟨∇r, v⟩²),
for all points in B* and all vector fields v : B* → T\bar M with |v| = 1.
For a vector field Y : M → T\bar M, the divergence of Y on M is given by
div_M Y = Σ_{i=1}^{k} ⟨D_{e_i} Y, e_i⟩,
where {e_1, ..., e_k} denotes a local orthonormal frame on M. By simple computations, one has
Lemma 2.3. Let Y : M → T\bar M be a vector field and ψ ∈ C¹(M). The following items hold:
(a) div_M Y = div_M Y^⊤ − ⟨H, Y⟩;
(b) div_M(ψY) = ψ div_M Y + ⟨∇^M ψ, Y⟩.
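For completeness, here is a short derivation of item (a), added for the reader (the computation is standard): splitting Y = Y^⊤ + Y^⊥ and using ⟨Y^⊥, e_i⟩ = 0,
\[
\operatorname{div}_M Y=\sum_{i=1}^k\langle D_{e_i}Y^{\top},e_i\rangle+\sum_{i=1}^k\langle D_{e_i}Y^{\perp},e_i\rangle
=\operatorname{div}_M Y^{\top}-\sum_{i=1}^k\langle Y^{\perp},\mathrm{II}(e_i,e_i)\rangle
=\operatorname{div}_M Y^{\top}-\langle H, Y\rangle,
\]
since ⟨D_{e_i}Y^⊥, e_i⟩ = e_i⟨Y^⊥, e_i⟩ − ⟨Y^⊥, D_{e_i}e_i⟩ = −⟨Y^⊥, II(e_i, e_i)⟩. Item (b) then follows from the Leibniz rule applied to ψY.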
From now on, we will consider the radial vector field X = X ξ 0 = h(r)∇r, defined in B * . Notice that |X| = h(r) > 0 everywhere in B * .
Lemma 2.4. For all α ∈ (−∞, +∞), it holds
div_M(X/|X|^α) ≥ h'(r)[ (k − α)/h(r)^α + α|∇r^⊥|²/h(r)^α ]
in M ∩ B*. Here, (·)^⊥ denotes the orthogonal projection onto the normal bundle TM^⊥ of M.
Proof. By Lemma 2.3, item (b), div_M(X/h(r)^α) = (1/h(r)^α) div_M X + ⟨∇^M(1/h(r)^α), X⟩. Since 1 = |∇r|² = |∇r^⊤|² + |∇r^⊥|², and ∇^M(1/|X|^α) = −α h'(r)∇r^⊤/h(r)^{α+1}, one has
div_M(X/|X|^α) = (1/h(r)^α) div_M X − (α h'(r)/h(r)^{α+1}) ⟨∇r^⊤, h(r)∇r⟩
= (1/h(r)^α) div_M X − α h'(r)/h(r)^α + α h'(r)|∇r^⊥|²/h(r)^α.
On the other hand, let {e_1, ..., e_k} denote an orthonormal frame on M. By (4), we have
div_M X = Σ_{i=1}^{k} ⟨D_{e_i} X, e_i⟩ = Σ_{i=1}^{k} [ h'(r)⟨∇r, e_i⟩² + h(r) Hess r(e_i, e_i) ] ≥ h'(r)|(∇r)^⊤|² + h'(r)(k − |(∇r)^⊤|²) = k h'(r).
Lemma 2.4 follows.
The Hardy inequality for submanifolds
Carron [4] proved the following Hardy Inequality.
Theorem A (Carron). Let Σ^k be a complete noncompact Riemannian manifold isometrically immersed in a Euclidean space R^n. Fix v ∈ R^n and let r(x) = |x − v|, for all x ∈ Σ. Then, for all smooth functions ψ ∈ C_c^∞(Σ) compactly supported in Σ, the following Hardy inequality holds:
\[
\frac{(k-2)^{2}}{4}\int_{\Sigma}\frac{\psi^{2}}{r^{2}}\;\le\;\int_{\Sigma}\Big[|\nabla^{\Sigma}\psi|^{2}+\frac{k-2}{2}\,\frac{|H|\psi^{2}}{r}\Big].
\]
Just comparing Theorem A with Corollary 3.3 below, given ψ ∈ C ∞ c (Σ), let M be a compact subset of Σ with compact smooth boundary ∂M satisfying supp (ψ) ⊂ M ⊂ Σ. We will see that Corollary 3.3 does not generalize Theorem A, unless Σ is a minimal submanifold.
The result below will be fundamental to obtain our Hardy inequality (see Theorem 3.2).
Proposition 3.1. Fix ξ_0 ∈ \bar M and assume (K_rad)_{ξ_0} ≤ K(r), where r = r_{ξ_0} = d_{\bar M}(·, ξ_0). Assume further that M is contained in a ball B = B_{\bar r_0}(ξ_0), for some 0 < r_0 < min{\bar r_0(K), Inj_{\bar M}(ξ_0)}. Let 1 < p < ∞ and −∞ < γ < k. Then, for all ψ ∈ C¹(M), with ψ ≥ 0, it holds
\[
\frac{(k-\gamma)^{p}h'(r_0)^{p-1}}{p^{p}}\int_M\frac{\psi^{p}h'(r)}{h(r)^{\gamma}}
+\frac{\gamma[(k-\gamma)h'(r_0)]^{p-1}}{p^{p-1}}\int_M\frac{\psi^{p}h'(r)\,|(\nabla r)^{\perp}|^{2}}{h(r)^{\gamma}}
\le\int_M\frac{1}{h(r)^{\gamma-p}}\Big[|\nabla^{M}\psi|^{2}+\frac{\psi^{2}|H|^{2}}{p^{2}}\Big]^{p/2}
+\frac{[(k-\gamma)h'(r_0)]^{p-1}}{p^{p-1}}\int_{\partial M}\frac{\psi^{p}}{h(r)^{\gamma-1}}\langle\nabla r,\nu\rangle,
\]
provided that \int_{\partial M}\psi^{p}h(r)^{1-\gamma}\langle\nabla r,\nu\rangle exists. Here, ν denotes the outward conormal vector to ∂M.
Proof. First, we assume ξ_0 ∉ M. Let X = h(r_{ξ_0})∇r_{ξ_0} and write γ = α + β + 1, with α, β ∈ R. Let ψ ∈ C¹(M). By Lemma 2.4,
\[
\operatorname{div}_M\Big(\frac{\psi^{p}X^{\top}}{|X|^{\gamma}}\Big)
=\psi^{p}\operatorname{div}_M\Big(\frac{X^{\top}}{|X|^{\gamma}}\Big)+\Big\langle\nabla^{M}\psi^{p},\frac{X}{|X|^{\gamma}}\Big\rangle
=\psi^{p}\operatorname{div}_M\Big(\frac{X}{|X|^{\gamma}}\Big)+\psi^{p}\Big\langle\frac{X}{|X|^{\gamma}},H\Big\rangle+p\,\psi^{p-1}\Big\langle\nabla^{M}\psi,\frac{X}{|X|^{\gamma}}\Big\rangle
\ge\psi^{p}h'(r)\Big[\frac{k-\gamma}{h(r)^{\gamma}}+\frac{\gamma|\nabla r^{\perp}|^{2}}{h(r)^{\gamma}}\Big]+\Big\langle p\nabla^{M}\psi+\psi H,\frac{\psi^{p-1}X}{|X|^{\gamma}}\Big\rangle. \tag{5}
\]
By the divergence theorem,
\[
\int_M\psi^{p}h'(r)\Big[\frac{k-\gamma}{h(r)^{\gamma}}+\frac{\gamma|\nabla r^{\perp}|^{2}}{h(r)^{\gamma}}\Big]
\le-\int_M\Big\langle p\nabla^{M}\psi+\psi H,\frac{\psi^{p-1}X}{|X|^{\gamma}}\Big\rangle+\int_{\partial M}\frac{\psi^{p}}{|X|^{\gamma}}\langle X,\nu\rangle, \tag{6}
\]
where ν denotes the outward conormal vector to the boundary ∂M in M . (7), and multiplying both sides by 1
Let r * := max x∈supp (ψ) dM (f (x), ξ 0 ). Since 0 < r * < r 0 and h ′′ = −Kh ≤ 0 it holds that h ′ (r) ≥ h ′ (r * ) > h ′ (r 0 ) ≥ 0 in M . By the Young inequality with ǫ > 0 (to be chosen soon), it holds M ψ p h ′ (r)[ k − γ h(r) γ + γ|∇r ⊥ | 2 h(r) γ ] ≤ − M p ∇ M ψ |X| α + ψH |X| α , ψ p−1 X |X| β+1 + ∂M ψ p |X| γ X, ν ≤ 1 pǫ p M | p∇ M ψ h(r) α + ψH h(r) α | p + ǫ q q M |ψ| (p−1)q h(r) βq + ∂M ψ p h(r) γ−1 ∇ r, ν ≤ 1 pǫ p M | p 2 |∇ M ψ| 2 h(r) 2α + ψ 2 |H| 2 h(r) 2α | p/2 + ǫ q q M |ψ| (p−1)q h(r) βq h ′ (r) h ′ (r 0 ) + ∂M ψ p h(r) γ−1 ∇ r, ν , where q = p p−1 . Now, consider β = (p − 1)(α + 1). We have γ = qβ = p(α + 1). Thus, ϕ(ǫ) M ψ p h ′ (r) h(r) γ + pγǫ p M ψ p h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M [ p 2 |∇ M ψ| 2 h(r) 2α + ψ 2 |H| 2 h(r) 2α ] p/2 + pǫ p ∂M ψ p h(r) γ−1 ∇ r, ν (7) where ϕ(ǫ) = pǫ p [(k − γ) − ǫ q qh ′ (r 0 ) ] = pǫ p h ′ (r 0 ) [(k − γ)h ′ (r 0 ) − ǫ q q ]. Now, notice that h ′ (r 0 ) p ϕ ′ (ǫ)(ǫ) = pǫ p−1 [(k − γ)h ′ (r 0 ) − ǫ q q ] − ǫ p ǫ q−1 = pǫ p−1 [(k − γ)h ′ (r 0 ) − ǫ q q − ǫ p ǫ q−1 ] = pǫ p−1 [(k − γ)h ′ (r 0 ) − ǫ q ]. And, h ′ (r 0 ) p 2 ϕ ′′ (ǫ) = (p−1)ǫ p−2 [(k−γ)h ′ (r 0 )−ǫ q ]−qǫ p−1 ǫ q−1 . Thus, ϕ ′ (ǫ) = 0 if and only if ǫ q = (k − γ)h ′ (r 0 ) > 0. At this point, ϕ ′′ (ǫ) = −p 2 qǫ p+q−2 < 0. Hence, ϕ(ǫ) attains its maximum at ǫ 0 = [(k − γ)h ′ (r 0 )] p−1 p , with ϕ(ǫ 0 ) = p h ′ (r 0 ) [(k − γ)h ′ (r 0 )] p−1 (1 − 1 q )(k − γ)h ′ (r 0 ) = (k − γ) p h ′ (r 0 ) p−1 . Since pα = γ − p, byp p , it holds (k − γ) p h ′ (r 0 ) p−1 p p M h ′ (r)ψ p h(r) γ + γ[(k − γ)h ′ (r 0 )] p−1 p p−1 M ψ p h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M 1 h(r) γ−p [|∇ M ψ| 2 + ψ 2 |H| 2 p 2 ] p/2 + [(k − γ)h ′ (r 0 )] p−1 p p−1 ∂M ψ p h(r) γ−1 ∇ r, ν . Now, we assume ξ 0 ∈ M . Let Z 0 = {x ∈ M | f (x) = ξ 0 }.
Since every immersion is locally an embedding, it follows that Z 0 is discrete, hence it is finite, since M is compact. We write Z 0 = {p 1 , . . . , p l } and let ρ = r • f = dM (f , ξ 0 ). By a Nash Theorem, there is an isometric embedding ofM in an Euclidean space R N . The composition of such immersion with f induces an isometric immersionf : M → R N . By the compactness of M , finiteness of Z 0 , and the local form of an immersion, one can choose a small ǫ > 0, such
that [ρ < 2ǫ] := ρ −1 [0, 2ǫ) = U 1 ⊔ . . . ⊔ U l (disjoint union), where each U i is a neighborhood of p i in M such that the restrictionf | U i : U i → R N is a graph over a smooth function, say u i : U i → R N −k . Thus, considering the set [ρ < δ] = [r < δ] ∩ M (identifying x ∈ M with f (x)), with 0 < δ < ǫ, again by the finiteness of Z 0 , we have that the volume vol M ([ρ < δ]) = O(δ k ), as δ → 0. Similarly, one also obtain that vol ∂M (∂M ∩ [ρ < δ]) = O(δ k−1 ).
Now, for each 0 < δ < ǫ, consider the cut-off function η = η δ ∈ C ∞ (M ) satisfying:
0 ≤ η ≤ 1, in M ; η = 0, in [ ρ < δ], and η = 1, in [ ρ > 2δ]; (8) |∇ M η| ≤ L/δ, for some constant L > 1, that does not depend on δ and η. Consider φ = ηψ. Since φ ∈ C 1 (M ) and ξ 0 / ∈ M ′ := supp (φ), it holds (k − γ) p h ′ (r 0 ) p−1 p p M φ p h ′ (r) h(r) γ + γ[(k − γ)h ′ (r 0 )] p−1 p p−1 M φ p h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M h ′ (r 0 ) 1−p h(r) γ−p [|∇ M φ| 2 + φ 2 |H| 2 p 2 ] p/2 + [(k − γ)h ′ (r 0 )] p−1 p p−1 ∂M φ p h(r) γ−1 ∇ r, ν . The integral ∂M φ p h(r) γ−1 ∇ r, ν exists, since ∂M ψ p h(r) γ−1 | ∇ r, ν | exists and 0 ≤ φ ≤ ψ. Furthermore, M 1 h(r) γ−p [|∇ M φ| 2 + φ 2 |H| 2 p 2 ] p/2 = [ρ>2δ] 1 h(r) γ−p [|∇ M ψ| 2 + ψ 2 |H| 2 p 2 ] p/2 + [δ<ρ<2δ] 1 h(r) γ−p [|∇ M φ| 2 + φ 2 |H| 2 p 2 ] p/2 (9) and [δ<ρ<2δ] 1 h(r) γ−p [|∇ M φ| 2 + φ 2 |H| 2 p 2 ] p/2 ≤ [δ<ρ<2δ] O(1) h(r) γ−p [η p |∇ M ψ| p + |ψ| p |∇ M η| p + |ψ| p |H| p ] = [δ<ρ<2δ] O( 1 h(δ) γ−p )(O(1) + O( 1 δ p )) = O( 1 δ γ−p )(O(1) + O( 1 δ p ))O(δ k ) = O(δ k−γ ), as δ → 0. Therefore, it holds (k − γ) p h ′ (r 0 ) p−1 p p [r>2δ]∩M ψ p h ′ (r) h(r) γ + γ[(k − γ)h ′ (r 0 )] p−1 p p−1 [r>2δ]∩M ψ p h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M 1 h(r) γ−p [|∇ M ψ| 2 + ψ 2 |H| 2 p 2 ] p/2 + [(k − γ)h ′ (r 0 )] p−1 p p−1 ∂M ∩[ρ>2δ] ψ p h(r) γ−1 ∇ r, ν + O(δ k−γ ). Proposition 3.1, follows, since k − γ > 0 and ∂M ψ p h(r) γ−1 ∇ r, ν exits.
It is simple to see that, for all numbers a ≥ 0 and b ≥ 0, it holds
(10) min{1, 2^{(p−2)/2}}(a^p + b^p) ≤ (a² + b²)^{p/2} ≤ max{1, 2^{(p−2)/2}}(a^p + b^p).
In fact, to show this, without loss of generality we can suppose a² + b² = 1.
We write a = cos θ and b = sin θ, for some θ ∈ [0, π/2]. If p = 2, there is nothing to do. Assume p ≠ 2 and consider f(θ) = a^p + b^p = cos^p(θ) + sin^p(θ).
The derivative of f is given by f'(θ) = −p cos^{p−1}(θ) sin(θ) + p sin^{p−1}(θ) cos(θ). Thus, f'(θ) = 0 iff cos^{p−1}(θ) sin(θ) = sin^{p−1}(θ) cos(θ), that is, iff either cos(θ) = 0, or sin(θ) = 0, or cos^{p−2}(θ) = sin^{p−2}(θ). Thus, f'(θ) = 0 iff θ = 0, θ = π/2, or θ = π/4. So, the critical values are f(0) = f(π/2) = 1 and
f(π/4) = 2(1/√2)^p = 2^{1−p/2}. Thus, min{1, 2^{1−p/2}} ≤ f(θ) = a^p + b^p ≤ max{1, 2^{1−p/2}}.
As a consequence of (10) and Proposition 3.1, we obtain the following Hardy inequality.
Theorem 3.2. Fix ξ_0 ∈ \bar M and assume (K_rad)_{ξ_0} ≤ K(r), where r = r_{ξ_0} = d_{\bar M}(·, ξ_0). Assume that M is contained in a ball B = B_{r_0}(ξ_0), for some 0 < r_0 < min{\bar r_0(K), Inj_{\bar M}(ξ_0)}. Let 1 ≤ p < ∞ and −∞ < γ < k. Then, for all ψ ∈ C¹(M), it holds
\[
\frac{(k-\gamma)^{p}h'(r_0)^{p-1}}{p^{p}}\int_M\frac{|\psi|^{p}h'(r)}{h(r)^{\gamma}}
+\frac{\gamma[(k-\gamma)h'(r_0)]^{p-1}}{p^{p-1}}\int_M\frac{|\psi|^{p}h'(r)|\nabla r^{\perp}|^{2}}{h(r)^{\gamma}}
\le A_p\int_M\Big[\frac{|\nabla^{M}\psi|^{p}}{h(r)^{\gamma-p}}+\frac{|\psi|^{p}|H|^{p}}{p^{p}\,h(r)^{\gamma-p}}\Big]
+\frac{[(k-\gamma)h'(r_0)]^{p-1}}{p^{p-1}}\int_{\partial M}\frac{|\psi|^{p}}{h(r)^{\gamma-1}},
\]
where A_p = max{1, 2^{(p−2)/2}}. Moreover, if M is minimal, we can take A_p = 1.
Proof. We may assume h ′ (r 0 ) > 0, otherwise, there is nothing to do. First, we fix p > 1 and let ψ ∈ C 1 (M ). Take ǫ > 0 and consider the function ψ ǫ = (ψ 2 + ǫ 2 ) 1/2 . Note that ψ ǫ ≥ |ψ| ≥ 0 and |∇ψ ǫ | = ψ (ψ 2 +ǫ 2 ) 1/2 |∇ψ| ≤ |∇ψ|.
Thus, by Proposition 3.1,
(k − γ) p p p M ψ p ǫ h ′ (r) h(r) γ + γ(k − γ) p−1 p p−1 M ψ p ǫ h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M h ′ (r 0 ) 1−p h(r) γ−p [|∇ M ψ| 2 + ψ 2 ǫ |H| 2 p 2 ] p/2 + (k − γ) p−1 p p−1 ∂M ψ p ǫ h(r) γ−1 .
Since ψ ǫ 1 ≤ ψ ǫ 2 , if ǫ 1 < ǫ 2 , and |ψ| ≤ ψ ǫ ≤ |ψ| + ǫ, by taking ǫ → 0, we have
(k − γ) p p p M |ψ| p h ′ (r) h(r) γ + γ(k − γ) p−1 p p−1 M |ψ| p h ′ (r)|∇r ⊥ | 2 h(r) γ ≤ M h ′ (r 0 ) 1−p h(r) γ−p [|∇ M ψ| 2 + ψ 2 |H| 2 p 2 ] p/2 + (k − γ) p−1 p p−1 ∂M |ψ| p h(r) γ−1 .(11)
Now, taking p → 1, and applying the dominated convergence theorem, we obtain that (11) also holds for p = 1. Applying (10) in inequality (11), Theorem 3.2 follows.
As a consequence of Theorem 3.2, we obtain a Hardy type inequality for submanifolds in ambient spaces having a pole with nonpositive radial sectional curvature. Namely, the following result holds.
Corollary 3.3. Let \bar M be a complete simply-connected manifold with radial sectional curvature (K_rad)_{ξ_0} ≤ 0, for some ξ_0 ∈ \bar M. Let r = r_{ξ_0} = d_{\bar M}(·, ξ_0), and let 1 ≤ p < k and −∞ < γ < k. Then, for all ψ ∈ C¹(M), it holds
\[
\frac{(k-\gamma)^{p}}{p^{p}}\int_M\frac{|\psi|^{p}}{r^{\gamma}}
+\frac{\gamma(k-\gamma)^{p-1}}{p^{p-1}}\int_M\frac{|\psi|^{p}|\nabla r^{\perp}|^{2}}{r^{\gamma}}
\le A_p\int_M\Big[\frac{|\nabla^{M}\psi|^{p}}{r^{\gamma-p}}+\frac{|\psi|^{p}|H|^{p}}{p^{p}\,r^{\gamma-p}}\Big]
+\frac{(k-\gamma)^{p-1}}{p^{p-1}}\int_{\partial M}\frac{|\psi|^{p}}{r^{\gamma-1}},
\]
where A_p = max{1, 2^{(p−2)/2}}. Moreover, if M is minimal, we can take A_p = 1.
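To compare with Theorem A (an explicit specialization, written out here for convenience), take k ≥ 3, p = 2 and γ = 2 in Corollary 3.3 and assume ψ vanishes on ∂M, so the boundary term drops; since A_2 = 1, discarding the nonnegative |∇r^⊥|² term on the left gives
\[
\frac{(k-2)^{2}}{4}\int_{M}\frac{\psi^{2}}{r^{2}}\;\le\;\int_{M}|\nabla^{M}\psi|^{2}+\frac14\int_{M}\psi^{2}|H|^{2}.
\]
When M is minimal this coincides with Carron's bound with H = 0; in general the term ψ²|H|²/4 differs from the term (k−2)|H|ψ²/(2r) appearing in Theorem A, which is precisely the point of the remark made after Theorem A.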
The weighted Hoffman-Spruck inequality for submanifolds
Another consequence of Theorem 3.2 is the Hoffman-Spruck Inequality. Namely, fixed ξ 0 ∈M , we assume (K) rad ≤ K(r) inM , where r = r ξ 0 = dM (· , ξ 0 ). Let B be the geodesic ball inM centered at ξ 0 and radiusr 0 = min{r 0 (K), InjM (ξ 0 )}. Since M is compact and contained in B, it follows that r * = max x∈M r(x) <r 0 = min{r 0 (K), InjM (ξ 0 )} and h is increasing in [0,r 0 ). Hence, we may assume M is contained in a ball B r 0 (ξ 0 ), for some 0 < r 0 < min{r 0 (K), InjM (ξ 0 )}, , arbitrarily close tor 0 , satisfying h ′ (r 0 ) > 0. In particular, h ′ (r) > 0, for all 0 ≤ r ≤ r 0 , since h ′′ = −Kh ≤ 0. Applying Theorem 3.2, we conclude M cannot be minimal. On the other hand, notice that η = −∇r is the unit normal vector to the boundary ∂B r 0 (ξ 0 ) pointing inward B r 0 (ξ 0 ). By the Hessian comparison theorem (see (4)), the shape operator A = −∇η satisfies A(v, v) = Hess r (v, v) ≥ h ′ (r 0 ) h(r 0 ) > 0, for all unit vector v tangent to ∂B r 0 (ξ 0 ). Hence, the boundary ∂B r 0 (ξ 0 ) is convex. Since (13) [
M |ψ| p * ] p p * ≤ S M (|∇ M ψ| p + |ψ| p |H| p p p ),
for all 1 ≤ p < k and ψ ∈ C 1 (M ), with ψ = 0 on ∂M , provided there exists z ∈ (0, 1) satisfyinḡ
J z := [ ω −1 k 1 − z vol M (supp (ψ))] 1 k < 1 b , if b > 0; and (14) 2h −1 b (J z ) ≤ InjM (supp (ψ)), (15) where h b (t) = t, with t ∈ (0, ∞), if b = 0, and h b (t) = 1 b sin(bt), with t ∈ (0, π 2b ), if b > 0 (In this case, h −1 b (t) = 1 b sin −1 (tb), with t ∈ (0, 1 b )).
Here, ω k is the volume of the standard unit ball B 1 (0) in R k , and Inj (supp (ψ)) is the infimum of the injectivity radius ofM restricted to the points of supp (ψ). Furthermore, the constant S = S k,z is given by
(16) S k,p,z = π 2 2 k k z(k − 1) ω −1 k 1 − z 1 k 2 p−1 [ p(k − 1) k − p ] p .
Moreover, if b = 0, S k,p,z can be improved by taking 1 instead π 2 . Remark 1. The Hoffman-Spruck's Theorem above can be generalized for ambient spacesM satisfying (K rad ) ξ ≤ K(r ξ ), for all ξ ∈M . The details and proof for this case can be found, for instance, in [2].
The constant S k,p,z as in (16) reaches its minimum at z = k k+1 , hence we can take S = S k,p = min
z∈(0,1) S k,p,z = π 2 2 k k k k+1 (k − 1) ω −1 k (k + 1) 1 k 2 p−1 [ p(k − 1) k − p ] p = π 2 2 k (k + 1) k+1 k k − 1 ω − 1 k k 2 p−1 [ p(k − 1) k − p ] p ,(17)providedJ = [ k+1 ω k vol M (supp (ψ))]
1 k ≤ s b and 2h −1 (J) ≤ InjM (supp (ψ)). Thus, as a corollary of Theorem 4.1 and (12), one has
Proposition 4.2. Fixed ξ 0 ∈M , assume (K rad ) ξ 0 ≤ K(r ξ 0 ). Assume M is contained in B = B r 0 (ξ 0 ), with r 0 = min{r 0 (K), InjM (ξ 0 )}.
Then, for all 1 ≤ p < k, there exists S > 0, depending only on k and p, such that, for all ψ ∈ C¹(M) with ψ = 0 on ∂M, it holds
\[
\Big[\int_M|\psi|^{p^*}\Big]^{p/p^*}\le S\int_M\Big(|\nabla^{M}\psi|^{p}+\frac{|\psi|^{p}|H|^{p}}{p^{p}}\Big),
\]
provided either k < 7, or vol_M(supp ψ) < D, where 0 < D ≤ +∞ depends only on Inj_{\bar M}(M) and r_0.
Now, we use Theorem 3.2 together with Theorem 4.1 in order to obtain a weighted Hoffman-Spruck inequality for submanifolds.
Theorem 4.4. Fix ξ_0 ∈ \bar M and assume (K_rad)_{ξ_0} ≤ K(r), where r = d_{\bar M}(·, ξ_0). Assume M is contained in B = B_{r_0}(ξ_0), with r_0 = min{\bar r_0(K), Inj_{\bar M}(ξ_0)}. Then, for all ψ ∈ C¹(M), with ψ = 0 on ∂M, it holds
\[
\frac1S\Big[\int_M\frac{|\psi|^{p^*}}{h(r)^{p^*\alpha}}\Big]^{p/p^*}
+\Phi_{k,p,\alpha}\int_M\frac{|\psi|^{p}h'(r)|\nabla r^{\perp}|^{2}}{h(r)^{p(\alpha+1)}}
+\Delta_{k,p,\alpha}\int_M\frac{|\psi|^{p}h'(r)|\nabla r^{\perp}|^{p}}{h(r)^{p(\alpha+1)}}
\le\Gamma_{k,p,\alpha}\int_M\Big[\frac{|\nabla^{M}\psi|^{p}}{h(r)^{p\alpha}}+\frac{|\psi|^{p}|H|^{p}}{p^{p}\,h(r)^{p\alpha}}\Big],
\]
provided either k < 7, or vol(M) < D, where 0 < D ≤ +∞ depends only on Inj_{\bar M}(M) and r_0. Here, p^* = kp/(k−p), S > 0 depends only on k and p, and
\[
\Gamma_{k,p,\alpha}=A_p\Big[1+|\alpha|^{\frac{2p}{2+p}}\,2^{\frac{|p-2|}{p+2}}\,h'(r_0)^{\frac{2(1-p)}{2+p}}\Big(\frac{p}{k-\gamma}\Big)^{\frac{2p}{p+2}}\Big]^{\frac{p+2}{2}}
=h'(r_0)^{1-p}A_p\Big[h'(r_0)^{\frac{2(p-1)}{p+2}}+|\alpha|^{\frac{2p}{2+p}}\,2^{\frac{|p-2|}{p+2}}\Big(\frac{p}{k-\gamma}\Big)^{\frac{2p}{p+2}}\Big]^{\frac{p+2}{2}},
\]
\[
\Phi_{k,p,\alpha}=2^{\frac{|p-2|}{2}}\,\frac{\gamma p}{k-\gamma}\Big[|\alpha|^{\frac{2p}{2+p}}+2^{-\frac{|p-2|}{p+2}}\,h'(r_0)^{\frac{2(p-1)}{p+2}}\Big(\frac{p}{k-\gamma}\Big)^{-\frac{2p}{p+2}}\Big]^{\frac{p}{2}}|\alpha|^{\frac{2p}{2+p}},
\qquad
\Delta_{k,p,\alpha}=A_p\Big[|\alpha|^{\frac{2p}{2+p}}+2^{-\frac{|p-2|}{p+2}}\,h'(r_0)^{\frac{2(p-1)}{p+2}}\Big(\frac{p}{k-\gamma}\Big)^{-\frac{2p}{p+2}}\Big]^{\frac{p}{2}}|\alpha|^{\frac{2p}{2+p}},
\]
where γ = p(α + 1) and A_p = max{1, 2^{(p−2)/2}}.
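Before the proof, a quick consistency check, added here for illustration: setting α = 0 gives γ = p, every term containing |α|^{2p/(2+p)} vanishes, and hence
\[
\Gamma_{k,p,0}=A_p\,[\,1\,]^{\frac{p+2}{2}}=A_p,\qquad \Phi_{k,p,0}=\Delta_{k,p,0}=0,
\]
so Theorem 4.4 reduces to
\[
\frac1S\Big[\int_M|\psi|^{p^*}\Big]^{p/p^*}\le A_p\int_M\Big(|\nabla^{M}\psi|^{p}+\frac{|\psi|^{p}|H|^{p}}{p^{p}}\Big),
\]
that is, the unweighted inequality of Proposition 4.2 with S replaced by S·A_p.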
Proof. First, we assume ξ 0 / ∈ M . Then, r = dM (· , ξ 0 ) > 0 on M , hence ψ h(r) α is a C 1 function on M vanishing on ∂M . By Proposition 4.2, there is a constant S > 0, depending only on k and p, such that, for all ψ ∈ C 1 (M ) with ψ = 0 on ∂M , the following inequality holds Using
(18) [ M |ψ| p * h(r) p * α ] p p * ≤ S M [|∇ M ( ψ h(r) α )| p + |ψ| p |H| p p p h(r) pα ],that ∇ M ( ψ h(r) α ) = ∇ M ψ h(r) α − αψh ′ (r)∇r ⊤ h(r) α+1
, by the Young inequality,
|∇ M ( ψ h(r) α )| 2 = α 2 ψ 2 h ′ (r) 2 |∇r ⊤ | 2 h(r) 2α+2 + ( −α h(r) 2α ) 2 ∇ M ψ, ψh ′ (r)∇r ⊤ h(r) + |∇ M ψ| 2 h(r) 2α ≤ (α 2 + |α|ǫ 2 ) ψ 2 h ′ (r) 2 |∇r ⊤ | 2 h(r) 2α+2 + (1 + |α| ǫ 2 ) |∇ M ψ| 2 h(r) 2α = (|α| + ǫ 2 ) |α|ψ 2 h ′ (r) 2 |∇r ⊤ | 2 h(r) 2(α+1) + |∇ M ψ| 2 ǫ 2 h(r) 2α ,(19)
for all ǫ > 0. Hence, using (10),
|∇ Σ ( ψ h(r) α )| p = (|∇ M ( ψ h(r) α )| 2 ) p 2 ≤ A p (|α| + ǫ 2 ) p 2 [ |α| p 2 ψ p h ′ (r)|∇r ⊤ | p h(r) p(α+1) + |∇ M ψ| p ǫ p h(r) pα ] (20) ≤ A p (|α| + ǫ 2 ) p 2 [ |α| p 2 ψ p h ′ (r) h(r) p(α+1) (B p − |∇r ⊥ | p ) + |∇ M ψ| p ǫ p h(r) pα ] (21) = A p (|α| + ǫ 2 ) p 2 [ (B p |α| p 2 ψ p h ′ (r) h(r) p(α+1) − |α| p 2 ψ p h ′ (r)|∇r ⊥ | p ) h(r) p(α+1) + |∇ M ψ| p ǫ p h(r) pα ].
where A p = max{1, 2 p−2 2 } and B p = max{1, 2 2−p 2 }. Inequality (20) holds since h ′′ ≤ 0, hence h ′ (r) ≤ h ′ (0) = 1 and Inequality (20) holds since, by (10), one has |∇r ⊤ | p + |∇r ⊥ | p ≤ max{1, 2 2−p 2 }. Thus, using (18) and (20), we obtain
1 S [ M ψ p * h(r) p * α ] p/p * ≤ M ψ p |H| p p p h(r) pα + A p (|α| + ǫ 2 ) p 2 |α| p 2 B p M ψ p h ′ (r) h(r) p(α+1) + 1 ǫ p M |∇ M ψ| p h(r) pα − |α| p 2 M ψ p h ′ (r)|∇r ⊥ | p h(r) p(α+1) .
On the other hand, by using Theorem 3.2,
M ψ p h ′ (r) h(r) p(α+1) ≤ A k,p,α M [ |∇ M ψ| p h(r) pα + ψ p |H| p p p h(r) pα ]−B k,p,α M ψ p h ′ (r)|∇r ⊥ | 2 h(r) p(α+1) , = p(|α| + ǫ 2 ) p 2 −1 [ǫ |α| p 2 B p A k,p,α − ǫ 2−p |α|ǫ −3 ] = p(|α| + ǫ 2 ) p 2 −1 ǫ |α|[ |α| p−2 2 B p A k,p,α − ǫ −2−p ]. Thus, k ′ (ǫ) = 0 iff ǫ −2−p = |α| p−2 2 B p A k,p,α , i.e., ǫ = [|α| p−2 2 B p A k,p,α ] −1 p+2 .
Hence, it simple to see k(ǫ) reachs its minimum at ǫ 0 = [|α|
p−2 2 B p A k,p,α ] −1 p+2 . We obtain Γ k,p,α := C k,p,α,ǫ 0 = A p (|α| + ǫ 2 0 ) p 2 ( |α| p 2 B p A k,p,α + ǫ −p 0 ) = A p (|α| + [|α| p−2 2 B p A k,p,α ] −2 p+2 ) p 2 ( |α| p 2 B p A k,p,α + [|α| p−2 2 B p A k,p,α ] p p+2 ) = A p |α| p 2 (1 + |α| −2p 2+p [B p A k,p,α ] −2 p+2 ) p 2 ( 1 + |α| −2p p+2 [B p A k,p,α ] −2 p+2 )|α| p 2 B p A k,p,α = A p |α| p 2 (1 + |α| −2p 2+p [B p A k,p,α ] −2 p+2 ) p+2 2 |α| p 2 B p A k,p,α = A p |α| p 2 |α| −p [B p A k,p,α ] −1 (1 + |α| 2p 2+p [B p A k,p,α ] 2 p+2 ) p+2 2 |α| p 2 B p A k,p,α = A p (1 + |α| 2p 2+p [B p A k,p,α ] 2 p+2 ) p+2 2 = A p 1 + |α| 2p 2+p 2 |p−2| (p+2) h ′ (r 0 ) 2(1−p) 2+p ( p k − γ ) 2p p+2 p+2 2 .
The last equality holds since A p B p = max{1, 2
p−2 2 } max{1, 2 2−p 2 } = 2 |p−2| 2 . We also have Φ k,p,α := E k,p,α,ǫ 0 = A p (|α| + ǫ 2 0 ) p 2 |α| p 2 B p B k,p,α = A p (|α| + [|α| p−2 2 B p A k,p,α ] −2 p+2 ) p 2 |α| p 2 B p B k,p,α = A p (|α| 2 + |α| 4 2+p [B p A k,p,α ] −2 p+2 ) p 2 B p B k,p,α = A p (|α| 2p 2+p + [B p A k,p,α ] −2 p+2 ) p 2 |α| 2p 2+p B p B k,p,α = 2 |p−2| 2 γp k − γ (|α| 2p 2+p + 2 −|p−2| p+2 h ′ (r 0 ) 2(p−1) p+2 ( p k − γ ) −2p p+2 ) p 2 |α| 2p 2+p and ∆ k,p,α,ǫ := F k,p,α,ǫ = A p (|α| + ǫ 2 0 ) p 2 |α| p 2 = A p (|α| 2p 2+p + [B p A k,p,α ] −2 p+2 ) p 2 |α| 2p 2+p = max{1, 2 p−2 2 }(|α| 2p 2+p + 2 −|p−2| p+2 h ′ (r 0 ) 2(p−1) p+2 ( p k − γ ) −2p p+2 ) p 2 |α| 2p 2+p .
and, since h(δ) = O(δ), as δ → 0,
[δ<r<2δ]∩M |η∇ M ψ + ψ∇ M η| p h(r) αp ≤ [δ<r<2δ] (|∇ M ψ| p + ψ p O( 1 δ p ))O( 1 δ αp ) = {δ<r<2δ} (O( 1 δ αp ) + O( δ −p δ αp )) = (O( 1 δ αp ) + O( δ −p δ αp ))O(δ k ) = O(δ k−αp ) + O(δ k−p(α+1) ) = O(δ k−γ ),(24)as δ → 0, since k − γ > 0. Hence, 1 S [ [r>2δ]∩M ψ p * h(r) p * α ] p/p * + Φ k,p,α [r>2δ]∩M ψ p h ′ (r)|∇r ⊥ | 2 h(r) p(α+1) + ∆ k,p,α [r>2δ]∩M ψ p h ′ (r)|∇r ⊥ | p h(r) p(α+1) ≤ Γ k,p,α M [ |∇ M ψ| p h(r) pα + ψ p |H| p p p h(r) pα ] + O(δ k−γ ).
Since k − γ > 0, taking δ → 0, Theorem 4.4 follows.
As a corollary, we have the weighted Hoffman-Spruck type inequality for submanifolds in Cartan-Hadamard manifolds.
Corollary 4.5. Assume \bar M is a Cartan-Hadamard manifold. Fix any ξ_0 ∈ \bar M and let r = r_{ξ_0} = d_{\bar M}(·, ξ_0). Let 1 ≤ p < k and −∞ < α < (k−p)/p. Then, for all ψ ∈ C¹(M), with ψ = 0 on ∂M, it holds
\[
\frac1S\Big[\int_M\frac{|\psi|^{p^*}}{r^{p^*\alpha}}\Big]^{p/p^*}
+\Phi_{k,p,\alpha}\int_M\frac{|\psi|^{p}|\nabla r^{\perp}|^{2}}{r^{p(\alpha+1)}}
+\Delta_{k,p,\alpha}\int_M\frac{|\psi|^{p}|\nabla r^{\perp}|^{p}}{r^{p(\alpha+1)}}
\le\Gamma_{k,p,\alpha}\int_M\Big[\frac{|\nabla^{M}\psi|^{p}}{r^{p\alpha}}+\frac{|\psi|^{p}|H|^{p}}{p^{p}\,r^{p\alpha}}\Big].
\]
Here, p^* = kp/(k−p), S = S_{k,p} > 0 depends only on k and p, and Γ_{k,p,α}, Φ_{k,p,α} and Δ_{k,p,α} are defined as in Theorem 4.4, with h'(r_0) = 1.
The Caffarelli-Kohn-Nirenberg inequality for submanifolds
Inspired by an argument in Bazan and Neves [3], we will obtain the Caffarelli-Kohn-Nirenberg type inequality for submanifolds (see Theorem 5.2 below) by interpolating Theorem 3.2 and Theorem 4.4. In order to do that, we first test the interpolation argument by proving a particular case of our Caffarelli-Kohn-Nirenberg type inequality (compare with Theorem 4.4 above). We prove the following.
Theorem 5.1. Fix ξ_0 ∈ \bar M and assume (K_rad)_{ξ_0} ≤ K(r), where r = d_{\bar M}(·, ξ_0). Assume M is contained in B = B_{r_0}(ξ_0), with r_0 = min{\bar r_0(K), Inj_{\bar M}(ξ_0)}. Let 1 ≤ p < k and −∞ < α < (k−p)/p. Let s > 0 and α ≤ γ ≤ α + 1 satisfy the balance condition
\[
\frac1s=\frac1p-\frac{(\alpha+1)-\gamma}{k}=\frac{1}{p^*}+\frac{\gamma-\alpha}{k}.
\]
We write s = (1 − c)p + cp^*, for some c ∈ [0, 1]. Then, for all ψ ∈ C¹(M), with ψ = 0 on ∂M, it holds
\[
\Big[\int_M\frac{|\psi|^{s}}{h(r)^{s\gamma}}\Big]^{p/s}\le\Big(\frac{\Lambda}{h'(r_0)}\Big)^{\frac{p(1-c)}{s}}(S\,\Gamma)^{\frac{p^*c}{s}}\int_M\Big[\frac{|\nabla^{M}\psi|^{p}}{h(r)^{p\alpha}}+\frac{|\psi|^{p}|H|^{p}}{p^{p}\,h(r)^{p\alpha}}\Big],
\]
provided either k < 7 or vol(M) < D, where 0 < D ≤ +∞ is a constant depending only on r_0 and Inj_{\bar M}(M). Here, S > 0 is a constant depending only on k and p, and
\[
\Lambda=\max\{1,2^{\frac{p-2}{2}}\}\,\frac{p^{p}\,h'(r_0)^{-p}}{[k-p(\alpha+1)]^{p}},\qquad
\Gamma=\max\{1,2^{\frac{p-2}{2}}\}\,h'(r_0)^{1-p}\Big[h'(r_0)^{\frac{2(p-1)}{p+2}}+|\alpha|^{\frac{2p}{2+p}}\,2^{\frac{|p-2|}{p+2}}\Big(\frac{p}{k-p(\alpha+1)}\Big)^{\frac{2p}{p+2}}\Big]^{\frac{p+2}{2}}.
\]
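It may help to note (an elementary verification added here) that the two expressions in the balance condition agree because 1/p* = 1/p − 1/k:
\[
\frac{1}{p^*}+\frac{\gamma-\alpha}{k}=\frac1p-\frac1k+\frac{\gamma-\alpha}{k}=\frac1p-\frac{(\alpha+1)-\gamma}{k}=\frac1s .
\]
Moreover, γ = α gives s = p^* and γ = α + 1 gives s = p, so s ranges over [p, p^*] and the interpolation parameter c ∈ [0, 1] with s = (1 − c)p + cp^* is well defined.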
Now, we will state our Caffarelli-Kohn-Nirenberg type inequality for submanifolds.
Theorem 5.2. Fixed ξ 0 ∈M , assume (K rad ) ξ 0 ≤ K(r), where r = dM (· , ξ 0 ). Assume M is contained in B = B r 0 (ξ 0 ), with r 0 = min{r 0 (K), InjM (ξ 0 )}. Let 1 ≤ p < k and −∞ < α < k−p p . Furthermore, let q > 0, t > 0 and β, γ, σ satisfying (i) γ is a convex combination, γ = aσ + (1 − a)β, for some a ∈ [0, 1]
and α ≤ σ ≤ α + 1; Proof. If a = 1 then α ≤ γ = σ ≤ α + 1 and 1 t = 1 p − (α+1)−γ k = 1 p * + γ−α k , in particular, p ≤ t ≤ p * . Thus, Theorem 5.2 follows from Theorem 5.1. If a = 0 then γ = β and q = t, hence there is nothing to do. From now on, we will assume 0 < a < 1.
By (i) and (ii), we obtain
1 t = γ k + a( 1 p − α + 1 k ) + (1 − a)( 1 q − β k ) = aσ + (1 − a)β k + a( 1 p − α + 1 k ) + (1 − a)( 1 q − β k ) = a( 1 p − (α + 1) − σ k ) + 1 − a q = a s + 1 − a q ,(26)
(
n < 7, or vol(M ) < D, where D depends only on B. Thus, |∇ M ψ| p + |ψ| p |H| p p p ),for all 1 ≤ p < k and ψ ∈ C 1 (M ), with ψ = 0 on ∂M , where S depends only on k and p. By Hoffman and Spruck[9], one see that D depends only on InjM (M ) andr 0 (K). Namely, Hoffman and Spruck proved the following.
Theorem 4 . 1 .
41Assume the sectional curvatures ofM satisfyK ≤ b 2 , for some constant b ≥ 0. Then, there exists a constant S > 0 satisfying
provided either k < 7, or vol M (supp (ψ)) < D, where 0 < D ≤ +∞, depends only on InjM (M ) and r 0 .
Example 4. 3 .
3AssumeM is a Cartan-Hadamard manifold. Then, it holds r 0 (K) = InjM (ξ 0 ) = InjM (M ) = ∞. Hence, we can take D = +∞ in Proposition 4.2.
provided k < 7 or vol(M ) ≤ D, where D, depends only on r 0 and InjM (M ).
provided either k < 7 or vol(M ) < D, being 0 < D ≤ +∞ a constant depending only on r 0 and InjM (M ). Here, S > 0 is a constant depending only on k and p, and
Then, for all ψ ∈ C 1 (M ), with ψ = 0 on ∂M , it holds either k < 7 or vol(M ) < D, being 0 < D ≤ +∞ a constant depending only on r 0 and InjM (M ). Here, where c ∈ [0, 1] and s ∈ [p, p * ] depend only on the parameters p, k, α and σ, S depends only on k and p, and Λ and Γ are defined as in Theorem 5.1.
α+1)−σ] ∈ [p, p * ]. We write (27) t = (1 − b)q + bs.If s = q, we take b = a. If s = q then, by (26), t = q + b(s − q) whether s = q or not. In particular, (1 − b) = (1−a)s aq+(1−a)s , hence (1 − b)aq + (1 − b)(1 − a)s = (1 − a)s, which implies,(29)(1 − b)aq = (1 − a)bs.Thus, it holdsγt = (aσ + (1 − a)β)((1 − b)q + bs) = [(1 − b)aq]σ + absσ + (1 − a)(1 − b)qβ + [(1 − a)bs]β = [(1 − a)bs]σ + absσ + (1 − a)(1 − b)qβ + [(1 − b)aq]β = bsσ + (1 − b)qβ.
} p p h ′ (r 0 ) −p [k−p(α+1)] p and Γ = Γ k,p,α is given as in Theorem 4.4.
t = [ M |ψ| (1−b)q+bs h(r) (1−b)qβ+bsσ ] 1 t = [ M |ψ| bs h(r) bsσ |ψ| (1−b)q h(r) (1−b)qβ ] 1 t ≤ [ M
p ,where C is given as in Theorem 5.1. Theorem 5.2 is proved.As a corollary, we have the Caffarelli-Kohn-Nirenberg type inequality for submanifolds in Cartan-Hadamard manifolds.
where A k,p,α = Ap h ′ (r 0 ) p−1 p p (k−γ) p and B k,p,α = p p h ′ (r 0 ) 1−pConsider the function k(ǫ) = C k,p,α,ǫ . We haveThus, it follows that LetHence, after a straightforward computation, one has sγ = p(1 − c)(α + 1) + p * cα. By the Hölder inequality,The last inequality holds sincewhere Λ = Λ k,p,α = max{1, 2The last equality holds since, by(26), the balance condition holds:AssumeM is a Cartan-Hadamard manifold. We fix any ξ 0 ∈M and let r = dM (· , ξ 0 ). Let 1 ≤ p < k and −∞ < α < k−p p . Furthermore, let q > 0, t > 0 and β, γ, σ satisfying (i) γ is a convex combination, γ = aσ + (1 − a)β, for some a ∈ [0, 1] and α ≤ σ ≤ α + 1;Here,and s ∈ [p, p * ] depend only on the parameters p, k, α and σ, S depends only on k and p, andExample 5.4. There are some inequalities that derive from Theorem 5.2.For the sake of simplicity, we assumeM is a Cartan-Hadamard manifold. Fix any ξ 0 ∈M and let r = dM (· , ξ 0 ). By Corollary 5.3, there exists a constant C, depending only on the parameters k, p, q, t, γ, α and β, such that, for all ψ ∈ C 1 (M ) with ψ = 0 on ∂M , the following inequality holds.1 -The weighted Michael-Simon-Sobolev inequality (compare with Theorem 4.4 and Theorem 5.1) is obtained from Theorem 5.2 by taking a = 1 (hence γ = σ). In particular, if a = 1 and α = 0 then, for all γ ∈ [0, 1] and t > 0 satisfying 12 -Hardy type inequality for submanifolds (compare with Theorem 3.2). We take a = 1 and γ = α + 1. Hence, γ = σ and, by the balance condition, t = p. Thus, it holds3 -Galiardo-Nirenberg type inequality for submanifolds. We take α = β = σ = 0. We obtain, γ = 0 and, for all t > 0, satisfying 1 t = a p * + 1−a q , with a ∈ [0, 1], it holdsIn particular, if we take k ≥ 3, p = 2, q = 1, and a = 2/(2 + 4 k ), then, and the following Nash type inequality for submanifolds holdsAknowledgementThe second author thanks his friends Wladimir Neves and Aldo Bazan for your suggestions and comments.
[1] Badiale, M.; Tarantello, G. A Sobolev-Hardy inequality with applications to a nonlinear elliptic equation arising in astrophysics. Arch. Ration. Mech. Anal. 163 (2002), no. 4, 259-293.
[2] Batista, M.; Mirandola, H. Sobolev and isoperimetric inequalities for submanifolds in weighted ambient spaces. Preprint arXiv:1304.2271, to appear in Annali di Matematica Pura ed Applicata.
[3] Bazan, A.; Neves, W. A scaling approach to Caffarelli-Kohn-Nirenberg inequality. Preprint arXiv: 1314.1823.
[4] Carron, G. Inégalités de Hardy sur les variétés riemanniennes non-compactes. J. Math. Pures Appl. (9) 76 (1997), no. 10, 883-891.
[5] Caffarelli, L.; Kohn, R.; Nirenberg, L. First order interpolation inequalities with weights. Compositio Math. 53 (1984), no. 3, 259-275.
[6] Catrina, F.; Wang, Z. On the Caffarelli-Kohn-Nirenberg inequalities. C. R. Acad. Sci. Paris Sér. I Math. 330 (2000), no. 6, 437-442.
[7] Catrina, F.; Costa, D. G. Sharp weighted-norm inequalities for functions with compact support in R^N \ {0}. J. Differential Equations 246 (2009), no. 1, 164-182.
[8] do Carmo, M.; Xia, C. Complete manifolds with non-negative Ricci curvature and the Caffarelli-Kohn-Nirenberg inequalities. Compos. Math. 140 (2004), 818-826.
[9] Hoffman, D.; Spruck, J. Sobolev and isoperimetric inequalities for Riemannian submanifolds. Comm. Pure Appl. Math. 27 (1974), 715-727.
[10] Michael, J. H.; Simon, L. M. Sobolev and mean-value inequalities on generalized submanifolds of R^n. Comm. Pure Appl. Math. 26 (1973), 361-379.
[11] Kombe, I.; Özaydin, M. Hardy-Poincaré, Rellich and uncertainty principle inequalities on Riemannian manifolds. Trans. Amer. Math. Soc. 365 (2013), no. 10, 5035-5050.
[12] Ledoux, M. On manifolds with non-negative Ricci curvature and Sobolev inequalities. Comm. Anal. Geom. 7 (1999), no. 2, 347-353.
[13] Pigola, S.; Rigoli, M.; Setti, A. G. Vanishing and finiteness results in geometric analysis: a generalization of the Bochner technique. Progress in Mathematics, vol. 266, Birkhäuser Verlag, Basel, 2008.
[14] White, B. Which ambient spaces admit isoperimetric inequalities for submanifolds? J. Diff. Geometry 83 (2009), 213-228.
[15] Xia, C. The Gagliardo-Nirenberg inequalities and manifolds of non-negative Ricci curvature. J. Funct. Anal. 224 (2005), 230-241.
Instituto de Matemática, Universidade Federal de Alagoas, Maceió, AL, CEP 57072-970, Brazil. E-mail address: [email protected]
Instituto de Matemática, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ, CEP 21945-970, Brasil. E-mail address: [email protected]
Instituto de Matemática, Universidade Federal de Alagoas, Maceió, AL, CEP 57072-970, Brazil. E-mail address: [email protected]
| []
|
[
"Disturbance Enhanced Uncertainty Relations",
"Disturbance Enhanced Uncertainty Relations"
]
| [
"Liang-Liang Sun \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n",
"Kishor Bharti \nCentre for Quantum Technologies\nNational University of Singapore\n3 Science Drive 2117543Singapore, Singapore\n",
"Ya-Li Mao \nShenzhen Institute for Quantum Science and Engineering and Department of Physics\nSouthern University of Science and Technology\n518055ShenzhenChina\n",
"Xiang Zhou \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n",
"Leong-Chuan Kwek \nCentre for Quantum Technologies\nNational University of Singapore\n3 Science Drive 2117543Singapore, Singapore\n\nMajuLab\nCNRS-UNS-NUS-NTU International Joint Research Unit\nUMI 3654\nSingapore\n\nNational Institute of Education\nNanyang Technological University\n1 Nanyang Walk637616Singapore\n",
"Jingyun Fan \nSchool of Electrical and Electronic Engineering Block S2.1\n50 Nanyang Avenue639798Singapore\n\nShenzhen Institute for Quantum Science and Engineering and Department of Physics\nSouthern University of Science and Technology\n518055ShenzhenChina\n\nGuangdong Provincial Key Laboratory of Quantum Science and Engineering\nSouthern University of Science and Technology\n518055ShenzhenChina\n",
"Sixia Yu \nHefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n"
]
| [
"Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina",
"Centre for Quantum Technologies\nNational University of Singapore\n3 Science Drive 2117543Singapore, Singapore",
"Shenzhen Institute for Quantum Science and Engineering and Department of Physics\nSouthern University of Science and Technology\n518055ShenzhenChina",
"Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina",
"Centre for Quantum Technologies\nNational University of Singapore\n3 Science Drive 2117543Singapore, Singapore",
"MajuLab\nCNRS-UNS-NUS-NTU International Joint Research Unit\nUMI 3654\nSingapore",
"National Institute of Education\nNanyang Technological University\n1 Nanyang Walk637616Singapore",
"School of Electrical and Electronic Engineering Block S2.1\n50 Nanyang Avenue639798Singapore",
"Shenzhen Institute for Quantum Science and Engineering and Department of Physics\nSouthern University of Science and Technology\n518055ShenzhenChina",
"Guangdong Provincial Key Laboratory of Quantum Science and Engineering\nSouthern University of Science and Technology\n518055ShenzhenChina",
"Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina"
]
| []
| Uncertainty and disturbance are two of the most fundamental properties of a quantum measurement, and they are usually studied separately in terms of the preparation and the measurement uncertainty relations. Here we establish an intimate connection between them that goes beyond the above-mentioned two kinds of uncertainty relations. Our basic observation is that the disturbance of one measurement to a subsequent measurement, which can be quantified from observed data, sets lower bounds on uncertainty. This idea can be applied universally to various measures of uncertainty and disturbance, with the help of the data processing inequality. The obtained relations, referred to as disturbance enhanced uncertainty relations, immediately find various applications in the field of quantum information. They ensure preparation uncertainty relations, such as novel entropic uncertainty relations independent of the Maassen and Uffink relation, and they also yield a simple protocol to estimate coherence. We anticipate that this new twist on the uncertainty principle may shed new light on quantum foundations and may also inspire further applications in the field of quantum information. | null | [
"https://arxiv.org/pdf/2202.07251v1.pdf"
]
| 246,863,847 | 2202.07251 | 58161d866bd55c374b5463651821be5937a86807 |
Disturbance Enhanced Uncertainty Relations
Liang-Liang Sun
Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics
University of Science and Technology of China
230026HefeiAnhuiChina
Kishor Bharti
Centre for Quantum Technologies
National University of Singapore
3 Science Drive 2117543Singapore, Singapore
Ya-Li Mao
Shenzhen Institute for Quantum Science and Engineering and Department of Physics
Southern University of Science and Technology
518055ShenzhenChina
Xiang Zhou
Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics
University of Science and Technology of China
230026HefeiAnhuiChina
Leong-Chuan Kwek
Centre for Quantum Technologies
National University of Singapore
3 Science Drive 2117543Singapore, Singapore
MajuLab
CNRS-UNS-NUS-NTU International Joint Research Unit
UMI 3654
Singapore
National Institute of Education
Nanyang Technological University
1 Nanyang Walk637616Singapore
Jingyun Fan
School of Electrical and Electronic Engineering Block S2.1
50 Nanyang Avenue639798Singapore
Shenzhen Institute for Quantum Science and Engineering and Department of Physics
Southern University of Science and Technology
518055ShenzhenChina
Guangdong Provincial Key Laboratory of Quantum Science and Engineering
Southern University of Science and Technology
518055ShenzhenChina
Sixia Yu
Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics
University of Science and Technology of China
230026HefeiAnhuiChina
Disturbance Enhanced Uncertainty Relations
I. INTRODUCTION
The uncertainty principle, one of the fundamental traits of quantum mechanics, is quantified by two kinds of uncertainty relations, namely, for preparation and for measurement. They are commonly investigated in terms of the uncertainties [1][2][3][4][5][6] and the error-disturbance [7][8][9][10][11][12][13], respectively. By now, these two fundamental properties are relatively well understood and have been harnessed for applications such as the security proof of quantum communication [5,[14][15][16], witnessing entanglement [17,18], bounding quantum correlations [19][20][21], and quantum metrology [22][23][24][25][26]. As two basic properties, disturbance and uncertainty present themselves simultaneously in a quantum measurement process. Frequently, however, they are studied separately within one of the two types of uncertainty relations.
A physical state, which may be prepared in a coherent superposition of classically distinguishable ones, undergoes a sudden random collapse after a measurement. The post-measurement state and subsequent measurements are therefore disturbed. Intuitively, with suitable measures, the less uncertainty a measurement exhibits, the less it will disturb the quantum state and the subsequent measurements. Therefore the measurement uncertainty should impose some constraints on its disturbance effect, and vice versa. For a given outcome of a measurement, Winter's gentle measurement lemma states that the post-measurement state is disturbed only slightly if the associated outcome happens with a high probability (exhibiting small uncertainty) [27], and this special link has already led to important applications such as the channel coding theorem [28,29]. By using a specific distance measure to quantify disturbance, a trade-off between uncertainty and disturbance has been established which, though dealing with local properties, can be used to bound nonlocality [30]. These results suggest an intrinsic link between uncertainty and disturbance which, if it exists, would bring deeper insight into the uncertainty principle and may find novel applications in the field of quantum information science.
In this paper, we present such a link, referred to as a disturbance enhanced uncertainty relation, exploiting the data processing inequality and the fact that the disturbance caused by a measurement to the state can be quantified by physical measures based on observed data. We establish a universal relation between uncertainty and disturbance, valid for all disturbance measures that are induced by state distance measures with suitable properties (see Eq. (1) below). Specifically, we also derive corresponding uncertainty-disturbance trade-offs for some widely used distance measures in various quantum informational scenarios. As applications, our relations, which impose constraints on the intrinsic "spreads" of incompatible quantities, can be seen as preparation uncertainty relations, giving rise to novel entropic uncertainty relations that are independent of the Maassen-Uffink relation [6]. Moreover, our disturbance enhanced uncertainty relations are suitable for coherence detection, providing operational upper and lower bounds for distance-based coherence measures. We therefore anticipate that this new perspective on the uncertainty principle may have far-reaching impact on quantum information science and quantum foundations.
II. UNCERTAINTY-DISTURBANCE RELATIONS
We consider a sequential measurement →B of two
incompatible observables = {|A i A i | = Π A i } and B = {|B j B j | = Π B j },
where {|A i } and {|B j } are two sets of orthonormal bases. The intrinsic distributions of A andB on quantum state ρ are given by probability vectors
p = {p i = tr(ρΠ A i )} and q = {q j = tr(ρΠ B j )}. Measuring on the state ρ leaves a disturbed state ρ A = Φ A (ρ) = i p i Π A i , where Φ A (·) ≡ i Π A i · Π A i and similarly Φ B (·) = j Π B j · Π B
j are two complete-positivetrace-preserving (CPTP) operations. A subsequent mea-surementB on the state ρ A would yield a disturbed dis-
tribution q = {q j } with q j = tr(ρ A Π B j ) = i c ij p i = [[C· p]] j ,
where the overlaps of eigenstates {c ij = | A i |B j | 2 } form a unitary stochastic matrix [31,32], denoted by C, which depends only on the choices of andB and characterizes their incompatibility.
Here we shall quantify the uncertainty of by some measure δ A , which is expected to respect Shur concavity as a function of probability distribution and may depend on which state distance that is used to quantify the distinguishability of states (as shown below). We denote by D A→B the disturbance introduced by toB which is quantified by the distance between q and q and depends also on the state distance we choose. We note that the disturbance is estimated directly from experimental data, i.e., the undisturbed and disturbed statistics ofB.
Two key tools developed in the field of quantum information turn out to be very useful to explore the relation between δ A and D A→B . One tool is the state distance, specified by D(·, ·), which quantifies how distinguishable two quantum states are. Widely used measures of state distance are trace distance [33], Rényi divergence [34,35], Tsallis relative entropy [36], and infidelity. The state distance we choose should be non-increasing under at least CPTP operations Φ B . The other tool is the data processing inequality, i.e., two quantum states become less distinguishable after a general CPTP operation D(ρ 1 , ρ 2 ) ≥ D[Φ(ρ 1 ), Φ(ρ 2 )]. The data processing inequality has been used to derive entropic uncertainty relations [37][38][39]. Here, we shall see that it can lead to more fruitful results. In what follows we shall present two kinds of characterization of uncertainty and disturbance trade-off, a universal one and a distance measure dependent one.
For the purpose of a universal relation we shall make two reasonable assumptions about the state distance measure we choose besides its monotonicity under CPTP operations. First, we assume that the distance between two pure states, say |φ and |φ A , is an increasing function [56]
D(φ, φ A ) = G D IF(φ, φ A ) ,(1)of their infidelity IF(φ, φ A ), where G D (x)
is an increasing single variable function for x ≥ 0, depending on the distance measure we choose, and the infidelity for two general density matrices is
IF(ρ 1 , ρ 2 ) = 1 − F(ρ 1 , ρ 2 ) 2 with F(ρ 1 , ρ 2 ) = tr( √ ρ 2 ρ 1 √ ρ 2 )
being the quantum fidelity. For two pure states, their infidelity becomes sim-
ply their non-overlap IF(φ, φ A ) = 1 − | φ|φ A | 2 .
This assumption is reasonable since the more distinguishable the states the larger the infidelity and therefore the larger the distance should be. For each distance measure satisfying above property Eq.(1), we can introduce a gauged distance measurẽ
D(ρ 1 , ρ 2 ) = G −1 D D(ρ 1 , ρ 2 )(2)
which also satisfies the data processing inequality as function G D is monotonous. We shall refer to such kinds of distance measures as gaugeable. As can be seen in Table I, almost all commonly used distance measures are gaugeable. Second, we assume the state distance is unitary invariant, i.e., D(ρ 1 , ρ 2 ) = D(U ρ 1 U † , U ρ 2 U † ), which is satisfied by all distance measures considered below. As a result we can write D(Φ B (ρ), Φ B (ρ A )) = D(q, q ) since these two density matrices are diagonal in the same basis. For the state distance measure assuming above two assumptions, we have an universal disturbance-uncertainty relation (see Methods). Theorem 1 (Uncertainty-Disturbance relation). Performing sharp measurements in order →B on a system in state ρ, resulting in data p, q and q , it holds
δ A (ρ) ≥= max DD (q, q ) :=D A→B (ρ).(3)
where the maximization is taken over all monotonous gaugeable distance measures.
In general, if we can bound a given specific distance measure D(ρ, ρ A ) from above by a Schur concave function δ D A of p, the data processing inequality will lead to a specific pairwise definitions of uncertainty δ D A and disturbance D A→B via a distance-measure dependent uncertainty-disturbance relation
δ D A (ρ) ≥ D(ρ, ρ A ) ≥ D[Φ B (ρ), Φ B (ρ A )] = D A→B (ρ),(4)
where D(ρ, ρ A ) specifies disturbance in ρ, and D A→B specifies disturbance in measurement. For a given distance measure, the specific uncertainty-disturbance relation Eq.(4) may be different from the universal one, i.e., Eq(3).
In Table 1, we have summarized uncertaintydisturbance relations, universal Eq.(3) or distancemeasure dependent Eq.(4), arising from some widely used distance measures with proofs present in Methods. These distance measures include infidelity (if), trace distance (tr), Tsallis relative entropy (ts), Rényi divergence (rd), relative entropy (re), and Hilbert-Schimdt (hs) distance from first line to the last, respectively, in Table 1. Some remarks are in order. First, U if is equivalent to U 0.5 rd and U tr and U tr arise from the same distance measure, namely, the trace distance, while their uncertainties are quantified by different measures. We note that these two relations are independent. Second, in spite of the fact that relative entropy is not gaugeable, a specific uncertainty-disturbance relation still exists. Third, we also include Hilbert-Schmidt distance measure because, even though it is monotonous under dephasing operation but not a general CPTP operation [40], a uncertaintydisturbance relation can be obtained. Fourth, when considering the measurement sequenceB →Â, one obtains a dual of the corresponding uncertainty-disturbance relations. Lastly, for other monotonous distance measures we can obtain similar distance-measure dependent uncertainty-disturbance relations.
From discussion above, δ D A and D A→B can be pairwisely defined from a given state distance measure, where uncertainty measures are Shur concave and are equal or equivalent to α−Rényi entropy. It is worthy of pointing out that these uncertainty measures, such as Shannon entropy H(p) (employed in U re ), are intimately related to information gain. Previous researches from error-disturbance relations, in which a lower bound on disturbance is placed by the error allowed in a previous measurement [8,9,13,41], reveal an aspect of uncertainty principle that information gain implies disturbance. Here, our uncertainty-disturbance relations, especially U re , uncover another facet that disturbance of a measurement lower-bounds its information gain.
III. UNCERTAINTY DISTURBANCE TRADE-OFF AS PREPARATION UNCERTAINTY RELATIONS
In general, for two measurements andB, any relation reflecting the fact that probability distributions q and p cannot be simultaneously sharp, is an uncertainty relation. The disturbance enhanced uncertainty relations listed in the third column are formulated in terms of p, q, and q = C · p with C being the matrix formed by the overlaps of eigenstates of andB. Provided C, the relations, which are actually formulated in terms of intrinsic distributions p and q, can constrain their uncertainties and capture the spirit of preparation uncertainty.
As an example, we shall derive a novel entropic uncertainty relation from our uncertainty-disturbance relation. From uncertainty-disturbance relation U α ts arising form Tsallis relative entropy, it follows the following novel entropic uncertainty relation
1 2 − α H 2−α (q) + H α (p) ≥ − log c,(5)
where 0 ≤ α < 1. This is due to the fact that
q i = j c ij p j ≤ max ij c ij ≡ c so that 1 2 − α H 2−α (p) ≥ 1 α − 1 log i q α i c 1−α = − log c−H α (p).
A dual of Eq.(5) can be readily obtained by swapping p and q. In comparison, the well-known Maassen-Uffink (MU) entropic uncertainty relation reads
H α (p) + H β (q) ≥ −logc,(6)
where α, β ≥ 1 2 satisfying 1 α + 1 β = 2 are referred to as conjugated indices. The MU relation have found numerous applications in many information tasks and have been generalized in various scenarios [5,37,39,42,43]. Generalizing entropic uncertainty relation to the cases of nonconjugated indices is a topic that has sustained interests [44].
We shall see that uncertainty relations Eq.(5) and Eq.(6) are independent. In fact, the factor (2 − α) −1 ≤ 1 in Eq.(5) allows a possible strengthening over MU uncertainty Eq. (6). To see this we assume that p is uniform so that H α (p) = log d for arbitrary α and the indices pair for Eq.(5) is {β, 2 − β}, then and log d + 1 β H β (q) ≤ log d + H β (q) when 1 ≤ β. Therefore, Eq.(5) indicates an alternative approach of generalization, namely, by introducing modifying fractions, which is made possible by considering disturbance.
In MU entropic uncertainty relations, the incompatibility is characterized by a single number c instead of the whole set of {c ij } or the transition matrix C. This simplification leads to neat representations of uncertainty relations while sacrifices their tightness. In contrast, our uncertainty-disturbance trade-offs include all {c ij } and therefore are tighter, justifying the name disturbance enhanced uncertainty relations. For a visualized comparison, we present two case studies of the uncertaintydisturbance type uncertainty relations regarding to the tightness in the case of qubit (d = 2) and qutrit (d = 3) as illustrated in Table 2. For the qubit case, p, q, and C are determined by three independent parameters, {p 0 , q 0 , c 00 } ∈ [0, 1]. Tightness means how strong the uncertainty relations can bound p 0 and q 0 under a given c 00 . Geometrically one may visualize that each one of the inequalities in U tr , U tr , U α ts , U α rd , U re and the MU relation for measurementŝ A →B together with their dual inequalities for mea-surementsB →Â enclose a region shown in Fig.1 and their volumes are computed in Table 2. Hence, the diagrams and the volumes present a direct comparison regarding the tightness of these uncertainty relations. The smaller the volume, the tighter the constraint. The volume of parameter space is normalized to 1 if without any other constraints. Under constraint of our disturbance enhanced uncertainty relations, the volumes are in the range [0.705, 0.930], which are significantly smaller than the volume(=0.974) for the MU relation (α = 1).
D UD (measure-dependent)ŨD (uinversal) IF(ρ, ρA) U if : H 1 2 (p) ≥ D2(q q )Ũ if : δA(ρ) ≥ 1 − i qiq i 2 1 2 tr |ρ − ρA| Utr : δA(ρ) ≥ |q − q |Ũtr : δA(ρ) ≥ |q − q | 1 2 tr |ρ − ρA| U tr : 1 2 p 1 2 − 1 ≥ |q − q |Ũ tr : δA(ρ) ≥ |q − q | 1 α [1 − Tr(ρ α ρᾱ A )] Uts : 1 2−α H2−α(p) ≥ Dα(q q )Ũts : δA(ρ) ≥ 1 − i q α i q ᾱ i 1 α −1 α log tr(ρᾱ 2α A ρ 1 α ρᾱ 2α A ) α U rd : H 1 α (p) ≥ Dα(q q )Ũ rd : δA(ρ) ≥ 1 − i q α i q ᾱ i 1 α S(ρ||ρA) Ure : H(p) ≥ H(q q ) - Tr(ρ − ρA) 2 U hs : δA(ρ) ≥ i (qi − q i ) 2 -
For the qutrit case, the hypersurfaces corresponding to the uncertainty-disturbance relations enclose some regions in the parameter space spanned by p, q, and C. These conditions respectively yield the same computed volumes (see Table 2). The corresponding volumes are in the range [0.887, 0.947], which are also significantly smaller than the volume(=0.999) for the MU relation (see Table 2). Therefore, we have shown that the uncertaintydisturbance relations impose stronger constraints than the MU relation on the observed data, since in our cases we have used all elements of matrix C in the analysis.
IV. DETECTING COHERENCE WITH UNCERTAINTY-DISTURBANCE RELATION.
Coherence is a fundamental quantum feature that finds many applications ranging from quantum information science and quantum foundations [45] to quantum biology [46,47]. In practice detecting coherence is a crucial but complicated task. As another application, our disturbance enhanced uncertainty relation can provide an efficient detection of coherence.
Coherence is characterized with respect to some computation basis, for example, {|A i } and a coherence measure quantifies how much coherence contained in a quantum state [48]. A widely used measure of coherence [45,49,50] is defined by the minimum distance C D = min σ D(ρ, σ) to the set of all incoherent states (σ), which are diagonal states in the given basis. Here D is some suitable state distance measure. Naturally, an operational upper-bound C D ≤ D(ρ, ρ A ) ≤ δ D A (ρ) in terms of the uncertainty follows from uncertainty disturbance trade-off Eq.(4) since ρ A is an incoherent state. We note that both uncertainty and coherence are quantities defined with respect to computation basis, and the former relates to the diagonal terms of density matrix while the latter relates to the off-diagonal terms. The upper bound above provide a connection between them. For those distance measures whose nearest incoherent state is ρ A , our uncertainty disturbance trade-off Eq.(4) also provides an operational lower-bound D A→B ≤ C D , in terms of disturbance.
As a case illustration, we take the relative entropy as distance measure and we have relative entropy measure of coherence C r (ρ) ≡ S(ρ ρ A ), which quantifies the distillable coherence from a state ρ as well as extractable quantum randomness [51,52]. In this case the nearest incoherent state to ρ is exactly ρ A . By the above analysis, one immediate has operational bounds as
H(p) ≥ C r (ρ) ≥ H(q q ).(7)
One needs to independently measure →B and observ-ableB, which yield distributions p, q and q = C · p. When q = q, the observableB yields a trivial estimation and these failing cases compose a set of measure zero. Therefore, it is almost impossible to yield a trivial estimation even only one additional measurement is employed. For comparison, in a previous approach based on the theory of majorization [53,54], where a nontrivial estimation of the spectrum of ρ require that the distribution from a test measurement majorize the diagonal parts of the state, one may need several measurements so as to obtain such a distribution.
V. DISCUSSIONS
To summarize, we have established a quantitative link, a universal one (Theorem 1) and a distance-measure dependent one (for commonly used distance measures), between disturbance and uncertainty in sequential sharp measurements, uncovering a facet of uncertainty principle that has not been covered by previous approaches via preparation and the measurement uncertainty relations. Our uncertainty-disturbance trade-offs involve many basic concepts such as α−Rényi entropy, Tsallis relative entropy, etc., which have nice properties and are frequently employed in the field of quantum information. Thus, the reported relations naturally find corresponding applications such as in deriving novel uncertainty relations and detecting coherence. This new twist on uncertainty principle promises potential further applications in quantum information science and may shed new light in quantum foundations. One next immediate task is to generalize the uncertainty disturbance relations to general measurements find possible applications in tasks where uncertainty relation plays a key role, e.g., the detection of coherence quantified in other measures and the certification of quantum randomness. Moreover our relations, as generalization of Winter's gentle measurement lemma, may inspire further applications in channel coding theorem.
SUPPLEMENTARY MATERIALS
Proof of Theorem 1.
Our main tool is the data processing inequality. Consider an arbitrary monotonous (under CPTP) and gaugeable distance (satisfying assumption Eq.(1)), we have
δ A (ρ) := 1 − p 2 2 = 1 − tr(ρρ A ) ≥ IF(ρ, ρ A ) = IF(φ, φ A ) =D(φ, φ A ) ≥D(ρ, ρ A ) ≥D(Φ B (ρ), Φ B (ρ A )) =D(q, q )
In the first line above we have denoted p α = ( i p α i ) 1 α and 1 − p 2 2 is Shur concave and therefore a well-defined uncertainty measure. In the second line the inequality is due to the fact F(ρ 1 , ρ 2 ) 2 ≥ tr(ρ 1 · ρ 2 ) [55] for two general density matrices ρ 1 and ρ 2 and the equality is becuause for two mixed states, say ρ and ρ A , there are different purifications and the optimal ones |φ and |φ A give the quantum fidelity | φ|φ A | = F(ρ, ρ A ). In the third line the equality is due to the definition of gauged distance measure and the inequality is due to data processing inequality applied to φ, φ A under partial trace. In the fourth line data processing inequality is employed again for ρ, ρ A under Φ B . In the last line that we have noted that Φ B (ρ) and Φ B (ρ A ) are diagonal states in the same basis and the distance is basis-independent due to the unitary invariance, therefore the distance is the function of diagonal terms, namely, statistics arising from measuring B on ρ and ρ A .
Proof of uncertainty-disturbance Utr and U tr As the first example we employ the trace distance D tr (ρ, ρ A ) = 1 2 tr |ρ − ρ A | as the distance measure where tr |N | = tr √ N N † . For two pure states we have D tr (φ, φ A ) = IF(φ, φ A ) and therefore G tr (x) = x, i.e., the gauged distance is identical to the original one. Similarly by using data processing inequality as above we have
where D tr A→B (ρ) ≡ 1 2 i |q i − q i | is l 1 or the Kolmogorov distance commonly used disturbance measure. From an alternative upper bound of trace distance, i.e.,
1 2 ( p 1 2 − 1) = i>j √ p i p j ≥ i>j |ρ ij | ≥ 1 2 tr |ρ − ρ A |,
where {ρ ij } are elements of density matrix ρ we obtain U tr which was first derived in a previous work [30].
Proof of uncertainty-disturbance relations U α rd ,Ũ α rd , U if ,Ũ if , and Ure
We present below the variants associated with the distance measures of α−Rényi divergence and relative entropy. Employ Rényi divergence D α (ρ ρ A ) = 1 α − 1 log tr(ρ
1−α 2α
A ρρ
1−α 2α
A ) α , with 1/2 ≤ α < 1, as state distance measure. It is obviously invariant under unitaries and satisfying data processing inequality. For two pure states we have D α (φ φ A ) = α 1−α log(1 − IF(φ, φ A ) 2 ) and G D (x) = α 1−α log(1 − x 2 ) which is a reversible and monotonously increasing function for 1/2 ≤ α < 1. The corresponding gauged distance measure reads D α (ρ||ρ A ) = 1 − tr(ρ
1−α 2α A ρρ 1−α 2α A ) α 1 α
Figure 1 :
1Geometrical visualization of the constraints of uncertainty relations imposed on the observed data. In qubit case, the distributions from measuring andB can be specified respectively by p0 and q0, and their incompability is characterized by c00, which are constrained by various uncertainty relations. Plots (a-e) correspond to Utr1, Utr2, U 0.5 re , U 0.5 ts , Ure, and plot (f) for the The MU relation (α = β = 1).
i − q i | := D tr A→B (ρ),
Table I :
ITable II: Computed volumes according to different uncertainty-disturbance relations and and their duals in pure states. The difference between the computed volumes via Utr, U tr for qutrit states are small but nonvanishing.A list of universal and distance-measure dependent uncertainty-disturbance relations arising from commonly used
distance measures. For convenience we have denotedᾱ = 1 − α with 1
2 ≤ α < 1 Rényi divergence and 0 ≤ α < 1 for Tsallis
relative entropy. Moreover, |q − q | = 1
2
i |qi − q i |, Hα(p) = ᾱ
α log p α for Rényi entropy, and Dα(q q ) = −1
α log i q α
i q
1−α
i
for classical Rényi divergence. For universal trade-offs we list only the disturbance arising from gauged distance measures in
last column.
Relations
Volume(d=2) Volume(d=3)
Utr
0.930
0.94675
U tr
0.705
0.94682
U 0.5
rd
0.787
0.917
Ure
0.770
0.905
U 0.5
ts
0.814
0.937
U hs
0.705
0.887
Maassen-Uffink
0.974
0.999
uncertainty and disturbance, respectively. Take infidelity IF(ρ, ρ A ) as the distance measure, we have U if as a special case of Eq.(10) for α = 1 2 , i.e.,where δ A (ρ) = H 2 (p) and D A→B = D 1 2 (q q ). It turns out that it is equivalent to˜U if and is tighter than the first variant Eq.(8).Proof of U α ts andŨ α ts . Let us now employ Tsallis relative entropy as the state distance measurewhich gives an identical universal uncertainty disturbance trade-off as Eq.(9) from Rényi divergence. As an alternative way of bounding the Tsallis distance we note first thatfor α < 1 so that, by formulating in a similar way as uncertainty Eq.(10), we havewhere 1 2−α H 2−α (p) and D α (q q ) quantify uncertainty and disturbance, respectively. We note that uncertainty relation above is stronger than Eq.(10) and also that from quantum Rényi divergence we can also obtain this stronger version of trade-off by considering Araki-Lieb-Thirring inequality, in the same manner.Proof of U hsIn fact our method can be also slightly generalized to cover more distance measures. As Eq.(4) only requires the monotonicity of distance measure under the dephasing operation Φ B (·), the uncertainty-disturbance trade-off applies also to the Hilbert-Schmidt distance D HS (ρ, ρ A ) ≡ tr(ρ − ρ A ) 2 which is monotonous under dephasing operation (but not a general under CPTP[40]). It follows from the data processing inequality that
. E H Kennard, Zeitschrift für Physik. 44326E. H. Kennard, Zeitschrift für Physik 44, 326 (1927).
. W Heisenberg, Zeitschrift für Physik. 43172W. Heisenberg, Zeitschrift für Physik 43, 172 (1927).
. H P Robertson, Phys. Rev. 34163H. P. Robertson, Phys. Rev. 34, 163 (1929).
. W Beckner, Ann. Math. 102159W. Beckner, Ann. Math. 102, 159 (1975).
. P J Coles, M Berta, M Tomamichel, S Wehner, Rev. Mod. Phys. 8915002P. J. Coles, M. Berta, M. Tomamichel, and S. Wehner, Rev. Mod. Phys. 89, 015002 (2017).
. H Maassen, J B M Uffink, Phys. Rev. Lett. 601103H. Maassen and J. B. M. Uffink, Phys. Rev. Lett. 60, 1103 (1988).
. M Ozawa, Lecture Notes in Physics. 3781M. Ozawa, Lecture Notes in Physics 378, 1 (1991).
. P Busch, P Lahti, R F Werner, Rev. Mod. Phys. 861261P. Busch, P. Lahti, and R. F. Werner, Rev. Mod. Phys. 86, 1261 (2014).
. P Busch, C Shilladay, Phys. Rep. 4351P. Busch and C. Shilladay, Phys. Rep. 435, 1 (2006).
. M Ozawa, Phys. Lett. A. 31821M. Ozawa, Phys. Lett. A 318, 21 (2003).
no information without disturbance. B Paul, quant-ph/0706.3526Quantum limitations of measurement. B. Paul, "no information without disturbance": Quantum limitations of measurement (2007), quant-ph/0706.3526.
. C Branciard, PNAS. 1106742C. Branciard, PNAS 110, 6742 (2013).
. P Busch, P Lahti, R F Werner, J. Math. Phys. 55172P. Busch, P. Lahti, and R. F. Werner, J. Math. Phys. 55, 172 (2014).
. C A Fuchs, A Peres, Phys. Rev. A. 532038C. A. Fuchs and A. Peres, Phys. Rev. A 53, 2038 (1996).
. J Barrett, L Hardy, A Kent, Phys. Rev. Lett. 9510503J. Barrett, L. Hardy, and A. Kent, Phys. Rev. Lett. 95, 010503 (2005).
. S Wehner, A Winter, New J. Phys. 1225009S. Wehner and A. Winter, New J. Phys. 12, 025009 (2010).
. H F Hofmann, S Takeuchi, Phys. Rev. A. 6832103H. F. Hofmann and S. Takeuchi, Phys. Rev. A 68, 032103 (2003).
. O Gühne, M Lewenstein, Phys. Rev. A. 7022316O. Gühne and M. Lewenstein, Phys. Rev. A 70, 022316 (2004).
G V Steeg, S Wehner, Quantum Information and Computation. 9801G. V. Steeg and S. Wehner, Quantum Information and Computation 9, 801 (2008).
. J Oppenheim, S Wehner, Science. 3301072J. Oppenheim and S. Wehner, Science 330, 1072 (2010).
. D Girolami, T Tufarelli, G Adesso, Phys. Rev. Lett. 110240402D. Girolami, T. Tufarelli, and G. Adesso, Phys. Rev. Lett. 110, 240402 (2013).
. S L Braunstein, C M Caves, Phys. Rev. Lett. 723439S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. 72, 3439 (1994).
. L Braunstein, C M Caves, G J Milburn, Ann. Phys. 247135L. Braunstein, C. M. Caves, and G. J. Milburn, Ann. Phys. 247, 135 (1996).
. S Luo, Phys. Rev. Lett. 91180403S. Luo, Phys. Rev. Lett. 91, 180403 (2003).
. V Giovannetti, S Lloyd, L Maccone, Nat. Photon. 5222V. Giovannetti, S. Lloyd, and L. Maccone, Nat. Photon. 5, 222 (2011).
. V Giovannetti, S Lloyd, L Maccone, Phys. Rev. Lett. 108260405V. Giovannetti, S. Lloyd, and L. Maccone, Phys. Rev. Lett. 108, 260405 (2012).
. A Winter, IEEE Trans. Inf. Theory. 452481A. Winter, IEEE Trans. Inf. Theory 45, 2481 (2014).
M M Wilde, J M Renes, IEEE International Symposium on Information Theory Proceedings. M. M. Wilde and J. M. Renes, 2012 IEEE International Symposium on Information Theory Proceedings (2012).
. T Ogawa, H Nagaoka, IEEE Trans. Inf. Theory. 532261T. Ogawa and H. Nagaoka, IEEE Trans. Inf. Theory 53, 2261 (2007).
. L.-L Sun, S Yu, Z.-B Chen, quant-ph/1808.06416Uncertaintycomplementarity balance as a general constraint on nonlocality. L.-L. Sun, S. Yu, and Z.-B. Chen, Uncertainty- complementarity balance as a general constraint on non- locality (2018), quant-ph/1808.06416.
. P Pakonski, K Życzkowski, M Kus, J. Phys. A: Gen. Phys. 349303P. Pakonski, K. Życzkowski, and M. Kus, J. Phys. A: Gen. Phys. 34, 9303 (2001).
. K Życzkowski, M Kus, W Słomczyński, H Sommers, J. Phys. A: Gen. Phys. 363425K. Życzkowski, M. Kus, W. Słomczyński, and H. Som- mers, J. Phys. A: Gen. Phys. 36, 3425 (2003).
M A Nielsen, I L Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition. USACambridge University Press110700217610th ed.M. A. Nielsen and I. L. Chuang, Quantum Computa- tion and Quantum Information: 10th Anniversary Edi- tion (Cambridge University Press, USA, 2011), 10th ed., ISBN 1107002176.
. M Müller-Lennert, F Dupuis, O Szehr, S Fehr, M Tomamichel, J. Math. Phys. 54122203M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr, and M. Tomamichel, J. Math. Phys. 54, 122203 (2013).
. F Leditzky, C Rouzé, N Datta, Lett. Math. Phys. 10761F. Leditzky, C. Rouzé, and N. Datta, Lett. Math. Phys. 107, 61 (2016).
. S Abe, Phys. Rev. A. 312S. Abe, Phys. Rev. A 312, 336 (2003), ISSN 0375-9601.
. M Berta, M Christandl, R Colbeck, J M Renes, R Renner, Nat. Phys. 6757M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, Nat. Phys. 6, 757 (2009).
. P J Coles, R Colbeck, L Yu, M Zwolak, Phys. Rev. Lett. 108210405P. J. Coles, R. Colbeck, L. Yu, and M. Zwolak, Phys. Rev. Lett. 108, 210405 (2012).
. P J Coles, M Piani, Phys. Rev. A. 8922112P. J. Coles and M. Piani, Phys. Rev. A 89, 022112 (2014).
. P J Coles, M Cerezo, L Cincio, Phys. Rev. A. 10022103P. J. Coles, M. Cerezo, and L. Cincio, Phys. Rev. A 100, 022103 (2019).
. M Ozawa, Phys. Rev. A. 6742105M. Ozawa, Phys. Rev. A 67, 042105 (2003).
. M Koashi, J. Phys. Conf. Ser. 3698M. Koashi, J. Phys. Conf. Ser. 36, 98 (2006).
. J Schneeloch, C J Broadbent, S P Walborn, E G Cavalcanti, J C Howell, Phys. Rev. A. 8762103J. Schneeloch, C. J. Broadbent, S. P. Walborn, E. G. Cavalcanti, and J. C. Howell, Phys. Rev. A 87, 062103 (2013).
. S Zozor, G M Bosyk, M Portesi, J. Phys. A: Math. Theor. 47495302S. Zozor, G. M. Bosyk, and M. Portesi, J. Phys. A: Math. Theor. 47, 495302 (2014).
. A Streltsov, G Adesso, M B Plenio, Rev. Mod. Phys. 8941003A. Streltsov, G. Adesso, and M. B. Plenio, Rev. Mod. Phys. 89, 041003 (2017).
. G S Engel, T R Calhoun, E L Read, T K Ahn, T Manal, Y C Cheng, R E Blankenship, G R Fleming, Nature. 446G. S. Engel, T. R. Calhoun, E. L. Read, T. K. Ahn, T. Manal, Y. C. Cheng, R. E. Blankenship, and G. R. Fleming, Nature 446 (2007).
. E M Gauger, E Rieper, J J L Morton, S C Benjamin, V Vedral, Phys. Rev. Lett. 10640503E. M. Gauger, E. Rieper, J. J. L. Morton, S. C. Benjamin, and V. Vedral, Phys. Rev. Lett. 106, 040503 (2011).
. T Baumgratz, M Cramer, M B Plenio, Phys. Rev. Lett. 113140401T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014).
. Z.-W Liu, X Hu, S Lloyd, Phys. Rev. Lett. 11860502Z.-W. Liu, X. Hu, and S. Lloyd, Phys. Rev. Lett. 118, 060502 (2017).
. A Winter, D Yang, Phys. Rev. Lett. 116120404A. Winter and D. Yang, Phys. Rev. Lett. 116, 120404 (2016).
. Y Liu, Q Zhao, X Yuan, J. Phys. A: Math. Theor. 51Y. Liu, Q. Zhao, and X. Yuan, J. Phys. A: Math. Theor. 51, 414018 (2018), ISSN 1751-8121.
X Yuan, Q Zhao, D Girolami, X Ma, 10.1002/qute.2019000532511-9044Advanced Quantum Technologies. 2X. Yuan, Q. Zhao, D. Girolami, and X. Ma, Ad- vanced Quantum Technologies 2, 1900053 (2019), ISSN 2511-9044, URL http://dx.doi.org/10.1002/ qute.201900053.
. X.-D Yu, O Gühne, Phys. Rev. A. 9962310X.-D. Yu and O. Gühne, Phys. Rev. A 99, 062310 (2019).
. Q.-M Ding, X.-X Fang, X Yuan, T Zhang, H Lu, Phys. Rev. Research. 323228Q.-M. Ding, X.-X. Fang, X. Yuan, T. Zhang, and H. Lu, Phys. Rev. Research 3, 023228 (2021).
. J A Miszczak, Z Puchała, P Horodecki, A Uhlmann, K Zyczkowski, Quantum Info, Comput. 9J. A. Miszczak, Z. Puchała, P. Horodecki, A. Uhlmann, and K. Zyczkowski, Quantum Info. Comput. 9 (2009), ISSN 1533-7146.
For some distance measure, e.g., relative entropy, such a function is not well-defined. For some distance measure, e.g., relative entropy, such a function is not well-defined.
| []
|
[
"Similarity reduction of the modified Yajima-Oikawa equation",
"Similarity reduction of the modified Yajima-Oikawa equation"
]
| [
"Tetsuya Kikuchi \nMathematical Institute\nTohoku University\n980-8578SendaiJAPAN\n",
"Takeshi Ikeda \nDepartment of Applied Mathematics\nOkayama University of Science\n700-0005OkayamaJAPAN\n",
"† ",
"Saburo Kakei \nDepartment of Mathematics\nRikkyo University\n171-8501TokyoJAPAN\n"
]
| [
"Mathematical Institute\nTohoku University\n980-8578SendaiJAPAN",
"Department of Applied Mathematics\nOkayama University of Science\n700-0005OkayamaJAPAN",
"Department of Mathematics\nRikkyo University\n171-8501TokyoJAPAN"
]
| []
| We study a similarity reduction of the modified Yajima-Oikawa hierarchy. The hierarchy is associated with a non-standard Heisenberg subalgebra in the affine Lie algebra of type A(1) 2 . The system of equations for self-similar solutions is presented as a Hamiltonian system of degree of freedom two, and admits a group of Bäcklund transformations isomorphic to the affine Weyl group of type A (1) 2 . We show that the system is equivalent to a two-parameter family of the fifth Painlevé | 10.1088/0305-4470/36/45/008 | [
"https://arxiv.org/pdf/nlin/0304024v2.pdf"
]
| 544,364 | nlin/0304024 | 99e492d3be85f7f4f93401d8668081360076dcda |
Similarity reduction of the modified Yajima-Oikawa equation
16 Sep 2003 August 29, 2003
Tetsuya Kikuchi
Mathematical Institute
Tohoku University
980-8578SendaiJAPAN
Takeshi Ikeda
Department of Applied Mathematics
Okayama University of Science
700-0005OkayamaJAPAN
†
Saburo Kakei
Department of Mathematics
Rikkyo University
171-8501TokyoJAPAN
Similarity reduction of the modified Yajima-Oikawa equation
16 Sep 2003 August 29, 2003
We study a similarity reduction of the modified Yajima-Oikawa hierarchy. The hierarchy is associated with a non-standard Heisenberg subalgebra in the affine Lie algebra of type A(1) 2 . The system of equations for self-similar solutions is presented as a Hamiltonian system of degree of freedom two, and admits a group of Bäcklund transformations isomorphic to the affine Weyl group of type A (1) 2 . We show that the system is equivalent to a two-parameter family of the fifth Painlevé
Introduction
In applications of the theory of affine Lie algebras to integrable hierarchies, the Heisenberg subalgebras play important roles, since they correspond to the varieties of time-evolutions. Letĝ be the untwisted affine Lie algebra associated with a finite-dimensional simple Lie algebra g. Up to conjugacy, the Heisenberg subalgebras inĝ are in one-to-one correspondence with the conjugacy classes of the Weyl group of g [3]. In particular, the conjugacy class containing the Coxeter element, to which the principal Heisenberg subalgebra ofĝ is associated, leads to the Drinfel'd-Sokolov hierarchy [2], whereas the class of the identity element corresponds to the homogeneous Heisenberg subalgebra. Associated with arbitrary conjugacy class, M. F. de Groot, T. J. Hollowood, J. L. Miramontes [1] developed the theory of integrable systems called generalized Drinfel'd-Sokolov hierarchies.
When g is of type A n−1 , the conjugacy classes are parametrized by the partitions of n. In this paper we consider the modified Yajima-Oikawa hierarchy, which turns out to be a hierarchy related to the affine Lie algebra of type A (1) 2 and its non-standard Heisenberg subalgebra associated with the partition (2, 1), while the principal (resp. homogeneous ) case corresponds to the partition (3) (resp. (1, 1, 1) ).
Among the issues on integrable hierarchies, the study of similarity reduction is important. For example, M. Noumi and Y. Yamada introduced a higher order Painlevé system associated with the affine root system of type A (1) n−1 [7] and now the system is known to be equivalent to a similarity reduction of the system associated with the Coxeter class (n) of A n−1 . The aim of this paper is to investigate a similarity reduction of the modified Yajima-Oikawa hierarchy. Starting with universal viewpoints, we derive a system of ordinary differential equations for unknown functions f 0 , f 1 , f 2 , u 0 , u 1 , u 2 , g, q, r and complex paremeters α 0 , α 1 , α 2 :
α ′ 0 = α ′ 1 = α ′ 2 = 0, f ′ 0 = f 0 (u 2 − u 0 ) − α 0 , g ′ = g(u 0 − u 2 ) − qf 1 + rf 2 + α 0 + 4, f ′ 1 = f 1 (u 0 − u 1 ) − rα 1 , 3q ′ = 3q(u 1 − u 0 ) + qf 0 − f 2 , f ′ 2 = f 2 (u 1 − u 2 ) − qα 2 , 3r ′ = 3r(u 2 − u 1 ) − rf 0 + f 1 .
(1.1)
where ′ = d/dx denote the derivative with respect to the independent variable x. Under the algebraic relations
α 0 + α 1 + α 2 = −4, g = f 0 + 3qr, u 0 + u 1 + u 2 = 0, u 1 = qr, 2gu 0 = qf 1 − rf 2 − gqr − α 0 − 2,(1.2)
the system (1.1) turns out to be equivalent to the fifth Painlevé equation for y = −f 0 /(3u 1 ):
y ′′ = 1 2y + 1 y − 1 (y ′ ) 2 − y ′ x + (y − 1) 2 x 2 Ay + B y + C x y + D y(y + 1) (y − 1) ,
where the change of variable x → x 2 is employed and the parameters are given by
A = 1 2 α 2 − α 1 12 2 , B = − 1 2 α 0 4 2 , C = − α 2 − α 1 18 , D = − 1 18 .
On introducing the system (1.1), we shall describe the system in three ways:
1. Compatibility condition for a system of linear differential equations (Section 5), 2. A Hamiltonian system whose degree of freedom is two (Theorem 2), 3. Hirota bilinear equations for τ -functions (Theorem 3).
The system (1.1) has a symmetry of the affine Weyl group of type A (1) 2 as a group of Bäcklund transformations. First we give the symmetry as the compatibility of gauge transformations of linear differential equations and state it in the automorphism of the differential field K = C(α 0 , α 1 , α 2 , f 0 , f 1 , f 2 , g, q, r, u 0 , u 1 , u 2 ) with the derivation ′ : K → K defined by (1.1) and algebraic relations (1.2) (Theorem 1). Then we extend the action of affine Weyl group on K to the extended field F of K:
F = C(α 0 , α 1 , α 2 , x; τ 0 , τ 1 , τ 2 , σ 1 , σ 2 , τ ′ 0 , τ ′ 1 , τ ′ 2 , σ ′ 1 , σ ′ 2
) as a Bäcklund transformations, which is discussed in section 11 (Theorem 5).
The paper is organized as follows. In Sect.2, we review the notation related to the affine Lie algebra of type A (1) 2 . On the basis of the affine Lie algebra, we introduce the modified Yajima-Oikawa hierarchy in Sect.3. In Sect.4, we consider a condition of selfsimilarity on the solutions of the hierarchy. This condition yields a system of ordinary diferential equations, which is a main object in this paper. In Sect.5, the condition of self-similarity is also presented as a Lax-type equation. In Sect.6, we give a Weyl group symmetry of the system as a gauge transformation of the Lax equatoin (Theorem 1). In Sect.7 a Hamiltonian structure is introduced (Theorem 2). In Sect.8 we prove that our system is equivalent to a two-parameter family of the fifth Painlevé equation. In Sect.9 we introduce a set of τ -functions and give a bilinear form of differential system (Theorem 3). Then in Sect.10 we lift the action of Weyl group to the τ -functions (Theorem 4) and give a Jacobi-Trudi type formula (10.4) for the Weyl group orbit of the τ -functions. In Sect.11, we prove that the Weyl group action on the τ -functions commute with the derivation ′ = d/dx.
Preliminaries on the affine Lie algebra of type
A (1) 2
In this section, we collect necessary notions about the affine Lie algebra of type A (1) 2 . We mainly follow the notation used in [4], to which one should refer for further details.
Let g = sl 3 . The affine Lie algebraĝ is realized as a central extension of the loop algebra Lg = sl 3 (C[z, z −1 ]), together with the derivation d = z∂ ẑ
g = sl 3 C[z, z −1 ] ⊕ Cc ⊕ Cd,
where c denotes the canonical central element. Let us define the Chevalley generators E i , F i , H i (i = 0, 1, 2) for the affine Lie algebraĝ by
E 0 = zE 3,1 , E 1 = E 1,2 , E 2 = E 2,3 , F 0 = z −1 E 1,3 , F 1 = E 2,1 , F 2 = E 3,2 ,(2.1)H 0 = c + E 3,3 − E 1,1 , H 1 = E 1,1 − E 2,2 , H 2 = E 2,2 − E 3,3 , where E i,j is the matrix unit E i,j = (δ ia δ jb ) 3 a,b=1 . The Cartan subalgebra ofĝ is defined aŝ h = 2 i=0 CH i ⊕ Cd.
We introduce the simple roots α j and the fundamental weights Λ j as the following linear functionals on the Cartan subalgebraĥ:
H i , α j = a ij , H i , Λ j = δ ij (i = 0, 1, 2), d, α j = δ 0j , d, Λ j = 0 for j = 0, 1, 2, where (a ij ) 3 i=0 is the generalized Cartan matrix of type A (1) 2 defined by 2 −1 −1 −1 2 −1 −1 −1 2 .
We define a non degenerate symmetric bilinear form ( . | . ) on V =ĥ * as follows:
(α i |α j ) = a ij , (α i |Λ 0 ) = δ i0 , (Λ 0 |Λ 0 ) = 0.
We define simple reflections s i (i = 0, 1, 2) by
s i (λ) = λ − H i , λ α i , λ ∈ V.
They satisfy the fundamental relations
s 2 i = 1, s i s i+1 s i = s i+1 s i s i+1 (i = 0, 1, 2),
where the indices are understood as elements of Z/3Z. Consider the group
W = s 0 , s 1 , s 2 ⊂ GL(V ),(2.
Modified Yajima-Oikawa hierarchy
In this section we introduce the modified Yajima-Oikawa hierarchy as generalized Drinfel'd-Sokolov reduction associated to the loop algebra Lg = sl 3 (C[z, z −1 ]) , following [1]. Let us introduce the following derivation on Lg:
D = 4z ∂ ∂z − diag(−1, 0, 1). (3.1) Set Lg j = {A ∈ Lg | [D, A] = jA}.
Then we have a Z-gradation Lg = ⊕ j Lg j . Note that
deg(E 0 ) = − deg(F 0 ) = 2, deg(E j ) = − deg(F j ) = 1 (j = 1, 2).
Consider the particular element The subalgebra s is a maximal commutative subalgebra in g, which has the following basis:
γ 4j+2 = z j γ, γ 4j = z j diag(1, −2, 1) (j ∈ Z).
Then s is a graded subalgebra of Lg with respect to the gradation. We have γ 2j ∈ Lg 2j . The commutative subalgebra s is the image of a Heisenberg subalgebra inĝ associated with the conjugacy class (2, 1) ( [3], see also [10] and [5]). We put b := ⊕ j≥0 Lg j . To introduce our hierarchy, we begin with the differential operator
L := ∂ ∂x − γ − Q, where Q is an x-dependent element of b <2 . We set s ⊥ := Im (adγ) . It is clear that s ⊥ = ⊕ j s ⊥ j , where s ⊥ j := s ⊥ ∩ Lg j . There is a unique formal series U = ∞ j=1 U −j (U −j ∈ s ⊥ −j ) such that the operator L 0 := e adU (L) has the form L 0 = ∂ ∂x − γ − ∞ j=0 h −2j , h −2j ∈ s −2j .
Moreover U −j and h −2j are polynomials in the components of Q and their x derivatives. For any j > 0 we set
B 2j = e −adU γ 2j ≥0 .
The modified Yajima-Oikawa hierarchy is defined by the Lax equations
∂L ∂t 2j = [B 2j , L] (j = 1, 2, . . .).
We describe the above construction concretely. First we set
Q = u 0 r 0 0 u 1 q 0 0 u 2 , u 0 + u 1 + u 2 = 0
and solve for the first few terms of U j and h j :
U −1 = − qE 2,1 + rE 3,2 , U −2 = u 2 − u 0 4 (z −1 E 1,3 − E 3,1 ), U −3 = 3u 0 8 + 3u 1 2 − 3u 2 8 − qr r + r ′ E 1,2 + 7u 0 8 − u 1 + u 2 8 + qr q + q ′ E 2,3 , U −4 = u ′ 0 − u ′ 2 8 + q ′ r + 3qr ′ 8 + u 0 16 − 5u 1 16 + u 2 16 + 5 16 qr qr (E 1,1 − E 3,3 ), h 0 = qr − u 1 2 γ 0 , h −2 = u 2 0 + u 2 2 8 − u 0 u 2 4 − q ′ r + 3qr ′ 4 − u 0 8 − 5u 1 8 + u 2 8 + 3 8 qr qr γ −2 .
Here ′ means ∂/∂x. In fact, h 0 is a constant along all the flows and we can put h 0 = 0 (see [1]). So we fix
u 1 = qr (3.2)
from now on. By using U j 's and condition (3.2) we have
B 2 = γ 2 + u 0 r 0 0 u 1 q 0 0 u 2 , (3.3) B 4 = γ 4 + 3 −qr ′ + qru 2 r ′ − ru 2 0 qz qr ′ − q ′ r + qru 1 −q ′ − qu 0 −qrz rz q ′ r + qru 0 (3.4)
The modified Yajima-Oikawa equation is obtained by the following zero-curvature condition:
∂B 2 ∂t 4 = ∂B 4 ∂t 2 − [B 2 , B 4 ]. (3.5)
In fact this yields the following system of differential equations:
q t + 3 q ′′ + q(−qr ′ + u ′ 0 + qru 2 + u 2 2 ) = 0, (3.6) r t − 3 r ′′ − r(−q ′ r + u ′ 2 − qru 0 + u 2 0 ) = 0, (3.7) (u 0 ) t = 3(−qr ′ + qru 2 ) ′ , (u 1 ) t = 3(qr ′ − q ′ r + qru 1 ) ′ , (u 2 ) t = 3(q ′ r + qru 0 ) ′ . (3.8)
Here we identify x and t 2 , and put t = t 4 . Remark: This system of equations is related to the Yajima-Oikawa equation [11]:
Ψ t + 3 (Ψ ′′ + uΨ) = 0, (3.9) Φ t − 3 (Φ ′′ + uΦ) = 0, (3.10) u t + 6(ΨΦ) ′ = 0. (3.11)
The relation is established by the following map, which takes a solution q, r, u j (j = 0, 1, 2) of (3.6) (3.7), (3.8) into a solution Ψ, Φ, u of (3.9), (3.10), (3.11) and is an analog of the Miura map in the case of KdV and mKdV equations:
Ψ = −q ′ − qu 0 , Φ = r ′ − ru 2 , −u = u 2 0 + u 2 2 + u 0 u 2 + u ′ 0 + qr ′ .
Similarity reduction
In this section we consider a self-similarity condition on the solutions of the modified Yajima-Oikawa equation (3.6), (3.7), (3.8). These are the main object of this paper. A solution q(x, t), r(x, t), u j (x, t) (j = 0, 1, 2) is said to be self-similar if
q(λ 2 x, λ 4 t) = λ −1 q(x, t), r(λ 2 x, λ 4 t) = λ −1 r(x, t), u j (λ 2 x, λ 4 t) = λ −2 u j (x, t). (4.1)
Here we count a degree of variables by deg
x = deg t 2 = −2, deg t = deg t 4 = −4.
Note that such functions are uniquely determined by its values at fixed t, say at t = 1/4. Differentiating (4.1) with respect to λ at λ = 1, we obtain the Euler equations
2x ∂q ∂x + 4t ∂q ∂t = −q, 2x ∂r ∂x + 4t ∂r ∂t = −r, 2x ∂u j ∂x + 4t ∂u j ∂t = −2u j .
At t = 1/4 these identities become
∂q ∂t = −2 ∂(xq) ∂x + q, ∂r ∂t = −2 ∂(xr) ∂x + r, ∂u j ∂t = −2 ∂(xu j ) ∂x .
This can be written in the matrix form
∂B 2 ∂t = −2 ∂(xB 2 ) ∂x + [D, B 2 ],
where D is the derivation defined in (3.1). Substituting this last identity into the zerocurvature equation (3.5), we obtain
∂M ∂x = 4z ∂ ∂z − M, B 2 ,(4.2)
where we set
M = ε 1 f 1 g 0 ε 2 f 2 0 0 ε 3 + z 1 0 0 3q −2 0 f 0 3r 1 := diag(−1, 0, 1) + 2xB 2 + B 4 . (4.3)
The correspondence of variables is given as follows: v
ε 1 = −1 + 2xu 0 − 3q(r ′ − ru 2 ), (4.4) ε 2 = 2xu 1 + 3(qr ′ − q ′ r + qru 1 ), (4.5) ε 3 = 1 + 2xu 2 + 3r(q ′ + qu 0 ) (4.6)
and g = 2x,
f 0 = 2x − 3qr, f 1 = 2xr + 3(r ′ − ru 2 ), f 2 = 2xq − 3(q ′ + qu 0 ). (4.7)
Here we regard the variables q = q(x, 1/4), r = r(x, 1/4), u j = u j (x, 1/4) (j = 0, 1, 2) are functions only in x. Note that the definition of M has a freedom of adding a constant diagonal matrix and here we normalize
ε 1 + ε 2 + ε 3 = 0. (4.8)
Lax pair formalism
Consider the following system of linear differential equations for the column vector
− → ψ = t (ψ 1 , ψ 2 , ψ 3 ) of three unknown functions ψ i = ψ i (z, x) (i = 1, 2, 3) : 4z ∂ ∂z − → ψ = M − → ψ , ∂ ∂x − → ψ = B − → ψ . (5.1)
We assume that the matrix M is (4.3) and B = B 2 (3.3) where the variables ε j , f j , u j , q, r and g are functions in x. Then the compatibility condition of system (5.1)
4z ∂ ∂z − M, ∂ ∂x − B = 0 (5.2)
is equivalent to the relations In what follows we shall impose the following constraint on the variables:
ε ′ 1 = ε ′ 2 = ε ′ 3 = 0, g = f 0 + 3qr, f ′ 0 = f 0 (u 2 − u 0 ) − (ε 3 − ε 1 − 4), g ′ = g(u 0 − u 2 ) − qf 1 + rf 2 − ε 1 + ε 3 , f ′ 1 = f 1 (u 0 − u 1 ) − r(ε 1 − ε 2 ), 3q ′ = 3q(u 1 − u 0 ) + qf 0 − f 2 , f ′ 2 = f 2 (u 1 − u 2 ) − q(ε 2 − ε 3 ), 3r ′ = 3r(u 2 − u 1 ) − rf 0 + f 1 .u 0 + u 1 + u 2 = 0, u 1 = qr. (5.4)
The joint system (5.2) and (5.4) is the main object that we investigate in this paper.
Using system (5.3) together with the constraint, we can derive the following equation:
2gu 0 = qf 1 − rf 2 − gqr − ε 3 + ε 1 + 2. (5.5)
After the elimination of the variables f 0 , u 0 , u 1 , u 2 by (5.3), (5.4) and (5.5), we obtain a system of ODE for the unknown functions f 1 , f 2 , q, r with the parameters ε 1 , ε 2 , ε 3 . We can obtain the set of explicit formulae of f ′ 1 , f ′ 2 , q ′ , r ′ in terms of f 1 , f 2 , q, r and g, and the results are
f ′ 1 = f 1 2g (f 1 q − f 2 r) − 3 2 f 1 qr + (ε 1 − ε 3 ) f 1 2g − (ε 1 − ε 2 )r + f 1 g , (5.6) f ′ 2 = f 2 2g (f 1 q − f 2 r) + 3 2 f 2 qr + (ε 1 − ε 3 ) f 2 2g − (ε 2 − ε 3 )q + f 2 g , (5.7) q ′ = − q 2g (f 1 q − f 2 r) + q 2 r 2 − (ε 1 − ε 3 ) q 2g + gq − f 2 3 − q g , (5.8) r ′ = − r 2g (f 1 q − f 2 r) − qr 2 2 − (ε 1 − ε 2 ) r 2g − gr − f 1 3 − r g . (5.9)
In Sect.7 we present the system of ODE in the Hamiltonian form.
Remark. Using (5.5) and (5.3), we can also derive the following differential equation:
gu ′ 0 = (ε 2 − ε 3 )qr + f 2 3 (rf 0 − f 1 ) − 2u 0 . (5.10)
Bäcklund transformations
Let us pass to the investigation of a group of Bäcklund transformations. For this purpose, it is convenient to introduce the following set of parameters:
α 0 = ε 3 − ε 1 − 4, α 1 = ε 1 − ε 2 , α 2 = ε 2 − ε 3 . (6.1)
They are identified with the simple roots of the affine root system of type A
2 . We define the Bäcklund transformations for the system by considering the gauge transformations of the linear system (5.1)
s i − → ψ = G i − → ψ (i = 0, 1, 2). (6.2)
The matrices G i are given as follows:
G i = 1 + α i f i F i (i = 0, 1, 2),(6.3)
where F 0 , F 1 , F 2 are Chevalley generators (2.1) of the loop algebra sl 3 (C[z, z −1 ]). The compatibility condition of (5.1) and (6.2) is
s i (M) = G i MG −1 i + 4z ∂G i ∂z G −1 i , s i (B) = G i BG −1 i + ∂G i ∂x G −1 i . (6.4)
On the components of the matrices M, B, the actions of s i (i = 0, 1, 2) are given explicitly as in the following tables:
f 0 f 1 f 2 g q r s 0 f 0 f 1 + 3r α 0 f 0 f 2 − 3q α 0 f 0 g q r s 1 f 0 − 3r α 1 f 1 f 1 f 2 + g α 1 f 1 g q + α 1 f 1 r s 2 f 0 + 3q α 2 f 2 f 1 − g α 2 f 2 f 2 g q r − α 2 f 2 α 0 α 1 α 2 u 0 u 1 u 2 s 0 −α 0 α 1 + α 0 α 2 + α 0 u 0 + α 0 f 0 u 1 u 2 − α 0 f 0 s 1 α 0 + α 1 −α 1 α 2 + α 1 u 0 − r α 1 f 1 u 1 + r α 1 f 1 u 2 s 2 α 0 + α 2 α 1 + α 2 −α 2 u 0 u 1 − q α 2 f 2 u 2 + q α 2 f 2
The automorphisms s i (i = 0, 1, 2) generate a group of Bäcklund transformations for our differential system. To state this fact clearly, it is convenient to introduce the field K = C(α 0 , α 1 , α 2 , f 0 , f 1 , f 2 , g, q, r, u 0 , u 1 , u 2 ), (6.5) where the generators satisfy the following algebraic relations:
α 0 + α 1 + α 2 = −4, f 0 = g − 3qr, u 0 + u 1 + u 2 = 0, u 1 = qr, 2gu 0 = qf 1 − rf 2 − gqr − ε 3 + ε 1 + 2.
We have the automorphisms s i (i = 0, 1, 2) of the field K defined by the above table.
Note that the field K is thought to be a differential field with the derivation ′ : K → K defined by (5.3).
Theorem 1
The automorphism s 0 , s 1 , s 2 of K define a representation of the affine Weyl group W (2.2) on the field K such that the action of the each element w ∈ W commutes with the derivation of the differential field K.
Theorem 1 is proved by straightforward computations. Note that the independent variable x = g/2 is fixed under the action of W.
Hamiltonian structure
We shall equip K (6.5) with the Poisson algebra structure { , } : K × K → K defined as follows:
{
, } f 1 f 2 q r f 1 0 g 1 0 f 2 −g 0 0 −1 q −1 0 0 0 r 0 1 0 0
That is, {f 1 , f 2 } = g and so on. Note that the Poisson structure comes from the Lie algebra structure ofĝ (see [9] for an exposition). We can describe the action of s i (i = 0, 1, 2) on the generators f = f j , u j , q, r, g (j = 0, 1, 2) of K by
s i (f ) = f + α i f i {f i , f }.
We introduce the function h by
h := 1 2 (f 1 q 2 r + f 2 qr 2 ) − 1 4g (f 2 1 q 2 + f 2 2 r 2 + q 2 r 2 g 2 ) + qr 2g − 1 3 f 1 f 2 + g 3 − α 1 + α 2 2g f 1 q + g 3 + α 1 + α 2 2g f 2 r − g 3 − α 1 − α 2 2g
qrg.
Then the differential system (5.6)-(5.9) can be expressed
f ′ 1 = {h, f 1 } + f 1 g , q ′ = {h, q} − q g , (7.1) f ′ 2 = {h, f 2 } + f 2 g , r ′ = {h, r} − r g .
Let us introduce the variables
p 1 = f 1 , q 1 = q, p 2 = f 2 g − q, q 2 = −gr.
It is easy to show that
{p i , q j } = δ ij , {p i , p j } = {q i , q j } = 0 (i, j = 1, 2).
Theorem 2 Let H be the function defined as
xH = − 1 4 p 1 p 2 q 1 q 2 − 1 8 p 2 1 q 2 1 + p 2 2 q 2 2 − 1 2 p 1 q 2 1 q 2 − 1 4 (α 1 + α 2 + 2) p 1 q 1 − 1 4 (α 1 + α 2 − 2) p 2 q 2 − α 1 2 q 1 q 2 − 2x 2 3 (q 2 + p 1 )p 2
Then the system of ODEs (5.6), (5.7), (5.8), (5.9) is equivalent to the Hamiltonian system
dq 1 dx = ∂H ∂p 1 , dq 2 dx = ∂H ∂p 2 , dp 1 dx = − ∂H ∂q 1 , dp 2 dx = − ∂H ∂q 2 . (7.2) Proof. We define H = h − f 1 q + f 2 r g + qr
and rewrite this in the coordinate p j , q j (j = 1, 2). Then the equations (7.1) can be expressed as (7.2).
The behavior of the Hamiltonian under the Bäcklund transformations is given by the simple formulae
s 0 (H) =H + 6qr α 0 f 0 , s j (H) =H (j = 1, 2),
where we setH = xH + a with the correction term
a = 1 24 (α 1 − α 2 )(α 1 − α 2 − 4).
Reduction to the fifth Painlevé equation
In this section, we show the system (5.3) is equivalent to a two-parameter family of the fifth Painlevé equation. By linear change of the independent variable, we ensure the normalization
f 0 + f 1 r + f 2 q + 3 q ′ q − r ′ r = 3g = 6x (8.1)
holds. After the elimination of u 0 and u 2 , we have
f ′ 0 = − f 0 3 f 1 r − f 2 q + f 0 u ′ 1 u 1 − α 0 (8.2) f 1 r ′ = − f 1 3r f 2 q − f 0 + 3u ′ 1 u 1 − α 1 , (8.3) f 2 q ′ = − f 2 3q f 0 − f 1 r + 3u ′ 1 u 1 − α 2 . (8.4)
Here we introduce a new variable y := − f 0 3u 1 .
Notice the relations
y − 1 = − 2x 3u 1 , y ′ y − 1 = 1 x − u ′ 1 u 1 (8.5)
holds by f 0 = g − 3qr = 2x − 3u 1 . Then we rewrite (8.2) as
y ′ = − y 3 f 1 r − f 2 q + α 0 3u 1 ,(8.y ′′ = 1 2y + 1 y − 1 (y ′ ) 2 − y ′ x + (y − 1) 2 8x 2 ε 2 2 y − α 2 0 y − 2x 2 y 9 − 4x 2 y 9(y − 1) − (α 2 − α 1 )y 3 + ε 2 y 3 . (8.7)
We put ξ = x 2 , then the equation (8.7) can be brought into the fifth Painlevé equation
y ξξ = 1 2y + 1 y − 1 (y ξ ) 2 − 1 ξ y ξ + (y − 1) 2 ξ 2 Ay + B y + C ξ y − y(y + 1) 18(y − 1) , where A = ε 2 2 32 , B = − α 2 0 32 , C = − ε 2 6 .
Note that ε 2 = (α 2 − α 1 )/3 holds by (4.8) and (6.1).
τ -functions
We introduce the τ -functions τ 0 ,τ 1 ,τ 2 , σ 1 and σ 2 to be the dependent variables satisfying the following equations:
f 1 r = 2x + 3 σ ′ 2 σ 2 − τ ′ 0 τ 0 , f 2 q = 2x − 3 σ ′ 1 σ 1 − τ ′ 0 τ 0 , q = − σ 1 τ 1 , r = σ 2 τ 2 . (9.1)
To fix the freedom of overall multiplication by a function in the defining equation (9.1) for τ 0 , τ 1 , τ 2 , σ 1 and σ 2 , we impose the equation
log τ 2 0 τ 2 1 τ 2 2 σ 1 σ 2 ′′ + u 2 0 + u 2 2 + u 0 − f 1 3r + 2x 3 2 + u 2 − f 2 3q + 2x 3 2 − 2x 9 4x − f 1 r − f 2 q − α 1 − α 2 9 = 0. (9.2)
The differential equations for the variables q and r in the system (5.3) lead to
u 0 = τ ′ 1 τ 1 − τ ′ 0 τ 0 , u 2 = τ ′ 0 τ 0 − τ ′ 2 τ 2 (9.3)
respectively. Here we have used the relations
u 1 = qr = − σ 1 σ 2 τ 1 τ 2 , f 0 = 2x − 3qr = 2x + 3 σ 1 σ 2 τ 1 τ 2 .
If the equations (9.3) are satisfied, we have
u 1 = τ ′ 2 τ 2 − τ ′ 1 τ 1 ,(9.4)
by u 0 + u 1 + u 2 = 0 and therefore have the following formula of the variable f 0 in terms of the τ -functions:
f 0 = 2x + 3 τ ′ 1 τ 1 − τ ′ 2 τ 2 . (9.5)
Let D x and D 2 x be Hirota's bilinear operators:
D x F · G := F ′ G − F G ′ , D 2 x F · G := F ′′ G − 2F ′ G ′ + F G ′′ .
In this notation, the relation u 1 = qr, for example, can be written in
D x τ 1 · τ 2 = σ 1 σ 2 . (9.6)
We introduce a system of bilinear equations that leads to our differential system (5.3).
Theorem 3 Let τ 0 , τ 1 , τ 2 , σ 1 , σ 2 be a set of functions that satisfies the following system of Hirota bilinear equations: Proof. We can verify that the differential equations for q and r are satisfied if we assume the existence of the τ -functions such that equations (9.1), (9.3) holds. The differential equations for f 0 is written as
3D 2 x − 2xD x + 1 6 (α 0 − 4α 1 − 2) τ 0 · τ 1 = 0, (9.7) 3D 2 x − 2xD x − 1 6 (α 0 − 4α 2 − 2) τ 2 · τ 0 = 0, (9.8) 3D 2 x − 2xD x + 1 6 (α 1 − α 2 + 6) τ 1 · σ 2 = 0, (9.9) 3D 2 x − 2xD x + 1 6 (α 1 − α 2 − 6) σ 1 · τ 2 = 0,(9.3(g ′′ 1 − g ′′ 2 ) + 2 = (3(g ′ 1 − g ′ 2 ) + 2x)(2g ′ 0 − g ′ 1 − g ′ 2 ) − α 0 ,(9.11)
where g j = log τ j , (j = 0, 1, 2). This equation is obtained if we subtract (9.7) from (9.8).
The differential equations for f 1 and f 2 can be rewritten as
f 1 r ′ = f 1 r u 0 − u 1 − r ′ r − α 1 f 2 q ′ = f 2 q u 1 − u 2 − q ′ q − α 2 ,(9.12)
respectively. In terms of the τ -functions, these equations read
3(h ′′ 2 − g ′′ 0 ) + 2 = (3(h ′ 2 − g ′ 0 ) + 2x)(2g ′ 1 − g ′ 0 − h ′ 2 ) − α 1 , (9.13) 3(g ′′ 0 − h ′′ 1 ) + 2 = (3(g ′ 0 − h ′ 1 ) + 2x)(2g ′ 2 − g ′ 0 − g ′ 1 ) − α 2 ,(9.14)
where h 1 = log σ 1 , h 2 = log σ 2 . In fact, from (9.7) and (9.9) we can eliminate g ′′ 1 to obtain (9.13). In the similar way from (9.8) and (9.10), we can eliminate g ′′ 2 to obtain (9.14). We remark that the normalization of τ -functions (9.2) is obtained by taking the sum of four equations in this theorem.
Jacobi-Trudi type formula
In this section we lift the action of W to the τ -functions. Consider the field extension K = K(τ 0 , τ 1 , τ 2 , σ 1 , σ 2 ). Then we can prove the next Theorem by a direct computation.
Theorem 4
We extend each automorphism s i of K to an automorphism of the field K = K(τ 0 , τ 1 , τ 2 , σ 1 , σ 2 ) by the formulae s i (τ j ) = τ j (i = j), s i (σ k ) = σ k (i = k) and
s 0 (τ 0 ) = f 0 τ 2 τ 1 τ 0 , s 1 (τ 1 ) = f 1 τ 0 τ 2 τ 1 , s 1 (σ 1 ) = −(f 1 q + α 1 ) τ 0 τ 2 τ 1 , (10.1) s 2 (τ 2 ) = f 2 τ 1 τ 0 τ 2 , s 2 (σ 2 ) = (f 2 r − α 2 ) τ 1 τ 0 τ 2 . (10.2)
Then these automorphisms define a representation of W on K.
Following [6], we will describe the Weyl group orbit of the τ -functions (see also [9]). For any w ∈ W and k = 0, 1, 2, there exists a rational function φ
(k) w ∈ K such that w(τ k ) = φ (k) w i=0,1,2 τ (α i |w(Λ k )) i . (10.3)
We shall give an expression of φ where σ i (i ∈ Z) is the adjacent transposition (i, i + 1). For a Maya diagram M and w ∈ W , we see that w(M) ⊂ Z is also a Maya diagram of the same charge. For any w ∈ W and k = 0, 1, 2, let λ = (λ 1 , . . . , λ r ) be the partition corresponding to the Maya diagram M = w(Z <k ). We set
N (k) λ = i<j i∈M c ,j∈M (ε i − ε j ),
where we impose the relation ε i − ε i+3 = −4 (i ∈ Z), so we have N (k) λ ∈ C[α 0 , α 1 , α 2 ]. We can apply the following formula due to Y.Yamada [12]:
φ^{(k)}_w = N^{(k)}_λ det( g^{(k−i+1)}_{λ_j − j + i} )_{1≤i,j≤r} .   (10.4)

Here g^{(k)}_p (k ∈ Z/3Z, p ∈ Z_{>0}) is the determinant of a p × p matrix; g^{(1)}_p = π(g^{(0)}_p) and g^{(2)}_p = π^2(g^{(0)}_p), where the automorphism π acts by π(f_{ij}) = f_{i+1,j+1}, π(ε_j) = ε_{j+1}.
The formula (10.4) is valid since the action of W = s 0 , s 1 , s 2 in our setting is reduced from the action of A ∞ (cf. [9]):
s i (α i ) = −α i , s i (α i±1 ) = α i±1 + α i , s i (α j ) = α j (j = i, i ± 1),
where α j := ε j − ε j+1 (j ∈ Z) and
s k (f i,j ) = f i,j + (δ k+1,i f k,j − δ j,k f i,k+1 ) α k f k .
11 Differential field of τ -functions
In this section we give supplementary discussions on the affine Weyl group action. In particular, we consider a differential field of τ -functions that naturally contains the fields K and K. The field F we consider can be presented as
C(α 0 , α 1 , α 2 , x; τ 0 , τ 1 , τ 2 , σ 1 , σ 2 , τ ′ 0 , τ ′ 1 , τ ′ 2 , σ ′ 1 , σ ′ 2 ) (11.1)
with some relations discussed below. Then the set of bilinear equations in Theorem 3 makes F into the differential field. To show some basic facts on F , we introduce some intermediate fields.
Let F denote the extended field of C(α 0 , α 1 , α 2 , x) obtained by adjoining the variables g ′ 0 , g ′ 1 , g ′ 2 , h ′ 1 , h ′ 2 with the following relations:
3(g ′ 0 − 2h ′ 2 + h ′ 1 )(g ′ 1 − g ′ 2 ) + 2x(g ′ 0 − 2g ′ 1 + g ′ 2 ) + α 1 + 1 = 0, (11.2) 3(h ′ 2 − 2h ′ 1 + g ′ 0 )(g ′ 1 − g ′ 2 ) + 2x(g ′ 1 − 2g ′ 2 + g ′ 0 ) + α 2 + 1 = 0. (11.3)
As in the proof of Theorem 3, we will identify g ′ j with (log τ j ) ′ and h ′ 1 , h ′ 2 with (log σ 1 ) ′ , (log σ 2 ) ′ respectively. Note that the relations (11.2), (11.3) correspond to (4.4), (4.5), (4.6). It is easy to see F = C(α 0 , α 1 , α 2 , x)(g ′ 0 , g ′ 1 , g ′ 2 ), and g ′ 0 , g ′ 1 , g ′ 2 are algebraically independent over C(α 0 , α 1 , α 2 , x). So if we fix g ′′ j ∈ F (j = 0, 1, 2) in an arbitrary way, then we have a derivation on F. Now we want to introduce a derivation on F in such a way that is consistent with the bilinear equations. Actually we can prove the following lemma by lengthy but straightforward computations:
Lemma 1 There exists a unique derivation on F such that the set of bilinear equations in Theorem 3 holds.
Consider the extended field F := F(τ_0, τ_1, τ_2, σ_1, σ_2) with the relation τ′_1 τ_2 − τ_1 τ′_2 = σ_1 σ_2. We can naturally extend the derivation by τ′_j = g′_j τ_j, σ′_k = h′_k σ_k (j = 0, 1, 2, k = 1, 2). Then we have the previous presentation (11.1). Now the next lemma is a direct consequence of Theorem 3.
Lemma 2
We have a natural embedding of the differential fields K ⊂ F . Our next task is to extend the affine Weyl group action on K = K(τ 0 , τ 1 , τ 2 , σ 1 , σ 2 ) (Theorem 4) to F . The following two lemmas can be easily verified.
Lemma 3 By the following formulae, we can introduce an action of the affine Weyl group W on F as a group of automorphisms:
s 0 (τ ′ 0 ) s 0 (τ 0 ) = τ ′ 0 τ 0 − α 0 f 0 , s 1 (τ ′ 1 ) s 1 (τ 1 ) = τ ′ 1 τ 1 − α 1 f 1 σ 2 τ 2 , s 1 (σ ′ 1 ) s 1 (τ 1 ) = σ ′ 1 τ 1 − α 1 f 1 τ ′ 0 τ 0 , s 2 (τ ′ 2 ) s 2 (τ 2 ) = τ ′ 2 τ 2 + α 2 f 2 σ 1 τ 1 , s 2 (σ ′ 2 ) s 2 (τ 2 ) = σ ′ 2 τ 2 − α 1 f 1 τ ′ 0 τ 0 ,
and s i (τ ′ j ) = τ ′ j (i = j), s i (σ ′ k ) = σ ′ k (i = k). Moreover this action is an extension of the action of W on K.
Lemma 4 For i, j = 0, 1, 2 and k = 1, 2 we have s i (τ ′ j ) = s i (τ j ) ′ , s i (σ ′ k ) = s i (σ k ) ′ . Remark. Although we have introduced the Weyl group action on the τ -functions in an ad hoc manner, these formulae can be derived systematically by using the gauge matrices G i (6.3), if we identify the τ -functions with the components of a dressing matrix. We will give an explanation of this point in a separate article.
The goal of this section is the following fact:
Theorem 5 The derivation of F commutes with the action of W on F .
We have derived a two-parameter family of the fifth Painlevé equation as a similarity reduction of the modified Yajima-Oikawa hierarchy, which is related to a non-standard Heisenberg subalgebra of A^{(1)}_2. The system admits a group of Bäcklund transformations of type W(A^{(1)}_2). By a suitable modification of our construction, it may be possible to recover a missing parameter and obtain the fifth Painlevé equation with the full symmetry of type W(A^{(1)}_3). The combinatorial and/or representation-theoretical structure of the hierarchy also deserves to be investigated. A combinatorial aspect of representations associated with the Yajima-Oikawa hierarchy has been studied by S. Leidwanger in [5]. It seems that this work is closely related to some family of polynomial solutions of the fifth Painlevé equation. We hope to discuss these issues in future publications.
Let Lg^s be the centralizer of γ in Lg: Lg^s = Ker(ad γ) = {A ∈ Lg | [γ, A] = 0}.

If we forget the relation (4.3) of M and B_1, B_2 and start from the Lax equation (5.3), we can recover some of the relations among the variables. For instance, differentiating both sides of g = f_0 + 3qr and eliminating the variables except g′ by means of (5.3), we get g′ = 2 and may therefore assume g = 2x.

Together with (8.6), elimination of the variables f_1, f_2, q, r, u_1 by (8.1), (8.3), (8.4), (8.5), (8.6) and the definition of the constant ε_2 (4.5) leads to an equation for y.
A straightforward verification of this fact may require quite a bit of calculation, because the second derivatives of the τ-functions are determined only implicitly by the bilinear equations. To avoid the complexity, we make use of the fact F = K(k), which is easily seen from (9.1), (9.3), and (9.4). As for the first derivatives of the τ-functions, we already have Lemma 4. Therefore, in order to prove Theorem 5, it suffices to show the next lemma.

Lemma 5

Proof. By Lemma 3, we can rewrite the right-hand sides of (11.5) and (11.6) by using (9.1), (10.1) and (10.2). On the other hand, using the normalization condition (9.2), we can verify (11.4) by applying (6.4) to s_i(k′) and the ODE (5.3) to s_i(k)′.

Acknowledgments

The authors are grateful to Masatoshi Noumi, Yasuhiko Yamada, Kanehisa Takasaki, Koji Hasegawa, Gen Kuroki and Ralph Willox for fruitful discussions and kind interest.
[1] M. F. de Groot, T. J. Hollowood, J. L. Miramontes, "Generalized Drinfel'd-Sokolov hierarchies", Commun. Math. Phys. 145 (1992), 57-84.
[2] V. G. Drinfel'd, V. V. Sokolov, "Lie algebras and equations of Korteweg-de Vries type", J. Sov. Math. 30 (1985), 1975-2036.
[3] V. G. Kac, D. H. Peterson, "112 constructions of the basic representation of the loop group of E_8", in Symposium on Anomalies, Geometry and Topology, W. A. Bardeen, A. R. White (eds.), World Scientific, Singapore, 1985.
[4] V. G. Kac, Infinite Dimensional Lie Algebras, third edition, Cambridge University Press, 1990.
[5] S. Leidwanger, "On the various realizations of the basic representation of A^{(1)}_{n-1} and the combinatorics of partitions", J. Algebraic Combin. 14 (2001), no. 2, 133-144.
[6] M. Noumi and Y. Yamada, "Affine Weyl groups, discrete dynamical systems and Painlevé equations", Commun. Math. Phys. 199 (1998), 281-295.
[7] M. Noumi and Y. Yamada, "Higher order Painlevé equations of type A^{(1)}_l", Funkcial. Ekvac. 41 (1998), 483-503.
[8] M. Noumi, Y. Yamada, "Symmetries in the fourth Painlevé equation and Okamoto polynomials", Nagoya Math. J. 153 (1999), 53-86.
[9] M. Noumi, "An introduction to birational Weyl group actions", in Symmetric Functions 2001: Surveys of Developments and Perspectives (ed. S. Fomin), Proceedings of the NATO ASI held in Cambridge, U.K., June 25-July 6, 2001, 179-222, Kluwer Academic Publishers, 2002.
[10] F. ten Kroode, J. van de Leur, "Bosonic and fermionic realizations of the affine algebra ĝl_n", Commun. Math. Phys. 137 (1991), 67-107.
[11] N. Yajima, M. Oikawa, "Formation and interaction of sonic-Langmuir solitons", Prog. Theor. Phys. 56 (1976), 1719-1739.
[12] Y. Yamada, "Determinant formulas for the τ-functions of the Painlevé equations of type A", Nagoya Math. J. 156 (1999), 123-134.
| []
|
[
"Radiation from a semi-infinite unflanged planar dielectric waveguide",
"Radiation from a semi-infinite unflanged planar dielectric waveguide"
]
| [
"B U Felderhof \nInstitut für Theorie\nStatistischen Physik\nRWTH Aachen University Templergraben\n5552056AachenGermany\n"
]
| [
"Institut für Theorie\nStatistischen Physik\nRWTH Aachen University Templergraben\n5552056AachenGermany"
]
| []
| Radiative emission from a semi-infinite unflanged planar dielectric waveguide is studied for the case of TM polarization on the basis of an iterative scheme. The first step of the scheme leads to approximate values for the reflection coefficients and electromagnetic fields inside and outside the waveguide. It is shown that for the related problems of reflection from a step potential in one-dimensional quantum mechanics and of Fresnel reflection of an electromagnetic plane wave from a half-space the iterative scheme is in accordance with the exact solution. | 10.1007/978-1-4939-1179-0_1 | [
"https://arxiv.org/pdf/1309.3927v2.pdf"
]
| 118,780,301 | 1309.3927 | 0d210f2bf54d42d201685e8341c958da441a0333 |
Radiation from a semi-infinite unflanged planar dielectric waveguide
21 Sep 2013 (Dated: May 11, 2014)
B U Felderhof
Institut für Theorie
Statistischen Physik
RWTH Aachen University Templergraben
5552056AachenGermany
Radiation from a semi-infinite unflanged planar dielectric waveguide
21 Sep 2013 (Dated: May 11, 2014)arXiv:1309.3927v2 [physics.optics] PACS numbers: 41.20.Jb, 42.25.Bs, 42.79.Gn, 43.20.+g * Electronic address: [email protected]
Radiative emission from a semi-infinite unflanged planar dielectric waveguide is studied for the case of TM polarization on the basis of an iterative scheme. The first step of the scheme leads to approximate values for the reflection coefficients and electromagnetic fields inside and outside the waveguide. It is shown that for the related problems of reflection from a step potential in one-dimensional quantum mechanics and of Fresnel reflection of an electromagnetic plane wave from a half-space the iterative scheme is in accordance with the exact solution.
INTRODUCTION
In a classic paper Levine and Schwinger [1] studied the radiation of sound from an unflanged circular pipe. Later they extended their theory to electromagnetic radiation [2]. Their work constituted the first major advance in the theory of diffraction after Sommerfeld's exact solution of the problem of plane wave diffraction by an ideally conducting half-plane [3]. In the theory of diffraction of sound, microwaves, or light, the radiation is assumed to propagate in uniform space with reflection by rigid objects or idealized boundaries. For a lucid introduction to the theory of diffraction we refer to Sommerfeld's lecture notes [4]. The early theory of diffraction was reviewed by Bouwkamp [5]. Later developments are discussed by Born and Wolf [6]. A brief review of the principles and applications of open-ended waveguides with idealized walls was presented by Gardiol [7].
The invention of the dielectric waveguide by Hondros and Debye [8] has led to the development of optical fibers and the subsequent advances in telecommunication. In the theoretical determination of running wave solutions to Maxwell's equations in a spatially inhomogeneous medium the radiation is assumed to propagate in a guiding structure of infinite length. The problem of emergence of radiation from a semi-infinite waveguide into a half-space is of obvious technical interest. In the case of sound the analysis is based on the exact solution of Levine and Schwinger [1], [9][10][11][12]. For a dielectric waveguide the observation of the emerging radiation can be used as a tool to study the nature of the driving incident wave [13].
It is advantageous to simplify the theoretical analysis by the use of planar symmetry. The theory of Levine and Schwinger was extended to planar geometry by Heins [14,15]. In the following we study radiation emerging from a semi-infinite planar dielectric waveguide. As an intermediate step the wavefunction in the exit plane must be calculated. In this case the integral equation technique of Schwinger [16] cannot be implemented, because of the complicated nature of the integral kernel. We evaluate the emitted radiation and the coefficients of reflection back into the waveguide approximately in a first step of an iterative scheme.
We show in two appendices that the iterative scheme converges to the exact solution in the related problems of reflection by a step potential in one-dimensional quantum mechanics and of Fresnel reflection of an electromagnetic plane wave by a half-space. For the planar dielectric waveguide it does not seem practically possible to go beyond the first step of the iterative scheme.
In a numerical example we study a planar waveguide consisting of a slab of uniform dielectric constant, bounded on both sides by a medium with a smaller dielectric constant [17][18][19]. In the case studied the first step of the iterative scheme leads to a modification of the wavefunction at the exit plane which is relatively small in comparison with the incident wave. Hence we may expect that the calculation provides a reasonable approximation to the exact solution.
II. PLANAR OPEN END GEOMETRY
We employ Cartesian coordinates (x, y, z) and consider a planar waveguide in the halfspace z < 0 with stratified dielectric constant ε(x) and uniform magnetic permeability µ 1 . In the half-space z > 0 the dielectric constant is uniform with value ε ′ and the magnetic permeability is µ 1 . We consider solutions of Maxwell's equations which do not depend on the coordinate y and depend on time t through a factor exp(−iωt). Waves traveling to the right in the left half-space will be partly reflected at the plane z = 0, and partly transmitted into the right half-space. The solutions of Maxwell's equations may be decomposed according to two polarizations . For TE-polarization the components E x , E z , and H y vanish, and the equations may be combined into the single equation
∂ 2 E y ∂x 2 + ∂ 2 E y ∂z 2 + εµ 1 k 2 E y = 0 (TE). (2.1)
We have used gaussian units, and k = ω/c is the vacuum wavenumber. For TM-polarization the components E y , H x , and H z vanish, and the equations may be combined into the single equation
∂ 2 H y ∂x 2 + ∂ 2 H y ∂z 2 − 1 ε dε dx ∂H y ∂x + εµ 1 k 2 H y = 0 (TM). (2.2)
We assume that the profile ε(x) is symmetric, ε(−x) = ε(x), and has a simple form with ε(x) increasing monotonically for x < 0 from value ε 1 to a maximum value ε 2 at x = 0. An example of the geometry under consideration is shown in Fig. 1. In the example the dielectric constant in the left half-space equals a constant ε 2 for −d < x < d and a constant ε 1 < ε 2 for x < −d and x > d.
For definiteness we consider only TM-polarization. It is convenient to denote the magnetic field component H y (x, z) for z < 0 as u(x, z) and for z > 0 as v(x, z). The continuity conditions at z = 0 are
u(x, 0−) = v(x, 0+), 1 ε(x) ∂u(x, z) ∂z z=0− = 1 ε ′ ∂v(x, z) ∂z z=0+ . (2.3)
We consider a solution u 0n (x, z) of Eq. (2.2) given by a guided mode solution u 0n (x, z) = ψ n (x) exp(ip n z), (2.4) where ψ n (x) is the guided mode wavefunction, and p n the guided mode wavenumber. We assume p n > 0, so that the wave u 0n (x, z) exp(−iωt) is traveling to the right. The complete solution takes the form
u n (x, z) = u 0n (x, z) + u 1n (x, z), v n (x, z),(2.5)
where u 1n (x, z) and v n (x, z) must be determined such that the continuity conditions Eq. (2.3) are satisfied. The function u 1n (x, z) describes the reflected wave, and v n (x, z) describes the wave radiated into the right-hand half-space. Since the right-hand half-space is uniform the solution v n (x, z) takes a simple form, and can be expressed as
v n (x, z) = ∞ −∞ F n (q) exp(iqx + i ε ′ µ 1 k 2 − q 2 z) dq.
(2.6)
The contribution from the interval − √ ε ′ µ 1 |k| < q < √ ε ′ µ 1 |k| corresponds to waves traveling to the right, the contribution from |q| > √ ε ′ µ 1 |k| corresponds to evanescent waves.
Similarly the solution u 1n (x, z) in the left half-space can be expressed as
u 1n (x, z) = nm−1 m=0 R mn ψ m (x) exp(−ip m z) + ∞ 0 R n (q)ψ(q, x) exp(−i ε 1 µ 1 k 2 − q 2 z) dq, (2.7)
where the sum corresponds to guided waves traveling to the left, with n m the number of such guided modes possible at the given frequency ω, and the integral corresponds to waves radiating towards the left. We require that the mode solutions are normalized such that [20]
∞ −∞ ψ * m (x)ψ n (x) ε(x) dx = δ mn , ∞ −∞ ψ * m (x)ψ(q, x) ε(x) dx = 0, ∞ −∞ ψ * (q, x)ψ(q ′ , x) ε(x) dx = δ(q − q ′ ). (2.8)
The guided mode solutions {ψ m (x)} can be taken to be real. Orthogonality follows from Eq. (2.2). We show in the next section how the functions u 1n (x, z) and v n (x, z) may in principle be evaluated from an iterative scheme. The coefficients {R mn } and the amplitude function R n (q) also follow from the scheme.
III. ITERATIVE SCHEME
The iterative scheme is based on successive approximations to the scattering solution. Thus we write the exact solution as infinite sums
u n (x, z) = ∞ j=0 u (j) n (x, z), v n (x, z) = ∞ j=0 v (j) n (x, z), (3.1) with the terms u (j+1) n (x, z), v (j+1) n (x, z) determined from the previous u (j) n (x, z), v (j)
n (x, z). In zeroth approximation we identify u (0) n (x, z) with the incident wave,
u (0) n (x, z) = ψ n (x) exp(ip n z). (3.2)
The corresponding v (0)
n (x, z) will be determined by continuity at the exit plane z = 0. From Eq. (3.2) we have u (0) n (x, 0−) = ψ n (x). This has the Fourier transform
φ n (q) = 1 2π ∞ −∞ ψ n (x) exp(−iqx) dx. (3.3)
Using continuity of the wavefunction at z = 0 and the expression (2.6) we find correspondingly
v (0) n (x, z) = ∞ −∞ φ n (q) exp(iqx + i ε ′ µ 1 k 2 − q 2 z) dq,(3.4)
so that in zeroth approximation F (0) n (q) = φ n (q). Clearly the zeroth approximation does not satisfy the second continuity equation in Eq. (2.3), and we must take care of this in the next approximation.
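In practice the transform (3.3) is evaluated numerically once ψ_n(x) is known on a grid; the following minimal Python sketch (the helper name fourier_transform, the grid sizes, and psi_n are placeholder choices, not quantities defined in the text) uses the trapezoidal rule:

```python
import numpy as np

def fourier_transform(psi, x, q):
    # phi(q) = (1/(2*pi)) * integral of psi(x) * exp(-i q x) dx, trapezoidal rule
    phase = np.exp(-1j * np.outer(q, x))          # shape (len(q), len(x))
    return np.trapz(phase * psi, x, axis=1) / (2.0 * np.pi)

# usage sketch: psi_n sampled on x, transform evaluated on a grid of wavenumbers q
# x = np.linspace(-20.0, 20.0, 4001)
# q = np.linspace(-30.0, 30.0, 601)
# phi_n = fourier_transform(psi_n, x, q)
```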
For the difference of terms in Eq. (2.3) we find
ρ (0) n (x) = −i ε(x) p n ψ n (x) + i ε ′ ∞ −∞ ε ′ µ 1 k 2 − q 2 φ n (q) exp(iqx) dq. (3.5)
By symmetry ρ^{(0)}_n(x) is symmetric in x for n even, antisymmetric in x for n odd.
The next approximation u (1) n (x, z) can be found by comparison with the solution of the problem where the profile ε(x) extends over all space and radiation is generated by a source ε(x)ρ(x)δ(z) with a Sommerfeld radiation condition, so that radiation is emitted to the right for z > 0 and to the left for z < 0. This antenna solution can be expressed as
u A (x, z) = ∞ −∞ K(x, x ′ , z)ρ(x ′ ) dx ′ , (3.6)
with kernel K(x, x ′ , z). The latter can be calculated from the Fourier decomposition
δ(z) = 1 2π ∞ −∞ e ipz dp,(3.7)
in terms of the integral
K(x, x ′ , z) = 1 2π ∞ −∞ G(x, x ′ , p) e ipz dp,(3.8)
with the prescription that the path of integration in the complex p plane runs just above the negative real axis and just below the positive real axis. The Green function G(x, x ′ , p) can be found from the solution of the one-dimensional wave equation,
d 2 G dx 2 − 1 ε dε dx dG dx + (εµ 1 k 2 − p 2 )G = ε(x)δ(x − x ′ ). (3.9)
The solution takes the form [20] G
(x, x ′ , p) = ε(x) f 2 (x < , p)f 3 (x > , p) ∆(f 2 , f 3 , p) ε(x ′ ), (3.10) where x < (x > )
is the smaller (larger) of x and x ′ , and the remaining quantities will be specified in the next section. The Green function satisfies the symmetry properties 11) and the reciprocity relation
G(x, x ′ , −p) = G(x, x ′ , p), G(−x, −x ′ , p) = G(x, x ′ , p),(3.G(x, x ′ , p) = G(x ′ , x, p). (3.12)
Consequently the kernel K(x, x ′ , z) has the properties
K(x, x ′ , −z) = K(x, x ′ , z), K(−x, −x ′ , z) = K(x, x ′ , z), (3.13) as well as K(x, x ′ , z) = K(x ′ , x, z). (3.
14)
The function u (1) n (x, z) is now identified as
u (1) n (x, z) = − ∞ −∞ K(x, x ′ , z)ρ (0) n (x ′ ) dx ′ . (3.15)
The minus sign is needed to provide near cancellation of the source density between the zeroth and first order solutions, ρ^{(0)}_n(x) + ρ^{(1)}_n(x) ≈ 0. We find the first order function F^{(1)}_n(q) by Fourier transform from the value at z = 0 in the form
F (1) n (q) = 1 2π ∞ −∞ u (1) n (x, 0)e −iqx dx. (3.16)
The corresponding function v (1) n (x, z) is found from Eq (2.6). The first order source density ρ (1) n (x) is found to be
ρ (1) n (x) = −1 ε(x) ∂u (1) n (x, z) ∂z z=0 + i ε ′ ∞ −∞ ε ′ µ 1 k 2 − q 2 F (1) n (q) exp(iqx) dq. (3.17)
In principle the first order function u^{(1)}_n(x, 0) in the exit plane z = 0 may be regarded as the result of a linear operator R^{(1)} acting on the state ψ_n(x) given by the incident wave. The iterated solution then corresponds to the action of the operator R = R^{(1)}(I − R^{(1)})^{−1}, where I is the identity operator. In order j of the geometric series corresponding to the operator R, the wavefunctions u^{(j)}_{1n}(x, z) and v^{(j)}_n(x, z) in the left and right half-space can be found by completing the function u^{(j)}_{1n}(x, 0) = v^{(j)}_{1n}(x, 0) in the exit plane by left and right running waves, respectively.
Assuming that the scheme has been extended to all orders we obtain the solutions
u_{1n}(x, z) = u_n(x, z) − u^{(0)}_n(x, z) and v_n(x, z) given by Eq. (3.1). By construction, at each step u^{(j)}_n(x, 0) = v^{(j)}_n(x, 0). In the limit we must have

Σ_{j=0}^{∞} ρ^{(j)}_n(x) = 0,   (3.18)

so that the continuity conditions Eq. (2.3) are exactly satisfied. In Appendix A we show how the iterative scheme reproduces the exact solution for reflection from a step potential in one-dimensional quantum mechanics. In Appendix B we show the same for Fresnel reflection.
In the integral in Eq. (3.15) it is convenient to perform the integral over p first, since ρ^{(0)}_n(x′) does not depend on p. The pole at −p_m, arising from a zero of the denominator ∆ in Eq. (3.10), yields the first order reflection coefficient [20]
R (1) mn = i 2p m ∞ −∞ ψ m (x)ρ (0) n (x) dx. (3.19)
The second term in Eq. (2.7) corresponds to the remainder of the integral, after subtraction of the simple pole contributions. The function R^{(1)}_n(q) will be discussed in the next section. In the calculation of F^{(1)}_n(q) we find
F (1) n (q) = nm−1 m=0 R (1) mn φ m (q) + δF (1) n (q),(3.20)
where δF (1) n (q) is the contribution from the remainder of the integral over p, after subtraction of the simple pole contributions.
Formally, in the complete solution Eq. (2.7) the reflection coefficients R mn and the amplitude function R n (q) are found as R mn = (ψ m , u 1n (0)), R n (q) = (ψ(q), u 1n (0)). (3.21) with the scalar product as given by Eq. (2.8). The first step of the iterative scheme yields
R (1) mn = (ψ m , u (1) n (0)), R (1) n (q) = (ψ(q), u (1) n (0)). (3.22)
The continuum states ψ(q) can be discretized in the usual way, so that the expressions in Eq. (3.22) can be regarded as elements of a matrix R (1) . As indicated above, the iterative scheme corresponds to a geometric series, so that the reflection coefficients in Eq. (3.21) can be found as elements of the matrix
R = (I − R (1) ) −1 − I,(3.23)
where I is the identity matrix. Finally the function F n (q) can be found from the corresponding state u n (0) as in Eq. (3.16). The function F n (q) can be related to the radiation scattered into the right halfspace. The scattering angle θ is related to the component q by
sin θ = q/√(ε′µ_1 k²).   (3.24)

Defining the scattering cross section σ_n(θ) by

σ_n(θ) sin θ dθ = |F_n(q)|² q dq,   (3.25)

we find the relation

σ_n(θ) = √(ε′µ_1 k²) √(ε′µ_1 k² − q²) |F_n(q)|².   (3.26)
In lowest approximation the cross section is proportional to the absolute square of the Fourier transform of the guided mode ψ n (x). To higher order the cross section is affected by the reflection into other modes.
IV. CONTINUOUS SPECTRUM
The calculation of the function R^{(1)}_n(q) corresponding to the contribution from the continuous spectrum requires a separate discussion. The wave equation (3.9) is related to a quantum-mechanical Schrödinger equation for a particle in a potential. The bound states of the Schrödinger problem correspond to the guided modes, and the scattering states correspond to a continuous spectrum of radiation modes. The eigenstates of the Hamilton operator of the Schrödinger problem satisfy a completeness relation which can be usefully employed in the waveguide problem.
Explicitly, the homogeneous one-dimensional Schrödinger equation corresponding to Eq. (3.9) via the relation ψ(x) = √ε(x) f(x) reads [20]
d 2 f dx 2 − V (x)f = p 2 f,(4.1)
where the function V (x) is given by
V (x) = −εµ 1 k 2 + √ ε d 2 dx 2 1 √ ε . (4.2)
By comparison with the quantum-mechanical Schrödinger equation we see that
U(x) = k 2 1 + V (x),(4.3)
where k 1 = √ ε 1 µ 1 k, may be identified as the potential. The bound state energies correspond to {k 2 1 − p 2 n }. It is convenient to assume that the dielectric profile ε(x) equals ε 1 for x < −x 1 and x > x 1 , so that the potential U(x) vanishes for |x| > x 1 . We define three independent solutions of the Schrödinger equation (4.1) with specified behavior for |x| > x 1 . The behavior of the function f 1 (x, p) is specified as
f 1 (x, p) = e iq 1 x , for x < −x 1 , f 1 (x, p) = W 11 e iq 1 x + W 21 e −iq 1 x , for x > x 1 ,(4.4)
with wavenumber
q 1 = k 2 1 − p 2 . (4.5)
The behavior of the function f 2 (x, p) is specified as
f 2 (x, p) = e −iq 1 x , for x < −x 1 , f 2 (x, p) = W 12 e iq 1 x + W 22 e −iq 1 x , for x > x 1 . (4.6)
Similarly, the behavior of the function f 3 (x, p) is specified as
f 3 (x, p) = W 22 e iq 1 x + W 12 e −iq 1 x , for x < −x 1 , f 3 (x, p) = e iq 1 x , for x > x 1 . (4.7)
The coefficients W 12 and W 22 are elements of the transfer matrix of the planar structure,
W = W 11 W 12 W 21 W 22 = 1 T ′ T T ′ − RR ′ R ′ −R 1 . (4.8)
Because of the assumed symmetry of the dielectric profile we have in the present case R ′ = R and T ′ = T , so that W 21 = −W 12 . Moreover
W 11 W 22 + W 2 12 = 1. (4.9)
The functions f 2 (x, p) and f 3 (x, p) were used in the calculation of the Green function in Eq. (3.10). The denominator in that expression is given by
∆(f 2 , f 3 ) = 2iq 1 W 22 (p, k). (4.10)
From the solution of the inhomogeneous Schrödinger equation it follows that the completeness relation of the normal mode solutions may be expressed as [21] nm−1
n=0 ψ n (x)ψ n (x ′ ) ε(x)ε(x ′ ) + 1 2π ∞ −∞ f 2 (x, k 2 1 − q 2 )f * 2 (x ′ , k 2 1 − q 2 ) |W 22 ( k 2 1 − q 2 , k)| 2 dq = δ(x − x ′ ). (4.11)
Correspondingly the Green function may be decomposed as
G(x, x ′ , p) = nm−1 n=0 ψ n (x)ψ n (x ′ ) p 2 n − p 2 + 1 2π ε(x)ε(x ′ ) ∞ −∞ f 2 (x, k 2 1 − q 2 )f * 2 (x ′ , k 2 1 − q 2 ) (k 2 1 − q 2 − p 2 ) |W 22 ( k 2 1 − q 2 , k)| 2 dq. (4.12)
Hence we find for the antenna kernel K(x, x ′ , z) from Eq. (3.8) by use of the integration prescription
K(x, x ′ , z) = nm−1 n=0 −i 2p n ψ n (x)ψ n (x ′ ) e ipn|z| − i 4π ε(x)ε(x ′ ) ∞ 0 f 2 (x, k 2 1 − q 2 )f * 2 (x ′ , k 2 1 − q 2 ) k 2 1 − q 2 |W 22 ( k 2 1 − q 2 , k)| 2 e i √ k 2 1 −q 2 |z| dq.
(4.13)
The first order left-hand wavefunction is therefore found from Eq. (3.15) as
u (1) n (x, z) = nm−1 m=0 i 2p m (ψ m , ερ (0) n )ψ m (x) e −ipmz + i 4π ε(x)ε(x ′ ) ∞ 0 ( √ εf 2 ( k 2 1 − q 2 ), ερ (0) n ) k 2 1 − q 2 |W 22 ( k 2 1 − q 2 , k)| 2 f 2 (x, k 2 1 − q 2 )e −i √ k 2 1 −q 2 z dq,
(4.14)
with scalar product as given by Eq. (2.8). The wavefunction is the sum of guided modes running to the left and of radiation into the left-hand half-space. The first term agrees with the reflection coefficient given by Eq. (3.19). By symmetry the matrix element (ψ m , ερ (0) n ) vanishes unless m and n are both even or both odd. The integral provides an alternative expression for the remainder δu (1) n (x, z). From Eq. (4.11) we may identify
ψ(q, x) = 1 √ 2π|W 22 ( k 2 1 − q 2 , k)| ε(x)f 2 (x, k 2 1 − q 2 ). (4.15)
With this definition the function R (1) n (q) is given by
R (1) n (q) = i 2 √ 2π ( √ εf 2 ( k 2 1 − q 2 ), ερ (0) n ) k 2 1 − q 2 |W 22 ( k 2 1 − q 2 , k)| = i 2 k 2 1 − q 2 (ψ(q), ερ (0) n ). (4.16)
The second line has the same structure as Eq. (3.19). Although the decomposition in Eq. (4.14) is of theoretical interest, the calculation of the function u (1) n (x, z) is performed more conveniently as indicated in Eq. (3.15), with the integral over p performed first.
V.
NUMERICAL EXAMPLE
We demonstrate the effectiveness of the scheme on a numerical example. We consider a flat dielectric profile defined by ε(x) = ε 2 for −d < x < d and ε(x) = ε 1 for |x| > d with values ε 2 = 2.25 and ε 1 = 2.13. In the right half-space we put ε ′ = 1, and we put µ 1 = 1 everywhere. The geometry is shown in Fig. 1.
By symmetry the guided modes in infinite space are either symmetric or antisymmetric in x. The explicit expressions for the mode wavefunctions can be found by the transfer matrix method [20]. At each of the two discontinuities the coefficients of the plane waves exp(iq i x) and exp(−iq i x) are transformed into coefficients of the plane waves exp(iq j x) and exp(−iq j x) by a matrix involving Fresnel coefficients given by
t ij = 2ε j q i ε i q j + ε j q i , r ij = ε j q i − ε i q j ε i q j + ε j q i , (i, j) = (1, 2), (5.1) with wavenumbers q j = k 2 j − p 2 , k j = √ ε j µ 1 k. (5.2)
The wavenumbers p n (k) of the guided modes are found as zeros of the transfer matrix element W 22 (p, k), which takes the explicit form
W 22 (p, k) = e 2iq 1 d cos 2q 2 d − i ε 2 1 q 2 2 + ε 2 2 q 2 1 2ε 1 ε 2 q 1 q 2 sin 2q 2 d . (5.3)
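For the guided modes one has k_1 < p < k_2, so that q_1 = iκ_1 with κ_1 = √(p² − k_1²) and, apart from the overall factor e^{2iq_1 d}, the bracket in Eq. (5.3) becomes real. The zeros can then be located numerically; the following sketch (parameter values are those of this numerical example; the function and variable names are arbitrary) scans for sign changes and refines the roots:

```python
import numpy as np
from scipy.optimize import brentq

eps1, eps2, d, kd = 2.13, 2.25, 1.0, 12.0      # values from the text (mu1 = 1)
k = kd / d
k1, k2 = np.sqrt(eps1) * k, np.sqrt(eps2) * k

def mode_condition(p):
    # real form of the bracket in Eq. (5.3) for k1 < p < k2, where q1 = i*kappa1
    kappa1 = np.sqrt(p**2 - k1**2)
    q2 = np.sqrt(k2**2 - p**2)
    return (np.cos(2*q2*d)
            - (eps1**2*q2**2 - eps2**2*kappa1**2)/(2*eps1*eps2*kappa1*q2)*np.sin(2*q2*d))

ps = np.linspace(k1 + 1e-6, k2 - 1e-6, 4000)
vals = np.array([mode_condition(p) for p in ps])
roots = [brentq(mode_condition, ps[i], ps[i+1])
         for i in range(len(ps) - 1) if vals[i]*vals[i+1] < 0]
print(sorted(r*d for r in roots))  # the values p0*d = 17.955 and p2*d = 17.624 quoted
                                   # in the text should appear among the roots at kd = 12
```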
The guided mode wavefunctions {ψ n (x)}, their Fourier transforms {φ n (q)}, and the Green function G(x, x ′ , p) can be found in explicit form. In Fig. 2 we show the ratio of wavenumbers p n (k)/k as a function of kd for the first few guided modes. We choose the frequency corresponding to kd = 12. In that case there are two symmetric modes, denoted as TM0 and TM2, and one antisymmetric mode, denoted as TM1. We assume the incident wave to be symmetric in x. Then it is not necessary to consider the antisymmetric mode. In Fig. 3 we show the corresponding normalized wavefunctions ψ 0 (x) and ψ 2 (x). In Fig. 4 we show their Fourier transforms φ 0 (q) and φ 2 (q). The wavenumbers at kd = 12 are p 0 = 17.955/d and p 2 = 17.624/d. The edge of the continuum is given by k 1 d = 17.513, and the corresponding value for ε 2 is k 2 d = 18.
In Fig. 5 we show the source density −iρ^{(0)}_n(x) of the zeroth approximation for n = 0, 2 as a function of x, as given by Eq. (3.5). The coefficients R^{(1)}_{mn} of the simple pole contributions can be calculated numerically from Eq. (3.19).

For the first correction to the emitted radiation we need to calculate the function u^{(1)}_n(x, 0). The kernel K(x, x′, 0) in Eq. (3.15) can be evaluated numerically. On account of the symmetry in ±p it is sufficient to calculate twice the integral along the positive real p axis, with path of integration just below the axis. In the numerical integration over p in Eq. (3.8) the simple poles at {p_m} cause problems. In order to avoid the simple poles we therefore integrate instead along a contour consisting of the line from 0 to k_1 just below the axis, a semi-circle in the lower half of the complex p plane centered at (k_1 + k_2)/2 and of radius (k_2 − k_1)/2, and the line just below the real axis from k_2 to +∞. In Fig. 6 we plot as an example the real part of K(x, 0, 0) as a function of x. The plot of the imaginary part is similar.
In Fig. 7 we show the imaginary part of the function u^{(1)}_0(x, 0), as calculated from Eq. (3.15). This is nearly identical with the contribution from the simple poles at p_0 and p_2, which is also shown in Fig. 7. In Fig. 8 we show the real part of the function u^{(1)}_0(x, 0). Here the simple poles do not contribute. The magnitude of the wavefunction at the origin u^{(1)}_0(0, 0) = −0.149 − 0.005i may be compared with that of the zeroth approximation u^{(0)}_0(0, 0) = 1.346. This shows that the first order correction is an order of magnitude smaller than the zeroth order approximation. Consequently we may expect that the sum u^{(0)}_0(x, 0) + u^{(1)}_0(x, 0) provides a close approximation to the exact value. In Fig. 9 we show the absolute value |F_0(q)| of the corresponding Fourier transform and compare with the zeroth approximation |F^{(0)}_0(q)| = |φ_0(q)|. By use of Eq. (3.26) the absolute square of the transform yields the angular distribution of radiation emitted into the right-hand half-space.
VI. DISCUSSION
In the above we have employed an iterative scheme inspired by the exact solution of two fundamental scattering problems, reflection by a step potential in one-dimensional quantum mechanics, shown in Appendix A, and Fresnel reflection of electromagnetic radiation by a half-space, shown in Appendix B. For the planar dielectric waveguide we have implemented only the first step of the iterative scheme. In the numerical example shown in Sec. V even this first step leads to interesting results. The method is sufficiently successful that it encourages application in other situations.
In particular it will be of interest to apply the method to a circular cylindrical dielectric waveguide or optical fiber. The mathematics of the method carries over straightforwardly to this more complicated geometry, with the plane wave behavior in the transverse direction replaced by Bessel functions.
Due to symmetry the problem for both planar and cylindrical geometry can be reduced to an equation for a scalar wavefunction, so that the theory is similar to that for sound propagation. This suggests that an interesting comparison can be made with a lattice Boltzmann simulation. For a rigid circular pipe such a simulation has already been performed by da Silva and Scavone [22], with interesting results. A finite element method has been applied to a rigid open-ended duct of more general cross section [23].
Appendix A
In this Appendix we show how the iterative scheme reproduces the exact solution of the time-independent one-dimensional Schrödinger equation with a step potential. We consider the equation
− d 2 u dz 2 + V (z)u = p 2 u,(A1)
with potential V (z) = 0 for z < 0 and V (z) = V for z > 0. In proper units p 2 is the energy. We denote the solution for z > 0 as v(z). For a wave incident from the left the exact solution reads
u(z) = e ipz + Be −ipz , v(z) = Ce ip ′ z ,(A2)
where p ′ = p 2 − V , and the reflection coefficient B and transmission coefficient C are given by
B = p − p ′ p + p ′ , C = 2p p + p ′ .(A3)
The wavefunction and its derivative are continuous at z = 0. We apply the iterative scheme and put to zeroth order
u (0) (z) = e ipz , v (0) (z) = e ip ′ z .(A4)
The antenna solution u A (z) solves the equation
d 2 u A dz 2 + p 2 u A = ρδ(z) (A5)
for all z. It is given by
u A (z) = K(z)ρ, K(z) = 1 2ip e ip|z| .(A6)
To zeroth order the source ρ is
ρ (0) = − du (0) dz z=0 + dv (0) dz z=0 = −i(p − p ′ ).(A7)
We put the first order solution equal to
u (1) (z) = −K(z)ρ (0) = p − p ′ 2p e −ipz , v (1) (z) = p − p ′ 2p e ip ′ z .(A8)
Note the minus sign in −K(z)ρ (0) . The value at the exit z = 0 is sufficient to calculate the coefficients B and C from the geometric series
B = ∞ j=1 p − p ′ 2p j , C = ∞ j=0 p − p ′ 2p j .(A9)
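As a quick consistency check (the symbol p_prime below is a sympy stand-in for p′), the geometric series (A9) can be summed in closed form for |(p − p′)/2p| < 1 and compared with Eq. (A3):

```python
import sympy as sp

p, p_prime = sp.symbols('p p_prime', positive=True)
t = (p - p_prime) / (2*p)        # common ratio of the geometric series (A9)

B = sp.simplify(t / (1 - t))     # sum_{j>=1} t**j, assuming |t| < 1
C = sp.simplify(1 / (1 - t))     # sum_{j>=0} t**j

print(sp.simplify(B - (p - p_prime)/(p + p_prime)))   # -> 0, matches (A3)
print(sp.simplify(C - 2*p/(p + p_prime)))             # -> 0, matches (A3)
```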
By continuation one finds for the wave function at order j, for j ≥ 1,
u (j) (z) = p − p ′ 2p j e −ipz , v (j) (z) = p − p ′ 2p j e ip ′ z .(A10)
Hence for j ≥ 1 the source at order j follows in the same way, and the sum over all j vanishes, as it should. Alternatively one can write the summed correction directly from Eq. (A8); adding this to u^{(0)}(z), v^{(0)}(z) one reproduces Eq. (A2).

Appendix B

In this Appendix we show how the iterative scheme reproduces the exact solution for Fresnel reflection from a half-space. We consider infinite space with dielectric constant ε for z < 0 and ε′ for z > 0. The magnetic permeability equals µ_1 everywhere. We consider waves independent of y and TM-polarization. Then the magnetic field component H_y(x, z) satisfies the scalar equation Eq. (2.2). We put H_y(x, z) = u(x, z) for z < 0 and H_y(x, z) = v(x, z) for z > 0, with the continuity conditions at z = 0 as in Eq. (2.3).

For a plane wave incident from the left the exact solution reads as in Appendix A, with p = √(εµ_1 k² − q²), p′ = √(ε′µ_1 k² − q²), and reflection coefficient B and transmission coefficient C. We apply the iterative scheme and put to zeroth order the incident and transmitted plane waves. The antenna solution u_A(x, z) solves the corresponding wave equation for all (x, z); for ρ(x) = ρ_q e^{iqx} it has the form e^{iqx} K(z) ρ_q. To zeroth order the source ρ_q follows from the mismatch of the fields at z = 0. We put the first order solution equal to −e^{iqx} K(z) ρ^{(0)}_q; note the minus sign. The value at the exit z = 0 is sufficient to calculate the coefficients B and C from the geometric series. By continuation one finds the wave function at order j for j ≥ 1. Hence for j ≥ 1 the source at order j follows, so that the sum over all j vanishes, as it should. Alternatively one can write the summed correction directly from Eq. (B8); adding this to u^{(0)}(x, z), v^{(0)}(x, z) one reproduces Eq. (B2).

We note that the zeroth and first order source densities are simply related. Hence we find B = 1 − M^{−1}, C = 1 + B. This suggests that more generally the complete solution of the scattering problem may be found from the relation between the zeroth and first order source densities.

Figure captions

Fig. 1 Geometry of the planar waveguide.

Fig. 2 Plot of the reduced wavenumber p_n(k)/k of the lowest order guided waves for n = 0, 1, 2, as functions of kd for values of the dielectric constant given in the text.

Fig. 3 Plot of the wavefunctions ψ_0(x) and ψ_2(x) of the guided modes with n = 0 (no nodes) and n = 2 (two nodes) as functions of x/d.

Fig. 4 Plot of the Fourier transforms φ_0(q) and φ_2(q) of the wavefunctions of the guided modes with n = 0 (solid curve) and n = 2 (dashed curve) as functions of qd.

Fig. 5 Plot of the source densities −iρ^{(0)}_n(x), as given by Eq. (3.5), as functions of x/d.

Fig. 6 Plot of the real part of the kernel K(x, 0, 0), as given by Eq. (3.8), as a function of x/d.

Fig. 7 Plot of the imaginary part of the first order wavefunction u^{(1)}_0(x, 0) at the exit plane as a function of x/d (solid curve), compared with the contribution of the two guided waves.

Fig. 8 Plot of the real part of the first order wavefunction u^{(1)}_0(x, 0) at the exit plane as a function of x/d.

Fig. 9 Plot of the absolute value of the Fourier transform |F_0(q)|, compared with the zeroth approximation |φ_0(q)|.
[1] H. Levine and J. Schwinger, Phys. Rev. 73, 383 (1948).
[2] H. Levine and J. Schwinger, Comm. Pure Appl. Math. 3, 355 (1950).
[3] A. Sommerfeld, Math. Ann. 47, 317 (1896).
[4] A. Sommerfeld, Optics (Akademische Verlagsgesellschaft, Leipzig, 1964).
[5] H. Bouwkamp, Rep. Prog. Phys. 17, 35 (1954).
[6] M. Born and E. Wolf, Principles of Optics (Pergamon Press, Oxford, 1975).
[7] F. E. Gardiol, in Advances in Electronics and Electron Physics, edited by P. W. Hawkes, 63, 139 (1985).
[8] D. Hondros and P. Debye, Ann. d. Phys. 32, 465 (1910).
[9] G. F. Homicz and J. A. Lordi, J. Sound Vib. 41, 283 (1975).
[10] P. Joseph and C. L. Morfey, J. Acoust. Soc. Am. 105, 2590 (1999).
[11] M. S. Howe, Hydrodynamics and Sound (Cambridge University Press, Cambridge, 2007).
[12] S. W. Rienstra and A. Hirschberg, An Introduction to Acoustics, unpublished, available via internet (2013).
[13] D. M. S. Soh, J. Nilsson, S. Baek, C. Codemard, Y. Jeong, and V. Philippov, J. Opt. Soc. Am. A 21, 1241 (2004).
[14] A. E. Heins, Quart. Appl. Math. 6, 157 (1948).
[15] P. C. Clemmow, Proc. Roy. Soc. A 205, 286 (1951).
[16] F. E. Borgnis and C. H. Papas, Electromagnetic Waveguides and Resonators (Springer, Berlin, 1958).
[17] D. Marcuse, Theory of Dielectric Optical Waveguides (Academic, New York, 1974).
[18] H. Kogelnik, in Integrated Optics, Topics Appl. Phys. 7, edited by T. Tamir (Springer, Berlin, 1979).
[19] P. Lorrain, D. R. Corson, and F. Lorrain, Electromagnetic Fields and Waves (Freeman, New York, 1988).
[20] A. Bratz, B. U. Felderhof, and G. Marowsky, Appl. Phys. B 50, 393 (1990).
[21] R. G. Newton, Scattering Theory of Waves and Particles (McGraw-Hill, New York, 1966).
[22] A. R. da Silva and G. P. Scavone, J. Phys. A 40, 397 (2007).
[23] W. Duan and R. Kirby, J. Acoust. Soc. Am. 131, 3638 (2012).
| []
|
[
"Accessing topological order in fractionalized liquids with gapped edges",
"Accessing topological order in fractionalized liquids with gapped edges"
]
| [
"Thomas Iadecola \nPhysics Department\nBoston University\n02215BostonMassachusettsUSA\n",
"Titus Neupert \nPrinceton Center for Theoretical Science\nPrinceton University\n08544PrincetonNew JerseyUSA\n",
"Claudio Chamon \nPhysics Department\nBoston University\n02215BostonMassachusettsUSA\n",
"Christopher Mudry \nCondensed Matter Theory Group\nPaul Scherrer Institute\nCH-5232Villigen PSISwitzerland\n"
]
| [
"Physics Department\nBoston University\n02215BostonMassachusettsUSA",
"Princeton Center for Theoretical Science\nPrinceton University\n08544PrincetonNew JerseyUSA",
"Physics Department\nBoston University\n02215BostonMassachusettsUSA",
"Condensed Matter Theory Group\nPaul Scherrer Institute\nCH-5232Villigen PSISwitzerland"
]
| []
| We consider manifestations of topological order in time-reversal-symmetric fractional topological liquids (TRS-FTLs), defined on planar surfaces with holes. We derive a general formula for the topological ground state degeneracy of such a TRS-FTL, which applies to cases where the edge modes on each boundary are fully gapped by backscattering terms. The degeneracy is exact in the limit of infinite system size, and is given by q^{N_h}, where N_h is the number of holes and q is an integer that is determined by the topological field theory. When the degeneracy is lifted by finite-size effects, the holes realize a system of N_h coupled spin-like q-state degrees of freedom. In particular, we provide examples where "artificial" Z_q quantum clock models are realized. We also investigate the possibility of measuring the topological ground state degeneracy with calorimetry, and briefly revisit the notion of topological order in s-wave BCS superconductors.
"https://arxiv.org/pdf/1407.4129v4.pdf"
]
| 118,499,401 | 1407.4129 | f8a9360dbaeea703449eda5842d0e10057f5e1b1 |
Accessing topological order in fractionalized liquids with gapped edges
Thomas Iadecola
Physics Department
Boston University
02215BostonMassachusettsUSA
Titus Neupert
Princeton Center for Theoretical Science
Princeton University
08544PrincetonNew JerseyUSA
Claudio Chamon
Physics Department
Boston University
02215BostonMassachusettsUSA
Christopher Mudry
Condensed Matter Theory Group
Paul Scherrer Institute
CH-5232Villigen PSISwitzerland
Accessing topological order in fractionalized liquids with gapped edges
(Dated: July 18, 2014)
We consider manifestations of topological order in time-reversal-symmetric fractional topological liquids (TRS-FTLs), defined on planar surfaces with holes. We derive a general formula for the topological ground state degeneracy of such a TRS-FTL, which applies to cases where the edge modes on each boundary are fully gapped by backscattering terms. The degeneracy is exact in the limit of infinite system size, and is given by q N h , where N h is the number of holes and q is an integer that is determined by the topological field theory. When the degeneracy is lifted by finite-size effects, the holes realize a system of N h coupled spin-like q-state degrees of freedom. In particular, we provide examples where "artificial" Z q quantum clock models are realized. We also investigate the possibility of measuring the topological ground state degeneracy with calorimetry, and briefly revisit the notion of topological order in s-wave BCS superconductors. arXiv:1407.4129v2 [cond-mat.str-el]
I. INTRODUCTION
The robust ground state degeneracy (GSD) that arises in topologically ordered systems [1][2][3] has been an object of intense study over the past quarter-century. Interest in such states of matter has been motivated in large part by the desire to access quasiparticles with non-Abelian statistics, whose nontrivial braiding could be used as a platform for quantum computation. 4 Nevertheless, to date there has been no definitive experimental proof that such non-Abelian quasiparticles exist, nor has there been any direct observation of topological GSD.
There have been several theoretical proposals for the experimental detection of topological degeneracy. One set of proposals for the (putative) non-Abelian ν = 5/2 quantum Hall state focuses on measuring the contribution of the GSD to the electronic portion of the entropy at low temperatures. Observable signatures of this contribution include the thermopower 5,6 and the temperature dependence of the electrochemical potential and orbital magnetization. 7 The thermopower has been measured on several occasions 8,9 with no conclusive signatures. Abelian fractional quantum Hall (FQH) states 10 are also topologically ordered, but the bulk GSD in these systems is only accessible on closed surfaces (e.g., the torus). This is unnatural for experiments, which are confined to finite planar systems, although a recent proposal 11 suggests a transport measurement in a bilayer FQH system that avoids this handicap by effectively altering the topology of the system.
In this paper, we propose that time-reversal-symmetric fractional topological liquids (FTLs) may constitute a promising alternative platform for realizing the topological GSD in experimentally accessible geometries. FTLs with time-reversal symmetry (TRS) have an effective description in terms of doubled Chern-Simons (CS), or so-called BF, theories. 12 Examples of time-reversalsymmetric FTLs with topological order include fractional quantum spin Hall systems, [13][14][15] Kitaev's toric code, 16 and even the s-wave BCS superconductor. 3,17 In the present work we emphasize FTLs whose edge states in planar geometries can be completely gapped without breaking TRS, which is possible when certain criteria are satisfied. 18,19 In these cases, the degenerate ground state manifold is well separated from excited states and the GSD on punctured planar surfaces is accessible experimentally.
Our program for this paper is as follows. We first derive a general formula for the GSD of a doubled CS theory defined on a plane with N h holes, in cases where all helical edge modes are gapped by backscattering terms. 20 This topological degeneracy increases exponentially with the number of holes, and is exact in the limit where all holes are infinitely large and infinitely far apart. We then consider finite-sized systems, where the degeneracy is split exponentially by quasiparticle tunneling processes. In this setting, we argue that the holes themselves realize an effective spin-like system, whose Hilbert space consists of what was formerly the degenerate ground state manifold. We then examine calorimetry as a possible experimental probe of the degeneracy. We argue that, for suitable materials, the contribution of the GSD to the low-temperature heat capacity could be observed experimentally, even in the presence of the expected phononic and electronic backgrounds. Finally, we also briefly revisit the notion of topological order in s-wave superconductors, which was suggested by Wen 3 and investigated in detail by Hansson et al. in Ref. 17. We argue that, for a thin-film superconductor with (3+1)-dimensional electromagnetism, there is indeed a ground state degeneracy, which is related to flux quantization. However, this degeneracy is lifted in a power-law fashion, rather than exponentially, and is therefore not topological in the canonical sense of Refs. 1-3.
II. THE TOPOLOGICAL DEGENERACY
In this section we derive a formula for the ground state degeneracy of a TRS-FTL with gapped edges. We begin with some preliminary information before moving on to the derivation.
A. Definitions and notation
A general time-reversal-symmetric doubled Chern-Simons theory in (2+1)-dimensional space and time has the form 19
L_CS = (1/4π) K_{ij} ε^{µνρ} a^i_µ ∂_ν a^j_ρ + (e/2π) Q_i ε^{µνρ} A_µ ∂_ν a^i_ρ ,   (2.1a)
where i, j = 1, · · · , 2N, µ, ν, ρ = 0, 1, 2, and summation on repeated indices is implied. Here, the 2N × 2N matrix K_{ij} is symmetric, invertible, and integer-valued. The fully antisymmetric Levi-Civita tensor ε^{µνρ} appears with the convention ε^{012} = 1. The components A_µ of the electromagnetic gauge potential are restricted to (2+1)-dimensional space and time, and the vector Q has integer entries that measure the charges of the various CS fields a^i_µ in units of the electron charge e. The theory contains N Kramers pairs of CS fields, which transform into one another under the operation of time-reversal. We will therefore be particularly interested in scenarios where the 2N × 2N matrix K has the following block form, which is consistent with TRS, as was shown in Ref. 19,
K = κ ∆ ∆ T −κ , (2.1b)
where the N × N matrices κ = κ T and ∆ = −∆ T . TRS further imposes that the charge vector possess the block form (see Ref. 19)
Q = .
(2.1c)
The theory (2.1) can also be re-expressed in terms of an equivalent BF theory 21 by defining the linear transfor-
mationã i µ . . = R ij a j µ , where R . . = 1 1 1 −1 , (2.2a)
with 1 the N × N identity matrix. This linear transformation induces the K-matrix and charge vector

K̃ := R^{−1} K R = ( 0    κ̃
                     κ̃^T  0 ) ,   (2.2b)

κ̃ := κ − ∆,   (2.2c)

Q̃ := R^{−1} Q.   (2.2d)

When defined on a manifold with boundary, the CS theory (2.1a) has an associated theory of 2N chiral bosons φ^i at the edge. In the most generic case, the boundary of the system consists of a disjoint union of an arbitrary number of edges, each with a Lagrangian density of the form (in the absence of the gauge field A_µ) [19]

L_E = (1/4π) [ K_{ij} ∂_t φ^i ∂_x φ^j − V_{ij} ∂_x φ^i ∂_x φ^j ] + L_T ,   (2.3)
where K ij is the same 2N × 2N matrix as before and the matrix V ij encodes non-universal information specific to a particular edge. The Lagrangian density L T generically contains all inter-channel tunneling operators,
L_T = Σ_{T∈L} U_T(x) cos( T^T K φ(x) − ζ_T(x) ),   (2.4)

where T is a 2N-dimensional integer vector, φ^T = (φ^1 · · · φ^{2N}), and the sum runs over a set L of tunneling vectors. The helical edge modes can be gapped by such terms provided that the tunneling vectors T_i satisfy

T_i^T Q = 0, ∀i (charge conservation),   (2.5a)
T_i^T K T_j = 0, ∀ i, j (Haldane criterion).   (2.5b)
Strictly speaking, the criterion (2.5a) need not hold in a general system, such as (for example) in the case of a superconductor. In this case, one replaces charge conservation with charge conservation mod 2, so that T T i Q is only constrained to be even. In the next section, we will focus on cases where the criteria (2.5) are satisfied.
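As a concrete illustration of the criteria (2.5) (the specific K, Q, and tunneling vector below are arbitrary example choices, not taken from the text), candidate tunneling vectors can be checked numerically:

```python
import numpy as np

# Example: a single Kramers pair with K = diag(m, -m) and charge vector Q = (1, 1).
m = 3
K = np.array([[m, 0], [0, -m]])
Q = np.array([1, 1])

def satisfies_criteria(T_list, K, Q):
    # charge conservation (2.5a) and Haldane criterion (2.5b)
    charge_ok = all(T @ Q == 0 for T in T_list)
    haldane_ok = all(Ti @ K @ Tj == 0 for Ti in T_list for Tj in T_list)
    return charge_ok and haldane_ok

print(satisfies_criteria([np.array([1, -1])], K, Q))   # True: this backscattering vector qualifies
```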
B. Calculation of the degeneracy
The ground state degeneracy on the torus of a multicomponent Abelian Chern-Simons theory of the form (2.1a) is known on general grounds to be given by | det K|. 1,10, 23 We now present an argument that, for a doubled CS theory with a gapped edge and a K-matrix of the form (2.1b), the ground-state degeneracy of the theory on the annulus
A . . = [0, π] × S 1 (2.6a)
is given by the formula
GSD = √|det K| = Pf ( ∆    κ
                      −κ   ∆^T ) .   (2.6b)
Note that | det K| is the square of an integer, 19,21 so the GSD in this case is also an integer. We will now prove this result.
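The statement that |det K| is a perfect square, so that Eq. (2.6b) yields an integer, can be checked directly for small matrices; the following sketch (with arbitrary symbolic entries a, b, c, d; the block structure follows Eq. (2.1b) with N = 2) compares det K with the square of the Pfaffian of the matrix appearing in Eq. (2.6b):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', integer=True)

kappa = sp.Matrix([[a, b], [b, c]])       # symmetric block
Delta = sp.Matrix([[0, d], [-d, 0]])      # antisymmetric block

K = sp.BlockMatrix([[kappa, Delta], [Delta.T, -kappa]]).as_explicit()
M = sp.BlockMatrix([[Delta, kappa], [-kappa, Delta.T]]).as_explicit()   # matrix in Eq. (2.6b)

def pfaffian4(A):
    # explicit Pfaffian of a 4x4 antisymmetric matrix
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

print(sp.simplify(pfaffian4(M)**2 - K.det()))   # -> 0: det K equals (Pf M)^2
print(sp.expand(pfaffian4(M)))                  # -> b**2 - a*c - d**2 for this example
```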
Gauge invariance in a system with gapped edges
To proceed, we rewrite the Lagrangian density (2.1a), in the absence of the electromagnetic gauge potential A µ , in terms of two separate sets of N CS fields α i and β i ,
L CS = µνρ 4π κ ij α i µ ∂ ν α j ρ − β i µ ∂ ν β j ρ +∆ ij α i µ ∂ ν β j ρ − β i µ ∂ ν α j ρ .
(2.7)
Here, i, j ∈ {1, · · · , N }, and the "new" CS fields are defined as
α i µ (x, t) ≡ a i µ (x, t) and β i µ (x, t) ≡ a i+N µ (x, t)
. We define the CS action on the annulus to be
S CS . . = dt A d 2 x L CS (x, t). (2.8)
Its transformation law under any local gauge transformation of the form
α i µ → α i µ + ∂ µ χ i α , β i µ → β i µ + ∂ µ χ i β , (2.9a)
where χ i α and χ i β are real-valued scalar fields, is
S → S + δS (2.9b)
with the boundary contribution
δS CS . . = dt ∂A dx µ µνρ 4π κ ij χ i α ∂ ν α j ρ − χ i β ∂ ν β j ρ +∆ ij χ i α ∂ ν β j ρ − χ i β ∂ ν α j ρ .
(2.9c)
Here, the boundary ∂A of A is the disjoint union of two circles (∂A . . = S 1 S 1 ) and dx µ . . = µ0σ d σ , with d σ the line element along the boundary.
There are two ways to impose gauge invariance in the doubled Chern-Simons theory S CS . On the one hand, if the criteria (2.5) do not hold, we must demand that there exist a gapless edge theory with the action S E = −δS CS on the boundary ∂A of the annulus. On the other hand, if the criteria (2.5) hold, gauge invariance on the annulus can be achieved by demanding that the anomalous term δS CS = 0 identically. The latter option is accomplished if the following two conditions hold for all i = 1, · · · , N ,
χ i α | ∂A = χ i β | ∂A , α i µ | ∂A = β i µ | ∂A .
(2.10)
Using the above conditions, it is possible to show that Eq. (2.6b) follows in much the same way as does its counterpart on the torus, as we show in the next section. Before proceeding with the full argument, we first provide an intuitive picture of why this is, for the case where ∆ = 0 in Eq. (2.1b). In this case, Eq. (2.7) describes two decoupled CS liquids, one with K-matrix κ and the other with K-matrix −κ. We can imagine that the two CS liquids live on separate copies of the annulus A, which are coupled by the tunneling processes that gap out the edges. The conditions in Eq. (2.10) ensure that the two
FIG. 1: (Color online)
Gluing argument for the special case ∆ = 0. In this case, the CS theory consists of two independent copies, with equal and opposite K-matrices. The tunneling processes (dotted lines) that gap out each pair of counterpropagating edge modes couple the two annuli, and the conditions (2.10) ensure that the two copies of the theory can be consistently "glued" together.
coupled annuli can be "glued" together into a single surface, on which lives a composite CS theory with a GSD given by | det κ| (see Fig. 1). Remarkably, these gluing conditions are also sufficient to treat the general case, where ∆ = 0 (see next section). The gluing conditions (2.10) generalize readily to the case of a system with the topology of an N h -punctured disk. In this generalization, the boundary ∂A is the disjoint union of N h +1 copies of S 1 (∂A = S 1 S 1 · · · S 1 ). Since each of these edges is gapped, anomaly cancellation enforces independent gluing conditions for each copy of S 1 . Using these conditions, it is possible to show that the GSD on the annulus is given by | det K| N h /2 .
Wilson loops, large gauge transformations, and their algebras
We can now use the gluing conditions (2.10), arising as they do from the need to cancel the anomalous boundary term (2.9c), to construct Wilson loop operators, which can in turn be used to determine the dimension of the ground state subspace. To do this, we choose to work with the BF form of the CS action, defined in Eqs. (2.2). We denote the transformed set of CS fields by
a^i_{±,µ} := α^i_µ ± β^i_µ, (2.11a)
so that
L_CS = (ε^{µνρ}/4π) [ κ_{ij} a^i_{+,µ} ∂_ν a^j_{−,ρ} + κ^T_{ij} a^i_{−,µ} ∂_ν a^j_{+,ρ} ], (2.11b)
where the matrix κ is defined in Eq. (2.2c). In this new basis, the gluing conditions (2.10) become Dirichlet boundary conditions on the (−) fields,
χ^i_−|_{∂A} = 0,   a^i_{−,µ}|_{∂A} = 0, (2.12)
for i = 1, · · · , N . Rewriting the Lagrangian density in the gauge a i ±,0 = 0 (this can be done using a gauge transformation obeying the gluing conditions), we obtain
L_CS = (1/4π) [ κ_{ij} ( a^i_{+,2} ∂_0 a^j_{−,1} − a^i_{+,1} ∂_0 a^j_{−,2} ) + κ^T_{ij} ( a^i_{−,2} ∂_0 a^j_{+,1} − a^i_{−,1} ∂_0 a^j_{+,2} ) ], (2.13a)
supplemented by the 2N constraints (i = 1, · · · , N )
∂_1 a^i_{+,2} − ∂_2 a^i_{+,1} = 0,   ∂_1 a^i_{−,2} − ∂_2 a^i_{−,1} = 0. (2.13b)
The constraints (2.13b) are met by the decompositions
a^i_{±,1}(x_1, x_2, t) = ∂_1 χ^i_±(x_1, x_2, t) + ā^i_{±,1}(x_1, t), (2.14a)
a^i_{±,2}(x_1, x_2, t) = ∂_2 χ^i_±(x_1, x_2, t) + ā^i_{±,2}(x_2, t), (2.14b)
of the CS fields, provided χ^i_±(x_1, x_2, t) are everywhere smooth functions of x_1 and x_2, while ā^i_{±,1}(x_1, t) and ā^i_{±,2}(x_2, t) are independent of x_2 and x_1, respectively. Furthermore, the geometry of an annulus is implemented by the boundary conditions
χ i ± (x 1 , x 2 + 2π, t) = χ i ± (x 1 , x 2 , t) (2.15a)
for the fields parametrizing the pure gauge contributions and
χ^i_−(0, x_2, t) = χ^i_−(π, x_2, t) = 0, (2.15b)
ā^i_{−,1}(0, t) = ā^i_{−,1}(π, t) = 0, (2.15c)
a^i_{−,2}(x_2, t)|_{x_1=0} = a^i_{−,2}(x_2, t)|_{x_1=π} = ā^i_{−,2}(x_2, t) = 0, (2.15d)
for the gluing conditions. The coordinate system employed in these definitions is depicted in Fig. 2. The next step is to show that the barred variables decouple from the remaining (pure gauge) degrees of freedom. This can be done by inserting the decomposition (2.14) into the action and using the boundary conditions (2.15). We can now consider the action governing the barred variables alone,
S_top = (1/2π) ∫dt κ_{ij} A^i_2 Ȧ^j_1, (2.16a)
where, for all i = 1, · · · , N , we have defined the global degrees of freedom
A^i_1(t) := ∫_0^π dx_1 ā^i_{−,1}(x_1, t), (2.16b)
A^i_2(t) := ∫_0^{2π} dx_2 ā^i_{+,2}(x_2, t). (2.16c)
FIG. 2: (Color online) Coordinate system on the annulus A = [0, π] × S^1. The inner boundary is at x_1 = 0, while the outer boundary is at x_1 = π. The coordinate x_2 is defined on the circle S^1.
In Eq. (2.16a), we employ the notation Ȧ^j_1 = ∂_t A^j_1 ≡ ∂_0 A^j_1.
According to the topological action (2.16a), the variable κ ij A i 2 /(2π) is canonically conjugate to the variable A j 1 . Canonical quantization then gives the equaltime commutation relations
[A^i_1, A^j_2] = 2πi (κ^{−1})_{ij},   [A^i_1, A^j_1] = [A^i_2, A^j_2] = 0, (2.17)
for i, j = 1, · · · , N . We may now define Wilson loop operators
W^i_1 := e^{iA^i_1},   W^i_2 := e^{iA^i_2}, (2.18a)
whose algebra is found to be
W^i_1 W^j_2 = e^{−2πi (κ^{−1})_{ij}} W^j_2 W^i_1,   [W^i_1, W^j_1] = [W^i_2, W^j_2] = 0. (2.18b)
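For concreteness, the algebra (2.18b) can be realized explicitly for N = 1 and κ = (q) by q-dimensional clock and shift matrices; the following minimal sketch (our illustration, with q chosen arbitrarily) verifies the commutation phase e^{−2πi/q}, showing that a representation of dimension q — matching the GSD obtained below — suffices:

```python
import numpy as np

# Clock-shift realization of the Wilson-loop algebra (2.18b) for N = 1, kappa = (q),
# so that (kappa^{-1})_{11} = 1/q.  q is an illustrative choice.
q = 5
omega = np.exp(-2j * np.pi / q)
W1 = np.diag(omega ** np.arange(q))      # clock matrix
W2 = np.roll(np.eye(q), 1, axis=0)       # shift matrix

lhs = W1 @ W2
rhs = omega * (W2 @ W1)
print(np.allclose(lhs, rhs))             # True: W1 W2 = exp(-2*pi*i/q) W2 W1
```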
There is still a set of symmetries that imposes constraints on the dimension of the Hilbert space associated with S top . In particular, the path integral is invariant under the "large gauge transformations"
A i 1,2 → A i 1,2 + 2π (2.19)
for any i = 1, · · · , N . The large gauge transformations are implemented by the operators
U^i_1 := e^{+i κ_{ij} A^j_2},   U^i_2 := e^{−i κ_{ij} A^j_1}, (2.20a)
which satisfy the algebra
U^i_1 U^j_2 = e^{−2πi κ_{ij}} U^j_2 U^i_1,   [U^i_1, U^j_1] = [U^i_2, U^j_2] = 0, (2.20b)
for any i, j = 1, · · · , N . Because we require that κ is an integer matrix, this means that
[U^i_1, U^j_2] = [U^i_1, U^j_1] = [U^i_2, U^j_2] = 0 (2.21)
for all i, j = 1, · · · , N . Hence, all U i 1 , U i 2 with i = 1, · · · , N can be diagonalized simultaneously. Since any one of U i 1 and U i 2 generates a transformation that leaves the path integral invariant, the vacua of the theory must be eigenstates of any one of U i 1 and U i 2 for i = 1, · · · , N .
Dimension of the ground-state subspace
In order to determine the GSD of the theory, it suffices to determine the number of eigenstates of any one of U i 1 and U i 2 for i = 1, · · · , N . To do this, we follow the argument of Wesolowski et al., 23 which can be adapted to our case with only minor modifications.
First, we define the eigenstates of any one of U i 1 and U i 2 for i = 1, · · · , N by
U^i_1 |Ψ⟩ = e^{iγ^i_1} |Ψ⟩,   U^i_2 |Ψ⟩ = e^{iγ^i_2} |Ψ⟩. (2.22)
Since A i 1 and A j 2 do not commute, we may choose to represent the state |Ψ in the basis for which A i 1 is diagonal by
ψ({A^i_1}) := ⟨{A^i_1}|Ψ⟩. (2.23)
The representation ψ({A i 2 }) follows from the representation ψ({A i 1 }) by a change of basis to the one in which A i 2 is diagonal. The large gauge transformations (2.20a) are represented by
U^i_1 := e^{2π ∂/∂A^i_1},   U^i_2 := e^{−i κ_{ij} A^j_1}, (2.24)
in the basis (2.23). The eigenvalue problem then becomes
U^i_1 ψ({A^i_1}) := ψ( A^1_1, · · · , A^i_1 + 2π, · · · , A^N_1 ) ≡ e^{iγ^i_1} ψ({A^i_1}), (2.25a)
U^i_2 ψ({A^i_1}) := e^{−i κ_{ij} A^j_1} ψ({A^i_1}) ≡ e^{iγ^i_2} ψ({A^i_1}). (2.25b)
Equation (2.25a) implies that we can write the following series for ψ,
ψ({A^i_1}) ≡ ψ(A_1) = e^{iγ_1·A_1/(2π)} Σ_n d(n) e^{in·A_1}, (2.26)
where n = (n 1 , · · · , n N ) T ∈ Z N , A 1 = (A 1 1 , · · · , A N 1 ) T ∈ R N , and γ 1 = (γ 1 1 , · · · , γ N 1 ) T ∈ R N . Second, we seek the constraints on the real-valued coefficients d(n) entering the expansion (2.26) that, as we shall demonstrate, fix the dimension of the ground-state subspace. To this end, we extract from the N ×N matrix κ that was defined in Eq. (2.2c) the family
κ =: [ k^T_1 ; . . . ; k^T_N ] (2.27a)
of N row vectors from Z^N and from its inverse κ^{−1} the family
κ^{−1} =: [ ℓ_1 · · · ℓ_N ] (2.27b)
of N column vectors from Q^N. By construction, these vectors satisfy
k_i · ℓ_j = δ_{ij}. (2.27c)
Using these vectors, we observe that inserting the series (2.26) into the left-hand side of Eq. (2.25b) gives
U^i_2 ψ(A_1) = e^{iγ_1·A_1/(2π)} e^{−i k_i·A_1} Σ_n d(n) e^{in·A_1}
= e^{iγ_1·A_1/(2π)} Σ_n d(n + k_i) e^{in·A_1}
= e^{iγ^i_2} ψ(A_1), (2.28)
which implies
d(n + k_i) = e^{iγ^i_2} d(n) (2.29)
for all i = 1, · · · , N . The constraint (2.29) is automatically satisfied by demanding that
d(n) = e^{iγ_2·(κ^{−1})^T n} d̃(n) (2.30a)
with
d̃(n) = d̃(n + k_i), (2.30b)
since
γ_2 · (κ^{−1})^T k_i = γ^j_2 (ℓ_j · k_i) = γ^i_2. (2.30c)
Hence, insertion of (2.30a) into the expansion (2.26) that solves the eigenvalue problem (2.25a) gives the expansion
ψ(A_1) = e^{iγ_1·A_1/(2π)} Σ_n e^{iγ_2·(κ^{−1})^T n} d̃(n) e^{in·A_1} (2.31)
that solves the eigenvalue problem (2.25b). Third, condition (2.30b) implies that the set of vectors {n} forms a lattice with basis vectors {k_i}. The number of inequivalent points in the lattice is therefore given by
r := |det( k_1 · · · k_N )| = |det κ^T| = |det κ|. (2.32)
This means that we can decompose any n as
n = v m + p i k i ,(2.33)
where p i ∈ Z and we have introduced r linearly independent vectors v m . We can therefore rewrite
ψ(A_1) = Σ_{m=1}^{r} d̃_m f_m(A_1), (2.34a)
where
d̃_m := d̃(v_m + p_i k_i) = d̃(v_m), (2.34b)
and
f_m(A_1) := e^{iγ_1·A_1/(2π)} Σ_{p_1,··· ,p_N} e^{iγ_2·(κ^{−1})^T (v_m + p_i k_i)} e^{i(v_m + p_i k_i)·A_1}. (2.34c)
Since any ψ(A 1 ) in the ground-state manifold can be written in this way, we have demonstrated that there are r = | det κ| linearly independent ground-state wavefunctions f m (A 1 ) in the topological Hilbert space. In other words, we have shown that
GSD = |det κ| = √|det K|, (2.35)
with K defined in Eq. (2.1b). This is precisely the result advertised in Eq. (2.6b). Note that because κ is an integer-valued matrix, it has an integer-valued determinant. Consequently, √|det K| = |det κ| is an integer. An extension of this argument to the case of a plane with N_h holes, along the lines discussed at the end of Sec. II B 1, yields a set of Wilson loops like those in Eqs. (2.18a) for each hole. [See, e.g., Eqs. (3.1) in the next section.] Since these sets of Wilson loops are completely independent, one obtains a degeneracy of size |det K|^{N_h/2}.
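The counting behind Eqs. (2.32)-(2.33) — that Z^N contains exactly |det κ| inequivalent vectors modulo the lattice spanned by the k_i — can also be checked by brute force. The sketch below is our illustration, with an arbitrary small integer matrix standing in for κ:

```python
import numpy as np
from itertools import product

# Brute-force check of Eq. (2.32): the number of vectors n in Z^N that are
# inequivalent modulo the lattice generated by the rows k_i of kappa is |det kappa|.
kappa = np.array([[2, 1],
                  [0, 3]])        # illustrative integer matrix, not from the paper

Binv = np.linalg.inv(kappa.T)     # columns of kappa.T are the basis vectors k_i
classes = set()
for n in product(range(-8, 9), repeat=2):
    frac = Binv @ np.array(n)                 # coordinates of n in the k_i basis
    frac = np.mod(np.round(frac, 9), 1.0)     # reduce modulo the lattice
    classes.add(tuple(np.round(frac, 6)))

print(len(classes), abs(round(np.linalg.det(kappa))))   # both give 6
```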
III. APPLICATIONS
With the results of Sec. II in hand, we now explore some of the consequences of Eq. (2.6b). We begin by examining the fate of the topological degeneracy in finitesized systems, before considering the possibility of using calorimetry to detect experimental signatures of the degeneracy. We close the section by re-evaluating the proposed 17 topological field theory for the s-wave BCS superconductor in light of the results of this paper.
A. Finite systems: clock models and beyond
On closed manifolds, the topological degeneracy is exact only in the limit of infinite system size. This is a result of the fact, pointed out by Wen and Niu, 2 that quasiparticle tunneling events over distances of the order of the system size lift the degeneracy exponentially. This observation was also confirmed numerically for the case of the (2+1)-dimensional Abelian Higgs model on the torus by Vestergren et al. in Refs. 24 and 25. A similar splitting occurs for manifolds with boundary, like those studied in this work. For a planar system with many holes, each of which carries a q-fold degeneracy (where q := √|det K|) in the limit of infinite system size, there are two kinds of tunneling events that can lift the degeneracy. These are (1) tunnelings that encircle a single hole and (2) tunnelings between boundaries. Below we argue that, in a finite-sized system with N_h holes, the array of N_h coupled q-state degrees of freedom can be modeled as a spin-like system [see Fig. 3(a)].
To see how this arises, we first note that for a system with N h holes it is possible to define a set of Wilson loops for each hole. Analogously to Eqs. (2.18a), we define
W^i_{1,j} := exp( i ∫_{C_{1,j}} dℓ · ā^i_−(x, t) ), (3.1a)
W^i_{2,j} := exp( i ∮_{C_{2,j}} dℓ · ā^i_+(x, t) ), (3.1b)
where the open curve C_{1,j} connects the j-th hole to the outer boundary, and the closed curve C_{2,j} encircles the j-th hole [see Fig. 3(b)]. The tunneling processes that lift the degeneracy are then captured by an effective Hamiltonian built from these Wilson loops,
H_eff := Σ_{i=1}^{N} Σ_{j=1}^{N_h} [ h^i_{1,j} W^i_{1,j} + h^i_{2,j} W^i_{2,j} + Σ_{k=1}^{N_h} J^i_{jk} W^i_{1,jk} ] + · · · , (3.2)
where the omitted terms include higher powers of the Wilson loops as well as all necessary Hermitian conjugates. In practice, however, all couplings in H eff are exponentially small in the shortest available length scale, which limits the tunneling rates. For example, J i jk ∝ e −c d jk /ξ , where c is a constant of order one, d jk is the distance between holes j and k [see Fig. 3(a)], and ξ is a length scale associated with quasiparticle tunneling. 26 It is interesting to note that the Hamiltonian H eff admits a certain amount of external control-the holes can be arranged in arbitrary ways, and the magnitudes of the couplings can be tuned by changing the length scales R, d jk , and D. In particular, many terms in H eff can be tuned to zero by varying these length scales. We will make use of this freedom below.
To illustrate in what sense the effective Hamiltonian (3.2) can be thought of as a spin-like system, we consider a specific class of examples. In particular, we consider the family of TRS-FTLs defined by
K := [ q  0 ; 0  −q ],   Q := ( 2, 2 )^T, (3.3)
where q is an even integer. One verifies using Eq. (2.5) that a single tunneling term of the form (2.4) with T = (1, −1) T is sufficient to gap out the counterpropagating edge modes without breaking TRS as defined in Ref. 19. In this case, Eq. (2.6b) predicts a q-fold degeneracy per hole. To obtain the explicit effective Hamiltonian, we define
σ j . . = W 1,j , τ j . . = W 2,j , (3.4a)
whose only nonvanishing commutation relations arise from the algebra
σ j τ j = e −2πi/q τ j σ j . (3.4b)
One can check by writing down explicit representations of σ_j and τ_j that they also satisfy
σ^q_j = τ^q_j = 1. (3.5)
For example, in the case q = 2 we may use Pauli matrices, e.g.,
σ_j = σ^z,   τ_j = σ^x, (3.6)
and in the case q = 4 we may use
σ_j := diag( 1, e^{−iπ/2}, e^{−iπ}, e^{−i3π/2} ), (3.7a)
τ_j := [ 0 0 0 1 ; 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ]. (3.7b)
For a system with N_h holes of size R arranged in a one-dimensional chain with lattice spacing d, the effective Hamiltonian in the limit D ≫ d, R becomes that of a one-dimensional Z_q quantum clock model (see Ref. 27 and references therein),
H_eff := Σ_{i=1}^{N_h−1} J_i ( σ†_i σ_{i+1} + H.c. ) + Σ_{i=1}^{N_h} h_i ( τ_i + H.c. ), (3.8)
where J_i ∝ e^{−c_1 d/ξ} and h_i ∝ e^{−c_2 R/ξ},
with the real constants c_1 and c_2 of order unity. For simplicity, we have constrained the couplings J_i and h_i to be real, although their magnitude and sign is allowed to vary from hole to hole (hence the subscripts i). Note that in the above Hamiltonian, terms linear in σ_j do not appear, as the associated couplings are suppressed by factors of order e^{−c_3 D/ξ} ≪ e^{−c_1 d/ξ}, e^{−c_2 R/ξ}. Similarly, longer-range two-body terms, as well as higher powers of the σ_j and τ_j, are also omitted, as they correspond to higher-order tunneling processes.
The Hamiltonian of the clock model (3.8) is invariant under the symmetry operation
H_eff → S H_eff S^{−1}, (3.9a)
generated by
S := Π_{i=1}^{N_h} τ†_i. (3.9b)
Indeed, under the conjugation by S, τ†_j → τ†_j and σ†_j → e^{−2πi/q} σ†_j for all j. This Z_q symmetry can be thought of as a remnant of the q^{N_h}-fold topological degeneracy of the TRS-FTL, which would be present in the limit d, R, D → ∞.
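The Z_q symmetry (3.9) is straightforward to confirm numerically. The sketch below is our illustration, with arbitrary small values of q and N_h and random real couplings; it builds H_eff of Eq. (3.8) from clock and shift matrices of the type in Eqs. (3.7) and checks that it commutes with S of Eq. (3.9b):

```python
import numpy as np
from functools import reduce

# Check that the clock Hamiltonian (3.8) commutes with S of Eq. (3.9b).
q, Nh = 3, 3                                   # illustrative values
omega = np.exp(-2j * np.pi / q)
sigma = np.diag(omega ** np.arange(q))         # clock matrix, cf. Eq. (3.7a)
tau = np.roll(np.eye(q), 1, axis=0)            # shift matrix, cf. Eq. (3.7b)
rng = np.random.default_rng(0)
J, h = rng.normal(size=Nh - 1), rng.normal(size=Nh)

def site_op(op, j):
    """Embed a single-site operator op on site j of the Nh-site chain."""
    ops = [np.eye(q)] * Nh
    ops[j] = op
    return reduce(np.kron, ops)

H = sum(J[i] * site_op(sigma.conj().T, i) @ site_op(sigma, i + 1) for i in range(Nh - 1))
H = H + sum(h[i] * site_op(tau, i) for i in range(Nh))
H = H + H.conj().T                             # adds the Hermitian conjugates of Eq. (3.8)

S = reduce(np.kron, [tau.conj().T] * Nh)
print(np.allclose(H @ S, S @ H))               # True
```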
B. Probing the topological degeneracy with calorimetry
In this section, we consider experimental avenues to detect the topological degeneracy of a punctured TRS-FTL. We focus our attention on calorimetry as a possible probe. In a sample with N_h holes, the ground state degeneracy provides a contribution S_GSD = N_h k_B ln q, where k_B is the Boltzmann constant and q = √|det K|, to the total entropy S_tot. If the areal density of holes is kept fixed, then for a sample of length L, we have S_GSD ∼ L² for the topological contribution, which is extensive. This suggests that, were a suitable material to be discovered, one might be able to detect the topological degeneracy of a punctured TRS-FTL by measuring its heat capacity. Such a measurement is feasible with current technology, as membrane-based nanocalorimeters enable the determination of heat capacities C_V in µg samples (and smaller), to an accuracy of δC_V/C_V ∼ 10^{−4}-10^{−5} down to temperatures of order 100 mK. [28][29][30][31] We first determine the topological contribution to the heat capacity for some particular examples. To do this, we return to the class of TRS-FTLs defined in Eq. (3.3). The heat capacity in this case is easiest to determine from the clock model of Eq. (3.8) in the paramagnetic limit J_i → 0, which is achieved for d ≫ R [see Fig. 3(a)]. Setting h_i = h for convenience, we see that the clock model can be rewritten, after a change of basis, as
H_eff = h Σ_{i=1}^{N_h} ( σ_i + σ†_i ) = 2h Σ_{i=1}^{N_h} cos( 2π n_i / q ), (3.10)
where n i = 0, · · · , q−1. Consequently the partition function is given by
Z = [ Σ_{n=0}^{q−1} e^{−2βh cos(2πn/q)} ]^{N_h}, (3.11)
where β . . = 1/(k B T ) and T is the temperature. The topological heat capacity at constant volume, C top V , is then determined from the partition function by standard methods. For example,
C^top_V = (N_h h²/(k_B T²)) ×
  4 sech²( 2h/(k_B T) ) for q = 2,
  2 sech²( h/(k_B T) ) for q = 4,
  [ 9 cosh( h/(k_B T) ) + cosh( 3h/(k_B T) ) + 8 ] / [ 2 cosh( h/(k_B T) ) + cosh( 2h/(k_B T) ) ]² for q = 6,
(3.12)
and so on.
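As a consistency check (ours, not part of the original text), the closed forms in Eq. (3.12) can be compared against a direct numerical second derivative of ln Z from Eq. (3.11); in units k_B = h = 1 and for N_h = 1 the two agree to the accuracy of the finite difference:

```python
import numpy as np

# Topological heat capacity from Eq. (3.11), C_V = k_B beta^2 d^2 ln(Z)/d beta^2,
# compared with the closed forms of Eq. (3.12).  Units: k_B = h = 1, N_h = 1.
def lnZ1(beta, q):
    n = np.arange(q)
    return np.log(np.sum(np.exp(-2.0 * beta * np.cos(2 * np.pi * n / q))))

def C_numeric(T, q, eps=1e-4):
    b = 1.0 / T
    return b**2 * (lnZ1(b + eps, q) - 2 * lnZ1(b, q) + lnZ1(b - eps, q)) / eps**2

def C_closed(T, q):
    x = 1.0 / T
    if q == 2:
        return 4 * x**2 / np.cosh(2 * x)**2
    if q == 4:
        return 2 * x**2 / np.cosh(x)**2
    if q == 6:
        return x**2 * (9 * np.cosh(x) + np.cosh(3 * x) + 8) / (np.cosh(2 * x) + 2 * np.cosh(x))**2
    raise ValueError("closed form listed only for q = 2, 4, 6")

for q in (2, 4, 6):
    print(q, C_numeric(1.3, q), C_closed(1.3, q))   # each pair agrees
```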
To date, there has been no experimental realization of a TRS-FTL or fractional topological insulator. Since background contributions to the heat capacity are material-dependent, it is difficult to provide a precise estimate of the observable effect. However, we can nevertheless identify some constraints on the possible materials that would favor such a measurement.
To do this, let us estimate the various background contributions to the heat capacity of a TRS-FTL. First, we note that, because any TRS-FTL must have a gap ∆, the electronic contribution C^el_V to the heat capacity is
C^el_V ∝ (∆/T) e^{−η ∆/(k_B T)}, (3.13a)
where η is a constant of order one. The exponential suppression of C el V implies that this contribution is always negligible at sufficiently small temperatures.
However, one must also consider the phononic contribution, which follows a Debye power law at low temperatures. This contribution scales with the sample volume, which could be three-dimensional if the TRS-FTL is formed in a heterostructure, as is the case in quantum Hall systems. This fact, which was noted in Ref. 7, poses the greatest challenge to detecting the topological contribution to the heat capacity, which scales with the area of the two-dimensional sample. In principle, however, one may assume that the TRS-FTL lives in a strictly two-dimensional sample, or at least in a thin film. In this case, we have that the phononic contribution C^ph_V to the heat capacity is
C ph V ∝ k B (T /T D ) 2 , (3.13b)
where T D is the Debye temperature (100 K, say). 32 We verified numerically, by simulating a square lattice of masses and springs, that the presence or absence of holes has little effect on the phonon spectrum as long as the holes are sufficiently small. We therefore expect the Debye law to hold both with and without holes, as long as one takes into account the excluded volume due to the holes. The total heat capacity is obtained by adding the three contributions:
C V (T ) = N a C top V (T ) + ν C ph V (T ) + 1 N a C el V (T ) , (3.13c)
where N_a is the number of atoms in the sample and ν := N_h/N_a determines the number of holes. The above formula leads to the estimate of the specific heat curve presented in Fig. 4. A square array of 22000 holes on a side produces an excess of up to 30% (for q = 6) on top of the background at T = 0.1 K, which is well above the experimental error δC_V/C_V ∼ 10^{−4}. We now comment on possible difficulties with this measurement. Perhaps the most important of these is the fact that the energy scales J_i and h_i entering Eq. (3.8) are unknown. It may be possible to circumvent this issue by exploiting the exponential sensitivity of the couplings to the length scales R and d. For example, one could prepare samples with d ≫ R to eliminate the first term in Eq. (3.8), and compare results for different values of R to determine whether it is possible to resolve the effect. As long as h ≲ 0.1 ∆, it should be possible to tune R such that the effect is visible.
C. Are superconductors topologically ordered?
In an insightful paper, it was argued by Hansson et al. in Ref. 17 that ordinary s-wave BCS superconductors are topologically ordered. In fact, it was shown that, when the electromagnetic gauge field is treated dynamically and confined to (2+1) dimensional space and time, the superconductor admits a description in terms of a BF theory like the one defined in Eqs. (2.2), with
K = [ 0  2 ; 2  0 ]. (3.14)
Furthermore, it was shown that the edge states that arise when the above theory is defined in a finite planar geometry are generically gapped by Cooper pair creation terms. The proposed theory is consistent with the time-reversal symmetry of the s-wave superconductor and captures the statistical phase of π that is acquired by an electron upon encircling a vortex. This effective theory, which is the same as that of the Z 2 lattice gauge theory in its deconfined phase, predicts a four-fold GSD on the torus, whose exponential splitting in finite systems was verified numerically in Refs. 24 and 25. Since the theory defined by Eq. (3.14) falls squarely within the class of theories studied in this paper, it is tempting to draw the conclusion that the s-wave superconductor exhibits a two-fold GSD on the annulus. Below we argue that, while this is indeed the case, the degeneracy is not exponential but power-law in nature, and therefore is not what one might call a topological degeneracy in the canonical sense of Refs. 1-3. The reason for this is that the topological nature of the superconductor results from the dynamics of the electromagnetic gauge field, which, in a real planar superconductor, is not confined to the sample itself, but rather extends through all three spatial dimensions. Consequently, the true electromagnetic gauge field that is present in the superconductor can be measured by local external probes.
To see how this coupling to the environment lifts the degeneracy in a power-law fashion, let us consider the origin of the two-fold degeneracy. Recall that for an annular superconductor (a thin-film mesoscopic ring, for example), the phase of the superconducting order parameter winds by 2π around the hole if a flux quantum φ_0 = h/2e is trapped inside. This indicates that the electronic spectrum of the superconductor cannot be used to distinguish between cases where an even (φ/φ_0 = 0 mod 2) or odd (φ/φ_0 = 1 mod 2) number of flux quanta penetrate the hole. This is precisely the origin of the degeneracy. However, because the electromagnetic field also exists outside the sample, there is an additional electromagnetic energy cost associated with having a flux quantum trapped in the hole. If we assume for simplicity that the flux is distributed uniformly over the hole (radius R) and does not penetrate into the superconductor, then the energy cost is proportional to
∫_V d³r |B|² = (φ_0²/(2π R²)) L_z, (3.15)
where V is the interior of the cylinder in Fig. 5, and L z is the height of the cylinder. Strictly speaking, because the magnetic field lines must close outside the annulus, one needs to replace L z by a length scale bounded from below by the outer radius of the annulus. This energy cost vanishes as 1/R for R, L z → ∞, which means that the ground state degeneracy is lifted as a power law, rather than exponentially. The reason underlying this power-law splitting is the fact that the electromagnetic gauge field is not an emergent gauge field in the same sense as the Chern-Simons fields that are present in, say, a fractional topological insulator with gapped edges. To elaborate on this distinction, we first recall that the topological degeneracy derived in Ref. 17 arises from a dynamical treatment of the electromagnetic gauge field in (2+1)-dimensional space and time. The topological sectors in which this degeneracy is encoded reside in the Hilbert space of the electromagnetic gauge field, which is in turn entangled with the Hilbert space of the electronic degrees of freedom. Since the photonic degrees of freedom in a real annular superconductor also exist outside the sample, there is nothing to prevent the environment from fixing a topological sector. For example, the presence of an external magnetic field in the hole can privilege one topological sector over the other by fixing the flux through the hole.
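To get a feel for the scales involved, the following order-of-magnitude sketch (ours; the 1/(2µ_0) magnetostatic prefactor and the choice L_z ∼ R are added for the estimate) evaluates the electromagnetic cost of trapping a single flux quantum and its 1/R decay:

```python
import numpy as np

# Order-of-magnitude electromagnetic cost of trapping one flux quantum phi_0 = h/(2e)
# in a hole of radius R, E ~ phi_0^2 * L_z / (2 mu_0 pi R^2) with L_z ~ R (our estimate).
mu0 = 4 * np.pi * 1e-7            # T m / A
phi0 = 2.067833848e-15            # Wb
eV = 1.602176634e-19              # J
for R in (1e-6, 1e-5, 1e-4):      # hole radius in meters
    E = phi0**2 * R / (2 * mu0 * np.pi * R**2)
    print(f"R = {R:.0e} m:  E ~ {E/eV:.3f} eV")   # decays as 1/R
```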
It is crucial to contrast this with the case of a "true" TRS-FTL, where the Chern-Simons fields arise naturally from electron-electron interactions. In this case, the topological sectors reside in the Hilbert space of the electrons alone, and the CS fields do not exist outside the sample. Inserting an electromagnetic flux through the hole of an annular TRS-FTL switches between topological sectors, but does not betray any information about the identity of the initial or final sector. For this reason, the degeneracy of different topological sectors is completely protected from the environment in the limit of infinite system size.
IV. SUMMARY AND CONCLUSION
In this paper we have derived a general formula for the topological ground state degeneracy of a time-reversal symmetric, multi-component, Abelian Chern-Simons theory. The formula, which holds when the edge states of the theory are gapped, says that the GSD of the system on a planar surface with N_h holes is given by |det K|^{N_h/2}, where K is the K-matrix. We then examined the situation where this topological degeneracy is split exponentially by finite-size effects, and found that the set of N_h holes admits a description in terms of an effective spin-like system whose couplings can be tuned by varying the sizes and arrangement of the holes. We also examined calorimetry as a means of detecting the topological degeneracy. The proposed experiment would measure the contribution of the topological degeneracy to the heat capacity at low temperatures, which we argued could be visible on top of the expected electronic and phononic backgrounds as long as the host material is sufficiently thin. Finally, in light of these results, we revisited the notion that ordinary s-wave superconductors are topologically ordered. We argued that, while thin-film superconductors do indeed possess a ground state degeneracy on punctured planar surfaces, this degeneracy is lifted in a power-law, rather than an exponential, fashion due to the (3+1)-dimensional nature of the electromagnetic gauge field.
We close by pointing out several possible extensions of this work. First, we note that our results concerning the ground state degeneracy should still apply to TRS-FTLs where the backscattering terms of Eq. (2.4) do not respect time-reversal symmetry. We could therefore also have considered in this paper fractional topological insulators whose protected edge modes are gapped by perturbations that break TRS, as is done in Refs. 33 and 34. Second, it would be interesting to determine what other kinds of "artificial" spin-like systems could be realized in TRS-FTLs with more complicated K-matrices than those in the class of Eq. (3.3). It is conceivable that remnants of the topological degeneracy may manifest themselves as exotic properties of these less conventional models. Finally, we must point out that a fractionalized two-dimensional state of matter with time-reversal symmetry has not yet been discovered experimentally, and that the search for such a state must remain a priority.
FIG. 3: (Color online) A punctured TRS-FTL with gapped edges. (a) Schematic representation of an "artificial" spin-like system. In the limit D ≫ d, R, each hole (white square) carries with it a q-fold topological degeneracy that is split exponentially by tunneling processes that encircle (red lines) or connect (green lines) the holes. (b) Wilson loops defined in Eqs. (3.1). The dashed line represents the product of the two Wilson loops above it, which connects the two holes.
FIG. 4: (Color online) Total heat capacity for a monolayer TRS-FTL with N_a = 10^14. The topological contribution is shown (above background) for q = 2, 4, and 6. The parameters used for the topological contribution were ν = 5 × 10^{−6} (∼ 22000² holes) and h/k_B ≈ 0.321 K, which leads to a maximum excess (for q = 6) of ∼ 30% over the background (blue curve) near T = 0.1 K.
FIG. 5: (Color online) Trapping a flux quantum inside a superconducting ring. Confining the flux inside the ring costs no energy for the electrons inside the superconductor, but there is an electromagnetic energy cost obtained by integrating the enclosed magnetic field intensity over the interior of the dashed cylinder, which we denote V.
Acknowledgments
We are grateful to Kurt Clausen, Eduardo Fradkin, Hans Hansson, Shivaji Sondhi, Chenjie Wang, and Frank Wilczek for enlightening discussions. Upon completion of this work, we were made aware by Shinsei Ryu of Ref. 35.
1. X.-G. Wen, Phys. Rev. B 40, 7387 (1989).
2. X.-G. Wen and Q. Niu, Phys. Rev. B 41, 9377 (1990).
3. X.-G. Wen, Int. J. Mod. Phys. B 5, 1641 (1991).
4. C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
5. K. Yang and B. I. Halperin, Phys. Rev. B 79, 115317 (2009).
6. Y. Barlas and K. Yang, Phys. Rev. B 85, 195107 (2012).
7. N. R. Cooper and A. Stern, Phys. Rev. Lett. 102, 176807 (2009).
8. W. E. Chickering, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 81, 245319 (2010).
9. W. E. Chickering, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 87, 075302 (2013).
10. X.-G. Wen and A. Zee, Phys. Rev. B 46, 2290 (1992).
11. M. Barkeshli, Y. Oreg, and X.-L. Qi, e-print arXiv:1401.3750 (2014).
12. M. Freedman, C. Nayak, K. Shtengel, K. Walker, and Z. Wang, Ann. Phys. 310, 428 (2004).
13. C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
14. C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005).
15. B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. 96, 106802 (2006).
16. A. Y. Kitaev, Ann. Phys. 303, 2 (2003).
17. T. H. Hansson, V. Oganesyan, and S. L. Sondhi, Ann. Phys. 313, 497 (2004).
18. M. Levin and A. Stern, Phys. Rev. Lett. 103, 196803 (2009).
19. T. Neupert, L. Santos, S. Ryu, C. Chamon, and C. Mudry, Phys. Rev. B 84, 165107 (2011).
20. A formula for the GSD on the cylinder of a TRS-FTL with gapped edge states was also derived in Ref. 36. However, that formula relies on detailed knowledge of the edge theory, whereas the formula derived in this paper relies only on knowledge of the bulk CS theory and the stability of the edge theory against backscattering terms. We have verified that our formula agrees with that derived in Ref. 36 for the specific examples discussed in this paper.
21. L. Santos, T. Neupert, S. Ryu, C. Chamon, and C. Mudry, Phys. Rev. B 84, 165138 (2011).
22. F. D. M. Haldane, Phys. Rev. Lett. 74, 2090 (1995).
23. D. Wesolowski, Y. Hosotani, and C.-L. Ho, Int. J. Mod. Phys. A 9, 969 (1994).
24. A. Vestergren, J. Lidmar, and T. H. Hansson, Europhys. Lett. 69, 256 (2005).
25. A. Vestergren and J. Lidmar, Phys. Rev. B 72, 174515 (2005).
26. In the semiclassical approximation employed in Ref. 2, ξ ∼ (m*∆)^{−1/2}, where m* is the effective mass of the quasiparticle and ∆ is the gap to quasiparticle excitations.
27. P. Fendley, J. Stat. Mech. Theor. Exp. P11020 (2012).
28. J.-L. Garden et al., Thermochim. Acta 492, 16 (2009).
29. F. R. Ong, O. Bourgeois, S. E. Skipetrov, and J. Chaussy, Phys. Rev. B 74, 140503 (2006).
30. S. Tagliati, V. M. Krasnov, and A. Rydh, Rev. Sci. Instrum. 83, 055107 (2012).
31. S. Tagliati and A. Rydh, J. Phys.: Conf. Ser. 400, 022120 (2012).
32. If instead we used the three-dimensional Debye formula, we would have C^ph_V ∼ T³, which would produce an even smaller contribution at low temperatures, so long as the sample is not too thick.
33. N. H. Lindner, E. Berg, G. Refael, and A. Stern, Phys. Rev. X 2, 041002 (2012).
34. J. Motruk, A. M. Turner, E. Berg, and F. Pollmann, Phys. Rev. B 88, 085115 (2013).
35. J. Wang and X.-G. Wen, e-print arXiv:1212.4863 (2012).
36. C. Wang and M. Levin, Phys. Rev. B 88, 245136 (2013).
| []
|
[
"Manipulating electron waves in graphene using carbon nanotube gating",
"Manipulating electron waves in graphene using carbon nanotube gating"
]
| [
"Shiang-Bin Chiu \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n\nDepartment of Physics\nNational Taiwan University\n10617TaipeiTaiwan\n",
"Alina Mreńca-Kolasińska \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n\nFaculty of Physics and Applied Computer Science\nAGH University of Science and Technology\nal. Mickiewicza 3030-059KrakówPoland\n",
"Ka Long Lei \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n",
"Ching-Hung Chiu \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n",
"Wun-Hao Kang \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n",
"Szu-Chao Chen \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n",
"Ming-Hao Liu \nDepartment of Physics\nNational Cheng Kung University\n70101TainanTaiwan\n"
]
| [
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Department of Physics\nNational Taiwan University\n10617TaipeiTaiwan",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Faculty of Physics and Applied Computer Science\nAGH University of Science and Technology\nal. Mickiewicza 3030-059KrakówPoland",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan",
"Department of Physics\nNational Cheng Kung University\n70101TainanTaiwan"
]
| []
| Graphene with its dispersion relation resembling that of photons offers ample opportunities for applications in electron optics. The spacial variation of carrier density by external gates can be used to create electron waveguides, in analogy to optical fiber, with additional confinement of the carriers in bipolar junctions leading to the formation of few transverse guiding modes. We show that waveguides created by gating graphene with carbon nanotubes (CNTs) allow obtaining sharp conductance plateaus, and propose applications in the Aharonov-Bohm and two-path interferometers, and a pointlike source for injection of carriers in graphene. Other applications can be extended to Bernal-stacked or twisted bilayer graphene or two-dimensional electron gas. Thanks to their versatility, CNT-induced waveguides open various possibilities for electron manipulation in graphene-based devices. | 10.1103/physrevb.105.195416 | [
"https://arxiv.org/pdf/2203.00923v2.pdf"
]
| 247,218,066 | 2203.00923 | 9068053d47ec1e11259227e52489152b60936f4f |
Manipulating electron waves in graphene using carbon nanotube gating
(Dated: May 18, 2022)
Shiang-Bin Chiu
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Department of Physics
National Taiwan University
10617TaipeiTaiwan
Alina Mreńca-Kolasińska
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Faculty of Physics and Applied Computer Science
AGH University of Science and Technology
al. Mickiewicza 3030-059KrakówPoland
Ka Long Lei
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Ching-Hung Chiu
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Wun-Hao Kang
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Szu-Chao Chen
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Ming-Hao Liu
Department of Physics
National Cheng Kung University
70101TainanTaiwan
Graphene with its dispersion relation resembling that of photons offers ample opportunities for applications in electron optics. The spacial variation of carrier density by external gates can be used to create electron waveguides, in analogy to optical fiber, with additional confinement of the carriers in bipolar junctions leading to the formation of few transverse guiding modes. We show that waveguides created by gating graphene with carbon nanotubes (CNTs) allow obtaining sharp conductance plateaus, and propose applications in the Aharonov-Bohm and two-path interferometers, and a pointlike source for injection of carriers in graphene. Other applications can be extended to Bernal-stacked or twisted bilayer graphene or two-dimensional electron gas. Thanks to their versatility, CNT-induced waveguides open various possibilities for electron manipulation in graphene-based devices.
I. INTRODUCTION
Graphene's linear dispersion relation, resembling the one of photons, inspired plethora of applications of graphene for electron optics. External gates can be used to locally tune the Fermi energy, which, by analogy to optics, plays the role of the refractive index. Moreover, graphene can be smoothly modulated between electron and hole conduction, thus it is possible to create junctions between regions of opposite polarity. Thanks to this flexible control of the carrier density, electrostatically defined optical elements such as lenses [1][2][3][4][5], collimators [6,7], Fabry-Pérot [8][9][10] and Mach-Zehnder interferometers [11,12] or microcavities [13][14][15][16][17] are realizable in graphene and have been widely explored both theoretically and experimentally. Furthermore, unlike photons, carriers in graphene are charged, which opens up opportunities for applications beyond the regular optics, including manipulation with external magnetic field for transverse magnetic focusing [18][19][20] or the Aharonov-Bohm effect [21][22][23].
The possibility of spatial variation of the potential profile can be utilized to form electron waveguides, with three regions of varying carrier density being counterparts of materials with different refractive indices in the optical fiber. By analogy to the total internal reflection of light in the waveguide core having the refractive index higher than the surrounding cladding, in a channel induced electrostatically, electrons incident below the critical angle are trapped and propagate along the channel [24]. In addition to this optical fiber guiding (OFG), when the polarity at the interface is inverted, a bipolar pnp or npn junction is formed which can impose additional carrier confinement, leading to formation of few guiding modes.
Few-mode guiding in graphene has been widely discussed in theory [25][26][27][28][29][30][31] and successfully realized in experiments [32,33] which employed narrow electrostatic gates. However, in waveguides induced by electrostatic gates the interface is bound to be irregular. A way to circumvent these limitations is to use a carbon nanotube (CNT) as a gate, which can induce a sharp and regular interface. Moreover, the CNT shape can be controlled to some extent [34,35], allowing for flexible design of the waveguide geometry. Recent advancements in the fabrication of nanostructures, and, in particular, efficient transfer and manipulation of CNTs for the assembly of nanodevices [36][37][38][39][40], open up possibilities for precise control over the CNT position and orientation. Using a CNT as a gate for graphene has been proposed in theoretical works [41][42][43] as well as realized experimentally in the capacitive measurement of graphene's local density of states [44] and Coulomb drag between graphene and CNT [45]. However, no transport investigations of guiding by a CNT-induced channel have been conducted so far.
In this work, we consider the quantum transport of carriers in a system gated by charged CNT [ Fig. 1(a)]. We demonstrate the versatility of CNT-induced guiding channels in graphene, which can be utilized to form extremely narrow and sharp channels, electrostatically defined quantum rings [33,46], pointlike sources [3], interferometers, and other building blocks for nanodevices. Their application is not limited to single-layer graphene (SLG), and, as we show in the following, it can also be utilized in other materials, including Bernal-stacked bilayer graphene (BLG), decoupled twisted bilayer graphene (dtBLG), and semiconductor nanostructures hosting two-dimensional electron gas (2DEG).
II. QUANTIZED ELECTRON WAVEGUIDE
A. Electrostatics
In the following, we study two-dimensional systems (SLG, BLG, dtBLG and 2DEG) placed above a global back gate at voltage V_bg and gated from the top by a CNT at voltage V_cnt. Figure 1(a) shows the 3D design of the considered device. Although the experimental design differs between graphene devices and 2DEG, here for the sake of comparison we consider the same device geometry for each system: a graphene system sandwiched between two hBN layers [blue in Fig. 1(a)] and placed on a SiO_2 substrate [light gray in Fig. 1(a)], or 2DEG embedded in a medium with equivalent dielectric constants as those in the graphene device. The CNT is connected to electrodes marked in pink, and graphene to metallic contacts marked in yellow. The CNT is separated from graphene by an hBN sheet d_t = 4 nm thick, with the dielectric constant ε_hBN = 3. The bottom hBN layer is d_b = 20 nm thick, and the SiO_2 substrate d_SiO2 = 285 nm thick, and we adopt the dielectric constant for SiO_2 ε_SiO2 = 3.8. The back gate capacitance is obtained from the parallel-plate capacitor model, C_bg/e = (ε_0/e) [ d_b/ε_hBN + d_SiO2/ε_SiO2 ]^{−1} = 6.7676 × 10^10 cm^{−2} V^{−1}, where ε_0 is the vacuum permittivity, and −e is the electron charge.
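The quoted value of C_bg/e follows directly from the series parallel-plate formula above; the short sketch below (ours, using only the thicknesses and dielectric constants stated in the text) reproduces it:

```python
import numpy as np

# Series parallel-plate estimate of the back-gate capacitance per unit area and charge.
eps0 = 8.8541878128e-12      # F/m
e = 1.602176634e-19          # C
d_hBN, eps_hBN = 20e-9, 3.0
d_SiO2, eps_SiO2 = 285e-9, 3.8

Cbg_over_e = eps0 / (e * (d_hBN / eps_hBN + d_SiO2 / eps_SiO2))   # in 1/(m^2 V)
print(Cbg_over_e * 1e-4)     # ~6.77e10 cm^-2 V^-1, matching the value in the text
```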
For the electrostatic modeling of a straight CNT placed along the x direction, instead of a full three-dimensional structure we assume the system is invariant in the x direction, and model the potential profile in the transverse direction only, performing 2D electrostatic simulation in the y-z coordinates. The electric potential distribution u(y, z) induced by the charged CNT for V_cnt = 1 V is shown in Fig. 1(b). The numerically obtained C_cnt(y)/e is presented in Fig. 1(c) [47]. For comparison, the orange (blue) dashed line shows the analytical result for a uniform dielectric constant ε_r = 3 (ε_r = 1), given by C_cnt(y)/e = 4 ε_0 ε_r a_t / [ (y² + a_t²) e log(κ) ], with a_t = √(h_t² − r_cnt²), h_t = d_t + r_cnt, κ = (h_t + a_t)²/r_cnt². In the case of a curved CNT, considered in Sec. III, the potential profile is calculated as described in Appendix A. The potential profile induced by two crossed CNTs, mentioned in Sec. III B, is adopted from 3D finite-element modeling with the full x, y, and z dependence, which yields C_cnt(x, y)/e.
Given the gates capacitance, the carrier density is calculated from
n = (C_bg V_bg + C_cnt V_cnt)/e (1)
for graphene free of intrinsic doping, where C_cnt is a function of coordinates, yielding a position-dependent n. We assume graphene is described by the dispersion relation E = ±ħv_F k, where ħ is the reduced Planck constant, v_F ≈ 10^6 m/s is the Fermi velocity of graphene, and we adopt ħv_F ≈ 3√3/8 eV nm. The on-site energy which we input into the Hamiltonian is calculated from
U = −sgn(n) ħv_F √(π|n|). (2)
The 2DEG band structure differs from that of graphene: E = ħ²k²/(2m*) in the effective mass approximation, where we use the effective mass for GaAs m* = 0.067 m_0, with m_0 being the electron mass. The on-site energies are U = −πħ²n/m*, with n given by Eq. (1). The BLG density and on-site energies calculation follows Ref. 48, and for dtBLG we adopt the self-consistent model for zero magnetic field described in Ref. 49.
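For orientation, the density-to-energy conversions of Eq. (2) and of the 2DEG analogue are easily evaluated numerically; the sketch below (ours, with an illustrative density, and with the standard constant ħ²/m_0 ≈ 0.0762 eV nm²) shows typical magnitudes:

```python
import numpy as np

# Carrier density -> on-site energy for graphene [Eq. (2)] and for the 2DEG analogue.
hbar_vF = 3 * np.sqrt(3) / 8          # eV nm, as quoted in the text
m_star = 0.067                         # GaAs effective mass in units of m0
hbar2_over_mstar = 0.0762 / m_star     # eV nm^2

def U_graphene(n):
    """n in nm^-2 (1e12 cm^-2 = 1e-2 nm^-2); returns on-site energy in eV."""
    return -np.sign(n) * hbar_vF * np.sqrt(np.pi * abs(n))

def U_2deg(n):
    return -np.pi * hbar2_over_mstar * n

n = 1e-2     # 1e12 cm^-2, an illustrative density
print(U_graphene(n), U_2deg(n))        # ~ -0.115 eV and ~ -0.036 eV
```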
B. Transport calculation
Figure 1(d) shows the device top view. For the transport calculation, to focus on the guiding effect of the CNT gate, we consider an idealized four-terminal device marked by the black dashed lines in Fig. 1(d). The contacts are simulated by semi-infinite leads, and the computational box is limited to a rectangle of size L × W = 360 nm × 360 nm, unless stated otherwise. Electrons are guided between the left source lead and the right collector lead of width w = 34 nm. The current leaking out of the guiding channel flows to the top and bottom leads.
The calculations are based on the tight-binding Hamiltonian
H = − ∑ i, j t i j c † i c j + H.c. + ∑ j U(r j )c † j c j ,(3)
where the operator c_i (c†_i) annihilates (creates) an electron on the ith site located at r_i = (x_i, y_i), and the second sum contains on-site energies. In SLG and dtBLG, the hopping parameters t_ij describe the nearest-neighbor hopping with t_ij = t_0. In the presence of a magnetic field, the hopping parameter is modified to contain the Peierls phase, t_ij → t_ij e^{iφ}, with φ = −(e/ħ) ∫_{r_i}^{r_j} A · dr, where A is the vector potential such that ∇ × A = B, and the integration is from the site at r_i to the site at r_j. To simulate real graphene devices we adopt the scalable tight-binding model [30], with the scaled hopping parameter t = t_0/s_F and lattice spacing a = a_0 s_F, where s_F is the scaling factor, and we use a_0 = 1/(4√3) nm. The Hamiltonian (3) is applied for transport simulation within the real-space Green's function method [50], wave-function matching [51] or using the Kwant package [52] for SLG/2DEG, dtBLG, and BLG, respectively. The transport energy is chosen at E = 0. At zero temperature the conductance from lead i to lead j is calculated using the Landauer formula G_ji = (2e²/h) T_ji, where T_ji = ∑_m T^(m)_ji is summed over the propagating modes.
The on-site energy which we input into the SLG Hamiltonian is given by Eq. (2). In single-layer graphene, it also plays the role of the refractive index, and, in analogy to optics, the refraction at the interface is described by the Snell's law E in sin(θ in ) = E out sin(θ out ), where E in (E out ) is the energy within (outside) the channel. The total internal reflection occurs when the incidence angle satisfies θ > θ c , with θ c = arcsin(E out /E in ). Thus the OFG in the channel is possible when |E in | > |E out |. It is equivalent to the requirement |k in | > |k out | written in terms of the wave vector within (outside) the channel k in (k out ). The confinement in the bipolar junction, which appears to be stronger than in OFG, is realized when E in E out < 0. In terms of the carrier densities n in and n out , it is equivalent to n in n out < 0, which in our system roughly corresponds to V bg V cnt < 0, as estimated by n in n out = C bg V bg (C bg V bg +C cnt (0)V cnt ) ≈ C bg V bg C cnt (0)V cnt since C bg is two orders of magnitude smaller than C cnt at its peak. Figure 2(a) shows the two-terminal conductance in SLG between the narrow left and right terminals as a function of the backgate and CNT voltage, calculated with s F = 2. For V bg V cnt > 0 the junction induced by the CNT is unipolar, the confinement within the channel is relatively weak since it is only due to OFG, and the bulk states have a significant contribution to conductance. In this case conductance quantization is hardly seen, one can also spot fine oscillations which correspond to resonant states in the cavity between the vertical edges of the flake. Figure 2(b) presents the spatially resolved current density J(x, y) for the voltages marked with a cross in Fig. 2(a). Here V cnt = 0, hence the potential profile in the device is uniform, and neither bipolar junction nor optical guiding occurs. Thus, a small part of the injected current is transmitted towards the right lead, however, a significant part flows out of the channel and escapes through the top and bottom leads.
In the quadrants V bg V cnt < 0, bipolar junctions are formed, and the conductance shows clear plateaus as few-mode guiding is realized in the channel. In this regime, the current shows an entirely different behavior. A representative current The band structure of graphene gated by a CNT consists of Dirac cones typical for pure graphene, corresponding to the bulk graphene beyond the channel with an almost flat potential profile, and additional discrete branches arising from the confinement within the CNT-induced channel. The energies of the states bound within the channel are within the area marked with yellow in Fig. 2 To check the impact of disorder, we performed the calculations with the disorder potential present. These results are summarized in Appendix B.
(f)-2(h), delimited by E = ±ħv_F|k_x| + V_out and E = ħv_F|k_x| + V_in.
It is worth noting that for very strong potential energy variation in space, the intervalley scattering becomes relevant in a channel along the zigzag direction, and gives rise to intermediate plateaus G = 4e 2 (M − 1/2)/h. We elaborate on this in Appendix C.
The calculations presented in this section are obtained for device with the hBN thickness of 4 nm based on the experiment [44]. However, for a few-nanometer thin hBN and high CNT voltage, there is a risk of a dielectric breakdown [53]. To consider a safer design, in Appendix D we present the calculations for the case of 10, 15, and 20 nm thick hBN between the SLG and the CNT. We also consider wider injection leads in Appendix E.
D. Other two-dimensional systems
We turn our attention to the guiding effect in other systems.
Semiconductor two-dimensional electron gas
One fundamental difference between the 2DEG and graphene is that for the former, the dispersion relation does not exhibit the smooth transition between electron- and hole-like conductance. For the 2DEG model, the conduction band and the valence band have to be introduced explicitly. We focus on transport in the conduction band, V_cnt > 0 [see Fig. 3(b)].
The modulation of the potential profile underneath the CNT creates a potential well, leading to electron confinement and formation of discrete modes. The conductance steps at multiples of 2e 2 /h arise due to the spin degeneracy, in contrast to the quantization at 4e 2 M/h typical for graphene [see the cross sections in Figs. 3(e) and 3(f)]. An apparent effect of the confinement seen in the dispersion relation in Fig. 3(j) is the occurrence of distinct subbands which are well separated from the bulk dispersion relation. The CNT gating allows obtaining well defined conductance steps, offering an alternative to quantum point contacts (QPCs), which can be induced in 2DEG for example by split gates [54,55].
Bernal-stacked bilayer graphene
In Bernal-stacked bilayer graphene, charge carriers are described by a massive Dirac fermion band structure consisting of two parabolic bands [56], with a band gap tunable, e.g., with external gates. The conductance map in Fig. 3(c) is obtained with s_F = 4. It shows quantized steps, at 4e²M/h due to the spin and valley degrees of freedom, as also seen in the cross section in Fig. 3(g). Conductance quantization in BLG has been previously obtained with external electrostatic gates forming QPCs; however, such an approach requires using a combination of split gates to form a narrow channel and a top gate to tune its density [57,58]. Our results show an alternative approach by CNT gating, allowing for a simpler device geometry. The lowest plateau with M = 1 extends over a broad voltage range. This feature can be used to form a quasi-1D BLG chain, robust against voltage changes, allowing for example investigations of 1D superlattices in BLG. The band structure in Fig. 3(k) consists of the bulk BLG bands, as well as additional branches due to the CNT-induced channel.
Decoupled twisted bilayer graphene
For dtBLG we consider the top and bottom layer oriented such that the transport direction is along the armchair and zigzag lattice orientation, respectively. This choice corresponds to the relative rotation angle of 30 • between the sheets, which was found to lead to the interlayer decoupling near the Dirac point [59][60][61]. However, the two graphene sheets are atomically close to each other and the electric charge present on the layers causes effective gating between them [49,62].
The dtBLG device conductance shown in Fig. 3(d) is calculated for the computational box of size L × W = 160 nm × 170 nm, with s_F = 1. In the bipolar region, V_bg V_cnt < 0, it exhibits two sets of plateaus, dispersing at a different rate with V_cnt and V_bg. The cross section in Fig. 3(h) reveals conductance quantized at 4e²M/h. Figure 3(l) shows two overlaid band structures: of the top and bottom layer, plotted with gray and black lines, respectively. The band structure of the bottom, zigzag-terminated layer is shifted by k_x = −2π/(3a), such that only one of the Dirac cones in the zigzag ribbon band structure is visible in the plot, centered at k_x = 0. In both band structures one can spot discrete branches detached from the Dirac cones, corresponding to the guiding modes, but due to the electrostatic interlayer coupling described above, the onset of the guiding modes in each layer occurs at a slightly different V_cnt. The dtBLG conductance in Figs. 3(d) and 3(h) is a sum of the two individual layers' conductances, with the steps occurring at different CNT voltage values.
The individual layers contributions to conductance are shown in Figs. 4(a) and 4(b). The difference in the dispersion of plateaus in the two layers can be immediately understood when comparing the top and bottom layer density profiles in a representative case of V bg = −18 V, V cnt = 3.3 V shown in Fig. 4(c). The carrier density of the top layer is significantly higher than of the bottom one, due to the capacitive coupling between the layers. In particular, the CNT gates the graphene sample and induces electric charge on the top layer which in turn leads to an effective gating of the bottom layer. This effective CNT gating of the bottom layer is weaker than that of the top layer, leading to different carrier density profiles. To summarize, CNT guiding in dtBLG can be realized in two layers in parallel, such that two independent channels contribute to the conductance, with a full control over the two layers by external gates.
III. ELECTRON INTERFEROMETER
As we have shown in the previous sections, a CNT used as a gate can induce a sharp and narrow waveguide. This, as well as the flexibility of CNTs in terms of their shapes [34,35], makes them ideal candidates for building blocks of more complex devices. Here we propose a ringlike Aharonov-Bohm (AB) interferometer and a two-path interferometer.
When it comes to the characterization of interferometers, their performance is determined by the visibility of the interference pattern, defined as α = (G max −G min )/(G max +G min ), where G max and G min are the maximum and minimum conductance, respectively. As we show in this section, CNT gating is useful for obtaining conductance oscillation with high visibility in both kinds of interferometers considered here.
A. Aharonov-Bohm interferometer
We first focus on the conductance of an AB interferometer induced by CNT gating and of an etched graphene quantum ring. The insets of Fig. 5(a) show the geometries of the considered systems. The CNT-gated ring shape is described by a piecewise function [see Fig. 5(a), left inset]

y = 0 for |x| > W/2, and y = ±D [cos(2πx/W) + 1] otherwise,    (4)
where W = 500√2 nm, D = 20π√2 nm, and the ring area A c = 2DW = 200²π nm². The etched ring [Fig. 5(a), right inset] has an inner radius R in = 160 nm and an outer radius R out = 240 nm, and is attached to leads 400 nm wide. The size was chosen such that the area of a circle of radius R = (R in + R out)/2 = 200 nm, namely A r = πR², is equal to that of the CNT-gated ring. The choice of the ring geometry refers to a recent experiment [63]. The transport calculation was done with scaling factor s F = 7 (s F = 11) for the CNT-gated (etched) ring. The CNT-gated ring conductance is shown in Fig. 5(a) (orange line) as a function of magnetic field. The oscillation amplitude is nearly 4e²/h, with α = 99.87%, and the oscillation period is ∆B ≈ 33 mT, which is in agreement with the period evaluated with the area enclosed by the two channels, ∆B = h/eA c = 32.9 mT, confirming that the oscillation is due to the AB effect within the CNT-induced channels. At the points of completely destructive interference, the transmission goes mainly to the side drain leads, as evidenced by the high values of G 31 and G 41 in Fig. 5(b). Figures 5(c)-5(d) show the current densities at selected points indicated by the dashed lines in Fig. 5(b). The current at the minimums of G 21 is mainly scattered at the point where the two paths come together close to the right exit, as shown in Fig. 5(c). By contrast, at the maximum the guided current flows to the right lead without scattering [Fig. 5(d)].
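The quoted period follows from ∆B = h/(eA); the following short check (our own script, using the geometry parameters given above and scipy's physical constants) reproduces the 32.9 mT value for both ring areas.

import numpy as np
from scipy.constants import h, e

# CNT-gated ring: A_c = 2*D*W with D = 20*pi*sqrt(2) nm and W = 500*sqrt(2) nm
D = 20 * np.pi * np.sqrt(2) * 1e-9      # m
W = 500 * np.sqrt(2) * 1e-9             # m
A_c = 2 * D * W                         # = 200^2 * pi nm^2

# Etched ring: circle of mean radius R = (R_in + R_out)/2 = 200 nm
R = 200e-9
A_r = np.pi * R**2

for name, A in [("CNT-gated ring", A_c), ("etched ring", A_r)]:
    print(f"{name}: Delta B = h/(e*A) = {1e3 * h / (e * A):.1f} mT")   # ~32.9 mT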
2. Etched Aharonov-Bohm ring
For the etched ring, we consider a carrier density equal to n = −3.8122 × 10^12 cm^−2. The conductance as a function of magnetic field is shown in Fig. 5(a) (blue line). The high conductance value occurs because of multiple modes propagating in the leads and ring arms. Nevertheless, we can see a clear oscillation with a period ∆B ≈ 33 mT, the same as in the CNT-gated ring, and in agreement with the period expected from ∆B = h/eA r.
The oscillation visibility α ≈ 1.96% in the etched ring is significantly lower than in the CNT-gated system, and we expect that this can also be the case in experiments. The experimentally measured visibility is expected to be lowered by the interference of multiple paths within the ring arms, as well as by the contact resistance and disorder, in particular the edge roughness introduced in the etching process [21,[63][64][65]. The current in a CNT-gated channel is confined to a narrow area, thus the carriers pick up nearly equivalent AB phases on their paths in the two ring arms, leading to a perfect destructive interference, as opposed to the etched ring with multiple interfering paths. Furthermore, the edge disorder is excluded in the electrostatically induced ring. Hence, overall the visibility in the experimental CNT-gated ring can exceed that observed in etched rings.
B. Two-path interferometer
Interferometers proposed recently in graphene rely on beam splitting at the pn junction close to the lattice termination [11,12,[66][67][68][69] or with the aid of the insulating ν = 0 quantum Hall state [22,23], and require high magnetic fields such that the quantum Hall edge and pn junction states are formed. In this section, we propose a two-path interferometer with a fully electrostatic beam splitter at the crossing of the two guiding channels, induced by two bent CNTs placed on top of each other. Figure 6(a) shows the considered geometry. The electron beam from one of the injectors is split at the crossing between the two channels. The two paths traversed by the electrons encircle a closed area, and at the other crossing either beam can end up in one of the detector leads on the right. The magnetic flux through the area enclosed by the two paths results in a phase difference between them which gives rise to the conductance oscillation.
The two paths are described by the functions ±[D cos(2πx/W) + h], with D = 50 nm, W = 300 nm, and h = 10 nm. For the modeling of a realistic device we take into account that at the crossing points, within x ∈ (−105, −65) nm and x ∈ (65, 105) nm [see the area marked by a square in Fig. 6(a), and enlarged in Fig. 6(b)], one CNT lies on top of the other and bends locally [see Fig. 6(c)] [70]. The capacitance within the square area is obtained from a 3D finite-element electrostatic simulation, and the resulting C cnt(x, y) [shown in Fig. 6(b)] is then combined with the straight CNT capacitance. The overall capacitance profile is shown in Fig. 6(a). Figure 6(d) shows the conductance G i j from leads j = 1, 2 to i = 1, . . . , 6 [as labeled in Fig. 6(a)], as a function of magnetic field and V cnt, for a fixed V bg = 7.85 V. We notice that the conductance between bottom left and bottom right (G 31) and between upper left and upper right (G 42) is high, since in this case the current guided by the channel follows a smooth trajectory and goes preferably along the straight part at the crossing point of the CNTs. Splitting of the current at the crossing still occurs and gives rise to oscillating conductance G 41 and G 32 with an amplitude ≈ 0.2 e²/h [see the line cut in Fig. 6(e) corresponding to the red line marked on the G 41 map of Fig. 6(d)]. The oscillation is due to the magnetic flux piercing the area enclosed by the two crossing channels, and the oscillation period ∆B ≈ 0.33 T is in agreement with the one evaluated given the loop area, ∆B = h/eA = 0.324 T. The visibility of the G 41 and G 32 oscillation, α = 95.04%, is high, although the relatively low amplitude corroborates that the beam splitting is asymmetric. This can be improved by decreasing the crossing angle between the channels. A strong asymmetry of the conductance G 51, G 61, G 52, G 62 at V bg V cnt > 0 [i.e., the unipolar regime, cf. Fig. 2(a)] occurs because the bulk modes injected from the left leads preferably flow to lead 5 (6) for positive (negative) magnetic field due to the Lorentz force. The results shown here suggest the crossed CNTs can be used as electrostatic beam splitters that operate in moderately weak to strong magnetic fields.
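For an estimate independent of the transport simulation, the loop area can be approximated as the region enclosed between the two channel paths between their crossing points; the sketch below (our own interpretation of the geometry, with the path parameters quoted above) gives A of about 1.27 × 10^4 nm² and ∆B = h/(eA) of about 0.32 T, consistent with the value quoted in the text.

import numpy as np
from scipy.constants import h, e

D, W, h_off = 50e-9, 300e-9, 10e-9       # path parameters quoted in the text

def upper(x):
    """Upper path +[D*cos(2*pi*x/W) + h]; the lower path is its mirror image."""
    return D * np.cos(2 * np.pi * x / W) + h_off

# Crossing points: upper(x) = -upper(x), i.e. cos(2*pi*x/W) = -h/D
x_c = (W / (2 * np.pi)) * np.arccos(-h_off / D)      # ~85 nm

# Area enclosed between the two paths between the crossings
x = np.linspace(-x_c, x_c, 20001)
A = np.sum(2.0 * upper(x)) * (x[1] - x[0])           # ~1.27e4 nm^2

print(f"loop area A ~ {A * 1e18:.0f} nm^2")
print(f"AB period h/(e*A) ~ {h / (e * A):.3f} T")    # ~0.32 T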
IV. DIFFRACTIVE POINT INJECTOR
A. Truncated CNT gate

Point contacts in graphene are becoming an essential component in a number of electron-optical applications such as Dirac fermionic optics cavities [16] and electron collimation [3,71]. In particular, approaching the limit of quantum-to-classical correspondence of the focused electron waves [3] requires a pointlike injector. However, the state-of-the-art experimental realization of point contacts is currently limited to 100 nm in diameter, using the prepatterning of the top hBN [72]. We propose an alternative scheme for a pointlike injector using a truncated CNT gate. The termination of the highly spatially confined channel in graphene induced by the truncated CNT gate naturally forms an electron point injector that scales down to the order of 10 nm. For the modeling of the electrostatic coupling between the truncated CNT gate and graphene, we use s F = 7, and consider the capacitance in Fig. 1(c), multiplied by a smoothness function, (1 + tanh((x truncation − x)/d smooth))/2, where x truncation = −60 nm and d smooth = 3 nm, as shown in Fig. 7(a). An example of the current density for V bg = 7.39 V and V cnt = −2 V, where a single mode emerges in the guiding channel, is shown in Fig. 7(b). The small section of lower current density in the channel results from the standing wave due to the partial reflection from the truncation point. The angular distribution of the current density in Fig. 7(g) shows its directional characteristic: higher intensity is seen at small angles, as opposed to an ideal point injector, which does not show any directional dependence. Also, in contrast to the current injected from a lead [Fig. 7(c)], the truncated CNT forms a current source of a smaller size. Nevertheless, for certain applications a uniform current distribution is not crucial, one example being the collimation of electron beams, described in the next subsection.
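A minimal sketch of how the truncated-gate capacitance can be assembled from the stated smoothness function is given below; the transverse profile C(y) used here is a generic placeholder, not the simulated profile of Fig. 1(c).

import numpy as np

x = np.linspace(-200e-9, 200e-9, 801)
y = np.linspace(-150e-9, 150e-9, 601)
X, Y = np.meshgrid(x, y, indexing="ij")

def C_transverse(y, width=10e-9):
    """Placeholder for the simulated straight-CNT capacitance profile C(y)."""
    return 1.0 / (1.0 + (y / width) ** 2)

x_trunc, d_smooth = -60e-9, 3e-9
# Smooth step: ~1 for x < x_trunc (gated channel), ~0 beyond the truncation point
mask = 0.5 * (1.0 + np.tanh((x_trunc - X) / d_smooth))

C_cnt = C_transverse(Y) * mask          # truncated-gate capacitance C_cnt(x, y)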
B. Generating electron beam
To further demonstrate the utility of the truncated CNT gate, we combine the pointlike injector with a parabolic-shaped pn junction to form an electron beam generator, following Ref. 3. The difference here is that the role of the pointlike injector is played by the CNT gate truncated at the focal point of the parabolic pn junction, instead of a pointlike contact [3,73]. The pn junction, symmetric in the carrier density, is modeled by a smooth function, n junction tanh[(x parabola − x)/d smooth], describing the x dependence, and a quadratic function x parabola = −y²/4f accounting for the y dependence, where the carrier density n junction = 5.78 × 10^11 cm^−2, the smoothness parameter d smooth = 15 nm, and the focal length f = 200 nm.
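The model carrier-density landscape of the parabolic pn junction can be tabulated directly from the expressions above; the snippet below is an illustrative sketch (our own, with the CNT-channel contribution omitted).

import numpy as np

n_junction = 5.78e11 * 1e4      # 5.78e11 cm^-2 expressed in m^-2
d_smooth = 15e-9                # junction smoothness (m)
f = 200e-9                      # focal length of the parabola (m)

x = np.linspace(-400e-9, 600e-9, 1001)
y = np.linspace(-500e-9, 500e-9, 1001)
X, Y = np.meshgrid(x, y, indexing="ij")

# Parabolic interface x_parabola(y) = -y^2/(4f), with the focus at the origin
x_parabola = -Y**2 / (4.0 * f)

# Symmetric pn junction: +n_junction on one side of the interface, -n_junction on the other
n_pn = n_junction * np.tanh((x_parabola - X) / d_smooth)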
Figure 7(d) shows an exemplary carrier density n(x, y) considering V cnt = −3.5 V. The resulting current density map is shown in Fig. 7(e), where a well collimated electron beam at the right side of the parabolic pn junction can be seen. The generated electron beam, as explained already in Ref. 3, is a consequence of the negative refraction combined with the Klein collimation [6], which describes the transmission function that decays with the incidence angle, from perfect at normal incidence, known as Klein tunneling [6,73,74], to zero at a certain finite angle depending on the smoothness of the pn junction [6]. The nearly perfectly collimated electron beam is further examined by showing the x and y components of the 2D current density J = (J x , J y ). Figure 7(h) shows the line cut of J x and J y along the path marked on Fig. 7(e). The vanishing J y shows the efficient collimation of the current as a consequence of negative refraction of the pointlike source positioned at the focal point of the parabolic pn junction. The collimated current density at the right side of the junction exhibits a J x distribution that peaks around the parabola axis at y = 0, as a consequence of the Klein collimation. On the other hand, current injected directly from a lead without a CNT is not collimated, as shown in Fig. 7(f).
V. CONCLUDING REMARKS
In summary, we investigated the guiding effect in CNT-gated two-dimensional systems, and found well-defined conductance plateaus when discrete guiding modes and bipolar junctions (in graphene-based systems) are formed in the channel. This mechanism of conductance quantization in SLG is an alternative to QPCs, which were so far created by etching graphene rather than gating it, due to the difficulty in inducing a band gap in graphene. The conductance plateaus obtained by CNT gating are sharper compared to the plateaus in etched graphene QPCs, as well as in other systems which can be electrostatically depleted to form QPCs, including BLG and 2DEG. Moreover, CNT guiding works well in curved channels, making them useful as building blocks for electro-optical devices, including quantum rings and other interferometers. Thanks to the character of carrier confinement in CNT-gated channels, they are not limited to operation at strong magnetic fields, as opposed to interferometers based on pn junctions or the insulating ν = 0 quantum Hall state. As presented here, this can also be used to create point injectors simply by gating with a CNT with an abrupt termination. CNT gating allows electrostatic confinement of carriers, which is a way to exclude imperfections like the edge roughness introduced in the etching process in lithographically defined gates, offering a versatile tool for electro-optical components.

Appendix A: Curved CNT capacitance profile

In Sec. III we consider devices gated with curved CNTs. For simplicity, and to avoid the need for an electrostatic simulation for each system, the capacitance of a curved CNT, C cnt(x, y), is calculated using C(y), the transverse capacitance profile of a straight CNT, as illustrated in Fig. 8. For a CNT shape described by a function f(x), the original 1D profile is shifted in the y coordinate, and next, to take into account the channel slope f′(x) = tan θ, it is scaled by cos θ. The resulting formula reads

C cnt(x, y) = C cnt(cos θ(x) [y − f(x)]).    (A1)
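Equation (A1) amounts to shifting the straight-CNT transverse profile along the curve and compressing it by cos θ; a minimal sketch of this mapping, with a placeholder straight-CNT profile and one arm of the AB ring as an example shape, could read as follows.

import numpy as np

def C_straight(y, width=10e-9):
    """Placeholder for the 1D transverse capacitance profile of a straight CNT."""
    return 1.0 / (1.0 + (y / width) ** 2)

def C_curved(X, Y, f, dfdx):
    """Eq. (A1): C_cnt(x, y) = C(cos(theta(x)) * [y - f(x)]), with tan(theta) = f'(x)."""
    cos_theta = 1.0 / np.sqrt(1.0 + dfdx(X) ** 2)
    return C_straight(cos_theta * (Y - f(X)))

# Example shape: one arm of the AB ring, f(x) = D*(cos(2*pi*x/W) + 1)
D = 20 * np.pi * np.sqrt(2) * 1e-9
W = 500 * np.sqrt(2) * 1e-9
f = lambda x: D * (np.cos(2 * np.pi * x / W) + 1.0)
dfdx = lambda x: -D * (2 * np.pi / W) * np.sin(2 * np.pi * x / W)

x = np.linspace(-W / 2, W / 2, 801)
y = np.linspace(-100e-9, 200e-9, 601)
X, Y = np.meshgrid(x, y, indexing="ij")
C_map = C_curved(X, Y, f, dfdx)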
Appendix B: Disorder
To check if the guiding effect is robust against disorder, we added an on-site potential ξU dis, with ξ being a random number, ξ ∈ (−0.5, 0.5) [see Fig. 9(a)], and U dis the maximum disorder strength. We calculated the conductance averaged over 200 disorder configurations for a fixed U dis (for U dis = 0.2 eV, we used 1000 configurations). Figure 9(b) shows the conductance for V bg = −18 V and s F = 6 [see Fig. 3(e) in the main text]. For moderate disorder with U dis ≲ 0.05 eV the plateaus are slightly disturbed but close to the expected value G = 4e²M/h. However, for considerably strong disorder of the order of U dis = 0.2 eV, the steps are destroyed. Disorder of a few meV has been observed in graphene samples encapsulated in hBN [75], so the present results allow us to conclude that the quantization can be observed in realistic samples.
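A sketch of how such on-site disorder and the configuration averaging can be organized is given below; the conductance function is a placeholder for the actual tight-binding transport calculation, so only the disorder generation itself follows the procedure described above.

import numpy as np

rng = np.random.default_rng(seed=0)

def onsite_disorder(n_sites, U_dis):
    """Random on-site term xi*U_dis with xi drawn uniformly from (-0.5, 0.5)."""
    xi = rng.uniform(-0.5, 0.5, size=n_sites)
    return xi * U_dis

def averaged_conductance(conductance_fn, n_sites, U_dis, n_config=200):
    """Average G over disorder configurations; conductance_fn stands in for the
    actual transport calculation with the given on-site potential."""
    G = [conductance_fn(onsite_disorder(n_sites, U_dis)) for _ in range(n_config)]
    return np.mean(G), np.std(G)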
Appendix C: Intervalley scattering
For a potential that varies strongly on the length scale of a lattice spacing, intervalley scattering is present. The CNT-induced potential profile is not sharp enough to cause intervalley scattering for realistic gate voltages. However, for high gate voltages, in the case of a scaled lattice, the potential variation over the lattice spacing is strong. As a result, in a channel induced along the zigzag direction we observe intermediate conductance plateaus at G = 4e²(M − 1/2)/h, M = 1, 2, . . . , as seen in a conductance line cut for V bg = −18 V presented in Fig. 10, for s F = 1 and s F = 2.
Appendix D: Thicker hBN
For a direct connection with the experimental work in Ref. 44, we consider a 4 nm hBN layer between the CNT and graphene. However, for a safer design, a thicker hBN is required to prevent a possible dielectric breakdown. We performed calculations for hBN thicknesses of 10, 15 and 20 nm, for which the breakdown field was shown to be about 13 MV cm^−1, 12 MV cm^−1, and 11 MV cm^−1, respectively [76]. The breakdown voltage then corresponds to about 14 V, 19 V, and 22 V, respectively, so the wide gate-voltage range considered in this work remains experimentally reasonable. The conductance map obtained for the 10 nm thick hBN is presented in Fig. 11(a). The conductance plateaus are present up to about M = 3 for moderately low gate voltages. The line cuts at V bg = −18 V for the three cases are presented in Fig. 11(b). The conductance quantization is visible, although the steps become smoother for thicker hBN.
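The quoted voltage limits are essentially the breakdown field multiplied by the hBN thickness; the one-line estimate below (our own, and only approximate, giving about 13, 18 and 22 V for the three thicknesses because of rounding of the field values) illustrates the scaling.

# Approximate breakdown voltage V_bd = E_bd * t for the hBN thicknesses considered
for t_nm, E_MV_per_cm in [(10, 13), (15, 12), (20, 11)]:
    V_bd = E_MV_per_cm * 1e6 * 1e2 * t_nm * 1e-9    # MV/cm -> V/m, nm -> m
    print(f"t = {t_nm:2d} nm:  V_bd ~ {V_bd:.0f} V")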
Appendix E: Wider injection lead
Throughout this work we used narrow injector and collector leads. Here, we consider a system with a lead width w = 180 nm and a computational box of size L × W = 1000 nm × 360 nm. In the conductance map in Fig. 12(a) the plateaus are present despite the large lead width. This is better seen in Fig. 12(b), which shows the conductance line cuts at V bg = −18 V for w = 180 nm and, for comparison, w = 34 nm. Figure 12(c) presents the ratio of the transmission to the top and bottom drain leads, T leak, to the transmission summed over the drains and the right lead, T sum. Whereas in the narrow-lead case the leakage may nearly drop to zero, in the wide-lead case the leakage is high because of the large number of bulk modes. However, the bulk modes could carry additional current to the right lead if the leads are very wide and the system is short, such that the conductance plateaus are no longer intact.
FIG. 1. (a) Sketch of the considered electron system gated by a CNT. (b) Spatial profile of the electrostatically simulated electric potential u(y, z), considering a grounded sample subject to a CNT gate of radius r cnt = 1 nm applied with V cnt = 1 V. (c) Capacitance profile (solid line) corresponding to (b), where a dielectric constant ε hBN = 3 below the nanotube and ε vac = 1 elsewhere are considered. The blue (orange) dashed line shows the analytical result for a uniform dielectric constant ε hBN = 3 (ε vac = 1). (d) Top view of the device. The dashed lines show the area considered in the transport calculation.
FIG. 2. (a) Conductance between thin leads of SLG as a function of V cnt and V bg, with V bg = −13.16 V marked for the rest of the panels. Spatial profile of the current density distribution (b) without the guiding channel [V cnt = 0, marked by × on panel (a)] and (c) with the guiding channel at the lowest mode [V cnt = 3 V, marked by • on panel (a)]. (d) The line cut corresponding to the red line marked on (a). (e) Current density cross section at the x marked by the arrow in (c) as a function of V cnt and y, illustrating the formation of quantized guiding modes. (f)-(h) Band structures and (i)-(k) current density maps showing the range marked by the green box in (c) at the gate voltage points marked in (a) and (d). The yellow areas in (f)-(h) indicate the region where the states confined within the channel exist.

The current density with a single mode available in the channel is shown in Fig. 2(c), corresponding to the gate voltage marked with a circle in Fig. 2(a), and demonstrating perfect guiding between the left and right lead. The conductance cross section in Fig. 2(d) for a fixed V bg = −18 V [red line in Fig. 2(a)] shows nearly ideal conductance quantization at values 4e²M/h, where M is an integer, and the spacing by four arises from the spin and valley degeneracy. Figure 2(e) shows the current density cross section at x = 0, marked by an arrow in Fig. 2(c), as a function of V cnt. Between V cnt ≈ 2 V and 4 V, the M = 1 modes contribute to the current. At the transition from the plateau M = 1 to M = 2 [Fig. 2(d)], the second branch of guiding modes becomes available for transport. The M = 2 transverse mode wave functions exhibit one node in the center. In the corresponding current density in Fig. 2(e) two maximums can be resolved, as the current propagates in both the first and second branch. Similarly, at the transition from M = 2 to M = 3, the third branch opens, and three maximums of the current density profile are resolved. In this current density map, the current is carried nearly entirely within the guiding channel, with only a small fraction flowing in the bulk when the successive guiding modes open for transport (close to the transition M − 1 → M).
Figures 2(f)-2(h) show band structures calculated for a translationally invariant ribbon of width 300 nm, obtained for the CNT voltage values marked by the corresponding symbols in Figs. 2(a) and 2(d). The band structure of graphene gated by a CNT consists of Dirac cones typical for pure graphene, corresponding to the bulk graphene beyond the channel with an almost flat potential profile, and additional discrete branches arising from the confinement within the CNT-induced channel. The energies of the states bound within the channel are within the area marked with yellow in Fig. 2(f)-2(h), delimited by E = ±hv F |k x | + V out and E =hv F |k x | + V in [marked by the dotted lines in Fig. 2(f)-2(h)][25]. New guiding modes open at the energies for which the branches touch the Dirac cone. Figures 2(i)-2(k) show the representative current density maps for the cases of M = 1, 2, and 3, corresponding to the points marked with the symbols in Figs. 2(a) and 2(d).
Figure 3(a) shows the SLG conductance map calculated with s F = 6, and Fig. 3(e) its cross section at V bg = −18 V. To check the applicability of higher scaling factors, we compare the cross sections of the conductance calculated with s F = 2 and s F = 6. Both lines are in good agreement, with the step only slightly shifted in V bg; thus we conclude that with a high scaling factor the results remain valid.
Figures 3(a)-3(d) show the conductance as a function of V cnt and V bg for SLG, 2DEG, BLG, and dtBLG. Figures 3(e)-3(h) in the middle row of the figure present the cross sections of conductance in each system along the red lines for fixed V bg = −18 V, and the respective band structures are plotted in the bottom row of the panel, in Figs. 3(i)-3(l), at selected points marked with the stars in Figs. 3(e)-3(h). The band structures are calculated for translationally invariant systems of width 300 nm. Below we describe the characteristics of each system.
FIG. 3. (a)-(d) Two-terminal conductance in SLG, 2DEG, BLG, and dtBLG, respectively, as a function of V cnt and V bg, (e)-(h) its cross sections at V bg = −18 V, and (i)-(l) the dispersion relations at the points marked by stars in (e)-(h). The nondispersive band in (l) corresponds to the edge mode in the zigzag-terminated layer of dtBLG.
FIG. 4. Two-terminal conductance calculated for the (a) bottom and (b) top graphene layer, the sum of which yields Fig. 3(d). (c) Density profiles of the top and bottom layer at V bg = −18 V, V cnt = 3.3 V.
FIG. 5. (a) Two-terminal conductance as a function of the magnetic field B of an etched graphene quantum ring and of graphene gated by bent CNTs to form the guided quantum ring, as indicated by the insets showing the ring geometries. (b) The conductance between the injector and the wide top/bottom leads of the CNT-gated device in the region indicated by a rectangle in (a). (c)-(d) Current densities at a minimum and a maximum of the conductance of the gated ring, indicated by the vertical dashed lines in (b).

1. CNT-gated Aharonov-Bohm ring

For the CNT-gated ring we choose V bg = 7.85 V and V cnt = −2 V, yielding the densities n in = −4.42 × 10^12 cm^−2 and n out = 5.3 × 10^11 cm^−2. This corresponds to M = 1 [see Fig. 2(a)], and the guided current flows from the left narrow lead to the right one, whereas the bulk modes are absorbed by the top and bottom leads.
FIG. 6. (a) The capacitance profile of the modeled two-path interferometer induced by two crossed CNTs. The solid lines show the region considered for the transport calculation, the leads are labeled by numbers 1 to 6, and the dashed lines separate the leads from the scattering region. (b) Zoom of the capacitance profile around the crossing point of the CNTs within the square marked in (a). (c) Geometry of the crossed CNTs adopted for the finite-element-method electrostatic simulation. (d) Conductance between pairs of leads labeled in (a). (e) Cross section of G 41 in (d) along the red line at V bg ≈ −2 V.
FIG. 7. (a) Modeled capacitance between the truncated CNT gate and graphene. (b),(c) Current density maps at V bg = 7.39 V and V cnt = −2 V, with and without the CNT gate, respectively. (d) Carrier density of the lensing apparatus composed of a CNT gate truncated at the focal point of a parabolic interface where the symmetric pn junction forms. (e),(f) Current density maps with and without a CNT gate, respectively. (h) The vector components of the current along the blue line cut in (e). (g) The angular distribution of the current along the dashed line in (b), where the radius r = 30 nm, with respect to the terminal point.
ACKNOWLEDGMENTS
Financial support from the Taiwan Ministry of Science and Technology (109-2112-M-006-020-MY3) is gratefully acknowledged. This research was supported in part by the PL-Grid Infrastructure and by the Higher Education Sprout Project, Ministry of Education to the Headquarters of University Advancement at National Cheng Kung University.
FIG. 8. Sketch of the approximated capacitance induced by a curved CNT with the shape described by a function f(x).
FIG. 9. (a) One of the random configurations of the disorder parameter ξ. (b) Conductance line cut at V bg = −18 V and s F = 6 for a few disorder strength values U dis.
FIG. 10. Conductance line cuts with the channel along the zigzag direction for V bg = −18 V, with s F = 1 and s F = 2.
FIG. 11. (a) Conductance between thin leads of SLG as a function of V cnt and V bg with 10 nm thick hBN between the CNT and graphene. (b) Line cuts at V bg = −18 V for systems with hBN thickness 10, 15, and 20 nm.
FIG. 12. (a) Conductance as a function of V cnt and V bg with a 180 nm wide injection lead. (b) Line cut at V bg = −18 V.
eV, whereas in BLG additionally the interlayer hoppings t i j = 0.39 eV between the dimer sites are included. For 2DEG, t i j = ħ²/(2m*∆x²), where ∆x = 1 nm is the grid spacing, and the on-site energies contain an additional term 4ħ²/(2m*∆x²). To model the external magnetic field B = (0, 0, B), the hopping
The focusing of electron flow and a Veselago lens in graphene p-n junctions. V V Cheianov, V , B L Altshuler, 10.1126/science.1138020Science. 3151252V. V. Cheianov, V. Fal'ko, and B. L. Altshuler, The focusing of electron flow and a Veselago lens in graphene p-n junctions, Science 315, 1252 (2007).
Observation of negative refraction of Dirac fermions in graphene. G.-H Lee, G.-H Park, H.-J Lee, 10.1038/nphys3460Nat. Phys. 11925G.-H. Lee, G.-H. Park, and H.-J. Lee, Observation of negative refraction of Dirac fermions in graphene, Nat. Phys. 11, 925 (2015).
Creating and steering highly directional electron beams in graphene. M.-H Liu, C Gorini, K Richter, 10.1103/PhysRevLett.118.066801Phys. Rev. Lett. 11866801M.-H. Liu, C. Gorini, and K. Richter, Creating and steering highly directional electron beams in graphene, Phys. Rev. Lett. 118, 066801 (2017).
A two-dimensional Dirac fermion microscope. P Bøggild, J M Caridad, C Stampfer, G Calogero, N R Papior, M Brandbyge, 10.1038/ncomms15783Nat. Commun. 815783P. Bøggild, J. M. Caridad, C. Stampfer, G. Calogero, N. R. Pa- pior, and M. Brandbyge, A two-dimensional Dirac fermion mi- croscope, Nat. Commun. 8, 15783 (2017).
Imaging dirac fermions flow through a circular veselago lens. B Brun, N Moreau, S Somanchi, V.-H Nguyen, K Watanabe, T Taniguchi, J.-C Charlier, C Stampfer, B Hackens, 10.1103/PhysRevB.100.041401Phys. Rev. B. 10041401B. Brun, N. Moreau, S. Somanchi, V.-H. Nguyen, K. Watan- abe, T. Taniguchi, J.-C. Charlier, C. Stampfer, and B. Hackens, Imaging dirac fermions flow through a circular veselago lens, Phys. Rev. B 100, 041401(R) (2019).
Fal'ko, Selective transmission of Dirac electrons and ballistic magnetoresistance of n−p junctions in graphene. V V Cheianov, V I , 10.1103/PhysRevB.74.041403Phys. Rev. B. 7441403V. V. Cheianov and V. I. Fal'ko, Selective transmission of Dirac electrons and ballistic magnetoresistance of n−p junctions in graphene, Phys. Rev. B 74, 041403(R) (2006).
Graphene transistor based on tunable Dirac fermion optics. K Wang, M M Elahi, L Wang, K M M Habib, T Taniguchi, K Watanabe, J Hone, A W Ghosh, G.-H Lee, P Kim, 10.1073/pnas.1816119116 Proc. Natl. Acad. Sci. 116, 6575 (2019), https://www.pnas.org/content/116/14/6575.full.pdf.
Quantum interference and Klein tunnelling in graphene heterojunctions. A F Young, P Kim, 10.1038/nphys1198Nat. Phys. 5222A. F. Young and P. Kim, Quantum interference and Klein tun- nelling in graphene heterojunctions, Nat. Phys. 5, 222 (2009).
Ballistic interferences in suspended graphene. P Rickhaus, R Maurand, M.-H Liu, M Weiss, K Richter, C Schönenberger, 10.1038/ncomms3342Nat Commun. 42342P. Rickhaus, R. Maurand, M.-H. Liu, M. Weiss, K. Richter, and C. Schönenberger, Ballistic interferences in suspended graphene, Nat Commun. 4, 2342 (2013).
A ballistic pn junction in suspended graphene with split bottom gates. A L Grushina, D.-K Ki, A F Morpurgo, 10.1063/1.4807888Appl. Phys. Lett. 102223102A. L. Grushina, D.-K. Ki, and A. F. Morpurgo, A ballistic pn junction in suspended graphene with split bottom gates, Appl. Phys. Lett. 102, 223102 (2013).
Mach-Zehnder interferometry using spin-and valley-polarized quantum Hall edge states in graphene. D S Wei, T Van Der Sar, J D Sanchez-Yamagishi, K Watanabe, T Taniguchi, P Jarillo-Herrero, B I Halperin, A Yacoby, https:/arxiv.org/abs/https:/www.science.org/doi/pdf/10.1126/sciadv.1700600Science Advances. 31700600D. S. Wei, T. van der Sar, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero, B. I. Halperin, and A. Yacoby, Mach-Zehnder interferome- try using spin-and valley-polarized quantum Hall edge states in graphene, Science Advances 3, e1700600 (2017), https://www.science.org/doi/pdf/10.1126/sciadv.1700600.
Quantum Hall valley splitters and a tunable Mach-Zehnder interferometer in graphene. M Jo, P Brasseur, A Assouline, G Fleury, H.-S Sim, K Watanabe, T Taniguchi, W Dumnernpanich, P Roche, D C Glattli, N Kumada, F D Parmentier, P Roulleau, 10.1103/PhysRevLett.126.146803Phys. Rev. Lett. 126146803M. Jo, P. Brasseur, A. Assouline, G. Fleury, H.-S. Sim, K. Watanabe, T. Taniguchi, W. Dumnernpanich, P. Roche, D. C. Glattli, N. Kumada, F. D. Parmentier, and P. Roulleau, Quantum Hall valley splitters and a tunable Mach-Zehnder interferome- ter in graphene, Phys. Rev. Lett. 126, 146803 (2021).
Electrostatic confinement of electrons in an integrable graphene quantum dot. J H Bardarson, M Titov, P W Brouwer, 10.1103/PhysRevLett.102.226803Phys. Rev. Lett. 102226803J. H. Bardarson, M. Titov, and P. W. Brouwer, Electrostatic con- finement of electrons in an integrable graphene quantum dot, Phys. Rev. Lett. 102, 226803 (2009).
Edge effects in graphene nanostructures: From multiple reflection expansion to density of states. J Wurm, K Richter, I Adagideli, 10.1103/PhysRevB.84.075468Phys. Rev. B. 8475468J. Wurm, K. Richter, and I. Adagideli, Edge effects in graphene nanostructures: From multiple reflection expansion to density of states, Phys. Rev. B 84, 075468 (2011).
Creating and probing electron whispering-gallery modes in graphene. Y Zhao, J Wyrick, F D Natterer, J F Rodriguez-Nieva, C Lewandowski, K Watanabe, T Taniguchi, L S Levitov, N B Zhitenev, J A Stroscio, 10.1126/science.aaa7469Science. 348672Y. Zhao, J. Wyrick, F. D. Natterer, J. F. Rodriguez-Nieva, C. Lewandowski, K. Watanabe, T. Taniguchi, L. S. Levitov, N. B. Zhitenev, and J. A. Stroscio, Creating and probing elec- tron whispering-gallery modes in graphene, Science 348, 672 (2015).
Dirac fermion optics and directed emission from single-and bilayer graphene cavities. J.-K Schrepfer, S.-C Chen, M.-H Liu, K Richter, M Hentschel, 10.1103/PhysRevB.104.155436Phys. Rev. B. 104155436J.-K. Schrepfer, S.-C. Chen, M.-H. Liu, K. Richter, and M. Hentschel, Dirac fermion optics and directed emission from single-and bilayer graphene cavities, Phys. Rev. B 104, 155436 (2021).
Hackens, Graphene whisperitronics: Transducing whispering gallery modes into electronic transport. B Brun, V.-H Nguyen, N Moreau, S Somanchi, K Watanabe, T Taniguchi, J.-C Charlier, C Stampfer, B , 10.1021/acs.nanolett.1c03451Nano Letters. 22128B. Brun, V.-H. Nguyen, N. Moreau, S. Somanchi, K. Watan- abe, T. Taniguchi, J.-C. Charlier, C. Stampfer, and B. Hack- ens, Graphene whisperitronics: Transducing whispering gallery modes into electronic transport, Nano Letters 22, 128 (2022).
Electrically tunable transverse magnetic focusing in graphene. T Taychatanapat, K Watanabe, T Taniguchi, P Jarillo-Herrero, 10.1038/nphys2549Nat. Phys. 9225T. Taychatanapat, K. Watanabe, T. Taniguchi, and P. Jarillo- Herrero, Electrically tunable transverse magnetic focusing in graphene, Nat. Phys. 9, 225 (2013).
Electron optics with p-n junctions in ballistic graphene. S Chen, Z Han, M M Elahi, K M M Habib, L Wang, B Wen, Y Gao, T Taniguchi, K Watanabe, J Hone, 10.1126/science.aaf5481Science. 3531522S. Chen, Z. Han, M. M. Elahi, K. M. M. Habib, L. Wang, B. Wen, Y. Gao, T. Taniguchi, K. Watanabe, J. Hone, and et al., Electron optics with p-n junctions in ballistic graphene, Science 353, 1522 (2016).
Fal'ko, Minibands in twisted bilayer graphene probed by magnetic focusing. A I Berdyugin, B Tsim, P Kumaravadivel, S G Xu, A Ceferino, A Knothe, R K Kumar, T Taniguchi, K Watanabe, A K Geim, I V Grigorieva, V I , 10.1126/sciadv.aay7838Sci. Adv. 67838A. I. Berdyugin, B. Tsim, P. Kumaravadivel, S. G. Xu, A. Ce- ferino, A. Knothe, R. K. Kumar, T. Taniguchi, K. Watanabe, A. K. Geim, I. V. Grigorieva, and V. I. Fal'ko, Minibands in twisted bilayer graphene probed by magnetic focusing, Sci. Adv. 6, eaay7838 (2020).
Observation of Aharonov-Bohm conductance oscillations in a graphene ring. S Russo, J B Oostinga, D Wehenkel, H B Heersche, S S Sobhani, L M K Vandersypen, A F Morpurgo, 10.1103/PhysRevB.77.085413Phys. Rev. B. 7785413S. Russo, J. B. Oostinga, D. Wehenkel, H. B. Heersche, S. S. Sobhani, L. M. K. Vandersypen, and A. F. Morpurgo, Observa- tion of Aharonov-Bohm conductance oscillations in a graphene ring, Phys. Rev. B 77, 085413 (2008).
A tunable Fabry-Pérot quantum Hall interferometer in graphene. C Déprez, L Veyrat, H Vignaud, G Nayak, K Watanabe, T Taniguchi, F Gay, H Sellier, B Sacépé, 10.1038/s41565-021-00847-xNat. Nanotechnol. 16555C. Déprez, L. Veyrat, H. Vignaud, G. Nayak, K. Watanabe, T. Taniguchi, F. Gay, H. Sellier, and B. Sacépé, A tunable Fabry-Pérot quantum Hall interferometer in graphene, Nat. Nanotechnol. 16, 555 (2021).
Aharonov-Bohm effect in graphene-based Fabry-Pérot quantum Hall interferometers. Y Ronen, T Werkmeister, D Najafabadi, A T Pierce, L E Anderson, Y J Shin, S Y Lee, Y H Lee, B Johnson, K Watanabe, T Taniguchi, A Yacoby, P Kim, 10.1038/s41565-021-00861-zNat. Nanotechnol. 16563Y. Ronen, T. Werkmeister, D. Haie Najafabadi, A. T. Pierce, L. E. Anderson, Y. J. Shin, S. Y. Lee, Y. H. Lee, B. Johnson, K. Watanabe, T. Taniguchi, A. Yacoby, and P. Kim, Aharonov- Bohm effect in graphene-based Fabry-Pérot quantum Hall in- terferometers, Nat. Nanotechnol. 16, 563 (2021).
Gate-controlled guiding of electrons in graphene. J R Williams, T Low, M S Lundstrom, C M Marcus, 10.1038/nnano.2011.3Nat. Nanotechnol. 6222J. R. Williams, T. Low, M. S. Lundstrom, and C. M. Marcus, Gate-controlled guiding of electrons in graphene, Nat. Nan- otechnol. 6, 222 (2011).
Vasilopoulos, Confined states and direction-dependent transmission in graphene quantum wells. J M Pereira, V Mlinar, F M Peeters, P , 10.1103/PhysRevB.74.045424Phys. Rev. B. 7445424J. M. Pereira, V. Mlinar, F. M. Peeters, and P. Vasilopou- los, Confined states and direction-dependent transmission in graphene quantum wells, Phys. Rev. B 74, 045424 (2006).
Quantum Goos-Hänchen effect in graphene. C W J Beenakker, R A Sepkhanov, A R Akhmerov, J Tworzydło, 10.1103/PhysRevLett.102.146804Phys. Rev. Lett. 102146804C. W. J. Beenakker, R. A. Sepkhanov, A. R. Akhmerov, and J. Tworzydło, Quantum Goos-Hänchen effect in graphene, Phys. Rev. Lett. 102, 146804 (2009).
Guided modes in graphene waveguides. F.-M Zhang, Y He, X Chen, 10.1063/1.3143614Appl. Phys. Lett. 94212105F.-M. Zhang, Y. He, and X. Chen, Guided modes in graphene waveguides, Appl. Phys. Lett. 94, 212105 (2009).
Smooth electron waveguides in graphene. R R Hartmann, N J Robinson, M E Portnoi, 10.1103/PhysRevB.81.245431Phys. Rev. B. 81245431R. R. Hartmann, N. J. Robinson, and M. E. Portnoi, Smooth electron waveguides in graphene, Phys. Rev. B 81, 245431 (2010).
Quasi-exact solution to the Dirac equation for the hyperbolic-secant potential. R R Hartmann, M E Portnoi, 10.1103/PhysRevA.89.012101Phys. Rev. A. 8912101R. R. Hartmann and M. E. Portnoi, Quasi-exact solution to the Dirac equation for the hyperbolic-secant potential, Phys. Rev. A 89, 012101 (2014).
Scalable tight-binding model for graphene. M.-H Liu, P Rickhaus, P Makk, E Tóvári, R Maurand, F Tkatschenko, M Weiss, C Schönenberger, K Richter, 10.1103/PhysRevLett.114.036601Phys. Rev. Lett. 11436601M.-H. Liu, P. Rickhaus, P. Makk, E. Tóvári, R. Maurand, F. Tkatschenko, M. Weiss, C. Schönenberger, and K. Richter, Scalable tight-binding model for graphene, Phys. Rev. Lett. 114, 036601 (2015).
Design of graphene waveguides: Effect of edge orientation and waveguide configuration. N A Shah, V Mosallanejad, K.-L Chiu, G.-P Guo, 10.1103/PhysRevB.100.125412Phys. Rev. B. 100125412N. A. Shah, V. Mosallanejad, K.-L. Chiu, and G.-p. Guo, De- sign of graphene waveguides: Effect of edge orientation and waveguide configuration, Phys. Rev. B 100, 125412 (2019).
Guiding of electrons in a few-mode ballistic graphene channel. P Rickhaus, M.-H Liu, P Makk, R Maurand, S Hess, S Zihlmann, M Weiss, K Richter, C Schönenberger, 10.1021/acs.nanolett.5b01877Nano Lett. 155819P. Rickhaus, M.-H. Liu, P. Makk, R. Maurand, S. Hess, S. Zihlmann, M. Weiss, K. Richter, and C. Schönenberger, Guiding of electrons in a few-mode ballistic graphene channel, Nano Lett. 15, 5819 (2015).
Valley-symmetry-preserved transport in ballistic graphene with gate-defined carrier guiding. M Kim, J.-H Choi, S.-H Lee, K Watanabe, T Taniguchi, S.-H Jhi, H.-J Lee, 10.1038/nphys3804Nat. Phys. 121022M. Kim, J.-H. Choi, S.-H. Lee, K. Watanabe, T. Taniguchi, S.- H. Jhi, and H.-J. Lee, Valley-symmetry-preserved transport in ballistic graphene with gate-defined carrier guiding, Nat. Phys. 12, 1022 (2016).
Controlling the shape, orientation, and linkage of carbon nanotube features with nano affinity templates. Y Wang, D Maspoch, S Zou, G C Schatz, R E Smalley, C A Mirkin, 10.1073/pnas.0511022103Proc. Natl. Acad. Sci. 1032026Y. Wang, D. Maspoch, S. Zou, G. C. Schatz, R. E. Smal- ley, and C. A. Mirkin, Controlling the shape, orientation, and linkage of carbon nanotube features with nano affin- ity templates, Proc. Natl. Acad. Sci. 103, 2026 (2006), https://www.pnas.org/content/103/7/2026.full.pdf.
Self-organized growth of complex nanotube patterns on crystal surfaces. E Joselevich, 10.1007/s12274-009-9077-9Nano. Res. 2743E. Joselevich, Self-organized growth of complex nanotube pat- terns on crystal surfaces, Nano. Res. 2, 743 (2009).
Controlled placement of individual carbon nanotubes. X M H Huang, R Caldwell, L Huang, S C Jun, M Huang, M Y Sfeir, S P O'brien, J Hone, 10.1021/nl050886aNano Lett. 51515X. M. H. Huang, R. Caldwell, L. Huang, S. C. Jun, M. Huang, M. Y. Sfeir, S. P. O'Brien, and J. Hone, Controlled placement of individual carbon nanotubes, Nano Lett. 5, 1515 (2005).
Shaping electron wave functions in a carbon nanotube with a parallel magnetic field. M Margańska, D R Schmid, A Dirnaichner, P L Stiller, C Strunk, M Grifoni, A K Hüttel, 10.1103/PhysRevLett.122.086802Phys. Rev. Lett. 12286802M. Margańska, D. R. Schmid, A. Dirnaichner, P. L. Stiller, C. Strunk, M. Grifoni, and A. K. Hüttel, Shaping electron wave functions in a carbon nanotube with a parallel magnetic field, Phys. Rev. Lett. 122, 086802 (2019).
Deterministic transfer of optical-quality carbon nanotubes for atomically defined technology. K Otsuka, N Fang, D Yamashita, T Taniguchi, K Watanabe, Y K Kato, 10.1038/s41467-021-23413-4Nat. Commun. 123138K. Otsuka, N. Fang, D. Yamashita, T. Taniguchi, K. Watanabe, and Y. K. Kato, Deterministic transfer of optical-quality car- bon nanotubes for atomically defined technology, Nat. Com- mun. 12, 3138 (2021).
Contact spacing controls the on-current for all-carbon field effect transistors. A D Özdemir, P Barua, F Pyatkov, F Hennrich, Y Chen, W Wenzel, R Krupke, A Fediai, 10.1038/s42005-021-00747-5Communications Physics. 4246A. D.Özdemir, P. Barua, F. Pyatkov, F. Hennrich, Y. Chen, W. Wenzel, R. Krupke, and A. Fediai, Contact spacing controls the on-current for all-carbon field effect transistors, Communi- cations Physics 4, 246 (2021).
Soft-lock drawing of super-aligned carbon nanotube bundles for nanometre electrical contacts. Y Guo, E Shi, J Zhu, P.-C Shen, J Wang, Y Lin, Y Mao, S Deng, B Li, J.-H Park, A.-Y Lu, S Zhang, Q Ji, Z Li, C Qiu, S Qiu, Q Li, L Dou, Y Wu, J Zhang, T Palacios, A Cao, J Kong, 10.1038/s41565-021-01034-8Nat. Nanotechnol. Y. Guo, E. Shi, J. Zhu, P.-C. Shen, J. Wang, Y. Lin, Y. Mao, S. Deng, B. Li, J.-H. Park, A.-Y. Lu, S. Zhang, Q. Ji, Z. Li, C. Qiu, S. Qiu, Q. Li, L. Dou, Y. Wu, J. Zhang, T. Palacios, A. Cao, and J. Kong, Soft-lock drawing of super-aligned car- bon nanotube bundles for nanometre electrical contacts, Nat. Nanotechnol. 10.1038/s41565-021-01034-8 (2022).
Coulomb drag between two-dimensional and onedimensional electron gases. S K Lyo, 10.1103/PhysRevB.68.045310Phys. Rev. B. 6845310S. K. Lyo, Coulomb drag between two-dimensional and one- dimensional electron gases, Phys. Rev. B 68, 045310 (2003).
Bipolar electron waveguides in graphene. R R Hartmann, M E Portnoi, 10.1103/PhysRevB.102.155421Phys. Rev. B. 102155421R. R. Hartmann and M. E. Portnoi, Bipolar electron waveguides in graphene, Phys. Rev. B 102, 155421 (2020).
Guided modes and terahertz transitions for two-dimensional Dirac fermions in a smooth double-well potential. R R Hartmann, M E Portnoi, 10.1103/PhysRevA.102.052229Phys. Rev. A. 10252229R. R. Hartmann and M. E. Portnoi, Guided modes and terahertz transitions for two-dimensional Dirac fermions in a smooth double-well potential, Phys. Rev. A 102, 052229 (2020).
Guiding Dirac fermions in graphene with a carbon nanotube. A Cheng, T Taniguchi, K Watanabe, P Kim, J.-D Pillet, 10.1103/PhysRevLett.123.216804Phys. Rev. Lett. 123216804A. Cheng, T. Taniguchi, K. Watanabe, P. Kim, and J.-D. Pillet, Guiding Dirac fermions in graphene with a carbon nanotube, Phys. Rev. Lett. 123, 216804 (2019).
Coulomb drag between a carbon nanotube and monolayer graphene. L Anderson, A Cheng, T Taniguchi, K Watanabe, P Kim, 10.1103/PhysRevLett.127.257701Phys. Rev. Lett. 127257701L. Anderson, A. Cheng, T. Taniguchi, K. Watanabe, and P. Kim, Coulomb drag between a carbon nanotube and monolayer graphene, Phys. Rev. Lett. 127, 257701 (2021).
The electronic transport efficiency of a graphene charge carrier guider and an Aharanov-Bohm interferometer. X Wei, W.-J Zhang, S.-G Cheng, 10.1088/1361-648x/aae9d3J. Phys.: Condens. Matter. 30485302X. Wei, W.-J. Zhang, and S.-G. Cheng, The electronic trans- port efficiency of a graphene charge carrier guider and an Aha- ranov-Bohm interferometer, J. Phys.: Condens. Matter 30, 485302 (2018).
Theory of carrier density in multigated doped graphene sheets with quantum correction. M.-H Liu, 10.1103/PhysRevB.87.125427Phys. Rev. B. 87125427M.-H. Liu, Theory of carrier density in multigated doped graphene sheets with quantum correction, Phys. Rev. B 87, 125427 (2013).
Fabry-Pérot interference in gapped bilayer graphene with broken anti-Klein tunneling. A Varlet, M.-H Liu, V Krueckl, D Bischoff, P Simonet, K Watanabe, T Taniguchi, K Richter, K Ensslin, T Ihn, 10.1103/PhysRevLett.113.116601Phys. Rev. Lett. 113116601A. Varlet, M.-H. Liu, V. Krueckl, D. Bischoff, P. Simonet, K. Watanabe, T. Taniguchi, K. Richter, K. Ensslin, and T. Ihn, Fabry-Pérot interference in gapped bilayer graphene with bro- ken anti-Klein tunneling, Phys. Rev. Lett. 113, 116601 (2014).
A Mreńca-Kolasińska, P Rickhaus, G Zheng, K Richter, T Ihn, K Ensslin, M.-H Liu, 10.1088/2053-1583/ac5536Quantum capacitive coupling between large-angle twisted graphene layers, 2D Materials. 925013A. Mreńca-Kolasińska, P. Rickhaus, G. Zheng, K. Richter, T. Ihn, K. Ensslin, and M.-H. Liu, Quantum capacitive coupling between large-angle twisted graphene layers, 2D Materials 9, 025013 (2022).
Electronic Transport in Mesoscopic Systems. S Datta, Cambridge University PressCambridgeS. Datta, Electronic Transport in Mesoscopic Systems (Cam- bridge University Press, Cambridge, 1995).
Interference features in scanning gate conductance maps of quantum point contacts with disorder. K Kolasiński, B Szafran, B Brun, H Sellier, 10.1103/PhysRevB.94.075301Phys. Rev. B. 9475301K. Kolasiński, B. Szafran, B. Brun, and H. Sellier, Interference features in scanning gate conductance maps of quantum point contacts with disorder, Phys. Rev. B 94, 075301 (2016).
Kwant: a software package for quantum transport. C W Groth, M Wimmer, A R Akhmerov, X Waintal, 10.1088/1367-2630/16/6/063065New. J. Phys. 1663065C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Wain- tal, Kwant: a software package for quantum transport, New. J. Phys. 16, 063065 (2014).
Dielectric breakdown in single-crystal hexagonal boron nitride. A Ranjan, N Raghavan, M Holwill, K Watanabe, T Taniguchi, K S Novoselov, K L Pey, S J O'shea, 10.1021/acsaelm.1c00469ACS Applied Electronic Materials. 33547A. Ranjan, N. Raghavan, M. Holwill, K. Watanabe, T. Taniguchi, K. S. Novoselov, K. L. Pey, and S. J. O'Shea, Dielectric breakdown in single-crystal hexagonal boron nitride, ACS Applied Electronic Materials 3, 3547 (2021).
Quantized conductance of point contacts in a twodimensional electron gas. B J Van Wees, H Van Houten, C W J Beenakker, J G Williamson, L P Kouwenhoven, D Van Der Marel, C T Foxon, 10.1103/PhysRevLett.60.848Phys. Rev. Lett. 60848B. J. van Wees, H. van Houten, C. W. J. Beenakker, J. G. Williamson, L. P. Kouwenhoven, D. van der Marel, and C. T. Foxon, Quantized conductance of point contacts in a two- dimensional electron gas, Phys. Rev. Lett. 60, 848 (1988).
Gate-defined quantum point contact in an InSb two-dimensional electron gas. Z Lei, C A Lehner, E Cheah, C Mittag, M Karalic, W Wegscheider, K Ensslin, T Ihn, 10.1103/PhysRevResearch.3.023042Phys. Rev. Research. 323042Z. Lei, C. A. Lehner, E. Cheah, C. Mittag, M. Karalic, W. Wegscheider, K. Ensslin, and T. Ihn, Gate-defined quantum point contact in an InSb two-dimensional electron gas, Phys. Rev. Research 3, 023042 (2021).
The electronic properties of bilayer graphene. E Mccann, M Koshino, 10.1088/0034-4885/76/5/056503Rep. Progr. Phys. 7656503E. McCann and M. Koshino, The electronic properties of bi- layer graphene, Rep. Progr. Phys. 76, 056503 (2013).
Valley subband splitting in bilayer graphene quantum point contacts. R Kraft, I V Krainov, V Gall, A P Dmitriev, R Krupke, I V Gornyi, R Danneau, 10.1103/PhysRevLett.121.257703Phys. Rev. Lett. 121257703R. Kraft, I. V. Krainov, V. Gall, A. P. Dmitriev, R. Krupke, I. V. Gornyi, and R. Danneau, Valley subband splitting in bi- layer graphene quantum point contacts, Phys. Rev. Lett. 121, 257703 (2018).
Electrostatically induced quantum point contacts in bilayer graphene. H Overweg, H Eggimann, X Chen, S Slizovskiy, M Eich, R Pisoni, Y Lee, P Rickhaus, K Watanabe, T Taniguchi, V Fal'ko, T Ihn, K Ensslin, 10.1021/acs.nanolett.7b04666Nano Lett. 18553H. Overweg, H. Eggimann, X. Chen, S. Slizovskiy, M. Eich, R. Pisoni, Y. Lee, P. Rickhaus, K. Watanabe, T. Taniguchi, V. Fal'ko, T. Ihn, and K. Ensslin, Electrostatically induced quantum point contacts in bilayer graphene, Nano Lett. 18, 553 (2018).
B Deng, B Wang, N Li, R Li, Y Wang, J Tang, Q Fu, Z Tian, P Gao, J Xue, H Peng, 10.1021/acsnano.9b07091Interlayer decoupling in 30°twisted bilayer graphene quasicrystal. 141656B. Deng, B. Wang, N. Li, R. Li, Y. Wang, J. Tang, Q. Fu, Z. Tian, P. Gao, J. Xue, and H. Peng, Interlayer decoupling in 30°twisted bilayer graphene quasicrystal, ACS Nano 14, 1656 (2020).
30°-twisted bilayer graphene quasicrystals from chemical vapor deposition. S Pezzini, V Mišeikis, G Piccinini, S Forti, S Pace, R Engelke, F Rossella, K Watanabe, T Taniguchi, P Kim, C Coletti, 10.1021/acs.nanolett.0c00172Nano Lett. 203313S. Pezzini, V. Mišeikis, G. Piccinini, S. Forti, S. Pace, R. En- gelke, F. Rossella, K. Watanabe, T. Taniguchi, P. Kim, and C. Coletti, 30°-twisted bilayer graphene quasicrystals from chemical vapor deposition, Nano Lett. 20, 3313 (2020).
Parallel transport and layer-resolved thermodynamic measurements in twisted bilayer graphene. G Piccinini, V Mišeikis, K Watanabe, T Taniguchi, C Coletti, S Pezzini, 10.1103/PhysRevB.104.L241410Phys. Rev. B. 104241410G. Piccinini, V. Mišeikis, K. Watanabe, T. Taniguchi, C. Coletti, and S. Pezzini, Parallel transport and layer-resolved thermody- namic measurements in twisted bilayer graphene, Phys. Rev. B 104, L241410 (2021).
The electronic thickness of graphene. P Rickhaus, M.-H Liu, M Kurpas, A Kurzmann, Y Lee, H Overweg, M Eich, R Pisoni, T Taniguchi, K Watanabe, K Richter, K Ensslin, T Ihn, 10.1126/sciadv.aay8409Sci. Adv. 68409P. Rickhaus, M.-H. Liu, M. Kurpas, A. Kurzmann, Y. Lee, H. Overweg, M. Eich, R. Pisoni, T. Taniguchi, K. Watanabe, K. Richter, K. Ensslin, and T. Ihn, The electronic thickness of graphene, Sci. Adv. 6, eaay8409 (2020).
Exploiting Aharonov-Bohm oscillations to probe Klein tunneling in tunable pn-junctions in graphene. J Dauber, K J A Reijnders, L Banszerus, A Epping, K Watanabe, T Taniguchi, M I Katsnelson, F Hassler, C Stampfer, arXiv:2008.02556cond-mat.mes-hallJ. Dauber, K. J. A. Reijnders, L. Banszerus, A. Epping, K. Watanabe, T. Taniguchi, M. I. Katsnelson, F. Hassler, and C. Stampfer, Exploiting Aharonov-Bohm oscillations to probe Klein tunneling in tunable pn-junctions in graphene (2021), arXiv:2008.02556 [cond-mat.mes-hall].
The Aharonov-Bohm effect in a sidegated graphene ring. M Huefner, F Molitor, A Jacobsen, A Pioda, C Stampfer, K Ensslin, T Ihn, 10.1088/1367-2630/12/4/043054New. J. Phys. 1243054M. Huefner, F. Molitor, A. Jacobsen, A. Pioda, C. Stampfer, K. Ensslin, and T. Ihn, The Aharonov-Bohm effect in a side- gated graphene ring, New. J. Phys. 12, 043054 (2010).
Aharonov-Bohm oscillations and magnetic focusing in ballistic graphene rings. J Dauber, M Oellers, F Venn, A Epping, K Watanabe, T Taniguchi, F Hassler, C Stampfer, 10.1103/PhysRevB.96.205407Phys. Rev. B. 96205407J. Dauber, M. Oellers, F. Venn, A. Epping, K. Watanabe, T. Taniguchi, F. Hassler, and C. Stampfer, Aharonov-Bohm os- cillations and magnetic focusing in ballistic graphene rings, Phys. Rev. B 96, 205407 (2017).
Valley-isospin dependence of the quantum Hall effect in a graphene p−n junction. J Tworzydło, I Snyman, A R Akhmerov, C W J Beenakker, 10.1103/PhysRevB.76.035411Phys. Rev. B. 7635411J. Tworzydło, I. Snyman, A. R. Akhmerov, and C. W. J. Beenakker, Valley-isospin dependence of the quantum Hall ef- fect in a graphene p−n junction, Phys. Rev. B 76, 035411 (2007).
Edge-channel interferometer at the graphene quantum Hall pn junction. S Morikawa, S Masubuchi, R Moriya, K Watanabe, T Taniguchi, T Machida, 10.1063/1.4919380Applied Physics Letters. 106183101S. Morikawa, S. Masubuchi, R. Moriya, K. Watanabe, T. Taniguchi, and T. Machida, Edge-channel interferometer at the graphene quantum Hall pn junction, Applied Physics Let- ters 106, 183101 (2015).
Aharonov-Bohm interferometer based on n − p junctions in graphene nanoribbons. A Mreńca-Kolasińska, S Heun, B Szafran, 10.1103/PhysRevB.93.125411Phys. Rev. B. 93125411A. Mreńca-Kolasińska, S. Heun, and B. Szafran, Aharonov- Bohm interferometer based on n − p junctions in graphene nanoribbons, Phys. Rev. B 93, 125411 (2016).
Giant valley-isospin conductance oscillations in ballistic graphene. C Handschin, P Makk, P Rickhaus, R Maurand, K Watanabe, T Taniguchi, K Richter, M.-H Liu, C Schönenberger, 10.1021/acs.nanolett.7b01964Nano Letters. 175389C. Handschin, P. Makk, P. Rickhaus, R. Maurand, K. Watan- abe, T. Taniguchi, K. Richter, M.-H. Liu, and C. Schönenberger, Giant valley-isospin conductance oscillations in ballistic graphene, Nano Letters 17, 5389 (2017).
Carbon nanotubes: nanomechanics, manipulation, and electronic devices. P Avouris, T Hertel, R Martel, T Schmidt, H Shea, R Walkup, 10.1016/S0169-4332(98)00506-6Applied Surface Science. 141201P. Avouris, T. Hertel, R. Martel, T. Schmidt, H. Shea, and R. Walkup, Carbon nanotubes: nanomechanics, manipula- tion, and electronic devices, Applied Surface Science 141, 201 (1999).
Electron collimation at van der Waals domain walls in bilayer graphene. H M Abdullah, D R Da Costa, H Bahlouli, A Chaves, F M Peeters, B Van Duppen, 10.1103/PhysRevB.100.045137Phys. Rev. B. 10045137H. M. Abdullah, D. R. da Costa, H. Bahlouli, A. Chaves, F. M. Peeters, and B. Van Duppen, Electron collimation at van der Waals domain walls in bilayer graphene, Phys. Rev. B 100, 045137 (2019).
Point contacts in encapsulated graphene. C Handschin, B Fülöp, P Makk, S Blanter, M Weiss, K Watanabe, T Taniguchi, S Csonka, C Schönenberger, 10.1063/1.4935032Applied Physics Letters. 107183108C. Handschin, B. Fülöp, P. Makk, S. Blanter, M. Weiss, K. Watanabe, T. Taniguchi, S. Csonka, and C. Schönenberger, Point contacts in encapsulated graphene, Applied Physics Let- ters 107, 183108 (2015).
Die Reflexion von Elektronen an einem Potentialsprung nach der relativistischen Dynamik von Dirac. O Klein, 10.1007/BF01339716Zeitschrift für Physik. 53157O. Klein, Die Reflexion von Elektronen an einem Potential- sprung nach der relativistischen Dynamik von Dirac, Zeitschrift für Physik 53, 157 (1929).
Chiral tunnelling and the Klein paradox in graphene. M I Katsnelson, K S Novoselov, A K Geim, 10.1038/nphys384Nature Physics. 2620M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Chiral tun- nelling and the Klein paradox in graphene, Nature Physics 2, 620 (2006).
Scanning tunnelling microscopy and spectroscopy of ultra-flat graphene on hexagonal boron nitride. J Xue, J Sanchez-Yamagishi, D Bulmash, P Jacquod, A Deshpande, K Watanabe, T Taniguchi, P Jarillo-Herrero, B J Leroy, 10.1038/nmat2968Nature Materials. 10282J. Xue, J. Sanchez-Yamagishi, D. Bulmash, P. Jacquod, A. Deshpande, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero, and B. J. LeRoy, Scanning tunnelling microscopy and spec- troscopy of ultra-flat graphene on hexagonal boron nitride, Na- ture Materials 10, 282 (2011).
Layerby-layer dielectric breakdown of hexagonal boron nitride. Y Hattori, T Taniguchi, K Watanabe, K Nagashio, 10.1021/nn506645qACS Nano. 9916Y. Hattori, T. Taniguchi, K. Watanabe, and K. Nagashio, Layer- by-layer dielectric breakdown of hexagonal boron nitride, ACS Nano 9, 916 (2015).
Evaluating the Performance of ANN Prediction System at Shanghai Stock Market in the Period 21-Sep-2016 to 11-Oct-2016

Barack Wanjawa ([email protected]) and Wamkaya
School of Computing and Informatics, University of Nairobi, Kenya

Keywords: ANN, Neural Networks, Prediction, Shanghai Stock Exchange

Abstract: This research evaluates the performance of an Artificial Neural Network based prediction system that was employed on the Shanghai Stock Exchange for the period 21-Sep-2016 to 11-Oct-2016. It is a follow-up to a previous paper in which the prices were predicted and published before September 21. Stock market price prediction remains an important quest for investors and researchers. This research used an Artificial Intelligence system, being an Artificial Neural Network that is a feedforward multi-layer perceptron with error backpropagation for prediction, unlike other methods such as technical, fundamental or time series analysis. While these alternative methods tend to guide on trends and not the exact likely prices, neural networks on the other hand have the ability to predict the real value prices, as was done in this research. Nonetheless, determination of suitable network parameters remains a challenge in neural network design, with this research settling on a configuration of 5:21:21:1 with 80% training data, or 4 years of training data, as a good enough model for stock prediction, as already determined in a previous research by the author. The comparative results indicate that neural networks can predict typical stock market prices with mean absolute percentage errors that are as low as 1.95% over the ten prediction instances that were studied in this research.

(arXiv:1612.02666, https://arxiv.org/pdf/1612.02666v1.pdf)
INTRODUCTION
Stock markets, and trade therein, are important for the economies of many countries.
Stock trading is an investment in the financial sector, and hence affects how comfortable both local and international investors are in investing in that economy. It has been observed that stock markets react to both local and global events. In the stock market, trade in equity (stocks) tends to be quite active due to the generally low entry value of trade, unlike other instruments such as bonds, which tend to have a relatively high entry value.
Traders at the stock exchange usually seek to maximize their investment by buying at low prices and selling at higher prices to make a profit (capital gain). Investors can also benefit from dividend payments on their stock holdings. The quest for the best deal at the stock market has led investors to seek some form of prediction method. Determining the optimum ANN configuration, however, remains a challenge, since each domain requires a suitable selection of parameters for the task, e.g. data size and partitioning into training and testing sets, number of training cycles, number and size of nodes (input, hidden, output), network type, etc. These parameters, as applicable to stock market prediction, were determined in previous research by the author (Wanjawa et al., 2014) as 5:21:21:1 with 80% of the data for training (4-year data).
This network is still considered deep, though it has only 2 hidden layers (DL4J, 2016).
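As an illustration of this configuration, the sketch below builds a 5:21:21:1 feedforward network trained by error backpropagation on a sliding window of five previous prices. It is a minimal sketch only: scikit-learn's MLPRegressor stands in for the original system, the sigmoid activation and the input file close_prices.txt are assumptions, and the 80% split follows the training fraction quoted above.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(prices, window=5):
    """Slide a window over the price series: 5 past prices -> the next price."""
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = np.array(prices[window:])
    return X, y

prices = np.loadtxt("close_prices.txt")            # hypothetical input file
X, y = make_windows(prices, window=5)
split = int(0.8 * len(X))                          # 80% of the data for training

model = MLPRegressor(hidden_layer_sizes=(21, 21),  # two hidden layers of 21 nodes
                     activation="logistic",        # assumption: sigmoid units
                     max_iter=5000)
model.fit(X[:split], y[:split])                    # error backpropagation training
predictions = model.predict(X[split:])             # next-day price predictions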
Problem Statement:
How does an ANN-based prediction system perform, based on a 3-week advance prediction that it generates prior to the trades, as tested on some chosen stocks at the Shanghai Stock Exchange (SSE)?
THE ANN-BASED PREDICTION SYSTEM
The details of the model used, how it works, and the experimental setup are already contained in a previous research (Wanjawa, 2016). A 4-year data set, i.e. January 1, 2012 to December 31, 2015, was used for training the prediction system, while 2016 data was used for testing and for confirming actual predictions. The experiment used 5 previous prices to predict the sixth price, and continued with such a sliding-window method to predict all the prices in the prediction period. The ANN structure is reproduced in Fig. 1 below.

The error (%) is calculated by comparing the predicted price against the 'close' price, i.e. the average trade price of the day. We observe that for stock 600010, the error swings between a low of -1.09% and a high of -4.26%. All the predictions were lower than the respective average trade price in the period under review. The error is nevertheless lower than 5% in all cases, with a mean absolute percentage error (MAPE) of 2.82% over the 10 prediction instances. These results are shown graphically in Fig. 2 below.

Fig. 2 - Comparing Actual vs. Predicted for Stock 600010

For stock 600015, the error swings between a low of -0.50% and a high of +5.67%. One prediction is spot on (28-Sep-2016), two are lower than the close value, while the rest are predicted higher than the average close value. The error is lower than 6% in all cases. The MAPE over the prediction range was 2.29%. These results are shown in Fig. 3 below.

Fig. 3 - Comparing Actual vs. Predicted for Stock 600015

Fig. 4 - Comparing Actual vs. Predicted for Stock 600016

The results for stock 600028 show that the prediction errors ranged between -0.40% and +3.35%. It is also quite interesting that the prediction system predicted the same figure over the duration of the prediction. The error was lower than 3.4% in all cases, and the MAPE obtained for this stock was the lowest of all seven tested stocks, at 1.95% over the ten predictions.

For stock 600031, the prediction error swings between -0.92% and -9.29%. All predictions were lower than the average closing price. The prediction error was less than 9.3% in the prediction period, with a MAPE of 5.00%.

Looking at the data in Table 7, we note that the error swings between a low of -4.91% and a high of -10.74% for stock 600064. This was the single stock where the predicted values tended to be somewhat far from the actual average closing price, with all predictions being lower in value than the actual trades. One of the predicted prices had an error of over 10%. This prediction range also realized the highest error (MAPE) of 7.67%.

At the SSE, a 10% price movement is the maximum allowed on any day of trade, based on the previous day's closing price. This means that the prediction system predicted prices that were practically tradable at the SSE. However, the traders determine the actual price swings on any given day, and this swing can be very low, e.g. sometimes even 0% (constant price). It is therefore essential that the prediction system not only conforms to the maximum-swing rule, but is also as close to the actual average market price as possible, i.e. the actual supply and demand prices. These supply/demand prices do not necessarily approach the 10% swing; for stock 600010, for example, the swing observed was between 0% and 1.8% in the period under consideration. Additionally, the prediction system should be able to track the trend of the price movement itself (up and down over time). The ANN prediction system was not only able to predict with low error (absolute error on any trade day and MAPE over the prediction period), but also provided an acceptable price trend movement.
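To make the error bookkeeping explicit, the short sketch below computes the per-day percentage error against the 'close' price and the MAPE over a set of prediction instances. The arrays shown simply reuse the stock 600010 values from Table 2 (the eight September trading days) as a worked example.

import numpy as np

def percentage_errors(predicted, close):
    """Per-day error (%): predicted price compared against the 'close' price."""
    predicted, close = np.asarray(predicted, float), np.asarray(close, float)
    return 100.0 * (predicted - close) / close

def mape(predicted, close):
    """Mean absolute percentage error over the prediction instances."""
    return np.mean(np.abs(percentage_errors(predicted, close)))

pred  = [2.70, 2.71, 2.72, 2.72, 2.72, 2.71, 2.71, 2.70]   # predicted (Table 2)
close = [2.81, 2.80, 2.80, 2.75, 2.77, 2.75, 2.79, 2.78]   # actual close (Table 2)
print(percentage_errors(pred, close).round(2))  # ~[-3.91, -3.21, -2.86, -1.09, ...]
print(round(mape(pred, close), 2))              # MAPE over these instances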
CONCLUSION
This research suggests that Artificial Neural Network (ANN) prediction systems can achieve high prediction accuracy in typical application domains such as stock market price prediction, owing to their ability to learn from training data. After learning, the ANN system can apply this knowledge to new data it has not been exposed to in such a regression task. A carefully designed ANN system, such as one of configuration
to guide their buy or sell decisions. Methods such as fundamental, technical and time series analysis are already being employed (Chen et al., 2007, Deng et al., 2011, Huang et al., 2011, Neto et al., 2009, Zhang et al., 2008), though artificial intelligence (AI) methods can also be used in prediction, with Artificial Neural Networks (ANNs) being the most preferred AI technology (NeuroAI, 2016).
Fig. 1 - ANN model for Stock Market Prediction (Source: Wanjawa et al., 2014)
Seven stocks were selected from the Shanghai Stock Exchange (SSE) by sampling
from the numerical listing of stocks in that bourse in the stock reference range 600000
to 600100. These 7 chosen stocks are shown in Table 1 below:
Table 1 - List of Tested Stocks from SSE (Source: SSE, 2016)

Code    Short name  Short name     Full name
600010  包钢股份     BSU            Inner Mongolia BaoTou Steel Union Co., Ltd.
600015  华夏银行     HUAXIA BANK    HUA XIA BANK CO., Limited
600016  民生银行     CMBC           CHINA MINSHENG BANK
600028  中国石化     Sinopec Corp.  China Petroleum and Chemical Corporation
600031  三一重工     SANY           SANY HEAVY INDUSTRY CO., LTD
600064  南京高科     NJGK           NANJING GAOKE COMPANY LIMITED
600089  特变电工     TBEA           TBEA CO., LTD.
The previous research (Wanjawa, 2016) already published the prediction for the 15-day
period from 21-Sep-2016 to 11-Oct-2016. All predicted values were generated eight
trading days before September 21, 2016 i.e. on Sep. 12, 2016.
3.0 RESULTS
Since the previous research already published the predictions (Wanjawa, 2016), the
results shown here are a comparative analysis between the predictions and the actual
trades that were published by Shanghai Stock Exchange. The data used was obtained
from Yahoo Finance for the 7 stocks considered i.e. 600010 (BSU, 2016), 600015
(HUAXIA BANK, 2016), 600016 (CMBC, 2016), 600028 (SINOPEC Corp., 2016),
600031 (SANY, 2016), 600064 (NJGK, 2016) and 600089 (TBEA, 2016).
It is worth noting that trade did not take place at SSE in the period Oct. 1 to Oct. 7,
2016, since this was a holiday in China (SSE, 2016b). Since the ANN model is a next
day prediction system, the predictions for Oct. 3 & Oct. 4 are used for Oct. 10 and Oct.
11, since these are the 'next days' of trade after the Sep. 30 trade. The comparative
results for the seven chosen stocks are shown in
Table 2 to Table 8 below.

Table 2 - Comparing Actual and Predicted Prices for SSE Stock 600010
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   2.70       2.78   2.87   2.78   2.81   -3.91%
22-Sep-16   2.71       2.82   2.83   2.79   2.80   -3.21%
23-Sep-16   2.72       2.81   2.82   2.79   2.80   -2.86%
26-Sep-16   2.72       2.80   2.80   2.74   2.75   -1.09%
27-Sep-16   2.72       2.75   2.77   2.74   2.77   -1.81%
28-Sep-16   2.71       2.77   2.77   2.74   2.75   -1.45%
29-Sep-16   2.71       2.75   2.80   2.75   2.79   -2.87%
30-Sep-16   2.70       2.78   2.79   2.77   2.78   -2.88%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
Table 3 - Comparing Actual and Predicted Prices for SSE Stock 600015
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   10.03      10.05  10.09  10.03  10.08  -0.50%
22-Sep-16   10.48      10.10  10.18  10.08  10.16  3.15%
23-Sep-16   10.33      10.16  10.18  10.14  10.14  1.87%
26-Sep-16   10.55      10.13  10.13  10.02  10.04  5.08%
27-Sep-16   10.48      10.02  10.10  9.98   10.08  3.97%
28-Sep-16   10.03      10.09  10.09  10.01  10.03  0.00%
29-Sep-16   10.25      10.03  10.10  10.03  10.06  1.89%
30-Sep-16   10.62      10.04  10.08  10.02  10.05  5.67%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
Table 4 - Comparing Actual and Predicted Prices for SSE Stock 600016
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   9.07       9.33   9.34   9.30   9.33   -2.79%
22-Sep-16   8.89       9.35   9.40   9.32   9.37   -5.12%
23-Sep-16   8.70       9.38   9.39   9.34   9.38   -7.25%
26-Sep-16   8.63       9.28   9.35   9.25   9.31   -7.30%
27-Sep-16   8.69       9.30   9.32   9.26   9.31   -6.66%
28-Sep-16   8.80       9.31   9.31   9.16   9.20   -4.35%
29-Sep-16   8.79       9.21   9.25   9.20   9.23   -4.77%
30-Sep-16   8.69       9.23   9.29   9.23   9.26   -6.16%
10-Oct-16   8.59       9.26   9.36   9.26   9.33   -7.93%
11-Oct-16   8.49       9.32   9.34   9.28   9.31   -8.81%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)

The Table 4 results for stock 600016 indicate that the prediction error ranges between -2.79% and -8.81%, with a MAPE of 6.11%. All predictions tended to be lower than the actual average close for the respective date of trade. The results are shown graphically in Fig. 4.
Table 5 - Comparing Actual and Predicted Prices for SSE Stock 600028
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   4.93       4.78   4.79   4.74   4.78   3.14%
22-Sep-16   4.93       4.79   4.86   4.78   4.84   1.86%
23-Sep-16   4.93       4.85   4.87   4.84   4.85   1.65%
26-Sep-16   4.93       4.84   4.85   4.78   4.80   2.71%
27-Sep-16   4.93       4.79   4.82   4.77   4.81   2.49%
28-Sep-16   4.93       4.80   4.80   4.76   4.77   3.35%
29-Sep-16   4.93       4.81   4.86   4.80   4.85   1.65%
30-Sep-16   4.93       4.83   4.86   4.82   4.86   1.44%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
Table 6 - Comparing Actual and Predicted Prices for SSE Stock 600031
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   5.08       5.63   5.68   5.60   5.60   -9.29%
22-Sep-16   5.17       5.61   5.65   5.58   5.60   -7.68%
23-Sep-16   5.23       5.60   5.65   5.59   5.60   -6.61%
26-Sep-16   5.40       5.58   5.65   5.44   5.45   -0.92%
27-Sep-16   5.38       5.45   5.48   5.37   5.45   -1.28%
28-Sep-16   5.28       5.47   5.48   5.40   5.41   -2.40%
29-Sep-16   5.10       5.41   5.47   5.40   5.43   -6.08%
30-Sep-16   5.15       5.41   5.48   5.41   5.47   -5.85%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
Table 7 - Comparing Actual and Predicted Prices for SSE Stock 600064
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   16.20      17.17  17.22  17.00  17.17  -5.65%
22-Sep-16   16.08      17.24  17.49  17.17  17.18  -6.40%
23-Sep-16   15.98      17.20  17.36  17.15  17.17  -6.93%
26-Sep-16   15.87      17.10  17.11  16.67  16.69  -4.91%
27-Sep-16   15.78      16.68  16.96  16.61  16.95  -6.90%
28-Sep-16   15.72      17.02  17.26  16.95  17.15  -8.34%
29-Sep-16   15.67      17.15  17.35  17.10  17.27  -9.26%
30-Sep-16   15.63      17.25  17.55  17.25  17.51  -10.74%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
Table 8 - Comparing Actual and Predicted Prices for SSE Stock 600089
Date        Predicted  Open   High   Low    Close  Error
21-Sep-16   8.91       8.82   8.85   8.80   8.83   0.91%
22-Sep-16   8.89       8.87   8.91   8.82   8.84   0.57%
23-Sep-16   8.89       8.84   8.85   8.77   8.79   1.14%
26-Sep-16   8.89       8.78   8.79   8.66   8.67   2.54%
27-Sep-16   8.88       8.63   8.66   8.56   8.65   2.66%
28-Sep-16   8.88       8.67   8.67   8.55   8.58   3.50%
29-Sep-16   8.87       8.63   8.63   8.58   8.59   3.26%
30-Sep-16   8.87       8.60   8.64   8.58   8.62   2.90%
(03-Oct-16 to 07-Oct-16: SSE was closed, due to holiday.)
5:21:21:1 with 4-year training data, can be practical for prediction. This configuration was carefully chosen in previous research and is proving applicable to the prediction of typical stock exchange trades such as those of the Shanghai Stock Exchange (SSE). In this research, the ANN system was trained on the 4-year data for the period 2012 to 2015, then used to predict stock prices for trades done in 2016. The research also predicted the prices of seven (7) stocks of the SSE for the period Sep. 12 to Sep. 20 in advance (as at Sep. 12, 2016), then went on to continue the prediction for values from Sep. 21 all the way to Oct. 11, 2016. All these predictions were published by Sep. 12, 2016, and it was a matter of waiting to see whether the predictions would turn out true on the respective dates of trade.

Considering the period Sep. 21 to Oct. 11, 2016, the results show that the ANN-based prediction system was able to predict within the 10% price swing limit for all the 7 stocks in consideration (70 values), as dictated by one of the trading rules at the bourse, i.e. allowing only a maximum of 10% difference between the previous close and the current day's trading values. The individual daily errors were also quite narrow, e.g. between +0.91% and +3.50% for Stock 600089 over the 10 predicted instances.

This research confirms that a fairly simple deep ANN configuration of only 2 hidden layers can achieve good prediction results, based on the mean absolute percentage error (MAPE) realized over the prediction period for the seven test stocks, which ranged from 1.95% for Stock 600028 to 7.67% for Stock 600064 over the 10 predictions for each stock. This fairly simple design suggests that deep networks need not be complex, but should be carefully chosen and tuned experimentally depending on the task at hand.

The future of deep ANN remains promising, especially for prediction in the stock market domain. More work needs to be done in studying the effects of deepening the network beyond the 2 layers, tweaking the number of nodes, and even incorporating
other stock exchange parameters, such as traded volumes and market sentiment, into the prediction system, with a view to seeing whether the predictions obtained show significant improvements over the current system in terms of absolute daily predictions and the MAPE over a range of predictions.
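As a check of the swing-limit claim above, the sketch below tests whether each predicted price lies within +/-10% of the previous trading day's close. The sample arrays reuse the stock 600015 predictions for 22-30 Sep from Table 3 together with the prior day's close values; any other stock could be checked the same way.

import numpy as np

def within_swing_limit(predicted, previous_close, limit=0.10):
    """True where a predicted price is within +/-10% of the previous day's close."""
    predicted = np.asarray(predicted, dtype=float)
    previous_close = np.asarray(previous_close, dtype=float)
    return np.abs(predicted - previous_close) <= limit * previous_close

pred       = [10.48, 10.33, 10.55, 10.48, 10.03, 10.25, 10.62]   # Table 3, 22-30 Sep
prev_close = [10.08, 10.16, 10.14, 10.04, 10.08, 10.03, 10.06]   # prior day's close
print(within_swing_limit(pred, prev_close).all())                # True: all tradable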
BSU (600010.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600010.SS/history?p=600010.SS
CMBC (600016.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600016.SS/history?p=600016.SS
Chen, Y., & Cheng, C. (2007). Forecasting Revenue Growth Rate Using Fundamental Analysis: A Feature Selection Based Rough Sets Approach. Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2007), 3, 151-155.
Deng, S., Mitsubuchi, T., Shioda, K., Shimada, T., & Sakurai, A. (2011). Combining Technical Analysis with Sentiment Analysis for Stock Price Prediction. 2011 IEEE Ninth International Conference on Dependable, Autonomic and Secure Computing, 800-807.
DL4J. (2016). Introduction to Deep Neural Networks. Retrieved Sep. 9, 2016, from https://deeplearning4j.org/neuralnet-overview.html
Huang, C., Chen, P., & Pan, W. (2011). Using Multi-Stage Data Mining Technique to Build Forecast Model for Taiwan Stocks. Neural Computing and Applications, 21(8), 2057-2063.
HUAXIA BANK (600015.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600015.SS/history?p=600015.SS
Neto, M., Calvalcanti, G., & Ren, T. (2009). Financial Time Series Prediction Using Exogenous Series and Combined Neural Networks. Proceedings of International Joint Conference on Neural Networks, June 14-19, Atlanta, Georgia.
NeuroAI. (2016). Stock market prediction. Retrieved Sep. 9, 2016, from http://www.learnartificialneuralnetworks.com/stockmarketprediction.html
NJGK (600064.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600064.SS/history?p=600064.SS
SANY (600031.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600031.SS/history?p=600031.SS
SINOPEC Corp. (600028.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600028.SS/history?p=600028.SS
Shanghai Stock Exchange. (2015). Trading Rules of Shanghai Stock Exchange. Downloaded Nov. 25, 2016, from http://english.sse.com.cn/tradmembership/rules/c/3977570.pdf
SSE. (2016). Shanghai Stock Exchange. Retrieved Sep. 9, 2016, from http://english.sse.com.cn/products/equities/overview/
SSE. (2016b). Shanghai Stock Exchange Trading Schedule. Retrieved Nov. 25, 2016, from http://english.sse.com.cn/tradmembership/schedule/
SSE. (2016c). Shanghai Stock Exchange Trading Rules of A-share Market. Retrieved Nov. 25, 2016, from http://english.sse.com.cn/tradmembership/schedule/
TBEA (600089.SS). (2016). Historical Data. Yahoo Finance. Retrieved Oct. 25, 2016, from http://finance.yahoo.com/quote/600089.SS/history?p=600089.SS
Wanjawa, B. W. (2016). Predicting Future Shanghai Stock Market Price using ANN in the Period 21-Sep-2016 to 11-Oct-2016. arXiv:1609.05394.
Wanjawa, B. W., & Muchemi, L. (2014). ANN Model to Predict Stock Prices at Stock Exchange Markets. arXiv:1502.06434.
Zhang, J., Chung, H. S., & Lo, W. (2008). Chaotic Time Series Prediction Using a Neuro-Fuzzy System with Time-Delay Coordinates. IEEE Transactions on Knowledge and Data Engineering, 20(7).
| []
|
[
"Interstellar Gas and a Dark Disk",
"Interstellar Gas and a Dark Disk"
]
| [
"Eric David Kramer \nDepartment of Physics\nHarvard University\n02138CambridgeMA\n",
"Lisa Randall \nDepartment of Physics\nHarvard University\n02138CambridgeMA\n"
]
| [
"Department of Physics\nHarvard University\n02138CambridgeMA",
"Department of Physics\nHarvard University\n02138CambridgeMA"
]
| []
| We introduce a potentially powerful method for constraining or discovering a thin dark matter disk in the Milky Way. The method relies on the relationship between the midplane densities and scale heights of interstellar gas being determined by the gravitational potential, which is sensitive to the presence of a dark disk. We show how to use the interstellar gas parameters to set a bound on a dark disk and discuss the constraints suggested by the current data. However, current measurements for these parameters are discordant, with the uncertainty in the constraint being dominated by the molecular hydrogen midplane density measurement, as well as by the atomic hydrogen velocity dispersion measurement. Magnetic fields and cosmic ray pressure, which are expected to play a role, are uncertain as well. The current models and data are inadequate to determine the disk's existence, but, taken at face value, may favor its existence depending on the gas parameters used. | 10.3847/0004-637x/829/2/126 | [
"https://arxiv.org/pdf/1603.03058v3.pdf"
]
| 119,087,614 | 1603.03058 | e3d1c631d15a57c0b1a1fde54c380a6e4e4ce647 |
Interstellar Gas and a Dark Disk
13 Jul 2016
Eric David Kramer
Department of Physics
Harvard University
02138CambridgeMA
Lisa Randall
Department of Physics
Harvard University
02138CambridgeMA
Interstellar Gas and a Dark Disk
13 Jul 2016
We introduce a potentially powerful method for constraining or discovering a thin dark matter disk in the Milky Way. The method relies on the relationship between the midplane densities and scale heights of interstellar gas being determined by the gravitational potential, which is sensitive to the presence of a dark disk. We show how to use the interstellar gas parameters to set a bound on a dark disk and discuss the constraints suggested by the current data. However, current measurements for these parameters are discordant, with the uncertainty in the constraint being dominated by the molecular hydrogen midplane density measurement, as well as by the atomic hydrogen velocity dispersion measurement. Magnetic fields and cosmic ray pressure, which are expected to play a role, are uncertain as well. The current models and data are inadequate to determine the disk's existence, but, taken at face value, may favor its existence depending on the gas parameters used.
Introduction
Fan, Katz, Randall, and Reece in 2013 proposed the existence of thin disks of dark matter in spiral galaxies including the Milky Way, in a model termed Double Disk Dark Matter (DDDM). In this model, a small fraction of the dark matter is interacting and dissipative, so that this sector of dark matter would cool and form a thin disk. More recently Randall & Reece (2014) showed that a dark matter disk of surface density ∼ 10 M ⊙ pc −2 and scale height ∼10 pc could possibly explain the periodicity of comet impacts on earth. It is of interest to know what values of dark disk surface density and scale height are allowed by the current data, and whether these particular values are allowed.
Since the original studies by Oort (1932, 1960), the question of disk dark matter has been a subject of controversy. Over the years, several authors have suggested a dark disk to explain various phenomena. Kalberla et al. (2007) proposed a thick dark disk as a way to explain the flaring of the interstellar gas layer. It has also been argued that a thick dark disk forms naturally in a ΛCDM cosmology as a consequence of satellite mergers (Read et al. 2008). Besides these, there are also models arguing for a thin dark disk. Fan et al. (2013) put forward a model for dark matter in which a small fraction of the total dark matter could be self-interacting and dissipative, necessarily forming a thin dark disk. In Kramer & Randall (2016) we investigated the constraints on such a disk from stellar kinematics. In this paper we investigate the constraint imposed by demanding consistency between measurements of the midplane densities and surface densities of interstellar gas.
We assume a Bahcall-type model for the vertical distributions of stars and gas (Bahcall 1984a,b,c) as in Kramer & Randall (2016), with various visible mass components, as well as a dark disk. We investigate the visible components in detail given more recent measurements of both the surface and midplane densities. A dark disk affects the relationship between the two as argued in Kramer & Randall and as we review below. Although current measurements are insufficiently reliable to place strong constraints on or identify a disk, we expect this method will be useful in the future when better measurements are achieved.
Poisson-Jeans Theory
As explained in detail in Kramer & Randall (2016), for an axisymmetric self-gravitating system, the vertical Jeans equation near the z = 0 plane reads
\frac{\partial}{\partial z}\left(\rho_i \sigma_i^2\right) + \rho_i \frac{\partial \Phi}{\partial z} = 0 .  (1)

For an isothermal population (\sigma_i(z) = \mathrm{const}), the solution reduces to

\rho_i(z) = \rho_i(0)\, e^{-\Phi(R,z)/\sigma_i^2} .  (2)

Combining this with the Poisson equation gives the Poisson-Jeans equation for the potential \Phi,

\frac{\partial^2 \Phi}{\partial z^2} = 4\pi G \sum_i \rho_i(0)\, e^{-\Phi/\sigma_i^2} ,  (3)

which can also be cast in integral form (assuming z-reflection symmetry)

\frac{\rho_i(z)}{\rho_i(0)} = \exp\!\left[ -\frac{4\pi G}{\sigma_i^2} \int_0^z dz' \int_0^{z'} dz'' \sum_k \rho_k(z'') \right] .  (4)
This is the form used in our Poisson-Jeans solver.
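A minimal numerical sketch of such a solver is given below: it solves Eq. (4) by fixed-point iteration on a uniform z grid, starting from the midplane densities. The two-component densities and dispersions in the example are placeholders (not the gas parameters derived later in the paper), and the simple damped update is only one of several ways to stabilize the iteration.

import numpy as np

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def poisson_jeans(rho0, sigma, z_max=2000.0, dz=1.0, n_iter=200):
    """Iterate Eq. (4): rho_i(z) = rho_i(0) exp(-Phi(z)/sigma_i^2)."""
    z = np.arange(0.0, z_max, dz)
    rho = np.array([r0 * np.ones_like(z) for r0 in rho0])       # initial guess
    for _ in range(n_iter):
        rho_tot = rho.sum(axis=0)
        inner = np.cumsum(rho_tot) * dz                          # inner integral over z''
        phi = 4.0 * np.pi * G * np.cumsum(inner) * dz            # outer integral -> Phi(z)
        rho_new = np.array([r0 * np.exp(-phi / s**2)
                            for r0, s in zip(rho0, sigma)])
        rho = 0.5 * rho + 0.5 * rho_new                          # damped update for stability
    return z, rho

# Illustrative run with two isothermal components (Msun/pc^3 and km/s placeholders).
z, rho = poisson_jeans(rho0=[0.02, 0.01], sigma=[4.0, 7.0])
print(rho[:, :3])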
A toy model
In Kramer & Randall (2016), we showed that the exact solution to the Poisson-Jeans equation for a thick component (in this case, the interstellar gas, which is thick compared to the dark disk) with midplane density \rho_0 and vertical dispersion \sigma, interacting with an infinitely thin (delta-function profile) dark disk, is

\rho(z) = \rho_0 \left(1 + Q^2\right) \,\mathrm{sech}^2\!\left[ \frac{\sqrt{1+Q^2}}{2h}\left(|z| + z_0\right) \right]  (5)

where

Q \equiv \frac{\Sigma_D}{4\rho_0 h} ,  (6)

h \equiv \frac{\sigma}{\sqrt{8\pi G \rho_0}} ,  (7)

and

z_0 \equiv \frac{2h}{\sqrt{1+Q^2}} \,\mathrm{arctanh}\!\left(\frac{Q}{\sqrt{1+Q^2}}\right) .  (8)
Thus, the effect of the dark disk is to 'pinch' the density distribution of the other components, as we can see in Figure 1.

[Figure 1 - Density profile ρ(z)/ρ(0) versus z/h for Σ_D = 0 and Σ_D = 4ρ(0)h.]

Although the scale height of the gas disk is proportional to its velocity dispersion according to Equation 7, a dark disk reduces the gas disk's thickness relative to this value; for a fixed midplane density ρ_i(0), this implies that the surface density Σ_i is less than it would be without the dark disk or any other mass component. In this approximation, the gas distribution has a cusp at the origin, but in general the dark disk will have a finite thickness and the solution will be smooth near z = 0. Integrating (5) gives the surface density of the visible component as

\Sigma_{\mathrm{vis}}(\Sigma_D) = \sqrt{\Sigma_{\mathrm{vis}}(0)^2 + \Sigma_D^2} \;-\; \Sigma_D  (9)
where Σ vis (0) ≡ 4ρ 0 h is what the surface density would have been without the dark disk. This expression is monotonically decreasing with Σ D .
Another way of explaining this is that the dark disk 'pinches' the visible matter disk, reducing its thickness H vis . Since the surface density of the visible disk scales roughly as Σ vis ∼ ρ vis H vis , the effect of the dark disk is to reduce the total surface density for a given midplane density ρ vis .
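As a quick numerical cross-check of this toy model (a sketch with arbitrary test values, not the paper's fitted parameters), the snippet below integrates the pinched profile of Eq. (5) and compares it with the closed form of Eq. (9).

import numpy as np
from scipy.integrate import quad

def surface_density(rho0, h, sigma_D):
    """Integrate the pinched sech^2 profile of Eq. (5) over all z."""
    Q = sigma_D / (4.0 * rho0 * h)
    z0 = 2.0 * h / np.sqrt(1.0 + Q**2) * np.arctanh(Q / np.sqrt(1.0 + Q**2))
    profile = lambda z: rho0 * (1 + Q**2) / np.cosh(
        np.sqrt(1 + Q**2) * (abs(z) + z0) / (2 * h))**2
    return quad(profile, -np.inf, np.inf)[0]

rho0, h, Sigma_D = 0.02, 100.0, 5.0                                # arbitrary test values
numeric = surface_density(rho0, h, Sigma_D)
closed_form = np.sqrt((4 * rho0 * h)**2 + Sigma_D**2) - Sigma_D    # Eq. (9)
print(numeric, closed_form)                                        # the two should agree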
Analysis
Here we explain how we compare the surface densities of the various gas components estimated in the next section to those predicted by their midplane densities and velocity dispersions in order to place self-consistency constraints on the mass model. Section 2.1 explains how the presence of the dark disk decreases the surface density of each component if the midplane density is held fixed (as it is in a Poisson-Jeans solver). Thus, given fixed midplane densities, we can assign a probability to a model with any dark disk surface density Σ D and scale height h D based on how well the predicted surface densities Σ i determined from the Poisson-Jeans solver match the observed values.
Starting with the midplane densities and dispersions of Section 4, we solved the Poisson-Jeans equation for Σ D values between 0 and 24 M ⊙ pc −2 . Each time the Poisson-Jeans equation was solved, the density distributions were integrated to give the total surface densities of H 2 and HI. Each model was then assigned a probability, according to the chi-squared distribution with 2 degrees of freedom, based on the deviation of this surface density from the measured values. We did this using different central values and uncertainties for the midplane densities. Thus, for example, using n H 2 = 0.19 cm −3 , we would take ρ H 2 +He (0) = 1.42 × m H 2 × 0.19 cm −3 = 0.013 M ⊙ pc −3 . If for a model with this value of ρ H 2 (0) and with a certain dark disk surface density value of Σ D we find an H 2 surface density Σ H 2 = 1.0 M ⊙ pc −2 , then according to Section 4.1, we should assign this model a chi-squared value χ 2 H 2 = (1.0 − 1.3) 2 /∆ 2 Σ H2 . We would then assign a probability to this model according to the Gaussian cumulative distribution, p H 2 = dχ exp(−χ 2 /2)/ √ 2π, where the limits of integration are from −∞ to −χ H 2 and χ H 2 to ∞. We would similarly compute probabilities p HI , p HII , and a combined probability p = p H 2 × p HI × p HII . We note here that this is not the absolute probability of the model given the data; rather, it is the probability of the data given the model. We define a model for which the data is less probable than 5% to be excluded.
An important question is what to use for \Delta_\Sigma^2. There are two uncertainties here: 1) the uncertainty in the surface density measurements, \Delta_{\Sigma_i}, and 2) the uncertainty in our output values of \Sigma(\rho_i, \Sigma_D), resulting from the uncertainty in the input midplane density measurements \hat\rho_i. Formally, this is |\partial\Sigma_i/\partial\rho_i|\,\Delta_{\rho_i}. Assuming Gaussian distributions for the measurements \hat\Sigma_i and \hat\rho_i, and a uniform prior for \Sigma_D, one can show that

p(\hat\Sigma_i, \hat\rho_i \mid \Sigma_D) \sim \int d\rho_i \, \exp\!\left[-\frac{\left(\hat\Sigma_i - \Sigma(\rho_i,\Sigma_D)\right)^2}{2\Delta_{\Sigma_i}^2}\right] \exp\!\left[-\frac{(\rho_i - \hat\rho_i)^2}{2\Delta_{\rho_i}^2}\right]  (10)

where \rho_i are the true midplane densities, and that, expanding \Sigma(\rho_i, \Sigma_D) to first order in \rho_i, this integrates to give an approximately Gaussian distribution for \hat\Sigma_i - \Sigma(\rho_i, \Sigma_D), with width

\Delta_{\Sigma_i} \simeq \sqrt{\Delta_{\Sigma_i}^2 + \left(\frac{\partial \Sigma_i}{\partial \rho_i}\right)^2 \Delta_{\rho_i}^2} .  (11)

We computed |\partial\Sigma_i/\partial\rho_i| by sampling values of \rho_i and computing the output values \Sigma_i. The effect of the uncertainties in the vertical dispersions of the different components was also included in \Delta_{\Sigma_i} in the same way as those in \hat\rho_i. The formula for \Delta_{\Sigma_i} that we adopt (Equation 12) therefore contains an extra term under the square root to include this uncertainty:

\Delta_{\Sigma_i} \simeq \sqrt{\Delta_{\Sigma_i}^2 + \left(\frac{\partial \Sigma_i}{\partial \rho_i}\right)^2 \Delta_{\rho_i}^2 + \left(\frac{\partial \Sigma_i}{\partial \sigma_{i,\mathrm{eff}}}\right)^2 \Delta_{\sigma_{i,\mathrm{eff}}}^2} .  (12)
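The probability assignment described above can be sketched as follows. The partial derivatives and uncertainties plugged in are purely hypothetical placeholders (in practice they come from sampling the Poisson-Jeans solver), and the 1.0 versus 1.3 Msun/pc^2 pair simply echoes the H2 example given earlier; the HII factor could be included in the same way.

import numpy as np
from scipy.stats import norm

def component_probability(sigma_pred, sigma_meas, d_meas, dSdrho, d_rho, dSdsig, d_sig):
    """Two-sided Gaussian p-value for one gas component, with the width of Eq. (12)."""
    width = np.sqrt(d_meas**2 + (dSdrho * d_rho)**2 + (dSdsig * d_sig)**2)
    chi = abs(sigma_pred - sigma_meas) / width
    return 2.0 * (1.0 - norm.cdf(chi))

# Hypothetical numbers: model-predicted vs. measured surface densities (Msun/pc^2).
p_H2 = component_probability(1.0, 1.3, 0.3, dSdrho=80.0, d_rho=0.003, dSdsig=0.3, d_sig=0.2)
p_HI = component_probability(7.0, 8.6, 0.8, dSdrho=10.0, d_rho=0.05, dSdsig=0.4, d_sig=0.5)
p_model = p_H2 * p_HI            # a model is excluded here if p_model < 0.05
print(p_H2, p_HI, p_model)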
Gas parameters
The purpose of this section is to determine accurate values for ρ i (0), σ i , and Σ i (midpalne density, velocity dispersion, and surface density) for the different components of interstellar gas based on existing measurements. Using Equation 2, these can then be compared for a given dark disk model in order to check for self-consistency.
We now discuss in detail the various measurements of the gas parameters and the uncertainties in each. Our starting point is the Bahcall model used by Flynn et al. (2006, Table 2). These values are updated from the ones used in Holmberg & Flynn (2000). Values for the stellar components were updated using the values of McKee et al. (2015), and are shown in rows 5-15 in Table 2 of Kramer & Randall (2016).
In these models, the gas and stars are both separated into approximately isothermal components as in Bahcall (1984b), so that each component i is characterized by a midplane density ρ i0 and a vertical dispersion σ i . Using only these values for all of the components, we can solve the Poisson-Jeans equation (4) for the system. A major difference between our model and that of Flynn et al. (2006) is that their gas midplane densities were fixed by the values needed to give the correct surface densities in accordance with the Poisson-Jeans equation. We, on the other hand, use measured values of the midplane densities as we explain in this section.
We explain the various literature values that were included in the determination of the gas parameters. We also compare these to the recent values of McKee et al. (2015). In Section 5, the analysis is conducted separately for the values we determine by combining the results in the literature and the values obtained solely from the recent paper by McKee et al. (2015).
Molecular hydrogen
We now explain the various measurements of the molecular hydrogen volume density and surface densities and how they are corrected. As molecular hydrogen cannot be observed directly, it must be inferred from the amount of CO present, derived from the intensity of the J = 1 − 0 transition photons. These are related by the so-called X-factor, defined by
N H 2 ≡ XW CO(13)
where N H 2 = N l.o.s. is the line-of-sight column density of H 2 molecules and W CO is the total, velocity-integrated CO intensity along the line of sight (Draine 2011). Column densities perpendicular to the galactic plane can then be obtained by simple trigonometry:
N ⊥ = N l.o.s. sin b(14)
and volume densities can be obtained by dividing the intensity density in velocity space dW CO /dv by the rotation curve gradient dv/dR, or by estimating the distance along the line of sight using other means. The volume and surface densities can also both be found by fitting an assumed distribution to measurements of the gas' vertical scale height ∆z. Surface densities can then be given, for example, by
Σ H 2 = m H 2 N ⊥,H 2 = m H 2 XW CO sin b.(15)
On the other hand, a certain reference may not be measuring surface density directly. Instead, they may be measuring the emissivity,
J(r) ≡ dW CO dr(16)
from which, according to Equation 13, we can obtain the volume density as n(r) = X J(r).
If the authors also measured the vertical (z−direction) distribution of the molecular hydrogen, then the surface mass density can be obtained according to
Σ H 2 = m H 2 n H 2 (z) dz.(18)
For example, the full width at half maximum (FWHM) of the molecular hydrogen distribution gives the surface density as
Σ H 2 = m H 2 C shape n H 2 (0) × FWHM(19)
where C shape is given by 1.06, 1.13, or 1.44 for a Gaussian, sech 2 , or exponential profile respectively. For our calculations, we used C shape = 1.10 (20)
as a reasonable estimate for the shape of the distribution.
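For concreteness, the snippet below applies Eq. (19) with the adopted C_shape = 1.10, converting a midplane number density and FWHM into a surface density in Msun/pc^2. The n(0) and FWHM values used are illustrative only, not a specific measurement from the references discussed below.

import numpy as np

M_H2_G  = 2 * 1.6735e-24      # mass of an H2 molecule [g]
PC_CM   = 3.086e18            # parsec in cm
MSUN_G  = 1.989e33            # solar mass in g
C_SHAPE = 1.10                # shape factor adopted in Eq. (20)

def sigma_from_midplane(n0_cm3, fwhm_pc, c_shape=C_SHAPE):
    """Eq. (19): surface density [Msun/pc^2] from midplane density and FWHM."""
    column_cm2 = c_shape * n0_cm3 * fwhm_pc * PC_CM     # H2 column density [cm^-2]
    grams_per_cm2 = column_cm2 * M_H2_G
    return grams_per_cm2 / (MSUN_G / PC_CM**2)          # convert to Msun/pc^2

# Illustrative: n(0) = 0.19 cm^-3 with a 120 pc FWHM layer gives ~1.2 Msun/pc^2.
print(round(sigma_from_midplane(0.19, 120.0), 2))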
In the literature, mass values are often quoted including the associated helium, metals, and other gaseous components such as CO, etc. The amount of helium accompanying the hydrogen is typically assumed in the range 36-40% of the hydrogen alone by mass (Kulkarni & Heiles 1987;Bronfman et al. 1988). Including other gas components increases this number to about 42% (Ferrière 2001). Thus, the total mass of any component of the ISM should be about 1.42 times the mass of its hydrogen. These will be distinguished by using, e.g. Σ H 2 , Σ H 2 +He to refer to the bare values and and the values including their associated helium respectively. Thus,
Σ H 2 +He = 1.42 Σ H 2 .(21)
Note that Binney & Merrifield (1998, p.662) did not include helium in the total ISM mass. Also Read (2014) did not distinguish between HI results including and not including helium.
We now explain how we obtain midplane volume densities n H 2 (z = 0) and surface densities Σ H 2 +He from the various references in the literature. Bronfman et al. (1988) measured the molecular hydrogen over different radii within the solar circle. Their data are shown as one of the data sets in Figure 2. Averaging the values from the Northern and Souther Galactic plane in Table 4 of the latter, we find, for the measurements closest to the Sun, Σ H 2 = 2.2 M ⊙ pc −2 and n H 2 = 0.2 cm −3 . Since surface densities depend only on the total integrated intensity along the line of sight, they are independent of the value of R ⊙ , the Sun's radial position from the center of the Galaxy. On the other hand, it follows from this that old values for volume densities (which scale as R −1 ⊙ ) must be rescaled by R ⊙,old /R ⊙,new (Scoville & Sanders 1987, p.31). Since Bronfman et al. used the old value R ⊙ = 10 kpc, this value needs to be rescaled by (0.833) −1 to take into account the new value of R ⊙ = 8.33± 0.35 kpc (Gillessen et al. 2009). They also used an X-factor of X = 2.8×10 20 cm −2 (K −1 km s −1 ) −1 . We correct this using a more recent value of X = 1.8 ± 0.3 × 10 20 cm −2 (K −1 km s −1 ) −1 , obtained by Dame, Hartmann, & Thaddeus (2001). The most recent value of X, obtained by Okumura & Kamae (2009), is X = 1.76 ± 0.04 × 10 20 cm −2 (K −1 km s −1 ) −1 , although the value of Dame et al. that we use is still cited by Draine (2011) as the most reliable. These corrections give
n H 2 = 0.15 cm −3 and Σ H 2 = 1.4 M ⊙ pc −2 . Including helium gives Σ H 2 +He = 2.0 M ⊙ pc −2 .
On the other hand, Clemens, Sanders, & Scoville (1988), found the local CO emissivity J = dW CO /dr in the first galactic quadrant for radii through R ⊙ . For R < R ⊙ and R > R ⊙ respectively, they found these to be J = 3.1 and 2.3 K km s −1 kpc −1 , which, using X = 1.8 × 10 20 cm −2 (K −1 km s −1 ) −1 , and rescaling for R ⊙ by (0.833) −1 , gives interpolated density n H 2 (R ⊙ ) = 0.19 cm −3 . Using their FWHM measurements for H 2 , we can convert their measurements to surface density values according to Equation 19. As before, the surface density values are independent of R ⊙ . We have, interpolating to R ⊙ , Σ H 2 +He = 1.1 M ⊙ pc −2 . The rescaled data are shown in Figure 2.
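The rescalings applied repeatedly in this section follow a simple pattern, sketched below: volume densities are multiplied by R_sun,old/R_sun,new and by X_new/X_old, while surface densities pick up only the X-factor correction. The Bronfman et al. (1988) numbers quoted above serve as the worked example; the defaults encode the adopted R_sun = 8.33 kpc and X = 1.8e20 cm^-2 (K km/s)^-1.

def rescale(n_old, sigma_old, r_old, r_new=8.33, x_old=2.8e20, x_new=1.8e20):
    """Return (n_new, sigma_new) after the R_sun and X-factor corrections.

    Volume densities scale as 1/R_sun and linearly with X; surface densities
    are independent of R_sun but scale linearly with X.
    """
    n_new = n_old * (r_old / r_new) * (x_new / x_old)
    sigma_new = sigma_old * (x_new / x_old)
    return n_new, sigma_new

# Bronfman et al. (1988): n = 0.2 cm^-3, Sigma = 2.2 Msun/pc^2 with R_sun = 10 kpc.
print(rescale(0.2, 2.2, r_old=10.0))   # ~ (0.15 cm^-3, 1.4 Msun/pc^2)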
Another measurement is provided by Burton & Gordon (1978), who had already measured Galactic CO emissivity J = dW CO /dr between R ∼ 2 − 16 kpc, assuming R ⊙ = 10 kpc, from which we obtain n H 2 (R) after correcting for R ⊙ , shown in Figure 2. Interpolating linearly, this gives n(R ⊙ ) = 0.31 cm −3 . Sanders, Solomon, & Scoville (1984) also measured CO in the first and second Galactic quadrants within and outside the solar circle. They used the values R ⊙ = 10 kpc and X = 3.6 × 10 20 cm −2 (K −1 km s −1 ) −1 . Their results for both volume and surface density, corrected to R ⊙ = 8.33 kpc and X = 1.8 × 10 20 cm −2 (K −1 km s −1 ) −1 , are also shown in Figure 2. In particular, after rescaling and interpolating their volume densities, we have n(0.95R ⊙ ) = 0.39 cm −3 . For surface density, we obtain Σ H 2 +He = 2.7 M ⊙ pc −2 . This is the highest value in the literature. Grabelsky et al. (1987) also measured CO in the outer Galaxy, which, with a 1.8/2.8 correction factor for X, as well as correcting R ⊙ from 10 to 8.33 kpc, their results near the Sun read n(1.05R ⊙ ) = 0.14 cm −3 and Σ H 2 (1.05R ⊙ ) = 1.4 M ⊙ pc −2 . Digel (1991) also measured H 2 in the outer Galaxy. Using his results, we find n(1.06R ⊙ ) = 0.13 cm −3 and Σ H 2 (1.06R ⊙ ) = 2.1 M ⊙ pc −2 . Dame et al. (1987), by directly observing clouds within 1 kpc of the Sun only, found local volume density n H 2 = 0.10 cm −3 and surface density Σ H 2 +He = 1.3 M ⊙ pc −2 , which, correcting for X = 2.7 to 1.8 × 10 20 cm −2 (K −1 km s −1 ) −1 , gives 0.08 cm −3 and 0.87 M ⊙ pc −2 . This volume density is lower than many other measurements, and may represent a local fluctuation in the Solar region on a larger scale than the Local Bubble. On the other hand, their surface density value is not the lowest. Luna et al. (2006)
, using X = 1.56 × 10 20 cm −2 (K −1 km s −1 ) −1 , found Σ H 2 +He (0.975R ⊙ ) = 0.24 M ⊙ pc −2 .
Correcting for X gives 0.29 M ⊙ pc −2 , which is the lowest value in the literature. However, they admit that their values beyond 0.875 R ⊙ are uncertain. Another determination from 2006 (Nakanishi & Sofue 2006) gives, after interpolation,
n H 2 (R ⊙ ) = 0.17 cm −3 and Σ H 2 (R ⊙ ) = 1.4 M ⊙ pc −2 , or Σ H 2 +He (R ⊙ ) = 2.0 M ⊙ pc −2 .
Figure 2 shows the various measurements described here, as well as the overall average and standard error. Although not all measurements are equally certain, in computing average values for n H 2 and Σ H 2 we treated all measurements with equal weight. We estimated the resulting uncertainty as the standard deviation divided by the the square root of the number of measurements available at each R. We found the mean values and standard errors of volume and surface densities near the Sun to be
n H 2 (R ⊙ ) = 0.19 ± 0.03 cm −3 (22) Σ H 2 +He (R ⊙ ) = 1.55 ± 0.32 M ⊙ pc −2 .(23)
This analysis has not yet taken into account the more recent observations of a significant component of molecular gas that is not associated with CO (Heyer & Dame 2015;Hessman 2015). Planck Collaboration et al. (2011) estimates this "dark gas" density to be 118% that of the COassociated H 2 . Pineda et al. (2013), on the other hand, found roughly 40% at Solar radius. We therefore include the dark molecular gas with a mean value of 79% and with an uncertainty of 39%. This gives total molecular gas estimates of
n H 2 +DG (R ⊙ ) = 0.34 ± 0.09 cm −3(24)Σ H 2 +He+DG (R ⊙ ) = 2.8 ± 0.8 M ⊙ pc −2(25)
which are the values we assume for our analysis. It should be noted, however, that in propagating the errors for dark gas, n H2 and Σ H 2 +He always vary together. We take this into account in the statistical analysis by considering only the error on the ratio Σ/ρ. The same would apply to the error in X CO although this error is much smaller.
Besides the molecular hydrogen's volume density n H 2 and surface density Σ H 2 , another important quantity is its cloud-cloud velocity dispersion σ H 2 , since this is one of the inputs in the Poisson-Jeans equation. The velocity dispersions of the molecular clouds containing H 2 can be inferred from that of their CO, which was found by Liszt & Burton (1983) to be σ H 2 = 4.2 ± 0.5 km s −1 . Belfort & Crovisier (1984) found σ CO = σ H 2 = 3.6 ± 0.2 km s −1 . Scoville & Sanders (1987) found σ H 2 = 3.8 ± 2 km s −1 . The weighted average of these is approximately given by
σ H 2 = 3.7 ± 0.2 km s −1 .(26)
The Atomic Hydrogen
We now discuss the various measurements of atomic hydrogen HI volume density n HI (z) and surface density Σ HI . These typically are made by observing emissions of hydrogen's 21 cm hyperfine transition. Kulkarni & Heiles (1987) estimate an HI surface density of 8.2 M ⊙ pc −2 near the Sun. They separate HI into the Cold Neutral Medium (CNM) and Warm Neutral Medium (WNM). Dickey & Lockman (1990), summarizing several earlier studies, describe the Galactic HI as having approximately constant properties over the range 4 kpc < R < 8 kpc. Their best estimate for the HI parameters over this range is a combination of subcomponents, one thin Gaussian component with central density n(0) = 0.40 cm −3 and FWHM = 212 pc (and surface density 2.2 M ⊙ pc −2 ), which we identify with the CNM, and a thicker component with central density n(0) = 0.17 cm −3 and surface density 2.8 M ⊙ pc −2 , which we identify as the WNM. This gives a total of Σ HI = 5.0 M ⊙ pc −2 , or Σ HI+He = 7.1 M ⊙ pc −2 . Another measurement is provided by Burton & Gordon (1978), who measured volume densities for R ∼ 2 − 16 kpc. We interpolate their data (and correct for R ⊙ = 10 kpc → 8.33 kpc) to obtain n HI = 0.49 cm −3 . Although they did not determine surface densities, we can estimate them by assuming a single Gaussian component with FWHM given . A better estimate is perhaps obtained by assuming, rather than a Gaussian distribution, a distribution with the same shape as Dickey & Lockman. This amounts to assuming an effective Gaussian FWHM of ∼ 330 pc. This gives a surface density near the Sun of Σ HI+He = 5.9 M ⊙ pc −2 . Liszt (1992), however, argues that the midplane density of Dickey & Lockman was artificially enhanced to give the correct surface density. He measures midplane density n HI = 0.41 cm −3 , which, assuming as for Burton & Gordon a Gaussian distribution with effective FWHM 330 pc, gives a surface density of only Σ HI+He = 5.1 M ⊙ pc −2 . Nakanishi & Sofue (2003) also measured the Galactic HI, from the Galactic center out to ∼ 25 kpc. Their results are shown in Figure 3. Interpolating to R ⊙ , we have n HI (R ⊙ ) = 0.28 cm −3 and Σ HI+He = 5.9 M ⊙ pc −2 , in agreement with the value of Burton & Gordon.
On the other hand, there are several authors who report much larger mass parameters for Galactic HI. They are Wouterloot et al. (1990) and Kalberla & Dedes (2008). Wouterloot et al. used 21 cm observations from outside the Solar circle. Their data are shown in Figure 3. Closest to the Sun, their data show Σ HI+He (1.06R ⊙ ) = 8.6M ⊙ pc −2 with a FWHM of 300 pc. This corresponds to a midplane density of roughly n HI = 0.73 cm −3 . The Kalberla & Dedes data (also shown in Figure 3) show Σ HI+He ≃ 10 M ⊙ pc −2 . A more refined estimate gives Σ HI+He ≃ 9 M ⊙ pc −2 (McKee et al. 2015). This is consistent with a midplane density of roughly 0.8 cm −3 . This is much higher than the value of Kalberla & Kerp (1998), who obtained n CNM = 0.3 cm −3 and n WNM = 0.1 cm −3 . However, there is reason to expect a relatively high HI midplane density. Based on extinction studies, Bohlin, Savage, & Drake (1978) find a total hydrogen nucleus density 2n H2 + n HI = 1.15cm −3 . Updating this for the newer value of the Galactocentric radius of the Sun R ⊙ as in Section 4.1, we have 2n H2 + n HI = 1.38 cm −3 . According to the average midplane density determined for molecular hydrogen in Section 4.1, n H 2 = 0.19±0.03 cm −3 , and including an additional 0.15±0.07 cm −3 for the dark molecular hydrogen, we therefore expect an atomic hydrogen density n HI = 0.70 ± 0.18 cm −3 . Optical thickness corrections, which we explain below, increase this number to 0.84 cm −3 . The results are shown in Figure 3. As in the case of molecular hydrogen, all measurements were treated with equal weight and the uncertainty was estimated as the standard error at each R.
Combining all these results, we have, in the absence of optical thickness corrections,
n HI (R ⊙ ) = 0.53 ± 0.10 cm −3 (27) Σ HI+He (R ⊙ ) = 7.2 ± 0.7 M ⊙ pc −2 .(28)
In the Dickey & Lockman (1990) model, 70% of this HI midplane density is in CNM and the remaining 30% is WNM. In Kalberla & Kerp (1998), the numbers are 75% and 25%. We will take the average of these two results, 72.5% and 27.5%, which give n CNM = 0.38 cm −3 and n WNM = 0.15 cm −3 .
McKee et al. (2015) pointed out that these numbers must be corrected for the optical depth of the CNM. Assuming the CNM to be optically thin leads to an underestimation of the CNM column density by a factor R CNM . McKee et al. (2015) estimate this factor to be R CNM = 1.46, which they translate, for the total HI column density, to R HI = 1.20. Correcting for this gives
n CNM ≃ 0.56 cm −3 (29) n WNM ≃ 0.15 cm −3 .(30)
with totals n HI (R ⊙ ) = 0.71 ± 0.13 cm −3 (31)
Σ HI+He (R ⊙ ) = 8.6 ± 0.8 M ⊙ pc −2 , (32)
which we use for this analysis.
On the other hand, McKee et al. (2015) argues that the model of Heiles et al. (1981) is more accurate, and recommends increasing the amount of HI in the ISM by a factor of 7.45/6.2.
McKee et al. (2015)'s values are therefore n CNM = 0.69 cm −3 , n WNM = 0.21 cm −3 , n HI = 0.90 cm −3 , and Σ HI = 10.0 ± 1.5 M ⊙ pc −2 . Although these numbers differ from our average of conventional measurements (Equations 29-32), they agree with the extinction result of Bohlin et al. (1978) mentioned above once the latter is corrected for the optical depth of the CNM. To account for any discrepancy, we perform our analysis separately using the values of Equations 29 to 32 and the results of McKee et al. (2015). We present both results in Section 5.
For the atomic hydrogen's velocity dispersion, Heiles & Troland (2003), found σ CNM = 7.1 km s −1 and σ WNM = 11.4 km s −1 , while Kalberla & Dedes (2008) found σ CNM = 6.1 km s −1 and σ WNM = 14.8 km s −1 . Earlier, Belfort & Crovisier (1984) measured σ HI = 6.9±0.4 km s −1 , and Dickey & Lockman (1990) found σ HI = 7.0 km s −1 but did not specify if the gas was CNM or WNM. Since these are comparable to more recent measurements of the CNM component of HI, we assume both of these to correspond to σ CNM . The averages of these values are σ CNM = 6.8 ± 0.5 km s −1 (33) σ WNM = 13.1 ± 2.4 km s −1 .
Ionized Hydrogen
Besides the H 2 and the two types of HI (CNM and WNM), there is a fourth, warm, ionized component of interstellar hydrogen, denoted HII. Holmberg & Flynn (2000) and Flynn et al. (2006) included this component. Binney & Merrifield (1998) did not include the ionized component in the value for Σ ISM , possibly because of its very large scale height. Its density is typically obtained by measuring the dispersion of pulsar signals that have passed through the HII clouds. The time delay for a pulse of a given frequency is proportional to the dispersion measure DM = \int n_e \, ds
where the integral is performed along the line of sight to the pulsar, and where n e is the electron number density, equal to the number density of ionized gas. The dispersion measure perpendicular to the plane of the Galaxy, DM ⊥ = DM/ sin b, therefore corresponds to the half surface density 1/2 Σ HII . Fitting a spatial distribution (e.g. exponential profile), provides midplane density information. For its midplane density, Kulkarni & Heiles (1987) found n HII = 0.030 cm −3 ; Cordes et al. (1991) found n HII = 0.024 cm −3 ; Reynolds (1991) found n HII = 0.040 cm −3 . The average of these values is n HII = 0.031 ± 0.008 cm −3 .
This agrees with the traditional model of Taylor & Cordes (1993), refined by Cordes & Lazio (2002), who found a midplane density of n HII = 0.034 cm −3 .
For the HII surface density, Reynolds (1992) reports Σ HII+He = 1.57 M ⊙ pc −2 . This is slightly higher that what was found by Taylor & Cordes (1993), who found a one-sided column density 1/2 N ⊥,HII = 16.5 cm −3 pc, or Σ HII+He = 1.1 M ⊙ pc −2 , but it is slightly lower than the more recent value of Cordes & Lazio (2002), who found 1/2N ⊥,HII = 33 cm −3 pc, or Σ HII+He = 2.3 M ⊙ pc −2 .
Assuming an exponential profile, with the scale height of 0.9 kpc of Taylor & Cordes (1993), the Reynolds (1992) result agrees with the midplane densities of Equations 36 and 37. However, Gaensler et al. (2008) argued for a scale height of 1.8 kpc that a distribution with midplane density of
n HII = 0.014 cm −3 .(38)
Similarly, Schnitzeler (2012) also argues for large scale heights of ∼ 1.4 kpc. For DM values between 20 and 30 cm −3 pc, this gives a midplane density of ∼ 0.015 cm −3 pc, as preferred by McKee et al. (2015). As we explain in Section 5, we do not find our model to be consistent with these large scale heights, even without a dark disk. We therefore do not use HII parameters in this paper as a constraint.
For its velocity dispersion, Holmberg & Flynn (2000) used the value σ HII = 40 km s −1 . This value seems to have been inferred from scale height measurements of the electrons associated with this ionized gas from Kulkarni & Heiles (1987). From the data in Reynolds (1985), however, we find a turbulent component to the dispersion of only σ HII = 21 ± 5 km s −1 . On the other hand, temperatures between 8, 000 K and 20, 000 K give a thermal contribution of σ HII,thermal = 2.1 k B T /m p ≃ 12 − 19 km s −1 (Ferrière 2001, p.14). Summing these in quadrature gives σ HII = 25 − 29 km s −1 . As we will explain in Section 4.4, including magnetic and cosmic ray pressure contributions pushes this up to 42 km s −1 . Similarly, Kalberla (2003) also finds σ HII = 27 km s −1 while assuming p mag = p cr = 1/3 p turb , for a total effective dispersion of 35 km s −1 but did not include a thermal contribution. This gives an average total effective dispersion of 39 ± 4 km s −1 , which, removing magnetic, cosmic ray, and thermal contributions, gives a turbulent dispersion of σ HII = 22 ± 3 km s −1 .
The new gas parameter estimates, obtained in this work by incorporating a broad range of literature values, are summarized in Table 1 alongside the old (Flynn et al. 2006) values. The values of McKee et al. (2015) are also included for comparison.
Other Forces
Boulares & Cox (1990) considered the effect of magnetic forces and cosmic ray pressure on the interstellar gas. The effect of the magnetic field is a contribution to the force per unit volume on the i th component of the gas:
\mathbf{f}_i = \mathbf{J}_i \times \mathbf{B}  (39)

where

\mathbf{B} \equiv \sum_i \mathbf{B}_i  (40)
is the total magnetic field from all the gas components. Using Ampere's law, we can rewrite the z-component of the force as
f_{zi} = \frac{1}{\mu_0}\left[(\nabla \times \mathbf{B}_i) \times \mathbf{B}\right]_z  (41)
= \frac{1}{\mu_0}(\mathbf{B}\cdot\nabla)\, B_{iz} - \frac{1}{\mu_0}\,\mathbf{B}\cdot\frac{\partial \mathbf{B}_i}{\partial z}  (42)
Since according to Parker (1966), the magnetic field is, on average, parallel to the plane of the Galaxy, (B z = 0) we will make the approximation that the first term vanishes in equilibrium. The second term couples each gas component to the remaining components, since B represents the total magnetic field. However, summing all components, we have
f_z \equiv \sum_i f_{zi} = -\frac{1}{\mu_0}\,\mathbf{B}\cdot\frac{\partial \mathbf{B}}{\partial z}  (43)
= -\frac{\partial}{\partial z}\frac{B^2}{2\mu_0} .  (44)
We recognize the form of this expression as the gradient of the magnetic pressure p B = B 2 /2µ 0 . To include this effect in the Poisson-Jeans Equation, we note that the first term on the left-hand-side of Equation 1 has the interpretation (up to an overall mass factor) as the gradient of a 'vertical pressure'. This pressure term is a correct description of a population of stars or of gas clouds. In a warm gas, this term has the interpretation as the turbulent pressure of the gas. However, in this case, one also needs to take into account the thermal pressure of the gas
p thermal = c i n i k B T i(45)
where c i is a factor that takes into account the degree of ionization of the gas, and n i = ρ i /m p is the number density of the gas atoms. The correct Poisson-Jeans equation in this case therefore
reads ∂ ∂z (ρ i σ 2 i + ρ i c i k B T i ) + ρ i ∂Φ ∂z = 0.(46)
If we define a 'thermal dispersion' as
σ 2 i,T ≡ c i k B T i(47)
then we can rewrite this as ∂ ∂z
(ρ i (σ 2 i + σ 2 i,T )) + ρ i ∂Φ ∂z = 0.(48)
Clearly, to account for the magnetic pressure, we would include the average of the magnetic pressure term in precisely the same manner:
\sum_i \frac{\partial}{\partial z}\left[\rho_i\left(\sigma_i^2 + \sigma_{i,T}^2\right)\right] + \frac{\partial}{\partial z}\frac{B^2}{2\mu_0} + \rho\,\frac{\partial \Phi}{\partial z} = 0 .  (49)
where ρ is the total mass density of the gas. In the following subsections, we describe how we model this magnetic pressure term.
Magnetic Pressure: Thermal Scaling Model
An important phenomenon noted by Parker (1966) is that the magnetic field B is confined by the weight of the gas through which it penetrates. We therefore would like to solve this equation by following Parker in assuming that the magnetic pressure is proportional to the the thermal pressure term, p i = ρ i c i k B T i . Since each gas component contributes to the total thermal pressure with a different temperature T i , we write:
\frac{B^2(z)}{2\mu_0} = \alpha \sum_i \rho_i(z)\,\sigma_{i,T}^2  (50)
= \sum_i \rho_i(z)\,\sigma_{i,B}^2  (51)
where α is a proportionality constant fixed by B 2 (0) and i σ 2 i,T , and where we have defined the 'magnetic dispersion' σ 2 i,B ≡ α σ 2 i,T i.e. the effective dispersion arising from the magnetic pressure. The Poisson-Jeans equation then reads
\sum_i \frac{\partial}{\partial z}\left[\rho_i\left(\sigma_i^2 + \sigma_{i,T}^2 + \sigma_{i,B}^2\right)\right] + \rho\,\frac{\partial \Phi}{\partial z} = 0 .  (52)
The above equation admits many solutions. However, we will assume that the unsummed equation
\frac{\partial}{\partial z}\left[\rho_i\left(\sigma_i^2 + \sigma_{i,T}^2 + \sigma_{i,B}^2\right)\right] + \rho_i\,\frac{\partial \Phi}{\partial z} = 0  (53)
holds for each component individually. This amounts to assuming that all gas components confine the magnetic field equally. Other solutions can be found by substituting σ 2 i,B → σ 2 i,B + S i (z), such that i ρ i (z)S i (z) = 0. However, if we restrict our analysis to 'isothermal' solutions (constant σ 2 i,B ) the solution S i = 0 will be unique. We can also include the effects of cosmic ray pressure in a similar way, by assuming that the partial cosmic ray pressure is also proportional to the density
p i,cr (z) = ρ i (z) σ 2 i,cr(54)
and where σ 2 i,cr = β σ 2 i,T for some other constant β. The Poisson-Jeans Equation then reads
∂ ∂z (ρ i σ 2 i,eff ) = i ∂ ∂z ρ i σ 2 i + σ 2 i,T + σ 2 i,B + σ 2 i,cr + ρ ∂Φ ∂z = 0,(55)
where we have defined
σ 2 i,eff = σ 2 i + σ 2 i,T + σ 2 i,B + σ 2 i,cr .(56)
The solution to the Poisson-Jeans Equation for each component will then be
ρ i (z) = ρ i (0) exp − Φ(z) σ 2 i,eff .(57)
Note that since the pressure is additive, the dispersions add in quadrature. Boulares & Cox (1990) estimate for the magnetic pressure p B ≃ (0.4 − 1.4) × 10 −12 dyn cm −2 . For the cosmic ray pressure, they estimate p cr ≃ (0.8 − 1.6) × 10 −12 dyn cm −2 . The dispersions for each component are shown in Table 2 for comparison, as well as the effective dispersions in this model.
There is, however, no clear evidence to support this model. Although p mag ∝ p cr ∝ nk B T has been assumed in the past (Parker 1966), this was when the entire gas was treated as a single component. Equation 50, however, is much more specific. We therefore supplement this model with a second model in the next subsection for comparison.
Magnetic Pressure: Warm Equipartition Model
Here we describe a second possible model for the magnetic and cosmic ray pressures in the interstellar medium. Namely, it has been observed that within the CNM, energy densities in magnetic fields and in turbulence are often roughly equal (Heiles & Troland 2003; Heiles & Crutcher 2005). Although the ratio between these energies is observed to vary greatly over different molecular clouds, this so-called "energy equipartition" seems to be obeyed on average. Physically, this happens because the turbulence amplifies the magnetic field until it becomes strong enough to dissipate through Alfvén waves. Similarly, we expect magnetic fields to trap cosmic rays within the gas until they become too dense and begin to escape. We might therefore expect the cosmic ray and magnetic field energy densities to be similar. For these reasons, an alternative to the first model (Equation 50) would be to assume equipartition of pressure between turbulence, magnetic fields, and cosmic rays:
σ 2 i = σ 2 i,B = σ 2 i,cr(58)
for each component i. The effective dispersion, which is the sum of turbulent, magnetic, cosmic ray, and thermal contributions, would then be
\sigma_{i,\mathrm{eff}}^2 = \sigma_i^2 + \sigma_{i,B}^2 + \sigma_{i,cr}^2 + \sigma_{i,T}^2   (59)
\simeq 3\,\sigma_i^2 + \sigma_{i,T}^2   (60)
for each component. One important factor that we should not overlook here, however, is that the molecular hydrogen and CNM condense to form clouds. Thus, although turbulence, magnetic fields, and cosmic rays may affect the size of the individual clouds, we expect the overall scale height of the cold components to be determined only by the cloud-cloud dispersion and not by these forces. We therefore assume Equations 58-60 only for the warm components WNM and HII. For the cold components (H2 and CNM), we assume \sigma_{i,\mathrm{eff}} = \sigma_i. We perform calculations separately for the two different magnetic field and cosmic ray models. The effective dispersions in both models are shown in Table 2. As we will see, the results from both models are in good agreement with one another.
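For readers who wish to reproduce the machinery, the following sketch shows one way to integrate the coupled Poisson-Jeans system self-consistently: each component is taken as isothermal with its effective dispersion, and Φ is integrated outward from the midplane. The component densities, dispersions, dark-disk parameters, grid, and the simple explicit integrator are all illustrative assumptions; they are not the configuration used for the figures (which were computed in MATLAB).

```python
import numpy as np

# Self-consistent solution of the one-dimensional Poisson-Jeans equation:
# each component is "isothermal", rho_i(z) = rho_i(0) exp(-Phi(z)/sigma_i_eff^2),
# and Phi solves Phi'' = 4 pi G [ sum_i rho_i(z) + rho_D(z) ].
# All numbers below are placeholders for illustration only.

G = 4.30091e-3                      # pc (km/s)^2 / Msun

# (midplane density [Msun/pc^3], effective dispersion [km/s]) per component
components = [(0.02, 6.4), (0.02, 11.8), (0.005, 23.7), (0.04, 20.0)]

Sigma_D, h_D = 5.0, 30.0            # dark-disk surface density [Msun/pc^2] and scale height [pc]
rho_D0 = Sigma_D / (4.0 * h_D)      # sech^2 profile: rho_D(z) = rho_D0 sech^2(z / 2 h_D)

dz, z_max = 1.0, 2000.0             # step and extent of the integration [pc]
z = np.arange(0.0, z_max, dz)

phi, dphi = 0.0, 0.0                # Phi(0) = Phi'(0) = 0 by symmetry
rho_profiles = []
for zi in z:
    rho_gas = sum(r0 * np.exp(-phi / s**2) for r0, s in components)
    rho_dd = rho_D0 / np.cosh(zi / (2.0 * h_D))**2
    rho_profiles.append(rho_gas)
    # explicit update of Phi'' = 4 pi G rho
    dphi += 4.0 * np.pi * G * (rho_gas + rho_dd) * dz
    phi += dphi * dz

rho_profiles = np.array(rho_profiles)
surface_density = 2.0 * rho_profiles.sum() * dz   # both sides of the plane
print(f"total gas surface density = {surface_density:.1f} Msun/pc^2")
```

Adding the dark-disk term 'pinches' the gas layer, lowering the surface density obtained for fixed midplane densities, which is the effect the bounds below exploit.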
Results and Discussion
We now present the results of the analysis described above. Using the midplane densities of Section 4, we calculate according to the Poisson-Jeans equation the corresponding H 2 and HI surface densities, and from these we compute the chi-squared value, (Σ −Σ) 2 /∆ 2 Σ , from the disagreement between these values and the measured values. This is done over a range of values for Σ D and h D . We thereby determine the regions of parameter space where the disagreement exceeds the 68% and 95% bounds, as will be displayed in the plots below. The scale height h D is defined such that
\rho(z = h_D) = \rho(z = 0)\,\mathrm{sech}^2(1/2).   (61)
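The confidence bounds described above amount to a grid scan over the dark-disk parameters. The sketch below outlines that scan; the function predicted_surface_densities is a stand-in for the actual Poisson-Jeans solver, and the "measured" surface densities, their uncertainties, and the Δχ² thresholds are illustrative assumptions, not the values used in the analysis.

```python
import numpy as np

# Grid scan over (Sigma_D, h_D): for each pair, solve the Poisson-Jeans equation,
# predict the H2 and HI surface densities, and compare with the measurements.

def predicted_surface_densities(sigma_d, h_d):
    # stand-in for the real solver; returns (Sigma_H2, Sigma_HI) in Msun/pc^2
    pinch = 1.0 / (1.0 + 0.03 * sigma_d / np.sqrt(h_d / 30.0))
    return 10.0 * pinch, 14.0 * pinch

obs = np.array([9.0, 13.0])          # measured Sigma_H2, Sigma_HI   (illustrative)
err = np.array([2.0, 2.0])           # 1-sigma uncertainties         (illustrative)

sigma_grid = np.linspace(0.0, 30.0, 61)    # Msun/pc^2
h_grid = np.linspace(10.0, 100.0, 46)      # pc

chi2 = np.zeros((len(h_grid), len(sigma_grid)))
for i, h_d in enumerate(h_grid):
    for j, s_d in enumerate(sigma_grid):
        pred = np.array(predicted_surface_densities(s_d, h_d))
        chi2[i, j] = np.sum(((pred - obs) / err) ** 2)

# One simple convention for two parameters of interest: delta chi^2 < 2.30 (68%) / 5.99 (95%)
allowed_95 = chi2 - chi2.min() < 5.99
print("fraction of grid inside the 95% region:", allowed_95.mean())
```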
We begin by determining the bounds without including the contribution from magnetic fields and cosmic rays. The result is shown in Figure 4. The uncertainty of H2 is dominated by that of the dark molecular gas, while the uncertainty of HI is dominated by that of the WNM velocity dispersion (18%). We can see that although the H2 parameters are consistent with dark disk surface densities of up to 10-12 M⊙ pc^-2 (for low scale height), the HI parameters point toward lower surface densities, and the combined probabilities are lower than 9% for all models. These results make it apparent that the model without magnetic fields is inconsistent.
On the other hand, when we include the pressure contribution from the magnetic fields and from cosmic rays, we find that both the HI and H 2 parameters allow non-zero surface densities Σ D , with an upper bound of Σ D ≃ 10 M ⊙ pc −2 in both models, for low scale heights. Higher scale heights are consistent with even higher dark disk surface densities. The results are shown in Figure 5.
For comparison, we also include the corresponding results using the values of McKee et al. (2015). Using these values and including magnetic fields and cosmic ray contributions, the data favors a non-zero surface density for the dark disk of between 5 and 15 M ⊙ pc −2 . Note that when neglecting magnetic field and cosmic rays pressures, only low dark disk surface densities seem consistent with the data.
Ionized Hydrogen Results and Issues
As was mentioned in Section 4.3, various authors have measured DM values for the HII component of the Milky Way in the range 20-30 cm −3 pc. Older models favored low scale height with midplane densities as high as 0.034 cm −3 , while newer models favor large scale height models with midplane densities as low as 0.014 cm −3 . However, using our model, the results of our Poisson-Jeans solver are consistent with only low scale heights. Following the models described in Section 4.4, we find scale heights for the HII of 0.9 kpc for the thermal scaling model and 1.0 kpc for the warm equipartition model, assuming Σ D = 0. Incorporating a more massive dark disk makes these scale heights smaller. Possible reasons for this might be: 1) The magnetic field model must be modified to include a different value of α for HII. This could be in correspondence with the result of Beuermann et al. (1985), who found that Galactic magnetic fields contained two components, one with short scale height and one with larger scale height. These two components would likely be described by different α. We could then attribute the low scale height component to the molecular and atomic gas and the large scale height component to the ionized gas. We know of no such alternative in the warm equipartition model.
2) The isothermal assumption may not be valid for HII. In fact, as explained in Gaensler et al. (2008), the volume filling fraction of HII may also vary a lot with scale height. If this is the case, then it would be incorrect to treat the HII as an isothermal component as the degrees of freedom that the temperature describes (the gas clouds) vary with distance from the Galactic midplane.
Stability Issues and Kinematic Constraints
In Figure 7 we show the bound we obtained from the kinematics of A stars in the Solar region, accounting for nonequilibrium features of the population, namely a net displacement and vertical velocity relative to the Galactic midplane. We also note that there will exist disk stability bounds. The true analysis is subtle, but a step toward the analysis is done by Shaviv (2016b) who develops the stability criterion for a heterogeneous Milky Way disk including a thin dark matter disk. We convert his bound to a bound in the h D − Σ D plane and superimpose this bound on the gas parameter bound of the present work. We see that a disk with significant mass (Σ D ) and h D > 30 pc is consistent with all current bounds.
In addition to stability issues, Hessman (2015) has argued that there exist other issues with using the vertical Jeans equation to constrain the dynamical mass in the MW disk. In particular, spiral structure must be taken into account when performing these analyses. Indeed, Shaviv (2016a) has pointed out that the effect of spiral arm crossing is to induce a 'ringing' in the dynamics of tracer stars. However, the present analysis assumes that the time scales for this ringing are much shorter in gas components so that the analysis is valid. Spiral arm crossing could also induce nonequilibrium features in the tracer population, such as discussed in Kramer & Randall (2016), but as in Kramer & Randall, including this effect would allow for more dark matter.
Conclusion
In this paper we have shown how to use measured midplane and surface densities of various galactic plane components to constrain or discover a dark disk. Although literature values of atomic hydrogen midplane densities are discordant, their mean value is consistent with the remaining gas parameters when magnetic and cosmic ray pressures are included. Using the global averages of literature values of gas parameters that we compiled, we find the data are consistent with dark disk surface densities as high as 10 M ⊙ pc −2 for low scale height, and as low as zero. The gas parameters of McKee et al. (2015) seem to favor an even higher non-zero dark disk surface density. Current data are clearly inadequate to decide this definitively. Further measurements of visible and dark H 2 density and WNM density and dispersion, as well as further refinements of magnetic field and cosmic ray models for cold gas could allow placing more robust bounds on a dark disk.
We would like to thank Chris Flynn and Chris McKee for their comments and suggestions, and also for reviewing our results. We would also like to thank Johann Holmberg and Jo Bovy for useful discussions. Thanks also to Doug Finkbeiner, Jo Bovy, Alexander Tielens, Katia Ferrière, and Matt Walker for help on the ISM parameters. We would also like to thank our reviewer for their detailed review of our work. EDK was supported by NSF grants of LR and by Harvard FAS, Harvard Department of Physics, and the Center for the Fundamental Laws of Nature. LR was supported by NSF grants PHY-0855591 and PHY-1216270. Calculations were performed in MATLAB 2015a.
Fig. 1.— A plot of the exact solutions without and with a dark disk of Q = 1. The density is 'pinched' by the disk, in accordance with Equation 5.

Fig. 2.— Molecular hydrogen midplane densities and surface densities determined by various authors between 1984 and 2006 (Bronfman et al. 1988; Dame et al. 1987; Digel 1991; Nakanishi & Sofue 2006; SSS 1984; CSS 1988; Grabelsky et al. 1987; Luna et al. 2006), with the average ± 1 standard error.

Fig. 5.— Bounds on DDDM parameter space as in Figure 4, but including contributions from magnetic fields and from cosmic rays. Black: computed assuming the 'thermal scaling model'. Red: computed assuming the 'warm equipartition model'.

Fig. 6.— Confidence bounds as in Figure 4 but using the values of McKee et al. (2015). Left: not including magnetic field and cosmic ray contributions. Right: including magnetic field and cosmic ray contributions as in Figure 5.

Fig. 7.— The red shaded region, delimited by the solid red line, denotes the parameters allowed by the stability bound of Shaviv (2016b). The blue shaded region, delimited by the solid blue line, denotes the parameters allowed by the kinematic bound of Kramer & Randall (2016). The grey shaded region, delimited by the solid black line, denotes the parameters allowed by the gas parameters as determined in the current paper. As in Figure 6, the dashed and solid black lines denote the 68% and 95% bounds obtained from the combined gas bound, including magnetic field and cosmic ray contributions, and using the parameters of McKee et al. (2015).
Table 1: Old values (Flynn et al. 2006) and new values (including all the references mentioned in Section 4) estimated in this paper. We also include the values of McKee et al. (2015).

Component | n(0) [cm^-3], Flynn et al. (2006) | n(0) [cm^-3], this reference | n(0) [cm^-3], McKee et al. (2015)
H2*       | 0.30 | 0.19 | 0.15
HI(CNM)   | 0.46 | 0.56 | 0.69
HI(WNM)   | 0.34 | 0.15 | 0.21
HII       | 0.03 | 0.03 | 0.0154

* does not include dark molecular gas
Table 2: Intrinsic and effective dispersions for ISM components.

Component | σ [km/s] | σ_T [km/s] | σ_B [km/s] | σ_cr [km/s] | σ_eff (thermal scaling) [km/s] | σ_eff (warm equipartition) [km/s]
H2        | 3.7  | 0.2  | 0.3  | 0.3  | 3.7  | 6.4
HI(CNM)   | 6.8  | 0.8  | 1.2  | 1.3  | 7.1  | 11.8
HI(WNM)   | 13.1 | 6.7  | 10.3 | 10.9 | 21   | 23.7
HII       | 22   | 11.8 | 18.1 | 19   | 36.2 | 39.9
Fig. 3.— Atomic hydrogen midplane densities and surface densities determined by various authors between 1978 and 2008. [Panels show n_HI [cm^-3] and Σ_HI+He [M⊙ pc^-2] versus R [kpc], with data from Burton & Gordon 1978; Wouterloot et al. 1990; Nakanishi & Sofue 2003; Kalberla & Dedes 2008; Dickey & Lockman 1990; Liszt 1992; Kalberla & Kerp 1998, the extinction bound, and the average ± 1 standard error.]

Fig. 4.— Confidence bounds on DDDM parameter space as a function of h_D, the dark disk sech^2(z/2h_D) scale height, using averages and uncertainties from Sections 4.1 to 4.3. Solid lines represent 95% bounds and dashed lines represent 68% bounds. [Panels show the H2 bound, the HI bound, and the combined bound in the Σ_D [M⊙ pc^-2] versus h_D [pc] plane; in the combined panel the region with 5% < prob < 9% is indicated and the remainder excluded.]
References

Bahcall, J. N. 1984a, ApJ, 287, 926
—. 1984b, ApJ, 276, 169
—. 1984c, ApJ, 276, 156
Belfort, P., & Crovisier, J. 1984, A&A, 136, 368
Beuermann, K., Kanbach, G., & Berkhuijsen, E. M. 1985, A&A, 153, 17
Binney, J., & Merrifield, M. 1998, Galactic Astronomy
Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132
Boulares, A., & Cox, D. P. 1990, ApJ, 365, 544
Bronfman, L., Cohen, R. S., Alvarez, H., May, J., & Thaddeus, P. 1988, ApJ, 324, 248
Burton, W. B., & Gordon, M. A. 1978, A&A, 63, 7
Clemens, D. P., Sanders, D. B., & Scoville, N. Z. 1988, ApJ, 327, 139
Cordes, J. M., & Lazio, T. J. W. 2002, ArXiv Astrophysics e-prints, astro-ph/0207156
Cordes, J. M., Weisberg, J. M., Frail, D. A., Spangler, S. R., & Ryan, M. 1991, Nature, 354, 121
Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792
Dame, T. M., Ungerechts, H., Cohen, R. S., et al. 1987, ApJ, 322, 706
Dickey, J. M., & Lockman, F. J. 1990, ARA&A, 28, 215
Digel, S. W. 1991, PhD thesis, Harvard University, Cambridge, MA
Draine, B. T. 2011, Physics of the Interstellar and Intergalactic Medium
Fan, J., Katz, A., Randall, L., & Reece, M. 2013, Physics of the Dark Universe, 2, 139
Ferrière, K. M. 2001, Reviews of Modern Physics, 73, 1031
Flynn, C., Holmberg, J., Portinari, L., Fuchs, B., & Jahreiß, H. 2006, MNRAS, 372, 1149
Gaensler, B. M., Madsen, G. J., Chatterjee, S., & Mao, S. A. 2008, PASA, 25, 184
Gillessen, S., Eisenhauer, F., Trippe, S., et al. 2009, ApJ, 692, 1075
Grabelsky, D. A., Cohen, R. S., Bronfman, L., Thaddeus, P., & May, J. 1987, ApJ, 315, 122
Heiles, C., & Crutcher, R. 2005, in Lecture Notes in Physics, Vol. 664, Cosmic Magnetic Fields, ed. R. Wielebinski & R. Beck (Berlin: Springer), 137
Heiles, C., Kulkarni, S., & Stark, A. A. 1981, ApJ, 247, L73
Heiles, C., & Troland, T. H. 2003, ApJ, 586, 1067
Hessman, F. V. 2015, A&A, 579, A123
Heyer, M., & Dame, T. M. 2015, ARA&A, 53, 583
Holmberg, J., & Flynn, C. 2000, MNRAS, 313, 209
Kalberla, P. M. W. 2003, ApJ, 588, 805
Kalberla, P. M. W., & Dedes, L. 2008, A&A, 487, 951
Kalberla, P. M. W., Dedes, L., Kerp, J., & Haud, U. 2007, A&A, 469, 511
Kalberla, P. M. W., & Kerp, J. 1998, A&A, 339, 745
Kramer, E. D., & Randall, L. 2016, ApJ, 824, 116
Kulkarni, S. R., & Heiles, C. 1987, in Astrophysics and Space Science Library, Vol. 134, Interstellar Processes, ed. D. J. Hollenbach & H. A. Thronson, Jr., 87-122
Liszt, H. S. 1992, in Astrophysics and Space Science Library, Vol. 180, The Center, Bulge, and Disk of the Milky Way, ed. L. Blitz, 111-130
Liszt, H. S., & Burton, W. B. 1983, in Astrophysics and Space Science Library, Vol. 100, Kinematics, Dynamics and Structure of the Milky Way, ed. W. L. H. Shuter, 135-142
Luna, A., Bronfman, L., Carrasco, L., & May, J. 2006, ApJ, 641, 938
McKee, C., Parravano, A., & Hollenbach, D. J. 2015, ApJ
Nakanishi, H., & Sofue, Y. 2003, Publ. Astron. Soc. Jap., 55, 191
—. 2006, Publ. Astron. Soc. Jap., 58, 847
Okumura, A., Kamae, T., & the Fermi LAT Collaboration. 2009, ArXiv e-prints, arXiv:0912.3860
Oort, J. H. 1932, Bull. Astron. Inst. Netherlands, 6, 249
—. 1960, Bull. Astron. Inst. Netherlands, 15, 45
Parker, E. N. 1966, ApJ, 145, 811
Pineda, J. L., Langer, W. D., Velusamy, T., & Goldsmith, P. F. 2013, A&A, 554, A103
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2011, A&A, 536, A19
Randall, L., & Reece, M. 2014, Phys. Rev. Lett., 112, 161301
Read, J. I. 2014, Journal of Physics G Nuclear Physics, 41, 063101
Read, J. I., Lake, G., Agertz, O., & Debattista, V. P. 2008, MNRAS, 389, 1041
Reynolds, R. J. 1985, ApJ, 294, 256
Reynolds, R. J. 1991, in IAU Symposium, Vol. 144, The Interstellar Disk-Halo Connection in Galaxies, ed. H. Bloemen, 67-76
Reynolds, R. J. 1992, in American Institute of Physics Conference Series, Vol. 278, 156-165
Sanders, D. B., Solomon, P. M., & Scoville, N. Z. 1984, ApJ, 276, 182
Schnitzeler, D. H. F. M. 2012, MNRAS, 427, 664
Scoville, N. Z., & Sanders, D. B. 1987, in Astrophysics and Space Science Library, Vol. 134, Interstellar Processes, ed. D. J. Hollenbach & H. A. Thronson, Jr., 21-50
Shaviv, N. J. 2016a, ArXiv e-prints, arXiv:1606.02595
Taylor, J. H., & Cordes, J. M. 1993, ApJ, 411, 674
Wouterloot, J. G. A., Brand, J., Burton, W. B., & Kwee, K. K. 1990, A&A, 230, 21
LONG RANGE SCATTERING FOR NONLINEAR SCHRÖDINGER EQUATIONS WITH CRITICAL HOMOGENEOUS NONLINEARITY IN THREE SPACE DIMENSIONS

Satoshi Masaki, Hayato Miyazaki, and Kota Uriya

12 Jun 2017

Abstract. In this paper, we consider the final state problem for the nonlinear Schrödinger equation with a homogeneous nonlinearity of the critical order which is not necessarily a polynomial. In [10], the first and the second authors consider one- and two-dimensional cases and gave a sufficient condition on the nonlinearity for that the corresponding equation admits a solution that behaves like a free solution with or without a logarithmic phase correction. The present paper is devoted to the study of the three-dimensional case, in which it is required that a solution converges to a given asymptotic profile in a faster rate than in the lower dimensional cases. To obtain the necessary convergence rate, we employ the end-point Strichartz estimate and modify a time-dependent regularizing operator, introduced in [10]. Moreover, we present a candidate of the second asymptotic profile to the solution.

2010 Mathematics Subject Classification. 35B44, 35Q55, 35P25.
Introduction
In this paper, we consider large time behavior of solutions to nonlinear Schrödinger equation (NLS) i∂ t u + ∆u = F (u).
Here, (t, x) ∈ R 1+d and u = u(t, x) is a complex-valued unknown function. We suppose that the nonlinearity F is homogeneous of degree 1 + 2/d, that is, F satisfies (1.1) F (λu) = λ 1+ 2 d F (u) for any u ∈ C and λ > 0. This is the continuation of the previous study in [10]. In [10], we consider one-and two-dimensional cases and give a sufficient condition on F : C → C for existence of a modified wave operator, that is, for that (NLS) admits a nontrivial solution which asymptotically behaves like
(1.2) u_p(t) = (2it)^{-\frac{d}{2}} e^{i\frac{|x|^2}{4t}}\, \hat u_+\!\left(\frac{x}{2t}\right) \exp\left(-i\mu\, \Big|\hat u_+\!\left(\frac{x}{2t}\right)\Big|^{\frac{2}{d}} \log t\right)
as t → ∞, where u + is a given final data and µ is a real constant determined by F . We would remark that it is applicable to non-polynomial nonlinearities such as | Re u| Re u. The aim here is to extend the previous result to the case d = 3. Because the exponent 1+ 2/d becomes small in high dimensions, we face some difficulties such as lack of differentiability of the nonlinearity. As for the nonlinearity F (u) = λ|u| 2/3 u, Ginibre-Ozawa [1] showed that a class of solutions has the asymptotic profile (1.2) with µ = λ. However, it seems that no other homogeneous nonlinearity is treated so far.
In [10], a sufficient condition on the nonlinearity F for existence of a modified wave operator is given in terms of the "Fourier coefficients" of the nonlinearity. The crucial step of construction of a modified wave operator is to find an asymptotic behavior that actually takes place. For this part, specifying a resonant part of the nonlinearity, which determines the shape of the asymptotic behavior, is essential. A new ingredient in [10] is the expansion of the nonlinearity into an infinite sum via Fourier series expansion. For example, the nonlinearity F (u) = | Re u| Re u is written as
(1.3) |\mathrm{Re}\,u|\,\mathrm{Re}\,u = \frac{4}{3\pi}|u|u + \sum_{m\neq 0} \frac{4(-1)^{m+1}}{\pi(2m-1)(2m+1)(2m+3)}\, |u|^{1-2m} u^{1+2m}.
The first gauge-invariant term 4 3π |u|u is the resonant part and the remaining infinite sum is a non-resonant part. It turns out that the possible asymptotic behavior of solutions to (NLS) with d = 2 is (1.2) with µ = 4/3π.
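The expansion (1.3) can also be checked numerically: the coefficient of |u|^{1-2m}u^{1+2m} is the Fourier coefficient of the periodic function g(θ) = |cos θ| cos θ at n = 2m+1. The following Python sketch (not part of the proof) performs this comparison; numbers printed are simply a sanity check of the stated coefficients.

```python
import numpy as np
from scipy.integrate import quad

# Fourier coefficients of g(theta) = |cos(theta)| cos(theta), the periodic function
# associated with |Re u| Re u (degree-2 case, d = 2), compared with (1.3).

def g(theta):
    return np.abs(np.cos(theta)) * np.cos(theta)

def fourier_coefficient(n):
    val, _ = quad(lambda t: g(t) * np.cos(n * t), -np.pi, np.pi)
    return val / (2.0 * np.pi)        # g is even, so the sine part vanishes

print("resonant coefficient g_1:", fourier_coefficient(1), "vs 4/(3 pi) =", 4 / (3 * np.pi))
for m in [1, 2, -1, -2]:
    n = 2 * m + 1
    closed_form = 4 * (-1) ** (m + 1) / (np.pi * (2 * m - 1) * (2 * m + 1) * (2 * m + 3))
    print(f"n={n:+d}: numeric {fourier_coefficient(n):+.6f}  closed form {closed_form:+.6f}")
```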
Once we find a "right" asymptotic behavior, it is possible to construct a solution around the asymptotic profile. For this, we shall show that the non-resonant part is negligible for large time. Note that the non-resonant part is a sum of "gauge-variant" nonlinearities. Because of the gauge-variant property, the non-resonant part has different phase from the solution itself. The disagreement causes an extra time decay effect (cf. stationary phase) and so the effect of the non-resonant term becomes relatively small for large time. The case where the non-resonant part is a finite sum is previously treated in [7,8,16]. The main technical issue to treat general nonlinearity lies in showing that the non-resonant part which consists of infinitely many term is still acceptable (see [10]).
In this paper, we will extend the technique to the three-dimensional case. The argument in [10] is not directly applicable. To construct a solution around a given asymptotic profile in three dimensions, it is required that the solution converges to the asymptotic profile faster than in the one-and two-dimensional cases. Since the rate is controlled by the time decay rate of the non-resonant part, we need a good decay property of the non-resonant part. However, lack of differentiability in three-dimensions then disturbs obtaining such fast decay property.
To overcome this difficulty, we modify the argument of [10] in two respects. The first one is that we enlarge the function space to construct a solution by employing the end-point Strichartz estimate. This enable us to reduce the necessary condition on the rate of convergence of the solution. We notice that the end-point Strichartz estimate is peculiar to the space of dimensions other than two (see [9]). The second respect is to improve the estimate for the high frequency part of the non-resonant part, which yields a better decay rate of the solution. However, we still assume that the given final data has very small low-frequency part. We remark that if a final data has a nonnegligible low-frequency part then there appear other kinds of asymptotic behavior (see [2,3,5,6,13,14]).
In order to present the main result, let us briefly recall the decomposition of the nonlinearity in [10]. We identify a homogeneous nonlinearity F and 2π-periodic function g as follows. A homogeneous nonlinearity F is written as
(1.4) F (u) = |u| 5 3 F u |u| .
We then introduce a 2π-periodic function g(θ) = g F (θ) by g F (θ) = F (e iθ ). Conversely, for a given 2π-periodic function g, we can construct a homogeneous nonlinearity F = F g : C → C by F g (u) = |u| 5 3 g (arg u) if u = 0 and F g (u) = 0 if u = 0. Since g(θ) is 2π-periodic function, it holds, at least formally, that g(θ) = n∈Z g n e inθ , where
(1.5) g n := 1 2π 2π 0 g(θ)e −inθ dθ.
Remark that the expansion gives us
(1.6) F(u) = g_0|u|^{\frac53} + g_1|u|^{\frac23}u + \sum_{n\neq 0,1} g_n |u|^{\frac53-n}u^{n}.

1.1. Main results. Set \langle a\rangle = (1+|a|^2)^{1/2} for a ∈ C or a ∈ R^3. Let s, m ∈ R. The weighted Sobolev space on R^3 is defined by H^{m,s} = \{u ∈ \mathcal S'\,;\ \langle i\nabla\rangle^{m}\langle x\rangle^{s} u ∈ L^2\}. Let us simply write H^m = H^{m,0}. We denote by \|g\|_{\mathrm{Lip}} the Lipschitz norm of g.
Throughout the paper, we suppose the following:
Assumption 1.1.
Assume that F is a homogeneous nonlinearity of degree 5/3 such that a corresponding 2π-periodic function g(θ) satisfies g 0 = 0, g 1 ∈ R, and n∈Z |n| 1+η |g n | < ∞ for some η > 0, where g n is given in (1.5). In particular, g is Lipschitz continuous.
Theorem 1.2 (Existence and uniqueness). Suppose that the nonlinearity F satisfies Assumption 1.1 for η > 0. Fix δ ∈ (3/2, 5/3) so that δ − 3/2 < 2η. Then, there exists ε_0 = ε_0(\|g\|_{\mathrm{Lip}}) such that for any u_+ ∈ H^{0,2} ∩ H^{-δ} satisfying \|\hat u_+\|_{L^∞} < ε_0 there exist T > 0 and a solution u ∈ C([T,∞); L^2(R^3)) of (NLS) which satisfies
(1.7) \sup_{t\in[T,\infty)} t^{b}\, \|u(t) - u_p(t)\|_{L^2} < \infty
for any b < δ/2, where
(1.8) u_p(t) := (2it)^{-\frac32} e^{i\frac{|x|^2}{4t}}\, \hat u_+\!\left(\tfrac{x}{2t}\right) \exp\left(-i g_1 \Big|\hat u_+\!\left(\tfrac{x}{2t}\right)\Big|^{\frac23}\log t\right).
The solution is unique in the following sense: if \tilde u ∈ C([\tilde T,∞); L^2(R^3)) solves (NLS) and satisfies (1.7) for some \tilde T and b > 3/4, then \tilde u = u.
The following theorem describes the asymptotic behavior more precisely.
Theorem 1.3. Under the assumptions of Theorem 1.2, the solution u given there satisfies
(1.9) \sup_{t\in[T,\infty)} t^{b}\, \|u - u_p - V\|_{L^\infty_t([t,\infty);L^2_x)\cap L^2_t([t,\infty);L^6_x)} < \infty
for any b < δ/2, where
In the L ∞ ([T, ∞); L 2 )-topology, V is small: For any b < δ/2,
(1.11) sup t∈[T,∞) t b V L ∞ t ([t,∞);L 2 x ) < ∞.
In the L 2 ([T, ∞); L 6 )-topology, it holds that CT − 1 2 . However, we do not have a lower bound of v p so far. If this estimate is sharp then v p and V are true second asymptotic profiles of the solution in L 2 t L 6
(1.12) sup t∈[T,∞) t b V − v p L 2 t ([t,∞);L 6 x ) < ∞ for any b < δ/2, where (1.13) v p (t) := −i n =0,1 g n 1 t −1 − i n−1 n ∆ |u p (t)| 5 3 −n u p (t) n .
xtopology. On the other hand, if v p (and so V) is small also in L 2 t L 6
x -topology, then the asymptotics (1.9) holds with V = 0, which means the asymptotic behavior of u p is the same as that in the case F (u) = λ|u| 2/3 u. Remark 1.5. In the case F (u) = λ|u| 2/3 u, our estimate (1.9) is an improvement of that in [1]. More precisely, it improves possible range of b and includes the endpoint case L 2 t L 6
x .
Remark 1.6. Under suitable additional assumptions such as u + ∈Ḣ −2− , we have V(t) =F (| 2t x | 6/5 u p (t))+o(t −1 ) in L 2 , whereF (u) is a homogeneous nonlinearity such that the corresponding Fourier coefficients areg n = 1 n(1−n) g n . The asymptotic profileF (| 2t
x | 6/5 u p (t)) is a natural extension of those used in [12,16].
Remark 1.7. Our theorem can be applied to F(u) = |\mathrm{Re}\,u|^{\frac23}\,\mathrm{Re}\,u. The corresponding periodic function is g(θ) = |\cos θ|^{\frac23}\cos θ, and its Fourier coefficients are
g_n = (-1)^{\frac{n-1}{2}}\, \frac{\Gamma(\frac{11}{6})\,\Gamma(\frac{3n-5}{6})}{\sqrt{\pi}\,\Gamma(-\frac13)\,\Gamma(\frac{3n+11}{6})} for odd n, and g_n = 0 for even n.
In particular, g_n = O(|n|^{-8/3}) as |n| → ∞, that is, |g_n| \le C|n|^{-8/3} holds for some C > 0 and for any n ≠ 0. See Appendix A for the details.
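The coefficients in Remark 1.7 can be verified numerically. The sketch below compares a direct quadrature of g_n for g(θ) = |cos θ|^{2/3} cos θ with the Gamma-function expression; it is only a sanity check of the formula and of the |n|^{-8/3} decay, not part of the argument.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# g(theta) = |cos(theta)|^{2/3} cos(theta): numeric Fourier coefficients vs. closed form.

def g(theta):
    return np.abs(np.cos(theta)) ** (2.0 / 3.0) * np.cos(theta)

def g_n_numeric(n):
    val, _ = quad(lambda t: g(t) * np.cos(n * t), -np.pi, np.pi, limit=200)
    return val / (2.0 * np.pi)        # g is even, so the sine part vanishes

def g_n_closed(n):                    # the formula of Remark 1.7; even coefficients vanish
    if n % 2 == 0:
        return 0.0
    sign = (-1.0) ** ((n - 1) // 2)
    return sign * gamma(11 / 6) * gamma((3 * n - 5) / 6) / (
        np.sqrt(np.pi) * gamma(-1 / 3) * gamma((3 * n + 11) / 6))

for n in [1, 3, 5, 7, -1, -3]:
    print(f"n={n:+3d}: numeric {g_n_numeric(n):+.6f}   closed form {g_n_closed(n):+.6f}")
# Stirling's formula applied to the closed form makes the |n|^{-8/3} decay explicit.
```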
Remark 1.8. Theorem 1.2 implies that when F satisfies Assumption 1.1 and g 1 = 0, (NLS) admits a nontrivial solution which has the asymptotic profile
(1.14) u p (t) = (2it) − d 2 e i |x| 2 4t u + x 2t .
Notice that this is nothing but the asymptotic behavior of the linear solution e^{it∆}u_+, and so our theorem implies that the equation admits an asymptotically free solution in this case. The nonlinearity F(u) = |\mathrm{Re}\,u|^{\frac23}\mathrm{Re}\,u - i|\mathrm{Im}\,u|^{\frac23}\mathrm{Im}\,u is such an example (see Appendix A).

1.2. Strategy and Improvements. Let us briefly outline the proof of Theorems 1.2 and 1.3. The strategy is in the same spirit as in [10]. By the decomposition (1.6) and by Assumption 1.1, we write
(1.15) F(u) = G(u) + N(u), G(u) := g_1|u|^{\frac23}u, N(u) := \sum_{n\neq 0,1} g_n|u|^{\frac53-n}u^{n};
G corresponds to the resonant part and N to the non-resonant part. We then introduce a formulation in [8] (see also [4,7,16]). Let U(t) = e^{it∆}. Introduce a multiplication operator M(t) and a dilation operator D(t) by
M (t) = e i|x| 2 4t , (D(t)f )(x) = (2it) − 3 2 f x 2t .
They are isometries on L 2 (R 3 ). Then, u p is written as u p (t) = M (t)D(t) w(t) with (1.16) w(t) := u + exp(−ig 1 | u + | 2 3 log t). Note that | w(t.x)| = | u + (x)|. We regard the equation (NLS) as
L(u − u p ) = F (u) − F (u p ) − Lu p + G(u p ) + N (u p ),
where L = i∂ t + ∆ x . A computation shows that it is rewritten as the following integral equation;
(1.17) u(t) − u p (t) = i ∞ t U (t − s) (F (u) − F (u p )) (s)ds + E r (t) + E nr (t),
where external terms are defined by
E r (t) := R(t) w − i ∞ t U (t − s)R(s)G( w)(s) ds s , (1.18) E nr (t) := i ∞ t U (t − s)N (u p )(s)ds, (1.19) with R(t) = M (t)D(t) U − 1 4t − 1
(see [8] for the details).
For R > 0, T 1, and b > 0, we define a complete metric space
X T,b,R := {v ∈ C([T, ∞); L 2 (R 3 )); v − u p X T,b R}, v X T,b := sup t∈[T,∞) t b v(t) L 2 (R 3 ) = sup t∈[T,∞) t b v L ∞ t ([t,∞);L 2 (R 3 )) , d(u, v) := u − v X T,b .
It is easy to see that
X T 1 ,b 1 ,R 1 ⊂ X T 2 ,b 2 ,R 2 if (1 )T 1 T 2 , b 1 b 2 , and R 1 R 2 .
When the asymptotic profile u p is suitably chosen, we can construct a solution in X T,b,R for some T, b, R. The appropriateness can be stated as the existence of T 0 1 such that
(1.20) E r + E nr X T 0 ,b < ∞,
where E_r and E_nr are given in (1.18) and (1.19), respectively. The solvability of (1.17) under this assumption will be discussed in Section 3. Then, it will turn out that we need to choose b > 3/4.
Remark 1.9. The condition for b is b > d/4 in dimensions d = 1, 2 (see [10]), and so the above condition is a natural extension.
Remark 1.10. An improvement lies in the definition of the X_{T,b}-norm. In the previous paper [10], the norm has one more term
(1.21) sup t∈[T,∞) t b v L q t ([t,∞),L r x (R d )) , where (q, r) = (4, ∞) if d = 1 and (q, r) = (4, 4) if d = 2 are admissible pairs.
In the three-dimensional case, we are able to remove this kind of auxiliary norm by means of the endpoint Strichartz estimate. Theorem 1.3 suggests that the exponent b for which (1.21) can be bounded actually depends on the choice of (q, r).
The main step of the proof of main theorems is the following.
Proposition 1.11. Let 3/2 < δ < 5/3. Assume that n∈Z |n| 1+η |g n | < ∞ for some η > 1 2 (δ − 3 2 ). For any u + ∈ H 0,2 ∩ H −δ , there exists a constant C = C(g 1 , u + H 0,2 ∩H −δ ) such that (1.22) E r + E nr L ∞ t ([T,∞);L 2 ) CT − δ 2 log T 3 n∈Z |n| 1+η |g n | and (1.23) E r + E nr − V L ∞ t ([T,∞);L 2 )∩L 2 ([T,∞);L 6 ) CT − δ 2 log T 3 n∈Z |n| 1+η |g n | holds for all T 2, where V is given in (1.10).
The first estimate shows that (1.20) holds for 3/4 < b < δ/2. We then obtain Theorem 1.2. The second estimate is a main step of the proof of (1.9). Combining some other estimates on V, we obtain Theorem 1.3.
The main technical part lies in the estimate of E nr . We briefly recall previous results to explain how to handle the term. In [7], Hayashi, Naumkin, Shimomura, and Tonegawa introduced an argument to show the time decay of the non-resonant part by means of integration by parts. The decay comes from the fact that the phase of the non-resonant part is different from that of the linear part. Their method however requires higher differentiability of the nonlinearity. In order to reduce the required differentiability of the nonlinearity, Hayashi, Naumkin, and Wang [8] employ a time-dependent smoothing operator (essentially a cutoff to the low-frequency part) and apply the integration by parts only to the low-frequency part. In [10], the frequency cutoff is chosen dependently also on the "Fourier mode" to treat an infinite Fourier series expansion of the nonlinearity.
The time decay estimates of the high-frequency part in [8,10] are based on the fact that the regularizing operator converges to the identity operator as time goes to infinity. So, the only way to improve the estimate would seem to "lessen" the high-frequency part by modifying the regularizing operator so that it converges in a faster rate. However, if we do so, then the estimate for the low-frequency part becomes worse. The loss may not be recovered by refining the estimate on the low-frequency part because such a refinement requires differentiability more than that nonlinearities satisfying (1.1) possess.
To resolve the difficulty, we improve the estimate for the high-frequency part in another way. We work with a regularizing operator which has a flatness property. This enable us to use a regularizing operator even milder than that used in [8,10]. For the details, see Remark 2.2. As a result, it reduces the required differentiability of the nonlinearity. The idea is also applicable to the two-dimensional case and improves the previous result in [10]. However, we do not pursue it here.
The rest of the paper is organized as follows. In Section 2, we summarize useful estimates. The improve estimate for regularizing operator is discussed here. Section 3 is devoted to the proof of main theorems in an abstract form. Then, it will turn out that our main result is a consequence of Proposition 1.11. Finally, we prove Proposition 1.11 in Section 4.
Preliminaries
2.
1. An estimate for regularizing operator. To obtain time decay property of the non-resonant part E nr , we improve an estimate for the highfrequency part. In this subsection, we consider general space dimensions d. We denote the homogeneous Sobolev space on R d byḢ m = {u ∈ S ′ ; (−∆) m 2 u ∈ L 2 }. Let ψ ∈ S. We introduce a regularizing operator
K_ψ = K_ψ(t,n) by
(2.1) K_ψ := ψ\!\left(\frac{i\nabla}{|n|\sqrt{t}}\right) := \mathcal F^{-1}\, ψ\!\left(\frac{ξ}{|n|\sqrt{t}}\right) \mathcal F.
We have an equivalent expression
K ψ f = C d ((|n| √ t) d [F −1 ψ](|n| √ t·) * f )(x).
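On a discrete level, K_ψ is simply a Fourier multiplier. The following one-dimensional sketch (periodic box, FFT) illustrates its action and the fact that K_ψ tends to the identity as t → ∞; the box size, grid, sample profile, and the choice n = 3 are arbitrary, and the symbol is the Gaussian ψ_0(ξ) = e^{-|ξ|²/4} used later in Section 4.

```python
import numpy as np

# Discrete sketch of K_psi = psi(i*nabla / (|n| sqrt(t))): multiply the Fourier
# transform of f by psi(xi / (|n| sqrt(t))).  One space dimension, periodic box.

L, N = 100.0, 1024
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)        # Fourier variable of i*nabla

def K(f, t, n, psi=lambda s: np.exp(-np.abs(s) ** 2 / 4.0)):
    symbol = psi(xi / (abs(n) * np.sqrt(t)))
    return np.fft.ifft(symbol * np.fft.fft(f))

f = np.exp(-x ** 2) * np.cos(5.0 * x)                # sample profile with some oscillation
for t in (1.0, 100.0, 10000.0):
    err = np.max(np.abs(K(f, t, n=3) - f))
    print(f"t = {t:8.0f}:  max |K f - f| = {err:.3e}")
# As t grows the cutoff scale |n| sqrt(t) increases, so K converges to the identity,
# in line with Lemma 2.1 (ii).
```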
The following is an improvement of [10] by using a kind of isotropic property of ψ near the origin.
Lemma 2.1 (Boundedness of K). Take ψ ∈ S and set K ψ as in (2.1). Let s ∈ R and θ ∈ [0, 2]. Assume ∇ψ(0) = 0 if θ ∈ (1, 2].
For any t > 0 and n = 0, the followings hold.
(i) K ψ is a bounded linear operator on L 2 and satisfies K ψ L(L 2 ) ψ L ∞ . Further, K ψ commutes with ∇. In particular, K ψ is a bounded linear operator onḢ s and satisfies
K ψ L(Ḣ s ) ψ L ∞ . (ii) K − ψ(0) is a bounded linear operator fromḢ s+θ toḢ s with norm K ψ − ψ(0) L(Ḣ s+θ ,Ḣ s ) Ct − θ 2 |n| −θ .
Proof. The first item is obvious. Let us prove the second. We consider the case θ ∈ (1, 2] and ∇ψ(0) = 0, the other case is the same as in [8]. It suffices to show the case s = 0. By assumption ∇ψ(0) = 0, we have
R d yF −1 ψ(y)dy = 0
For φ ∈Ḣ θ , one sees from the equivalent expression that
[(K ψ − ψ(0))φ](x) = C d (|n| √ t) d R d F −1 ψ(|n| √ ty)(φ(x − y) − φ(x) + y · ∇φ(x))dy.
Remark that 2]. By these estimates,
φ(· − y) − φ + y · ∇φ L 2 x = (e −iy·ξ − 1 + iy · ξ)Fφ L 2 ξ C|y| θ φ Ḣθ for θ ∈ [1,(K ψ − ψ(0))φ L 2 C d (|n| √ t) d R d |F −1 ψ(|n| √ ty)||y| θ φ Ḣθ dy C ψ t − θ 2 |n| −θ φ Ḣθ . The proof is completed.
Remark 2.2. It is the property ∇ψ(0) = 0 that allows us to take θ ∈ (1, 2] in Lemma 2.1 (ii). The property implies that the corresponding cutoff operator K ψ is a "flat" cutoff. It was not used in [8,10] and so θ is restricted to θ 1. If d 2, the time decay t −1/2 for the high-frequency part, which is given with θ = 1, is not sufficient. To recover the lack of decay, the operator of the form ψ |n| −1 t −σ/2 (i∇) was used with σ > 1. This makes the estimate of the high-frequency part better but that of the low-frequency part worse, in view of the time decay rate and order in n. In particular, the low-frequency part generated by the operator is considerably large and so it causes some loss in the integration-by-parts procedure.
Remark 2.3. It is easy to see that if ψ ∈ S satisfies ψ ≡ 1 in the neighborhood of the origin, we have no upper bound on θ in Lemma 2.1.
2.2.
Fractional chain rule of homogeneous functions of order 5/3. Let us collect useful estimates on the estimate of the nonlinearity satisfying (1.1). In view of the expansion (1.6), we consider nonlinearity of the form |u| 5/3−n u n . To this end, we introduce a Lipschitz µ norm (µ > 1). For a multi-index α = (α 1 , α 2 ) ∈ (Z 0 ) 2 , define ∂ α = ∂ α 1 z ∂ α 2 z . Put µ = N + β with N ∈ Z and β ∈ (0, 1]. For a function G ∈ C N (R 2 , C), we define
G Lip µ = |α| N −1 sup z∈C\{0} |∂ α G(z)| |z| µ−|α| + |α|=N sup z =z ′ |∂ α G(z) − ∂ α G(z ′ )| |z − z ′ | β .
If G ∈ C^N(R^2, C) and \|G\|_{\mathrm{Lip}\,\mu} < ∞, then we write G ∈ \mathrm{Lip}\,\mu.
Lemma 2.4. Let n ∈ Z. Then F(z) = |z|^{5/3-n}z^{n} belongs to \mathrm{Lip}\,\frac53 and \|F\|_{\mathrm{Lip}\,5/3} \le C\langle n\rangle^{5/3}, where C is independent of n.
Proof. Set F(z) = |z|^{5/3-n}z^{n}. By definition of the Lipschitz norm,
F Lip 5 3 = sup z∈C\{0} |F (z)| |z| 5/3 + sup z =w |F z (z) − F z (w)| |z − w| 2/3 + sup z =w |Fz(z) − Fz(w)| |z − w| 2/3 .
Obviously, the first term is bounded. In what follows, we estimate the second term. The third term is handled similarly. Introduce F (z) by
F z (z) = 5 6 + n 2 z − 1 6 + n 2z 5 6 − n 2 =: 5 6 + n 2 F (z).
To estimate the second term, it suffices to consider the case w = 1. Indeed,
if w = 0 then | F (z) − F (w)| |z − w| 2/3 = | F (z)| |z| 2/3 C,
otherwise, denoting z and w in the phase amplitude form z = |z|e iθ and w = |w|e iτ , we have
| F (z) − F (w)| |z − w| 2/3 = ||z| 2/3 e i(n−1)θ − |w| 2/3 e i(n−1)τ | ||z|e iθ − |w|e iτ | 2/3 = | |z| |w| 2/3 e i(n−1)(θ−τ ) − 1| | |z| |w| e i(θ−τ ) − 1| 2/3 = | F ( z) − F (1)| | z − 1| 2/3 ,
where z = z/w. Let ε ∈ (0, 1) to be chosen later. Using the elemental inequality |z − 1| max(|z − 1|, |z| − 1), we have
||z| 2/3 e i(n−1) − 1| |z − 1| 2/3 |z| 2/3 + 1 (max(ε, |z| − 1)) 2/3 C(ε)
for any |z − 1| > ε. Let us consider tha case |z − 1| ε. By the Taylor expansion, if ε is sufficiently small then |re iθ − 1| C(|r − 1| + |θ|) for any |z − 1| ε, which implies ||z|e iθ − 1| 2/3 C(||z| − 1| 2/3 + |θ| 2/3 ). Hence,
||z| 2/3 e i(n−1)θ − 1| |z − 1| 2/3 C ||z| 2/3 − 1| + |z| 2/3 |e i(n−1)θ − 1| ||z| − 1| 2/3 + |θ| 2/3 C ||z| 2/3 − 1| + |z| 2/3 |(n − 1)θ| 2/3 ||z| − 1| 2/3 + |θ| 2/3 C n 2/3
for any |z − 1| ε, where we have used |e iτ − 1| = 2| sin(τ /2)| 2 1/3 |τ | 2/3 . Thus, combining the above estimates, we see that
sup z =w |F z (z) − F z (w)| |z − w| 2/3 C n 5/3 .
This completes the proof.
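The bound of Lemma 2.4 can be probed numerically. The sketch below samples the 2/3-Hölder quotient of F_z(z) = (5/6 + n/2)|z|^{2/3} e^{i(n-1)\arg z} at random points; random sampling only underestimates the supremum, so this merely illustrates that the observed growth is compatible with the ⟨n⟩^{5/3} bound, it proves nothing.

```python
import numpy as np

# Crude sampling of the 2/3-Hoelder seminorm of F_z for F(z) = |z|^{5/3-n} z^n.

rng = np.random.default_rng(0)

def F_z(z, n):
    return (5.0 / 6.0 + n / 2.0) * np.abs(z) ** (2.0 / 3.0) * np.exp(1j * (n - 1) * np.angle(z))

def hoelder_seminorm(n, samples=4000):
    z = rng.normal(size=samples) + 1j * rng.normal(size=samples)
    w = rng.normal(size=samples) + 1j * rng.normal(size=samples)
    q = np.abs(F_z(z, n) - F_z(w, n)) / np.abs(z - w) ** (2.0 / 3.0)
    return q.max()

for n in [2, 5, 10, 20, 40]:
    s = hoelder_seminorm(n)
    print(f"n={n:3d}: sampled seminorm {s:8.2f},  ratio to n^(5/3) = {s / n ** (5.0 / 3.0):.3f}")
```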
We recall the fractional chain rule in [15,Theorem 5.3.4.1] (see also [11]).
Lemma 2.5. Suppose that µ > 1 and s ∈ (0, µ). Let G ∈ Lip µ. Then, there exists a positive constant C depending on µ and s such that
|D x | s G(f ) L 2 x C G Lip µ f µ−1 L ∞ x |D x | s f L 2 x
holds for any f ∈ L ∞ ∩Ḣ s .
2.3.
Estimates on nonlinearity. We give some specific estimates on w and | w| 5/3−n w n by using the tools established in the preceding subsection.
Lemma 2.6. Let 3/2 δ < δ ′ < 5/3. Let u + ∈ H 0.5/3 and define w as in (1.16). Then,
w H δ C u + H 0,δ u + H 0,δ 2 3 g 1 u + 1 3 L ∞ log t 2 , and | w| 5 3 −n w n H δ C n δ ′ u + 2 3 L ∞ u + H 0, 5 3 × u + H 0, 5 3 2 3 g 1 u + 1 3 L ∞ log t 2
for any t 2.
Proof. Let us prove the first estimate. Since the L 2 estimate is trivial, we estimateḢ δ norm. Fix t 3 and let λ = −g 1 log t for simplicity. Let Φ(z) = exp(iλ|z| 2/3 ). Note that Φ(z) is a 2/3-Hölder functions with norm O(|λ|) because
|Φ(z 1 ) − Φ(z 2 )| = sin λ 2 (|z 1 | 2/3 − |z 2 | 2/3 ) C|λ||z 1 − z 2 | 2/3 .
It holds that
w Ḣδ C (∇ u + )Φ( u + ) Ḣδ−1 + C|λ| F ( u + )(∇ u + )Φ( u + ) Ḣδ−1 ,
where F (x) = z( d dz + d dz )|z| 2/3 is a 2/3-Hölder continuous function. We only estimate the second term since the first term is treated in a similar way. It follows that
F ( u + )(∇ u + )Φ( u + ) Ḣδ−1 C |∇| δ−1 F ( u + ) L 3 δ−1 ∇ u + L 6 5−2δ Φ( u + ) L ∞ + C F ( u + ) L ∞ |∇| δ−1 ∇ u + L 2 Φ( u + ) L ∞ + C F ( u + ) L ∞ ∇ u + L 6 5−2δ |∇| δ−1 Φ( u + ) L 3 δ−1 .
Obviously, the second term is bounded by u + 2/3
L ∞ u + Ḣδ . By [17, Propo- sition A.1], |∇| δ−1 Φ( u + ) L 3 δ−1 C|λ| u + 5 6 − δ 2 L ∞ |∇| s u + − 1 6 + δ 2 L 3/s C|λ| u + 2 3
H δ , where s = (δ − 1)/( 1 2 ( 2 3 + (δ − 1))) ∈ ( 3 2 (δ − 1), 1). Hence, the the third term is bounded by C|λ| u + 2/3
L ∞ u + 5/3
H δ . Since F is 2/3-Hölder, the same argument shows that the first term is bounded by C u +
5/3
H δ , which completes the proof of the first estimate.
Let us show the second. Let ε > 0 be chosen later. By interpolation inequality, Hölder's inequality, Lemma 2.5 and Lemma 2.4, we have
| w| 1+ 2 3 −n w n H δ | w| 5 3 −n w n 1−θ L 2 | w| 5 3 −n w n θ H 5 3 −ε C ε n 5 3 θ w 2 3 L ∞ w 1−θ L 2 w θ H 5 3 −ε
as long as δ < 5 3 − ε, where θ = 3 5 (1 + 3ε 5−3ε )δ. Choose ε > 0 so small that 5 3 θ δ ′ . Then the second estimate is a consequence of the first.
The following estimate is shown as in [10].
Lemma 2.7. Let w be as in (1.16). Then, it holds that
∂ t (| w| 5 3 −n w n ) H δ C n 1+δ |g 1 | t u + 4 3 L ∞ u + H 0,2 g 1 u + 2 3 L ∞ log t δ
for any 0 δ 2 and t 2.
Remark 2.8. The function ∂ t (| w| 5 3 −n w n ) is of the form t −1 F n ( u + ) exp(−ing 1 | u + | 2/3 log t),
where F n satisfies |F (j) n (z)| C n 1+j |z| 7 3 −j for j = 0, 1, 2. Therefore, we can estimate its H 2 -norm by an explicit calculation. Then, the estimate follows from an interpolation as in [10]. It is possible to estimate this term in a similar way to Lemma 2.6. This improves the assumption on u + into u + ∈ H δ but the order of |n| becomes worse. This is the reason why we apply an interpolation argument to this term, as in [10]. The full regularity u + ∈ H 2 is required in this step.
Construction of a solution around given asymptotic profile
In this section, we solve an equation of the form
(3.1) u(t) − u p (t) = i ∞ t U (t − s) (F (u) − F (u p )) (s)ds + E(t),
where u p is a given asymptotic profile of the form (1.8) and E(t) is an external term. Remark that our equation (1.17) is of the form.
Proposition 3.1. Suppose that g is Lipschitz continuous. Let \hat u_+ ∈ L^∞ and let u_p be as in (1.8). There exists a constant ε_0 = ε_0(\|g\|_{\mathrm{Lip}}) > 0 such that if \|\hat u_+\|_{L^∞} \le ε_0 and if an external term E satisfies \|E\|_{X_{T_0,b}} \le M for some T_0 \ge 1, M > 0, and b > 3/4, then (3.1) admits a unique solution u(t) in X_{T_1,b,2M} for some T_1 = T_1(M, \|g\|_{\mathrm{Lip}}, b) \ge T_0. Moreover, for any function V, admissible pair (q, r), and \tilde b \le b, the solution satisfies
\sup_{t\ge T_1} t^{\tilde b}\, \|u - u_p - V\|_{L^q_t([t,\infty);L^r_x)} \le C + \sup_{t\ge T_1} t^{\tilde b}\, \|E - V\|_{L^q_t([t,\infty);L^r_x)}.
The proposition shows that the conclusion of Theorem 1.2 follows from the estimate (1.20), which is true for b < δ/2 in view of Proposition 1.11. Indeed, for each 3/4 < b < δ/2, we can construct a solution u(t, x) = u(t, x; b) on [T 1 (b), ∞) which satisfies (1.7) for this b, by using the proposition. Uniqueness property of the proposition then show these solution coincide each other.
Hence, with a help of the standard well-posedness theory in L 2 , the solution exists in an interval independent of b, say [T 1 , ∞), and satisfies (1.7) for any b < δ/2. The estimate (1.9) in Theorem 1.3 follows from corresponding estimate on E r + E nr given in Proposition 1.11. Lemma 3.2. Suppose that g is Lipschitz continuous. Let u + ∈ L ∞ and let u p be as in (1.8). Proof. The estimate is the same as in [7,8,16] except for using the endpoint Strichartz' estimate. Let us first decompose
If b > 3/10 then it holds that ∞ t U (t − s) (F (v) − F (u p )) ds X T,b C g Lip v − u p X T,b v − u p 2 3 X T,b T 1 2 − 2 3 b + u + 2 3 L ∞ for any v ∈ X T,b,R with T 1 and R > 0.F (v) − F (u p ) = F (1) (v) + F (2) (v), where F (1) (v) = χ {|up| |v−up|} (F (v) − F (u p )) , F (2) (v) = χ {|up| |v−up|} (F (v) − F (u p )) ,
and χ A is a characteristic function on A ⊂ R 1+3 . Since g is Lipschitz, it follows from [10, Appendix A] that
|F (v) − F (u p )| C g Lip |v − u p | 1+ 2 3 + |u p | 2 3 |v − u p | .
Since b > 3/10, we estimate F (1) (v) by the endpoint Strichartz estimate as follows:
∞ t U (t − s)F (1) (v)ds L ∞ (T,∞;L 2 ) C |v − u p | 1+ 2 3 L 2 (T,∞;L 6 5 ) CT ( 1 2 − 2 3 b)−b v − u p 5 3 X T,b . For estimate of F (2) (v), we use u p (t) L ∞ Ct −3/2 u + L ∞ . Then, ∞ t U (t − s)F (2) (v)ds L ∞ (T,∞;L 2 ) C |u p | 2 3 |v − u p | L 1 (T,∞;L 2 ) CT −b v − u p X T,b u + 2 3 L ∞
as long as b > 0. This completes the proof.
Remark 3.3. The constant C in the estimate of the above lemma can be taken independent of b, provided b \ge 3/4.
Proof of Proposition 3.1. Let
(3.2) Φ(v)(t) := u p (t) + i ∞ t U (t − s) (F (v) − F (u p )) (s)ds + E(t)
By Lemma 3.2 and by assumption, we have
Φ(v) − u p X T 0 ,b C 1 g Lip R R 2 3 T 1 2 − 2 3 b + ε 2 3 0 + M (3.3)
for any v ∈ X T,b,R with T T 0 and R > 0. We next see that
(3.4) d(Φ(v 1 ), Φ(v 2 )) C 2 g Lip R 2 3 T 1 2 − 2 3 b + ε 2 3 0 d(v 1 , v 2 )
for any v 1 , v 2 ∈ X T,b,R with T 1 and R > 0. Indeed, by the integral equation of (NLS), we see that
Φ(v 1 ) − Φ(v 2 ) = i ∞ t U (t − s) (F (v 1 ) − F (v 2 )) (s)ds.
One finds
|F (v 1 ) − F (v 2 )| C g Lip (|v 1 | 2 3 + |v 2 | 2 3 )|u − v| C g Lip (|v 1 − u p | 2 3 + |v 2 − u p | 2 3 )|u − v| + C g Lip |u p | 2 3 |v 1 − v 2 |. Motivated by the calculation, we introduce a decomposition of F (v 1 )−F (v 2 ) into two parts depending on whether |v 1 − u p | 2 3 + |v 2 − u p | 2 3 |u p | 2 3 or not.
The rest of the proof is similar to that of Lemma 3.2.
Choose ε 0 = ε 0 ( g Lip ) so small that
C 1 g Lip ε 2 3 0 1 4 , C 2 g Lip ε 2 3 0 1 4 .
Choose R = 2M . By the assumption b > 3/4, we can choose
T 1 T 0 such that (2M ) 2 3 T 1 2 − 2 3 b 1 ε 2 3
. It then follows from (3.3) and (3.4) that
Φ(v) − u p X T 1 ,b (4C 1 g Lip ε 2 3 0 + 1)M 2M = R and d(Φ(v 1 ), Φ(v 2 )) 2C 2 g Lip ε 2 3 0 d(v 1 , v 2 ) 1 2 d(v 1 , v 2 )
for any v 1 , v 2 ∈ X T 1 ,b,2M , which shows Φ : X T 1 ,b,2M → X T 1 ,b,2M is a contraction mapping. Thus, we obtain a unique solution u(t) ∈ X T 1 ,b,2M to (3.1). Takeb b and an admissible pair (q, r). Then, as in Lemma 3.2, we deduce from the Strichartz estimate that
tb u − u p − V L q t ([t,∞);L r x ) Ctb −b (2M ) + tb E − V L q t ([t,∞);L r x )
for any t T 1 . This shows the latter statement.
Proof of main results
In this section, we prove main theorems by showing Proposition 1.11. Let us first recall an estimate in [8, Lemma 2.1] which shows E r is harmless.
Lemma 4.1. Let δ ∈ (3/2, 5/3). For any \hat u_+ ∈ H^{0,5/3}, there exists a constant C = C(g_1, \|\hat u_+\|_{H^{0,5/3}}) such that
\|R(t)\hat w\|_{L^\infty_t([T,\infty);L^2)\cap L^2_t([T,\infty);L^6)} \le CT^{-\frac{δ}{2}}(\log T)^2
and
\Big\|\int_t^{\infty} U(t-s)R(s)G(\hat w)\,\frac{ds}{s}\Big\|_{L^\infty_t([T,\infty);L^2)\cap L^2_t([T,\infty);L^6)} \le CT^{-\frac{δ}{2}}(\log T)^3
hold for all T \ge 2.
Hence, we concentrate on the treatment of E nr in what follow. As for this term, we have the following.
Proposition 4.2. Let 3/2 < δ < 5/3. Assume that n∈Z |n| 1+η |g n | < ∞ for some η > 1 2 (δ − 3 2 ). Let V and v p be as in (1.10) and (1.13), respectively. For any u + ∈ H 0,2 ∩ H −δ , there exists a constant C = C(g 1 , u + H 0,2 ∩H −δ ) such that
(4.1) E nr − V L ∞ t (T,∞;L 2 )∩L 2 (T,∞;L 6 ) CT − δ 2 log T 3 n =0,1 |n| 1+η |g n | holds for all T 2. Moreover, V is small in L ∞ (T, ∞; L 2 ) in such a sense that (4.2) V L ∞ (T,∞;L 2 ) C u + 5 3 H 0, 5 3 ∩H −δ T − δ 2 n =0,1 |n| −δ |g n | for T 2. Furthermore, V is approximated by v p in L 2 (T, ∞; L 6 ): There exists C = C(g 1 , u + H 0, 5 3 ∩H −δ ) > 0 such that (4.3) V − v p L 2 (T,∞;L 6 ) CT − δ 2 (log T ) 3 n =0,1 |n| 5 6 |g n |.
for T 2.
The estimates (4.1) and (4.2) complete the proof of Proposition 1.11. The estimates (4.2) and (4.3) imply (1.11) and (1.12), respectively. Hence, Theorems 1.2 and 1.3 both follow from the above proposition.
4.1.
Integration by parts and extraction of the main part. Without loss of generality, we may suppose that b 3/4. Using u p = M (t)D(t) w(t) = D(t)E(t) w(t) with E(t) = e it|x| 2 , we obtain
N (u p ) = n =0,1 g n 1 2t D(t)i − 3 2 (n−1) E n (t)φ n (t) ,
where φ n (t) := | w(t)| 5 3 −n w n (t). Let ψ 0 (x) = e −|x| 2 /4 ∈ S and set K(t, n) := K ψ 0 (t, n) as in (2.1). Remark that ∇ψ 0 (0) = 0. We decompose N (u p ) into low frequency part and high frequency part,
N (u p ) = P + Q, where P = n =0,1 g n 1 2t D(t) i − 3 2 (n−1) E n (t)Kφ n (t) , Q = − n =0,1 g n 1 2t D(t) i − 3 2 (n−1) E n (t)(K − 1)φ n (t) .
As for the high frequency part Q, we have the following.
Lemma 4.3. Fix ε > 0. There exists a constant C = C(g_1, \|\hat u_+\|_{H^{0,5/3}}) such that
(4.4) \Big\|\int_t^{\infty} U(t-s)Q(s)\,ds\Big\|_{L^\infty(T,\infty;L^2)\cap L^2(T,\infty;L^6)} \le CT^{-\frac{δ}{2}}(\log T)^3 \sum_{n\neq 0,1}|n|^{ε}|g_n|
for any T \ge 2.
Proof. By Strichartz' estimate, it suffices to bound Q L 1 (T,∞;L 2 ) . By using Lemma 2.1 (ii) and Lemma 2.6, we have
Q(t) L 2 Ct −1 n =0,1 |g n | (K − 1)φ n L 2 Ct −1− δ 2 n =0,1 |n| −δ |g n | φ n Ḣδ Ct −1− δ 2 u + 2 3 L ∞ u + H 0, 5 3 u + H 0, 5 3 2 3 × g 1 u + 1 3 L ∞ log t 2 n =0,1 n ε |g n |
for any ε > 0.
Next, we consider the low-frequency part. By the factorization of U (t) = M (t)D(t)FM (t), we see that Again by factorization of U (t), we have
(4.6) FU (−s)D(s)E ρ (s) = i 3 2 E 1− 1 ρ (s)U ρ 4s D ρ 2
for ρ = 0 (see [8]). Therefore, we further compute
FU (−s)P(s) = n =0,1 i − 3 2 (n−1) g n 1 2s FU (−s)D(s)E n (s)Kφ n (s) = n =0,1 i − 3 2 (n−2) g n 1 2s E 1− 1 n (s)U n 4s D n 2 Kφ n (s). Now, we have E 1− 1 n (s) = A n (s)∂ s (sE 1− 1 n (s)) for n = 0, 1, where (4.7) A n (s) := 1 + i 1 − 1 n s|x| 2 −1 .
Further,
∂ s U n 4s = U n 4s ∂ s − in 2s 2 ∆ .
Therefore, an integration by parts gives us
∞ t E 1− 1 n (s)U n 4s D n 2 Kφ n (s) ds s = − E 1− 1 n (t)A n (t)U n 4t D n 2 Kφ n (t) − ∞ t E 1− 1 n (s)s∂ s s −1 A n (s) U n 4s D n 2 Kφ n (s)ds − ∞ t E 1− 1 n (s)A n (s)U n 4s ∂ s − in 2s 2 ∆ D n 2
Kφ n (s)ds (4.8)
Combining (4.5), (4.6), and (4.8), we reach to
(4.9) i ∞ t U (t − s)P(s)ds =iU (t)F −1 n =0,1 i − 3 2 (n−2) g n ∞ t E 1− 1 n (s)U n 4s D n 2 Kφ n (s) ds 2s = − iD(t) n =0,1 g n 2i 3 2 (n−1) E n (t)D n 2 −1 U − n 4t A n (t)U n 4t D n 2 Kφ n (t) − i ∞ t U (t − s)D(s) n =0,1 g n 2i 3 2 (n−1) E n (s)D n 2 −1 U − n 4s s∂ s s −1 A n (s) U n 4s D n 2 Kφ n (s)ds − i ∞ t U (t − s)D(s) n =0,1 g n 2i 3 2 (n−1) E n (s)D n 2 −1 U − n 4s A n (s)U n 4s ∂ s − in 2s 2 ∆ D n 2
Kφ n (s)ds =: I 1 + I 2 + I 3 .
It will turn out that the term I 1 contains the main part and that I 2 and I 3 are remainder terms.
4.2.
Estimate of reminders. Let us estimate I 2 and I 3 defined in in (4.9). The following estimate is crucial.
Lemma 4.4. Let 3/2 < δ < 5/3 and η > 1 2 δ − 3 2 . Let ψ(x) ∈ S and set K(t, n) := K ψ (t, n) as in (2.1). Then, it holds for any t 1 and n = 0, 1 that [10] if d = 1, 2. Although the proof for d = 3 is essentially the same, we give it for self-containedness.
(4.10) A n (t)U n 4t D n 2 Kφ n (t) L 2 Ct − δ 2 |n| −δ+η φ n (t) H δ + |ξ| −δ φ n (t) L 2 .
Lemma 4.4 is proved in
Proof of Lemma 4.4. We set B(t) = (1 + t|x| 2 ) − 1 2 , which yields |A n (t)| CB(t) 2 for any n = 0, 1. Then we have |x| θ B(t) 2 Ct − θ 2 for any θ ∈ [0, 2] and B 2 ∈ L (3/2)+ε ∩ L ∞ (R d ) for all ε > 0.
By the triangle inequality,
B(t) 2 U n 4t D n 2 Kφ n (t) L 2 B(t) 2 U n 4t − 1 D n 2 Kφ n (t) L 2 + B(t) 2 D n 2 (K − 1) φ n (t) L 2 + B(t) 2 D n 2 φ n (t) L 2
=: I n + II n + III n For any p 1 > 2, one sees from Sobolev embedding and Lemma 2.1 (i) that
I n C B(t) 2 L p 1 |∇| 3 p 1 n|∇| 2 t 1 2 (δ− 3 p 1 ) D n 2 Kφ n (t) L 2 Ct − δ 2 |n| −δ+( δ 2 − 3 2p 1 ) φ n (t) Ḣδ .
By definition of η, we are able to choose p 1 so that
δ 2 − 3 2p 1 < η.
By Lemma 2.1 (ii), we estimate
II n C B 2 L p 2 |∇| 3 p 2 D n 2 (K − 1)φ n (t) L 2 Ct − 3 2p 2 |n| − 3 p 2 |∇| 3 p 2 (K − 1) φ n (t) L 2 Ct − 1 2 ( 3 p 2 +θ 2 ) |n| − 3 p 2 −θ 2 φ n (t) Ḣ 3 p 2 +θ 2
for any p 2 ∈ (2, ∞] and θ 2 ∈ [0, 2]. Taking p 2 and θ 2 so that θ 2 + 3 p 2 = δ, we obtain desired estimate for II. Finally, we have
III n Ct − δ 2 |ξ| −δ D n 2 φ n (t) L 2 Ct − δ 2 |n| −δ |ξ| −δ φ n (t) L 2 . These estimates yield B 2 U n 4t D n 2 Kφ n (t) L 2 Ct − δ 2 |n| −δ+η φ n (t) H δ + |ξ| −δ φ n (t) L 2 .
This completes the proof.
Let us now give the estimate on I 2 and I 3 .
Lemma 4.5. There exists C = C(g 1 , u + H 0,2 ∩H −δ ) > 0 such that
I 2 + I 3 L ∞ ([T,∞);L 2 )∩L 2 ([T,∞);L 6 ) CT − δ 2 (log T ) 3 n =0,1 |n| 1+η |g n |
for any T 2.
Proof. By Strichartz' estimate, the identity ∂ s s −1 A(s) = −2s −2 A(s) + s −2 (A(s)) 2 , and Lemma 4.4, we compute (4.11)
I 2 L ∞ (T,∞;L 2 )∩L 2 (T,∞;L 6 ) C n =0,1 |g n | ∞ T A(s)U n 4s D n 2 Kφ n (s) L 2 ds s C n =0,1 |g n ||n| −δ+η ∞ T s − δ 2 φ n (s) Ḣδ ∩H 0,−δ ds s .
We estimate I 3 L 2 . We introduce the regularizing operators K j := K ψ j (j = 1, 2) by (2.1) with
ψ 1 (x) = − 1 2 x · ∇ψ 0 ∈ S, ψ 2 (x) = i 2 |x| 2 ψ 0 (x) ∈ S.
Remark that ∇ψ 1 (0) = ∇ψ 2 (0) = 0. We then have an identity
∂ s − in 2s 2 ∆ D n 2 Kφ n = D n 2 K∂ s φ n + s −1 D n 2 K 1 φ n + s −1 nD n 2 K 2 φ n .
Since K 1 and K 2 of the form (2.1), the estimate (4.10) is valid also for these regularizing operators. Then, mimicking the estimate of I 2 , we have
I 3 L ∞ (T,∞;L 2 )∩L 2 (T,∞;L 6 ) C n =0,1 |g n ||n| −δ+η ∞ T s − δ 2 ∂ s φ n (s) Ḣδ ∩H 0,−δ ds + C n =0,1 |g n ||n| −δ+η ∞ T s − δ 2 −1 φ n (s) Ḣδ ∩H 0,−δ ds + C n =0,1 |g n ||n| −δ+1+η ∞ T s − δ 2 −1 φ n (s) Ḣδ ∩H 0,−δ ds (4.12)
for T 2. By (4.11), (4.12), Lemmas 2.6 and 2.7, and the estimates
φ n H 0,−δ C u + 2 3 L ∞ u + Ḣ−δ , ∂ t φ n H 0,−δ C |g 1 | t u + 4 3
L ∞ u + Ḣ−δ , we obtain the desired estimate.
4.3. Estimates on the main contribution. We estimate $I_1$ in (4.9). Recall that
\[
V = -\mathcal F^{-1}\sum_{n\neq0,1}\frac{g_n}{(2i)^{\frac32 n}}\, M(-\tfrac{n}{4t})\, A_n(t)\, D(\tfrac{n}{2})\,\phi_n(t).
\]
With the following proposition, we obtain (4.1).

Proposition 4.6. There exists $C = C(g_1, \|u_+\|_{H^{0,\frac53}}) > 0$ such that
\[
\|I_1 - V\|_{L^\infty([T,\infty);L^2)\cap L^2([T,\infty);L^6)} \le CT^{-\frac{\delta}{2}}(\log T)^3\sum_{n\neq0,1}|n|^{1+\eta}|g_n|
\]
holds for any $T \ge 2$.

Proof. We further break up $I_1$ as
\[
I_1 = -iD(t)\sum_{n\neq0,1}\frac{g_n}{(2i)^{\frac32(n-1)}}E_n(t)D(\tfrac{n}{2})^{-1}U(-\tfrac{n}{4t})A_n(t)\big(U(\tfrac{n}{4t})-1\big)D(\tfrac{n}{2})K\phi_n(t) - iD(t)\sum_{n\neq0,1}\frac{g_n}{(2i)^{\frac32(n-1)}}E_n(t)D(\tfrac{n}{2})^{-1}U(-\tfrac{n}{4t})A_n(t)D(\tfrac{n}{2})(K-1)\phi_n(t) - iD(t)\sum_{n\neq0,1}\frac{g_n}{(2i)^{\frac32(n-1)}}E_n(t)D(\tfrac{n}{2})^{-1}U(-\tfrac{n}{4t})A_n(t)D(\tfrac{n}{2})\phi_n(t) =: \mathrm{IV} + \mathrm{V} + \mathrm{VI}.
\]
A computation shows that $\mathrm{VI} = V$. Since $|A_n(t)| \le 1$, we have
\[
\|\mathrm{IV}\|_{L^2_x} \le C\sum_{n\neq0,1}|g_n|\Big(\frac{|n|}{t}\Big)^{\delta/2}\big\||\nabla|^\delta D(\tfrac{n}{2})K\phi_n(t)\big\|_{L^2} \le Ct^{-\frac{\delta}{2}}\sum_{n\neq0,1}|n|^{-\frac{\delta}{2}}|g_n|\,\|\phi_n(t)\|_{\dot H^\delta}
\]
and
\[
\|\mathrm{V}\|_{L^2_x} \le C\sum_{n\neq0,1}|g_n|\,\|(K-1)\phi_n(t)\|_{L^2} \le Ct^{-\frac{\delta}{2}}\sum_{n\neq0,1}|n|^{-\delta}|g_n|\,\|\phi_n(t)\|_{\dot H^\delta}.
\]
Hence, we have the $L^\infty(T,\infty;L^2)$-estimate. Similarly, by the $L^p$--$L^q$ estimate of the Schrödinger group, the Hölder estimate, Sobolev embedding, and Lemma 2.1 (i), we have
\[
\|\mathrm{IV}\|_{L^6_x} \le C\sum_{n\neq0,1}|g_n|\,\big\|B(t)^2\big(U(\tfrac{n}{4t})-1\big)D(\tfrac{n}{2})K\phi_n(t)\big\|_{L^{\frac65}} \le C\sum_{n\neq0,1}|g_n|\,\|B^2(t)\|_{L^{p_4}}\Big\||\nabla|^{\frac{3}{p_4}-1}\Big(\tfrac{n|\nabla|^2}{t}\Big)^{\frac{\delta}{2}+\frac12-\frac{3}{2p_4}}D(\tfrac{n}{2})K\phi_n(t)\Big\|_{L^2} \le Ct^{-\frac{\delta}{2}-\frac12}\sum_{n\neq0,1}|n|^{-\delta+(\frac{\delta}{2}+\frac12-\frac{3}{2p_4})}|g_n|\,\|\phi_n(t)\|_{\dot H^\delta}
\]
for any $3 \ge p_4 > 3/2$. By definition of $\eta$, we are able to choose $p_4$ so that
\[
\frac{\delta}{2} + \frac12 - \frac{3}{2p_4} < 1 + \eta.
\]
By Hölder's inequality and Lemma 2.1 (ii), we obtain
\[
\|\mathrm{V}\|_{L^6_x} \le C\sum_{n\neq0,1}|g_n|\,\|B(t)^2\|_{L^3}\|(K-1)\phi_n(t)\|_{L^2} \le Ct^{-\frac{\delta}{2}-\frac12}\sum_{n\neq0,1}|n|^{-\delta}|g_n|\,\|\phi_n(t)\|_{\dot H^\delta}.
\]
This completes the proof.
We are in a position to finish the proof of Proposition 4.2.

Proof of Proposition 4.2. It suffices to establish (4.2) and (4.3). The estimate (4.2) follows from
\[
\|V\|_{L^2} \le C\sum_{n\neq0,1}|g_n|\,\|B(t)^2 D(\tfrac{n}{2})\phi_n(t)\|_{L^2}.
\]
The right-hand side is $\mathrm{III}_n$ in the proof of Lemma 4.4. Finally, we prove (4.3). We have
\[
V = v_p - \sum_{n\neq0,1}\frac{g_n}{(2i)^{\frac32(n+1)}}\,C_n(t)\,M(\tfrac{t}{n})\,D(t)\big(U(-\tfrac{1}{4nt}) - 1\big)\phi_n(t),
\]
where $C_n(t) := \mathcal F^{-1}A_n(t)\mathcal F = \big(1 + i\big(\tfrac{n-1}{n}\big)t\Delta\big)^{-1}$. Since $\|\nabla C_n(t)\|_{\mathcal L(L^2)} \le Ct^{-1/2}$ for any $n \neq 0,1$ and $t \ge 2$, we see from Sobolev embedding that
\[
\|V - v_p\|_{L^6_x} \le Ct^{-\frac12}\sum_{n\neq0,1}|g_n|\,\big\|\big(U(-\tfrac{1}{4nt})-1\big)\phi_n(t)\big\|_{L^2} \le Ct^{-\frac{\delta}{2}-\frac12}\sum_{n\neq0,1}|n|^{-\frac{\delta}{2}}|g_n|\,\|\phi_n(t)\|_{\dot H^\delta}.
\]
Hence, we have the desired estimate.
We finally give an outline of how to obtain the asymptotics of $V(t)$ in Remark 1.6, which hold as $t \to \infty$ for suitable $u_+$. We omit the details.
Appendix A. A calculation of Fourier coefficients
In this appendix, we demonstrate an explicit formula for the Fourier coefficients of the function $g(\theta) = |\cos\theta|^{\alpha-1}\cos\theta$. This contains our example in Remark 1.7.

Proposition A.1. Let $\alpha > -1$ be not an odd integer. Let $g_n := \frac{1}{2\pi}\int_{-\pi}^{\pi}|\cos\theta|^{\alpha-1}\cos\theta\,\cos n\theta\,d\theta$ for $n\in\mathbb Z$. Then $g_n = 0$ for even $n$ and
\[
g_n = (-1)^{\frac{n-1}{2}}\,\frac{\Gamma(\frac{\alpha+2}{2})\,\Gamma(\frac{n-\alpha}{2})}{\sqrt{\pi}\,\Gamma(-\frac{\alpha-1}{2})\,\Gamma(\frac{n+\alpha+2}{2})} \tag{A.1}
\]
for odd $n$. In particular, $g_n = O(|n|^{-\alpha-1})$ as $|n|\to\infty$.

Proof. That $g_n = 0$ for even $n$ is obvious. For odd $n$, by the symmetry we have
\[
g_n = \frac1\pi\int_{-\pi/2}^{\pi/2}\cos^\alpha\theta\,\cos n\theta\,d\theta.
\]
Let $a_m := g_{2m+1}$ for $m\in\mathbb Z$. We first show that there exists a constant $c_\alpha\in\mathbb R$ such that
\[
a_m = c_\alpha\,(-1)^m\,\frac{\Gamma(m-\frac{\alpha-1}{2})}{\Gamma(m+\frac{\alpha+3}{2})} \tag{A.2}
\]
for $m\in\mathbb Z$. By integration by parts, one obtains a recurrence relation $a_m - a_{m-1} = \frac{2}{\pi(\alpha+1)}(\cdots)$, which yields (A.2), and one finds $c_\alpha = \Gamma(\frac{\alpha+2}{2})/\big(\sqrt\pi\,\Gamma(\frac{1-\alpha}{2})\big)$, which shows (A.1) together with (A.2). The last assertion easily follows by means of the Stirling formula. A similar argument shows the following.

Proposition A.2. Let $\alpha > -1$ be not an odd integer. Let $g_n := \frac1{2\pi}\int_{-\pi}^{\pi}|\sin\theta|^{\alpha-1}\sin\theta\,\sin n\theta\,d\theta$ for $n\in\mathbb Z$. Then $g_n = 0$ for even $n$ and
\[
g_n = \frac{\Gamma(\frac{\alpha+2}{2})\,\Gamma(\frac{n-\alpha}{2})}{\sqrt\pi\,\Gamma(-\frac{\alpha-1}{2})\,\Gamma(\frac{n+\alpha+2}{2})} \tag{A.3}
\]
for odd $n$. In particular, $g_n = O(|n|^{-\alpha-1})$ as $|n|\to\infty$.

Proof. That $g_n = 0$ for even $n$ is obvious. For odd $n$, by the symmetry we have $g_n = \frac1\pi\int_{0}^{\pi}\sin^\alpha\theta\,\sin n\theta\,d\theta$. Let $b_m := g_{2m+1}$ for $m\in\mathbb Z$. We have the analogous recurrence relation; together with $b_0 = \frac{\Gamma(\frac{\alpha+2}{2})}{\sqrt\pi\,\Gamma(\frac{\alpha+3}{2})}$, we obtain the result as in the previous case.
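For readers who wish to check (A.1) numerically, the following short Python sketch (ours, not part of the original argument; it assumes NumPy and SciPy are available) compares the quadrature definition of $g_n$ with the closed form for a non-odd-integer value of $\alpha$:

```python
# Numerical sanity check of (A.1); alpha = 5/3 is the case of Remark 1.7.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def g_quadrature(n, alpha):
    # g_n = (1/2pi) \int_{-pi}^{pi} |cos t|^{alpha-1} cos t cos(n t) dt
    integrand = lambda t: np.abs(np.cos(t))**(alpha - 1.0) * np.cos(t) * np.cos(n * t)
    val, _ = quad(integrand, -np.pi, np.pi, limit=200)
    return val / (2.0 * np.pi)

def g_closed_form(n, alpha):
    # Formula (A.1), valid for odd n
    assert n % 2 == 1
    sign = (-1.0)**((n - 1) // 2)
    return (sign * gamma((alpha + 2) / 2) * gamma((n - alpha) / 2)
            / (np.sqrt(np.pi) * gamma(-(alpha - 1) / 2) * gamma((n + alpha + 2) / 2)))

alpha = 5.0 / 3.0
for n in (1, 3, 5, 7):
    print(n, g_quadrature(n, alpha), g_closed_form(n, alpha))
```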
Japan E-mail address: [email protected] Advanced Science Course, Department of Integrated Science and Technology. Toyonaka, Osaka; Tsuyama College, Tsuyama, Okayama; Okayama, OkayamaDivision of Mathematical Science, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University ; National Institute of Technology ; jp Department of Applied Mathematics, Faculty of Science, Okayama University of ScienceJapan E-mail address: [email protected]. Japan E-mail address: [email protected] of Mathematical Science, Department of Systems Innovation, Grad- uate School of Engineering Science, Osaka University, Toyonaka, Osaka, 560- 8531, Japan E-mail address: [email protected] Advanced Science Course, Department of Integrated Science and Technol- ogy, National Institute of Technology, Tsuyama College, Tsuyama, Okayama, 708-8509, Japan E-mail address: [email protected] Department of Applied Mathematics, Faculty of Science, Okayama Univer- sity of Science, Okayama, Okayama, 700-0005, Japan E-mail address: [email protected]
| []
|
[
"Nonlinear propagating localized modes in a 2D hexagonal crystal lattice",
"Nonlinear propagating localized modes in a 2D hexagonal crystal lattice"
]
| [
"J Bajars \nSchool of Mathematics\nUniversity of Edinburgh James Clerk Maxwell Building\nThe King's Buildings\nMayfield RoadEH9 3JZEdinburghUK, (\n",
"J C Eilbeck \nDepartment of Mathematics\nHeriot-Watt University\nEH14 4ASRiccarton, EdinburghUK\n",
"B Leimkuhler \nSchool of Mathematics\nUniversity of Edinburgh James Clerk Maxwell Building\nThe King's Buildings\nMayfield RoadEH9 3JZEdinburghUK, (\n",
"Maxwell Institute "
]
| [
"School of Mathematics\nUniversity of Edinburgh James Clerk Maxwell Building\nThe King's Buildings\nMayfield RoadEH9 3JZEdinburghUK, (",
"Department of Mathematics\nHeriot-Watt University\nEH14 4ASRiccarton, EdinburghUK",
"School of Mathematics\nUniversity of Edinburgh James Clerk Maxwell Building\nThe King's Buildings\nMayfield RoadEH9 3JZEdinburghUK, ("
]
| []
| In this paper we consider a 2D hexagonal crystal lattice model first proposed by Marín, Eilbeck and Russell in 1998. We perform a detailed numerical study of nonlinear propagating localized modes, that is, propagating discrete breathers and kinks. The original model is extended to allow for arbitrary atomic interactions, and to allow atoms to travel out of the unit cell. A new on-site potential is considered with a periodic smooth function with hexagonal symmetry. We are able to confirm the existence of long-lived propagating discrete breathers. Our simulations show that, as they evolve, breathers appear to localize in frequency space, i.e. the energy moves from sidebands to a main frequency band. Our numerical findings contribute to the open question of whether exact moving breather solutions exist in 2D hexagonal layers in physical crystal lattices. | 10.1016/j.physd.2015.02.007 | [
"https://arxiv.org/pdf/1409.0355v1.pdf"
]
| 18,758,960 | 1409.0355 | e4715a3f97eb44a12cab2746dd3af30f307f4835 |
Nonlinear propagating localized modes in a 2D hexagonal crystal lattice
Sep 2014
J Bajars
School of Mathematics
University of Edinburgh James Clerk Maxwell Building
The King's Buildings
Mayfield RoadEH9 3JZEdinburghUK, (
J C Eilbeck
Department of Mathematics
Heriot-Watt University
EH14 4ASRiccarton, EdinburghUK
B Leimkuhler
School of Mathematics
University of Edinburgh James Clerk Maxwell Building
The King's Buildings
Mayfield RoadEH9 3JZEdinburghUK, (
Maxwell Institute
Nonlinear propagating localized modes in a 2D hexagonal crystal lattice
Sep 2014
In this paper we consider a 2D hexagonal crystal lattice model first proposed by Marín, Eilbeck and Russell in 1998. We perform a detailed numerical study of nonlinear propagating localized modes, that is, propagating discrete breathers and kinks. The original model is extended to allow for arbitrary atomic interactions, and to allow atoms to travel out of the unit cell. A new on-site potential is considered with a periodic smooth function with hexagonal symmetry. We are able to confirm the existence of long-lived propagating discrete breathers. Our simulations show that, as they evolve, breathers appear to localize in frequency space, i.e. the energy moves from sidebands to a main frequency band. Our numerical findings contribute to the open question of whether exact moving breather solutions exist in 2D hexagonal layers in physical crystal lattices.
Introduction
This article examines propagating nonlinear localized modes in crystalline materials. These modes are commonly referred to as propagating discrete breathers, i.e. waves contained within a bell-shaped envelope exhibiting internal oscillations of frequencies outside the phonon band. There is strong interest in such breather solutions, as they provide possible mechanisms underpinning physical phenomena, for example the formation of long decorated dark lines in muscovite mica [3,20,21,22,25], a possible mechanism for high temperature superconductivity [23,19], and the development of next generation plasma fusion reactors [24]. Laboratory and numerical experiments provide evidence for such coherent localized phenomena [26,24,28,8,17,19,18,13].
The existence of dark lines in muscovite mica crystals was first highlighted by Russell some twenty years ago [22]. Although some of these lines were thought to be formed by cosmic rays, the fact that many of the lines follow crystallographic axes was puzzling. Russell was unable to find a suitable linear theory for such phenomena and suggested the possibility of nonlinear localized modes. He called these "quodons" as they appeared to be connected to a symmetry feature of the axes which he called quasi-one-dimensionality, for which displacement of an atom along the axis direction was met by a force acting along the same line (technically this is C 2 symmetry). In this paper we use the term quasi-one-dimensional to refer both to this type of symmetry and to the fact that the observed mobile pulses seem to be highly localized along one of the crystallographic axes. The two effects are believed to be related [17], although the exact mechanism is not clear.
The active component in the mica case seems to be the 2D hexagonal layer of K atoms sandwiched between two relatively rigid silicate layers. Such a symmetry feature may be also associated with many of the materials having high T c superconductivity properties [23], although in this case the underlying 2D layers have a higher cubic rather than hexagonal symmetry. More tenuous suggestions that localized modes could lead to enhanced fusion rates in deuterated crystals have also been put forward [24].
More generally, there is increasing interest in single 2D hexagonal crystals such as graphene [11] and other layered structures that could be built from the two-dimensional atomic crystals based on graphene geometries [12]. An open question is what role both stationary and mobile localized modes can play in such structures.
Due to advances in computer power and better understanding of molecular dynamics algorithms [2,14], we are now well equipped for the numerical study of discrete breathers in higher dimensional dynamical lattices. There is a good theoretical and numerical understanding of the existence of stationary discrete breathers [9,4,15], that is, spatially localized time-periodic excitations. The same cannot be said about mobile discrete breathers in 1D and higher dimensional dynamical lattices [9,4,10,5,16]. There are still open theoretical questions regarding the existence of propagating discrete breathers in general nonintegrable lattices. There are a few exceptions such as the Ablowitz-Ladik chain [1], which is an integrable model. Thus we must rely on numerical studies of propagating localized modes, the main focus of this paper. In addition, we propose here a 2D model with a lower level of complexity for future analytical investigations. In their work, Marín et al. [17,18] showed numerically for the first time the existence of propagating localized modes (discrete breathers) in a 2D dynamical hexagonal nonlinear lattice. They extended their results to a 2D cubic lattice in [19]. Their lattices were subject to a nearest neighbour anharmonic interparticle interaction potential and an on-site potential. Their model represents a 2D nonlinear lattice when embedded in a surrounding 3D lattice and as such can be thought of as a 2D layer model of a 3D layered crystal lattice. Examples of such crystals are cuprates, the copper-oxide based high temperature superconductors, with cubic symmetry, and muscovite mica, a potassium-based silicate insulator, with hexagonal symmetry. The study [17] was limited by available computer power and hence only to models of $16^2$ and $32^2$ lattice sizes (i.e. < 1000 lattice sites) with periodic boundary conditions. This numerical study confirmed the existence of highly localized quasi-one-dimensional discrete breathers propagating in crystallographic directions. The quasi-one-dimensional nature of the discrete breathers suggests that they may exist in crystals containing any 1D chains with $C_2$ symmetry.
The present paper explores the hexagonal model in much greater detail, using a much larger computational domain to simulate a system of up to $3\times10^5$ lattice sites. Using somewhat smaller 2D domains but with periodic boundary conditions, we are able to track a breather traversing one million lattice sites. Our study shows frequency sharpening effects in both 1D and 2D, which were not observed in the paper of Marín et al. [17]. Moreover, we obtain a better understanding of the 2D qualitative nature of these quasi-one-dimensional propagating breathers.
For the theory stated above to be a proper representation of physical reality, discrete breathers must travel long distances, i.e. one or more millimeters, if we are to associate them with the creation of dark lines in mica and to give numerical support for the sputtering experiment carried out by Russell et al. [24]. In this experiment, a specimen of muscovite mica of size ∼1 mm in thickness and ∼7 mm across the (001)-face was subject to low energy alpha particle bombardment at one end of the crystal. The experiment showed that particles were emitted at the opposite face of the specimen in the crystallographic directions of the potassium layer of muscovite. The potassium layer is thought to be the layer where discrete breathers could have propagated [25]. In general discrete breathers are not a priori expected to be long-lived, since the lattice models considered are likely to be nonintegrable. In addition, their lifespan is subject to interactions with defects and with the phonon background, which may be viewed as thermal noise. The experiment by Russell et al. [24] showed the transport of energy over more than $10^7$ lattice sites at about 300 K (room temperature). An obvious challenge is to demonstrate the existence of long-lived propagating discrete breather solutions using a theoretical model, to understand their role in the transport of energy in sputtering experiments and the formation of the dark lines in mica. The simulations of Marín et al. [17] were done on small lattices with periodic boundary conditions: the longest distance of breather travel was reported to be $\le 10^4$ lattice sites (i.e. traversing the periodic lattice many times) before the wave collapsed. Their results suggest that the discrete breathers are sensitive to small scale interactions with the phonon background, i.e. thermal noise may turn propagating modes into stationary ones, or scatter their energy into the lattice, and thus lifetimes were insufficient to support the experimental results of Russell et al. [24]. On the other hand this does not preclude the existence of long-lived propagating breather solutions when the right conditions are met, since the study by Cretegny et al. [7] showed that phonons may turn stationary breathers into mobile ones. The paper [17] used a lattice configuration in its dynamical equilibrium state for the initial conditions, while additionally exciting three atoms in one of the crystallographic directions with positive-negative-positive or vice versa initial velocity conditions. With these initial conditions they were able to create a propagating breather solution together with low amplitude phonons which spread into the domain and continued to interact with the propagating breather. Thus these interactions could be responsible for the collapse of the propagating discrete breather.
The model in [17] was highly restricted in that it incorporated only nearest neighbour interactions between potassium atoms, with the atoms confined to their unit cells. Thus they were not able to study kink solutions in a 2D hexagonal lattice. The objective of this paper is to eliminate this constraint, to allow short and long range interactions between potassium atoms and perform a conceptual numerical study of long-lived breather solutions. The current approach also allows us to study kink solutions in a 2D hexagonal lattice. The paper is organised in the following way. In Section 2 we consider in detail the theoretical model we use in our study. We derive a dimensionless set of equations in Sec. 3, and in Sec. 4 we investigate the linearised system and derive the linearised dispersion relation. In Sec. 5 we report on a number of numerical simulations of long-lived breathers in our model system. Section 6 is devoted to a brief discussion of simulations of kink solutions, which do not appear to travel long distances in the present model. Some mathematical details are presented in two appendices.
Mathematical model
In this section we describe a 2D mathematical K-K sheet layer model of muscovite mica crystal. We model the potassium layer of mica of N potassium atoms by classical Hamiltonian dynamics with the Hamiltonian:
\[
H = K + V + U = \sum_{n=1}^{N}\left[\frac12 m\|\dot{\mathbf r}_n\|^2 + U(\mathbf r_n) + \frac12\sum_{n'=1,\ n'\neq n}^{N} V\big(\|\mathbf r_n - \mathbf r_{n'}\|\big)\right], \tag{1}
\]
where $K$ is the kinetic energy, $U$ is the on-site potential energy encompassing forces from the silicate layers of atoms above and below the potassium K-K sheet, and $V$ is the radial interaction potential between the potassium atoms. In the Hamiltonian (1), $\mathbf r_n \in \mathbb R^2$ is the 2D position vector of the $n$th potassium atom with mass $m$, and $\dot{\mathbf r}_n$ is its time derivative. The symbol $\|\mathbf u\|$ refers to the Euclidean two-norm of a vector $\mathbf u$, i.e. its length.
Forces between crystal layers
The potassium K-K sheet of muscovite mica crystal is compactly sandwiched between rigid layers of silicon-oxygen tetrahedra which enforces hexagonal lattice symmetry on potassium atoms [22]. Marín et al. [17] considered the rigid silicon-oxygen layer approximation and anharmonic interaction Morse forces between free potassium and fixed oxygen atoms to obtain an on-site force as a superposition of these forces for each site. In this paper we adopt a simpler approach and consider a periodic smooth on-site potential with hexagonal symmetry from [29]. This can be thought of as a generalization of a discrete 1D sine-Gordon lattice to a two dimensions with hexagonal symmetry. The on-site potential function resembles an egg-box carton and can be written as
\[
U(x,y) = \frac{2}{3}U_0\left[1 - \frac13\left(\cos\frac{4\pi y}{\sqrt3\sigma} + \cos\frac{2\pi(\sqrt3 x - y)}{\sqrt3\sigma} + \cos\frac{2\pi(\sqrt3 x + y)}{\sqrt3\sigma}\right)\right], \tag{2}
\]
where σ is the lattice constant, i.e. the equilibrium distance between potassium atoms, and U 0 > 0 is the maximal value of the on-site potential. Note that a simple product of cosine functions would not provide the required hexagonal symmetry. In Figure 1(a), we plot the on-site potential function (2) with sixteen sites, σ = 1 and U 0 = 1. In Figure 1(b) we show potassium atoms in their dynamical equilibrium states together with their labels in (x, y) coordinates. We will adopt these labels and notation in Secs. 4 and 5.
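For illustration, a small Python sketch of the on-site potential (2) in dimensionless form ($\sigma = U_0 = 1$) is given below; this is our own code, not the authors', and the grid used to locate the maximum is an arbitrary choice. It checks that the lattice sites are minima with $U = 0$ and that the maximum equals $U_0$:

```python
import numpy as np

def onsite_potential(x, y, U0=1.0, sigma=1.0):
    s3 = np.sqrt(3.0) * sigma
    c = (np.cos(4.0 * np.pi * y / s3)
         + np.cos(2.0 * np.pi * (np.sqrt(3.0) * x - y) / s3)
         + np.cos(2.0 * np.pi * (np.sqrt(3.0) * x + y) / s3))
    return (2.0 / 3.0) * U0 * (1.0 - c / 3.0)

# equilibrium site at the origin and two of its nearest sites (hexagonal lattice)
print(onsite_potential(0.0, 0.0))                    # ~0 (minimum)
print(onsite_potential(1.0, 0.0))                    # ~0 (neighbouring site)
print(onsite_potential(0.5, np.sqrt(3.0) / 2.0))     # ~0 (neighbouring site)

# maximum of the potential on a fine grid: should be close to U0
xs, ys = np.meshgrid(np.linspace(0, 2, 401), np.linspace(0, 2, 401))
print(onsite_potential(xs, ys).max())                # ~1.0 = U0
```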
For further reference and analysis we write down the harmonic approximation to an on-site potential well
\[
U_h(x,y) = \frac{16\pi^2 U_0}{18\sigma^2}\left[(x-x_0)^2 + (y-y_0)^2\right], \tag{3}
\]
where $(x_0, y_0)$ is any local minimum, i.e. equilibrium state, of the on-site potential (2), see Fig. 1. For small atomic displacements, each potassium atom K will remain in its particular site. This was imposed as a global constraint in [17] with nearest neighbour interactions. In our approach we allow atoms to move from one site to another. In addition, the on-site potential (2) provides a simpler implementation, since function (2) is periodic and defined on all of $\mathbb R^2$. In the 1D approximation, i.e. $y =$ constant, this on-site potential (2) reduces to the cosine function, which is the on-site potential of the discrete sine-Gordon equation and a periodic potential of the 1D model considered in [8]. The model in [8] could be thought of as a 1D approximation of the 2D model (1). To see this, without loss of generality, consider $y = 0$ in equation (2), which leads to the cosine function in the 1D approximation. The same holds true for the other two crystallographic lattice directions. The hexagonal lattice, as demonstrated in Fig. 1(b), has three crystallographic lattice directions, which can be prescribed by the direction cosine vectors $(1, 0)^T$ and $(1/2, \pm\sqrt3/2)^T$.
Nonlinear interaction forces
In this section we describe an empirical interaction potential to model the atomic interactions of potassium atoms in the K-K sheet of mica. Essentially, from a modelling point of view, we are concerned with anharmonic radial interaction potentials $V(r) = V(\epsilon,\sigma,r)$ parametrized by $\epsilon > 0$, the depth of the potential well, i.e. $V(\epsilon,\sigma,\sigma) = -\epsilon$, and $\sigma > 0$, the equilibrium distance, i.e. $\partial_r V(\epsilon,\sigma,\sigma) = 0$. In addition, we require that $\partial_{rr}V(\epsilon,\sigma,\sigma) > 0$ and that $V(\epsilon,\sigma,r)$ is monotonically decreasing for $r < \sigma$ and monotonically increasing for $r > \sigma$, such that
\[
\lim_{r\to\infty} V(\epsilon,\sigma,r) = 0, \qquad \lim_{r\to\infty} \partial_r V(\epsilon,\sigma,r) = 0. \tag{4}
\]
As a first step to understanding the properties of the crystalline solids in this model, it is natural to consider the short-ranged scaled Lennard-Jones interaction potential
\[
V_{LJ}(r) = \epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - 2\left(\frac{\sigma}{r}\right)^{6}\right], \tag{5}
\]
where $\sigma$ coincides with the lattice constant in the on-site potential (2), and $r := r_{n,n'} = \|\mathbf r_n - \mathbf r_{n'}\|$ for all $n, n' = 1,\dots,N$ and $n \neq n'$. Recall that the term $r^{-6}$ describes the long range attractive van der Waals force and the term $r^{-12}$ models the Pauli short range repulsive forces. Other possible models are the Morse potential and the Buckingham potential, among others. The Lennard-Jones potential (5) has the asymptotic properties (4). To increase the efficiency of the numerical computations, and to provide a suitable model for nearest neighbour interactions, we introduce an additional parameter in the potential (5), that is, a cut-off radius $r_c$. In this paper we are concerned with a close range interaction model, i.e. $r_c = \sqrt3\sigma$, which resembles but is not restricted to the fixed nearest neighbour interaction model.
We compared our numerical results to longer ranged interaction simulations, that is, with r c = 2σ and r c = 3σ, and did not observe any qualitative differences in our results. We attribute this to the asymptotic properties (4) of the Lennard-Jones potential (5).
To incorporate the cut-off radius, we set potential and forces to zero for all atomic distances larger than r c . For smooth cut-off computations we proceed in a similar manner as presented in [27]. Instead of only two polynomial terms, we add five additional even order polynomial terms to the interaction potential V (r), i.e.
\[
V_{\rm cut}(r) = \begin{cases} V(r) + \epsilon \displaystyle\sum_{j=0}^{4} A_j \left(\frac{r}{r_c}\right)^{2j}, & 0 < r \le r_c, \\[6pt] 0, & \text{otherwise}, \end{cases} \tag{6}
\]
where the cut-off dimensionless coefficients A j → 0 when r c → ∞ and are determined from the following five conditions:
\[
V_{\rm cut}(\sigma) = V(\sigma), \quad \partial_r V_{\rm cut}(\sigma) = \partial_r V(\sigma), \quad \partial_{rr} V_{\rm cut}(\sigma) = \partial_{rr} V(\sigma), \quad V_{\rm cut}(r_c) = 0, \quad \partial_r V_{\rm cut}(r_c) = 0. \tag{7}
\]
In the definition of the cut-off potential (6), we only consider even power polynomial terms of the atomic radius r such that we do not need to compute square roots of atomic distances in the simulations. The particular choice of conditions (7) implies that the harmonic approximation of the cut-off potential (6) is equal to the harmonic potential approximation of V (r):
\[
V_h(r) = -\epsilon + 36\,\epsilon\left(\frac{r}{\sigma} - 1\right)^2.
\]
Thus the linear analysis of the system for nearest neighbour interactions with the potential $V_{\rm cut}(r)$ is equivalent to the linear analysis of the system with the original potential $V(r)$. In Appendix A, we give exact formulas for the cut-off coefficients $A_j$ of an arbitrary potential $V(r)$ satisfying the following properties: $V \to 0$ and $r\partial_r V \to 0$ when $r \to \infty$. The Lennard-Jones potential (5) satisfies these two properties. In Figure 2(a) we compare the Lennard-Jones potential (5) with the Lennard-Jones potential with cut-off radius $r_c = \sqrt3\sigma$ computed by (6). Due to the construction, the potential well is very well preserved despite the additional polynomial terms in the potential. With Figure 2(b) we confirm that the cut-off coefficients $A_j$ tend to zero when the cut-off radius $r_c$ tends to infinity. Thus in the limit we recover the original Lennard-Jones potential (5).

Figure 2: Radial interaction potential $V(r)$. (a) Lennard-Jones potential compared to the Lennard-Jones potential with cut-off radius $r_c = \sqrt3\sigma$. (b) Cut-off coefficients for the Lennard-Jones potential as functions of the cut-off radius $r_c/\sigma$.
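A minimal numerical sketch of this construction is given below; rather than reproducing the exact formulas of Appendix A, it simply solves the five linear conditions (7) for $A_0,\dots,A_4$. The function names and solver choice are ours:

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    return eps * ((sigma / r)**12 - 2.0 * (sigma / r)**6)

def dlj(r, eps=1.0, sigma=1.0):
    return 12.0 * eps / r * ((sigma / r)**6 - (sigma / r)**12)

def cutoff_coefficients(rc, eps=1.0, sigma=1.0):
    """Solve the five conditions (7) for the coefficients A_0, ..., A_4 of (6)."""
    j = np.arange(5)
    M = np.zeros((5, 5))
    M[0] = (sigma / rc)**(2 * j)                                   # tail vanishes at r = sigma
    M[1] = 2 * j * sigma**(2 * j - 1) / rc**(2 * j)                # ... as does its 1st derivative
    M[2] = 2 * j * (2 * j - 1) * sigma**(2 * j - 2) / rc**(2 * j)  # ... and its 2nd derivative
    M[3] = np.ones(5)                                              # V_cut(rc) = 0
    M[4] = 2 * j / rc                                              # d/dr V_cut(rc) = 0
    b = np.array([0.0, 0.0, 0.0, -lj(rc, eps, sigma) / eps, -dlj(rc, eps, sigma) / eps])
    return np.linalg.solve(M, b)

print(cutoff_coefficients(rc=np.sqrt(3.0)))   # A_j for r_c = sqrt(3) sigma
print(cutoff_coefficients(rc=3.0))            # smaller coefficients, cf. Fig. 2(b)
```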
Dimensionless system of equations
In this section we derive a dimensionless system of equations. We consider the Hamiltonian (1) with the on-site potential (2) and the interaction potential (6), that is, a system with total energy
\[
H = \sum_{n=1}^{N}\left[\frac12 m\|\dot{\mathbf r}_n\|^2 + U(U_0,\sigma,\mathbf r_n) + \frac12 \sum_{n'=1,\ n'\neq n}^{N}\left( V(\epsilon,\sigma,r_{n,n'}) + \epsilon\sum_{j=0}^{4}A_j\left(\frac{r_{n,n'}}{r_c}\right)^{2j} \right)\right],
\]
where $r_{n,n'} = \|\mathbf r_n - \mathbf r_{n'}\|$ and the potentials are represented with their sets of parameters and variables. We introduce a characteristic length scale $\sigma$ and time scale $T$ of the system, i.e. $\mathbf r_n = \sigma\bar{\mathbf r}_n$ and $t = T\bar t$. Thus $r_{n,n'} = \sigma\bar r_{n,n'}$, $r_c = \sigma\bar r_c$, $\dot{\mathbf r}_n = (\sigma/T)\dot{\bar{\mathbf r}}_n$ and $H = (m\sigma^2/T^2)\bar H$, where $\bar H$ is the dimensionless Hamiltonian function. Choosing the time scale $T = \sigma\sqrt{m/U_0}$ such that $H = U_0\bar H$, the dimensionless Hamiltonian $\bar H$ of the dimensionless variables is
\[
\bar H = \sum_{n=1}^{N}\left[\frac12\|\dot{\bar{\mathbf r}}_n\|^2 + U(1,1,\bar{\mathbf r}_n) + \frac12\sum_{n'=1,\ n'\neq n}^{N}\left( V(\bar\epsilon,1,\bar r_{n,n'}) + \bar\epsilon\sum_{j=0}^{4}A_j\left(\frac{\bar r_{n,n'}}{\bar r_c}\right)^{2j}\right)\right],
\]
where $\bar\epsilon = \epsilon/U_0$ is a dimensionless parameter, the interaction potential well depth divided by the depth of the on-site potential. Dropping the bars over the variables, except for the parameter $\bar\epsilon$, the dimensionless dynamical system of equations is
\[
\dot{\mathbf r}_n = \mathbf u_n, \qquad \dot{\mathbf u}_n = -\partial_{\mathbf r_n} U(1,1,\mathbf r_n) - \frac12\,\partial_{\mathbf r_n}\sum_{n'=1,\ n'\neq n}^{N}\left( V(\bar\epsilon,1,r_{n,n'}) + \bar\epsilon\sum_{j=0}^{4}A_j\left(\frac{r_{n,n'}}{r_c}\right)^{2j}\right), \tag{8}
\]
for all $n = 1,\dots,N$, where $\mathbf u_n = \dot{\mathbf r}_n$ is the momentum. In the following we consider the dimensionless system (8) in our analysis and computations.
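To illustrate the structure of the right-hand side of (8), a simplified force routine is sketched below. This is our own illustration, not the production code: the on-site force is evaluated by central differences rather than analytically, and the smooth polynomial tail $A_j$ of (6) is omitted for brevity:

```python
import numpy as np

EPS_BAR, RC = 0.05, np.sqrt(3.0)

def onsite(x, y):
    s3 = np.sqrt(3.0)
    c = (np.cos(4.0 * np.pi * y / s3) + np.cos(2.0 * np.pi * (s3 * x - y) / s3)
         + np.cos(2.0 * np.pi * (s3 * x + y) / s3))
    return (2.0 / 3.0) * (1.0 - c / 3.0)

def forces(r):
    """r: (N, 2) array of dimensionless positions; returns the (N, 2) force array."""
    n, h = r.shape[0], 1e-6
    f = np.zeros_like(r)
    # on-site forces: -grad U, evaluated by central differences for brevity
    f[:, 0] = -(onsite(r[:, 0] + h, r[:, 1]) - onsite(r[:, 0] - h, r[:, 1])) / (2.0 * h)
    f[:, 1] = -(onsite(r[:, 0], r[:, 1] + h) - onsite(r[:, 0], r[:, 1] - h)) / (2.0 * h)
    # pair forces from the scaled Lennard-Jones potential, cut off at RC
    for i in range(n):
        for j in range(i + 1, n):
            d = r[i] - r[j]
            dist = np.linalg.norm(d)
            if dist < RC:
                dV = 12.0 * EPS_BAR / dist * ((1.0 / dist)**6 - (1.0 / dist)**12)
                f[i] -= dV * d / dist
                f[j] += dV * d / dist
    return f
```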
The dimensionless system of equations (8) contains two dimensionless parameters, $\bar\epsilon$ and $r_c$, and the cut-off coefficients $A_j$, which depend on the cut-off radius $r_c$. Independently of the value of $r_c$, when $\bar\epsilon = 0$ there are no interaction forces between potassium atoms and the system (8) decouples into a system of nonlinear oscillators. When the value of $\bar\epsilon$ tends to infinity, interaction forces dominate over the forces from the on-site potential, and in this case the equations describe a Lennard-Jones fluid. To find a suitable range of parameter $\bar\epsilon$ values such that both potentials have relatively equal strength, we compute and compare the unrelaxed potentials seen by a potassium atom moving in any of the three lattice directions. In other words, we fix all neighbouring atoms of a particular K atom and compute potential energies for small atomic displacements of the atom in any of the three lattice directions, see Fig. 3.
In Figure 3 we show results for five parameter $\bar\epsilon$ values. It is evident that for $\bar\epsilon > 1$ the interaction forces dominate the on-site forces, and for $\bar\epsilon < 0.001$ the interaction forces are too small compared to the on-site forces and are negligible. Assuming atomic relative displacements from equilibrium in the range of 0.2, these results suggest that for system (8) to model the K-K sheet of muscovite mica we should choose $\bar\epsilon \in [0.001, 1]$. We find Fig. 3 to be in good agreement with the numerical results. We do not observe propagating discrete breather solutions outside of this range of $\bar\epsilon$ values. Without loss of generality we choose $\bar\epsilon = 0.05$ as the main value for our numerical studies.
Linearised equations and dispersion relation
In this section we derive a nearest neighbour elastic spring interaction model of (8) and its dispersion relation. Recall that the cut-off coefficients $A_j$ are chosen such that $\partial_{rr}V_{\rm cut}(\sigma) = \partial_{rr}V(\sigma)$, see conditions (7). We consider the atom $\mathbf r_n$ with labels $n = (l,m)$, see Fig. 1(b), and its six neighbouring atoms with labels $(l \pm 2, m)$, $(l+1, m\pm1)$ and $(l-1, m\pm1)$. The force acting on atom $n$ from atom $n'$ is given by the vector
\[
\mathbf F_{n,n'} = -\frac1r\, \partial_r V(r)\,(\mathbf r_n - \mathbf r_{n'}), \tag{9}
\]
where $r \equiv r_{n,n'} = \|\mathbf r_n - \mathbf r_{n'}\|$. The linearised version of (9) around the dynamical equilibrium states $\mathbf r^0_n$ and $\mathbf r^0_{n'}$, with $r^0 = \|\mathbf r^0_n - \mathbf r^0_{n'}\|$, is
\[
\mathbf F^{\rm lin}_{n,n'} = \partial_{\mathbf r_n}\mathbf F_{n,n'}\,\big(\mathbf r_n - \mathbf r^0_n\big) + \partial_{\mathbf r_{n'}}\mathbf F_{n,n'}\,\big(\mathbf r_{n'} - \mathbf r^0_{n'}\big),
\]
where $\partial_{\mathbf r_{n'}}\mathbf F_{n,n'} = -\partial_{\mathbf r_n}\mathbf F_{n,n'}$, and
\[
\partial_{\mathbf r_n}\mathbf F_{n,n'} = -\partial_{rr}V(r^0)\,\frac{\mathbf r^0_n - \mathbf r^0_{n'}}{r^0}\left(\frac{\mathbf r^0_n - \mathbf r^0_{n'}}{r^0}\right)^T = -\partial_{rr}V(r^0)\, D_{n,n'} = -V''(r^0)\, D_{n,n'}.
\]
The vector (r 0 n − r 0 n ′ ) /r 0 is the corresponding direction cosine vector for the atomic pair (n, n ′ ) equilibrium positions, six in this case, and the symmetric matrix D n,n ′ ∈ R 2×2 is an outer product of the direction cosine vector.
Note that $V''(r^0) = 72\bar\epsilon$ and $r^0 = 1$ are constant when only nearest neighbour Lennard-Jones interactions are considered, and recall that the harmonic approximation of the on-site potential (2) is given by (3). By dropping the index $n = (l,m)$ from the matrix $D_{n,n'}$ and replacing the index $n'$ with the six neighbouring labels of the atom with label $(l,m)$ from Fig. 1(b), we obtain the system of dynamical linear equations
\[
\ddot{\mathbf w}_{l,m} = -\big(D_{l+2,m} + D_{l-2,m} + D_{l+1,m+1} + D_{l+1,m-1} + D_{l-1,m+1} + D_{l-1,m-1}\big)\mathbf w_{l,m} + D_{l+2,m}\mathbf w_{l+2,m} + D_{l+1,m+1}\mathbf w_{l+1,m+1} + D_{l-1,m+1}\mathbf w_{l-1,m+1} + D_{l-2,m}\mathbf w_{l-2,m} + D_{l+1,m-1}\mathbf w_{l+1,m-1} + D_{l-1,m-1}\mathbf w_{l-1,m-1} - \kappa\mathbf w_{l,m}, \tag{10}
\]
where $\mathbf w_{l,m} = \mathbf r_{l,m} - \mathbf r^0_{l,m}$ is the displacement vector from the equilibrium state of atom $(l,m)$ and $\kappa = 16\pi^2/(9V''(r^0))$ after the time rescaling $\hat t = t\sqrt{V''(r^0)}$. In B we give exact expressions for the matrices $D_{l,m}$ and system (10) in componentwise form, which are in exact agreement with the linearised equations of the Morse hexagonal lattice with an on-site harmonic potential presented in [13].
We argue here that the equation (10) with linear interaction forces together with the egg-box on-site potential (2), instead of a harmonic on-site potential (3) is a 2D model with lower level of complexity and could be used as a starting point for analytical investigations. In particular, it may be possible to develop an existence proof for propagating localized modes in this model. The proposed model can be thought as a natural extension of the discrete sine-Gordon equation in two dimensions with hexagonal symmetry. Similarly a square lattice could be considered.
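The construction of the matrices $D_{n,n'}$ as outer products of the six nearest-neighbour direction cosine vectors, and the identity used later in (12), can be checked with a few lines of Python (our sketch, assuming NumPy):

```python
import numpy as np

angles = np.arange(6) * np.pi / 3.0                      # the six nearest-neighbour directions
directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
D_matrices = [np.outer(e, e) for e in directions]        # outer products of direction cosines
print(sum(D_matrices))                                   # equals 3 times the identity, cf. (12)
```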
Following the approach of [13], we derive a dispersion relation from the simple wave solutions
\[
\mathbf w_{l,m} = \mathbf A\, e^{\,i\left(\frac12 k_1 l + \frac{\sqrt3}{2}k_2 m - \hat\omega\hat t\right)}, \tag{11}
\]
where $\mathbf A \in \mathbb R^2$ is an amplitude, $\mathbf k = (k_1,k_2)$ is the wave number, and $\hat\omega$ is a frequency in the $\hat t$ time scale. Explicit calculations give the linear system matrix for $\mathbf w_{l,m}$ in (10)
\[
D_{l+2,m} + D_{l-2,m} + D_{l+1,m+1} + D_{l+1,m-1} + D_{l-1,m+1} + D_{l-1,m-1} = \begin{pmatrix} 3 & 0 \\ 0 & 3\end{pmatrix}, \tag{12}
\]
see B. Thus, substituting (11) and (12) into the linear system (10), we obtain
\[
0 = \big(\hat\omega^2 - \kappa - 3\big)\mathbf A + D_{l+2,m}\mathbf A e^{+ik_1} + D_{l-2,m}\mathbf A e^{-ik_1} + D_{l+1,m+1}\mathbf A e^{i\left(\frac12k_1+\frac{\sqrt3}{2}k_2\right)} + D_{l+1,m-1}\mathbf A e^{i\left(\frac12k_1-\frac{\sqrt3}{2}k_2\right)} + D_{l-1,m+1}\mathbf A e^{i\left(-\frac12k_1+\frac{\sqrt3}{2}k_2\right)} + D_{l-1,m-1}\mathbf A e^{i\left(-\frac12k_1-\frac{\sqrt3}{2}k_2\right)}. \tag{13}
\]
From the symmetry properties of matrices D l,m , equation (13) can be simplified to
\[
0 = \big(\hat\omega^2 - \kappa - 3\big)\mathbf A + 2\cos(k_1)\begin{pmatrix}1&0\\0&0\end{pmatrix}\mathbf A + \cos\big(\tfrac12k_1\big)\cos\big(\tfrac{\sqrt3}{2}k_2\big)\begin{pmatrix}1&0\\0&3\end{pmatrix}\mathbf A - \sin\big(\tfrac12k_1\big)\sin\big(\tfrac{\sqrt3}{2}k_2\big)\begin{pmatrix}0&\sqrt3\\\sqrt3&0\end{pmatrix}\mathbf A
\]
and this leads to the dispersion relation
\[
\Big[\hat\omega^2 - \kappa - 3 + 2\cos(k_1) + \cos\big(\tfrac12k_1\big)\cos\big(\tfrac{\sqrt3}{2}k_2\big)\Big]\Big[\hat\omega^2 - \kappa - 3 + 3\cos\big(\tfrac12k_1\big)\cos\big(\tfrac{\sqrt3}{2}k_2\big)\Big] - 3\sin^2\big(\tfrac12k_1\big)\sin^2\big(\tfrac{\sqrt3}{2}k_2\big) = 0, \tag{14}
\]
which is in exact agreement with the dispersion relation obtained for the Morse hexagonal lattice in [13]. The frequency solution of (14) has two positive branches for a given wavenumber $\mathbf k = (k_1,k_2)$. From (13), by setting $\mathbf k = (2\pi,0)$, for example, we can derive the maximal frequency of the linear system (10) in the $\hat t$ and $t$ time scales, that is
\[
\hat\omega_{\max} = \sqrt{6+\kappa}, \qquad \omega_{\max} = \sqrt{6V''(r^0) + \frac{16\pi^2}{9}}, \qquad \omega = \sqrt{V''(r^0)}\,\hat\omega,
\]
respectively. In Figure 4(a) we show surface plots of the frequency $\omega/2\pi$ versus wavenumber for $\bar\epsilon = 0.05$. In Figure 4(b), we plot normalized dispersion curves for equal components of the wavenumber, that is, $k_1 = k_2$, for different values of $\bar\epsilon$. The normalized frequency can be expressed as
\[
\frac{\omega}{\omega_{\max}} = \frac{\hat\omega}{\hat\omega_{\max}} = \sqrt{\frac{\alpha(k_1,k_2) + \kappa(\bar\epsilon)}{6 + \kappa(\bar\epsilon)}},
\]
which tends to unity when $\bar\epsilon \to 0$ for each value of $(k_1,k_2) \in \mathbb R^2$. Hence $\omega \to \omega_{\max} = 4\pi/3$ when $\bar\epsilon \to 0$, the case of a decoupled system of harmonic oscillators with potential energy (3) and $U_0 = 1$.
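For illustration, the two branches of (14) can be evaluated numerically as the roots of a quadratic in $\hat\omega^2$; the following sketch (our own, not the authors' code) also checks the value of $\hat\omega_{\max}$ for $\bar\epsilon = 0.05$:

```python
import numpy as np

eps_bar = 0.05
Vpp = 72.0 * eps_bar                         # V''(r^0) for the Lennard-Jones pair potential
kappa = 16.0 * np.pi**2 / (9.0 * Vpp)

def branches(k1, k2):
    """Two positive frequency branches of (14), in the hat-t time scale."""
    a = 2.0 * np.cos(k1) + np.cos(k1 / 2) * np.cos(np.sqrt(3) * k2 / 2)
    b = 3.0 * np.cos(k1 / 2) * np.cos(np.sqrt(3) * k2 / 2)
    s = np.sin(k1 / 2) * np.sin(np.sqrt(3) * k2 / 2)
    # (w2 - (kappa+3) + a)(w2 - (kappa+3) + b) - 3 s^2 = 0, a quadratic in w2
    p = (a - kappa - 3.0) + (b - kappa - 3.0)
    q = (a - kappa - 3.0) * (b - kappa - 3.0) - 3.0 * s**2
    disc = np.sqrt(p**2 - 4.0 * q)
    return np.sqrt((-p + disc) / 2.0), np.sqrt((-p - disc) / 2.0)

print(branches(2.0 * np.pi, 0.0)[0], np.sqrt(6.0 + kappa))   # both equal hat-omega_max
```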
Numerical simulations of propagating discrete breathers
In this section we describe numerical simulations of propagating discrete breathers obtained by solving the initial value problem (8). We integrate the Hamiltonian dynamics (8) in time with a second order time reversible symplectic Verlet method [2,14]. In the following, all numerical examples are performed with $\bar\epsilon = 0.05$, $r_c = \sqrt3$, time step $\tau = 0.04$ and periodic boundary conditions. To excite mobile discrete breathers, we consider the lattice in its dynamical equilibrium state, see Fig. 1(b), and excite three neighbouring atomic momenta with the pattern
\[
\mathbf v_0 = \gamma\,(-1;\ 2;\ -1)^T, \tag{15}
\]
where the values of $\gamma \neq 0$ depend on the choice of $\bar\epsilon$. Single kicks or different patterns of simultaneous kicks can be considered as well as the excitation given above. Our objective is to consider initial conditions which produce the least amount of phonon background, which may interfere with the study of propagating discrete breathers. This particular choice of pattern (15) gave the cleanest initial conditions for the computations of propagating discrete breathers. The following results are presented with $\gamma = 0.5$. The main observed properties of propagating discrete breathers were qualitatively similar for different values of the cut-off radius $r_c$, for $\bar\epsilon$ values at which propagating discrete breathers can be observed, for different time steps $\tau$, for different initial momenta patterns and for different values of $\gamma$. However, long time numerical simulations are sensitive to initial conditions, to small changes in parameter and time step values, as well as to round-off errors. This is due to the chaotic nature of the underlying dynamical system.
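A minimal sketch of the excitation (15) and of one velocity Verlet step is given below. The lattice construction, the particular atom indices and the generic `forces` argument are our own assumptions (any force routine such as the one sketched in Sec. 3 can be plugged in), not the authors' code:

```python
import numpy as np

def hexagonal_lattice(nx, ny):
    """Equilibrium positions of an nx-by-ny hexagonal lattice (sigma = 1)."""
    x, y = np.meshgrid(np.arange(nx, dtype=float), np.arange(ny, dtype=float))
    x += 0.5 * (y % 2)                      # every second row is shifted by half a spacing
    y *= np.sqrt(3.0) / 2.0
    return np.stack([x.ravel(), y.ravel()], axis=1)

def kick(u, row_atoms, gamma=0.5):
    """Apply the (-1, 2, -1) momentum pattern (15) along the x direction."""
    for w, idx in zip((-1.0, 2.0, -1.0), row_atoms):
        u[idx, 0] += gamma * w
    return u

def verlet_step(r, u, forces, tau=0.04):
    """One step of the second order symplectic (velocity) Verlet method."""
    f = forces(r)
    u_half = u + 0.5 * tau * f
    r_new = r + tau * u_half
    u_new = u_half + 0.5 * tau * forces(r_new)
    return r_new, u_new

r = hexagonal_lattice(100, 16)
u = kick(np.zeros_like(r), row_atoms=(800, 801, 802))   # three neighbours on chain m = 8
```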
To display the energy over the lattice, we define an energy density function by assigning to each atom its kinetic energy and on-site potential values, as well as half of the interaction potential values. Since the energy H may also take negative values, and to better explore the small scales of the system for plotting purposes only, we replace the total energy of the system by
H log = log(H + | min {H}| + 1)
such that H log ≥ 0.
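In code, the per-atom energy density and the logarithmic rescaling can be sketched as follows (our own helper functions; `onsite` and `pair` stand for the dimensionless potentials of Sec. 3):

```python
import numpy as np

def energy_density(r, u, onsite, pair, rc=np.sqrt(3.0)):
    """Per-atom energy: kinetic + on-site + half of each pair interaction."""
    n = r.shape[0]
    h = 0.5 * np.sum(u**2, axis=1) + onsite(r[:, 0], r[:, 1])
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(r[i] - r[j])
            if d < rc:
                h[i] += 0.5 * pair(d)
                h[j] += 0.5 * pair(d)
    return h

def h_log(h):
    """Logarithmic rescaling used for plotting, so that the result is nonnegative."""
    return np.log(h + np.abs(h.min()) + 1.0)
```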
As a first example, we consider a periodic rectangular lattice: N x = 100 and N y = 16, where N x and N y are the number of atoms in x and y axis directions, respectively. We place the initial momentum pattern (15) in the middle of the domain with respect to the y axis. We integrate the system in time up to 1000 time units. In Fig. 5 we demonstrate the evolution of the energy density function in time. For plotting purposes we have interpolated the energy density function on a rectangular mesh. The peaks of energy in Fig. 5 are associated with the propagating discrete breather. To perform this test, we also included a damping of the atomic momenta at the upper and lower boundaries for the initial time interval t ∈ [0; 100], to reduce the amount of phonons which spread over the domain. The propagating breather moves to the right on the horizontal line, i.e. on the horizontal crystallographic lattice line, and is highly localized in space. Evidently, from Fig. 5, the initial pattern (15) with a small amount of initial damping at the boundaries for some time interval, has created a clean breather solution with small amplitude phonon background not visible to the naked eye.
In Figure 6 we plot atomic displacements from their equilibrium states in the x and y axis directions at the final computational time, i.e. T end = 1000. We indicate the displacement function in the x axis direction by ∆x and in the y axis direction by ∆y. Comparing Figures 6(a) and 6(b), we notice the differences in the scales of the displacements. From Figure 6(a), it is evident that the largest displacement in the x direction is on the main chain of atoms along which the breather propagates and there are only very small amplitude displacements in adjacent chains. From Figure 6(b), it can be seen that on the main chain there is almost zero displacement in the y direction, whilst there is visible displacement of the adjacent chains. Notice the anti-symmetry between breather displacements on adjacent chains in Fig. 6(b).
To understand better the localization properties of the propagating breather solutions, we compute the maximal and minimal displacement values in atomic chains where the breather has propagated over a specified computational time interval. We assign the index m to the horizontal main chain of atoms along which the breather has propagated, and indices m ± k where k = 1, 2, 3 to the adjacent chains of atoms, see Fig. 1(b). We refer to these chains by y m . The displacement plot of maximal and minimal values is shown in Fig. 7(a). The figure confirms that the largest displacement of atoms is on the main chain y m in the x direction with almost zero displacement in y axis direction. Figure 7(a) is in good agreement with Figs. 6(a) and 6(b). Notice that there is still some displacement in both axis directions for atoms in adjacent chains y m±3 . This is due to the presence of phonons in the lattice. Compared to the breather energy, the phonon energy is very small, as can be seen in Fig. 7(b), where we plot the maximal energy of atoms over time.
Evidently, most of the energy is localized on the atoms on the main chain y m and rapidly decays along the y axis directions. Thus in the y axis direction the breather is localized on around five atoms while at the same time it is localized on around seven to eight atoms in the x axis direction, see Fig. 6(a). Closer inspection of Figs. 7(a) and 7(b) shows that maximal and minimal displacements in the x axis directions, as well as the energy, is symmetric with respect to the adjacent chains y m±k where k = 1, 2, 3, while maximal and minimal displacements in the y axis directions are antisymmetric. This is because when the breather propagates on the main chain y m , it pushes atoms away on adjacent chains. The maximum displacement away from the main chain is larger compared to the maximum displacement towards the main chain y m .
The displacement values in Fig. 7(a) depend on the values of $\bar\epsilon$ and $\gamma$. For larger values of $\gamma$, we observed larger displacement values in the x axis direction for the atoms on the main chain $y_m$. The same is true for smaller values of $\bar\epsilon$. Interestingly, smaller values of $\bar\epsilon$ gave smaller displacements in the y direction on adjacent chains, and hence increase the quasi-one-dimensional nature of propagating discrete breathers. Indeed, Figure 7(b) indicates the quasi-one-dimensional nature of propagating discrete breathers. Despite the small amount of energy in adjacent chains of atoms, the energy is still strongly localized. That can be seen in Fig. 8. In Figure 8(a), we plot the energy density function of the chain $y_m$ at each time unit. Similarly we plot the energy density functions as a function of time in the adjacent chains $y_{m+1}$ and $y_{m+2}$, see Figs. 8(b) and 8(c), respectively. As before, for plotting purposes we interpolated the results on the uniform mesh.
We pick maximal colour scales according to the values in Fig. 7(b). The amount of the breather energy in the chain y m is much higher as compared to the phonon energies; none of these energies are visible in Fig. 8(a). On the contrary, the small amount of phonon energy is visible in Figs. 8(b) and 8(c), thus confirming the presence of phonon waves in the lattice. The damping initiated in the simulation at initial times at the boundaries does not remove the phonons completely from the system, and these phonons will affect the long term solution of the propagating discrete breather in the lattice with periodic boundary conditions. Questions regarding the energy loss by the breather solution, its velocity, focusing properties and lifespan will be addressed in the following sections.
Focusing of discrete breathers in frequency space
In the previous section, we described how to excite propagating discrete breathers and discussed the energy localization properties in atomic chains of the propagating discrete breather. In this section we study a novel feature, the focusing property in the frequency domain, that is, the spectral properties of 2D propagating discrete breather solutions. For our study we consider numerical simulations on the rectangular long strip lattice: N x = 20000 and N y = 16, with periodic boundary conditions. We integrate in time until the breather has reached the right hand end of the lattice, that is, after around 10 5 time units in our example. For this example we kept damping at the upper and lower boundary until the breather has passed 500 sites, to remove some amount of the phonons from the lattice, thus obtaining cleaner data. We collected time series data of the displacement function ∆x m (t) at 100 equally spaced atoms on the main chain along which the breather propagates. From the data obtained, we compute the spectrum and plot the squared amplitude of the discrete Fourier transform in Fig. 9(a). We illustrate the same result but in the squared amplitude versus frequency domain in Fig. 9(b), where we plot each tenth section of Fig. 9(a). Two main observations can be drawn from Fig. 9. The first is that the breather frequency is above the phonon frequency band, compare Fig. 9(a) to Fig. 4(a). The second observation is that the propagating breather is focusing in frequency space as it evolves. As a result, we observe spreading of the breather in the time domain, which is illustrated in Fig. 10. In Figure 10, we show the time series of the displacement function ∆x m (t) on a normalized time axis of three atoms from the main chain y m . Notice that the amplitude of the breather in Fig. 10 does not change, only the width of the wave. Importantly, we observed such focusing effects also in 1D versions of our 2D lattice model. This naturally raises the question whether the spreading of the breather occurs also in the spatial domain. In our numerical results, we did not observe such phenomenon. It would be interesting to see if such frequency sharpening, as breathers evolve, also arises in other 1D and 2D models.
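The spectral analysis can be sketched in a few lines of Python (our illustration only; the synthetic Gaussian-modulated signal merely stands in for the recorded displacement time series, and the sampling interval is an assumption):

```python
import numpy as np

dt = 1.0                                   # sampling interval (time units)
t = np.arange(4096) * dt
dx = np.exp(-((t - 2048) / 300.0)**2) * np.cos(2 * np.pi * 0.35 * t)   # placeholder data

spec = np.abs(np.fft.rfft(dx))**2          # squared amplitude of the DFT
freq = np.fft.rfftfreq(dx.size, d=dt)      # frequency in cycles per time unit
print(freq[np.argmax(spec)])               # dominant frequency of the pulse
```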
In our long strip lattice simulations we were not able to reach saturation in frequency space; simulations over longer lattice strips and for longer times would be needed. See the results of Sec. 5.3, where we have performed a long time simulation of a propagating breather solution on a 200 × 16 lattice with periodic boundary conditions. Notice the relative saturation in frequency space there, despite the presence of phonons.
The spreading of the breather in the time domain suggests that the breather is slowing down, since it takes longer to pass through one atom in space. We confirm this by computing the breather velocity in time, see Fig. 11(a). The graph is obtained by tracking the location of the breather in space. From this data we compute the breather velocity. The smoothed out normalized curve of the breather velocity is shown in Fig. 11(a). At the same time, we compute the breather energy in time, which is estimated from the sum of energies over multiple atoms. A smoothed out normalized curve of the breather energy is illustrated in Fig. 11(b). Figure 11 shows that the propagating breather is slowing down and losing its energy. Our numerical tests showed that there is a strong correlation between breather velocity and focusing in frequency space. The breather focuses when it slows down and defocuses when it speeds up. In some rare cases we could observe time intervals of constant breather velocity with no focusing or defocusing in frequency space. In the next section we show that the same focusing effect occurs in adjacent chains.
Focusing in adjacent chains
In this section we demonstrate the focusing properties of the propagating breather in adjacent chains of atoms. We consider the same numerical example above and collect time series data of displacement functions ∆x m+1 (t), ∆y m+1 (t), ∆x m+2 (t) and ∆y m+2 (t) of equally spaced atoms in lattice. Recall that the breather propagates on the lattice chain y m . For the atom displacement functions on adjacent lines, we produce equivalent figures to Fig. 9(a), see Fig. 12. The dashed line indicates the dominant frequency of the Fig. 9(a). All plots of Fig. 12 show the same focusing effect, but with smaller amplitudes as indicated by the colour bars.
It is interesting that Figs. 12(b) and 12(c) show focusing towards the same dominant frequency of the displacement function ∆x m (t) while the displacement functions ∆x m+1 (t) and ∆y m+2 (t) appear to be focusing on two frequencies above the phonon band, see Figs. 12(a) and 12(d), respectively. This split of frequencies can be attributed to modulation by the rotational frequencies. That can be seen in Fig. 13, where we plot phase portraits of the displacement functions of one atom in the middle of the computational domain over the time interval when the breather passes through. In other words, Fig. 13 shows 2D breather displacements of atoms in lattice chains adjacent to the main lattice chain y m . In Figure 13(a), we illustrate the phase portrait of the functions ∆x m+1 (t) and ∆y m+1 (t), and in Fig. 13(b) we illustrate the phase portrait of the functions ∆x m+2 (t) and ∆y m+2 (t). Notice the rotational character of the 2D propagating discrete breather in the adjacent lattice chains y m+1 and y m+2 .
Long-lived breather solutions
In the sections above, we performed numerical simulations of propagating discrete breathers and discussed their properties, in particular, the localization of energy and focusing in frequency space. It is still an open question if exact propagating discrete breathers exist in our model. Figure 11 shows that the propagating breather slows down and loses its energy. Long time studies of propagating discrete breathers are subject to the chaotic nature of molecular dynamics model, round-off errors and interactions with the phonon background. All these aspects lead to the unpredictable nature of results and sensitive dependence on initial conditions. Thus analytical studies are needed to answer the question regarding existence of propagating discrete breathers. At the same time this serves as a good motivation to study breather interactions with phonons, i.e. breather propagation in thermalized crystals, and interactions between breathers themselves, which we report elsewhere. To contribute to the discussion of the existence of the breather solutions, we perform a conceptual numerical study of breather lifespan in a periodic lattice. We consider a lattice: N x = 200 and N y = 16, with periodic boundary conditions. We excite a breather solution with atomic momenta pattern (15), γ = 0.5, in the (1, 0) T crystallographic lattice direction on a chain y m .
This initial condition is integrated in time until the breather has passed one million lattice sites, i.e. crossed the computational domain 5000 times. In our example, that took less than 10 7 time units. As before, we kept damping at the upper and lower boundaries, in this case until the breather has passed 2000 sites.
In Figure 14(a), we plot the number of sites the breather has passed versus time. The normalized breather velocity as a function of time is plotted in Figure 14(b), computed from the curve in Fig. 14(a). In addition, after every 50 breather propagation cycles, we collected the time series of the displacement function $\Delta x_m(t)$ of an atom at the middle of the computational domain on the lattice chain $y_m$. The computed frequency spectrum of the time series is illustrated in Fig. 14(c), where the x axis of the figure refers to the number of sites the breather has passed in Fig. 14(a).
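One possible way to track the breather position and velocity on the periodic chain is sketched below (our own helper functions; the energy snapshots would come from the simulation):

```python
import numpy as np

def breather_track(energy_snapshots, n_sites):
    """energy_snapshots: (n_times, n_sites) energy density on the main chain y_m."""
    pos = np.argmax(energy_snapshots, axis=1).astype(float)
    # unwrap the periodic boundary so that the travelled distance keeps growing
    jumps = np.diff(pos)
    jumps[jumps < -n_sites / 2] += n_sites
    jumps[jumps > n_sites / 2] -= n_sites
    return np.concatenate([[pos[0]], pos[0] + np.cumsum(jumps)])

def smoothed_velocity(track, dt_snapshot, window=51):
    """Finite-difference velocity of the track, smoothed with a moving average."""
    v = np.gradient(track, dt_snapshot)
    kernel = np.ones(window) / window
    return np.convolve(v, kernel, mode="same")
```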
Kink solutions
In this section we briefly discuss kink solutions. Recall that our model allows atoms to be displaced outside their unit cells, in contrast to the model by Marín et al. [17]. To excite kink solutions, we consider single-atom initial momentum kicks $\mathbf v_0 = \gamma$ in the $(1,0)^T$ crystallographic lattice direction on the lattice chain $y_m$. We were not able to excite kink solutions with the parameter values $\bar\epsilon = 0.05$ and $\tau = 0.04$. By reducing the values of $\bar\epsilon$ and $\tau$ we were able to observe a highly localized short-lived kink propagating initially along the atomic chain $y_m$.
In Figure 15, we show the position of the kink, as estimated by the position of the maximal energy density, as it evolves. It starts at the hollow circle on the left, and initially travels in a straight line along a crystallographic lattice direction. We suppress plots of other lower energy excitations created by the kick, such as breathers, phonons, etc. The kink radiates energy through phonons and eventually switches to a more random route, eventually coming to a halt at the position marked by a filled circle on the right. For topological reasons the kink cannot be destroyed unless it collides with an anti-kink.
We have performed numerical tests with different values of initial kicks v 0 , ǫ, r cut and time step τ , and did not observe solutions which persisted along a single direction, in contrast to [6], where the same egg-box carton on-site potential (2) was considered, but used a piecewise polynomial interaction potential instead of the Lennard-Jones (5). In this latter paper with this different potential, we did observe long-lived kink solutions. These findings suggest that the choice of interaction potential and its relative strength with respect to the on-site potential may play a significant role in the stability of kink solutions. It remains to be seen if existing or novel 2D materials can exhibit such kinks in physical situations.
Summary
In this article, we provide a detailed qualitative study of propagating breather solutions, building on and expanding the work of Marín et al. [17]. By considering a periodic smooth "egg-box carton" on-site potential, and a scaled Lennard-Jones interaction potential with a cut-off radius, we have derived a dimensionless system of equations with one dimensionless parameter $\bar\epsilon$. This parameter is the ratio of the depth of the interaction potential to that of the on-site potential. We have found a range of parameter $\bar\epsilon$ values with which mobile breathers can be observed. This parameter range was found to be in good agreement with numerical observations and with the considerations of [17] that both potentials should be of equal relative strength.
The Lennard-Jones interaction potential with cut-off was constructed such that its harmonic approximation agrees with the harmonic approximation of the Lennard-Jones potential itself. With this in mind, we derived the nearest neighbour linearised equations of phonons. The derivation of the linearised equations and their dispersion relation allows us to confirm that the breather's internal frequency spectrum lies above the phonon band. Thus, together with numerical observations, we were able to confirm that the propagating localized modes are optical breather solutions. In addition, we argue that the linear nearest neighbour interaction model, together with the egg-box on-site potential, could be a suitable model of lower complexity for analytical investigations. Such an argument follows from a natural analogy with the 1D discrete sine-Gordon equation.
In the study of propagating breather solutions, we confirmed their quasi-one-dimensional nature as well as their 2D characteristics. We showed that most of the breather energy is localized on the main chain of atoms along which the breather propagates, and that it propagates in crystallographic lattice directions. In addition, we showed that there is also strong localization of energy in adjacent chains of atoms. From the time series of atomic displacements in both the x and y axis directions, we were able to demonstrate the 2D rotational character of the atomic motion in adjacent chains over the time interval in which the breather passes through. From the same time series data, we computed the frequency spectrum and presented the novel finding that the breather localises in frequency space as it evolves. This behaviour causes the breather to spread in time while preserving its amplitude. We found a correlation between the localization in frequency space and the breather's velocity, and we found the same localization property in adjacent chains of atoms. It would be very interesting to see whether other 1D or 2D models with breather solutions support the same localization property in frequency space as the breather evolves.
Reaching the saturation regime where the frequency sharpening stabilises would require very long simulations on long strip lattices, a computationally challenging task. In addition, the chaotic nature of the molecular dynamics system, together with the phonon background introduced by the initial condition, poses further challenges for long-time simulations. We chose to add a small amount of damping over a short initial time interval to reduce the phonon density in the system.
To contribute to the open question of whether propagating breather solutions exist, we performed a conceptual long-time simulation on a small lattice with periodic boundary conditions. Despite the presence of the phonon background, we were able to observe a long-lived breather solution travelling over one million lattice sites, a factor of 100 further than previous results. This numerical experiment showed a relative saturation in the breather's velocity, that is, an almost constant velocity with small variations, and a relative saturation in the frequency spectrum. The fluctuations in the velocity and spectral results are due to the presence of the weak phonon background. Importantly, this numerical result serves as good motivation for a study of breather interactions and propagation in thermalized lattices, hinting at the possibility of long-lived breather solutions in more realistic physical situations.
We concluded our findings with a brief discussion of kink solutions. Our model allows the displacement of atoms out of the unit cell, in contrast to the model by Marín et al. [17]. We found no kink solutions travelling long distances, in contrast to the findings of [6], where long-lived kink solutions were observed in a 2D hexagonal crystal lattice with a different inter-particle potential. It is clear that kink solutions are strongly affected by the choice of interaction potential, and it remains to be shown whether materials exist which can exhibit such kinks in physical situations.
Figure 1: Smooth periodic on-site potential function with hexagonal symmetry in (x, y) coordinates. (a) On-site potential with sixteen sites, σ = 1 and U_0 = 1. (b) Configuration of potassium atoms in dynamical equilibrium states with discrete labels (l, m) in (x, y) coordinates.

Figure 3: Unrelaxed potential dependent on the dimensionless parameter ǭ as seen by a K atom moving in any of three lattice directions, illustrated in (x, 0) coordinates, r_c = √3. The solid line shows the on-site potential U, the dashed line shows the Lennard-Jones interaction potential V normalized to positive values, and the dash-dotted line shows the sum of both potentials U + V.

Figure 4: Dispersion relation of the linearised 2D hexagonal crystal lattice equations. (a) Two branches of positive frequency ω/2π, ǭ = 0.05. (b) Normalized upper-branch dispersion curves for equal components of the wavenumber, i.e. k_1 = k_2, and different values of ǭ.

Figure 5: Evolution of the energy density function in time, and energy localization around the propagating discrete breather solution.

Figure 6: Atomic displacements in space from their equilibrium states at the final computational time T_end = 1000. (a) Displacement function ∆x in the x axis direction. (b) Displacement function ∆y in the y axis direction.

Figure 7: Spatial displacements and energy of the propagating discrete breather. (a) Maximal and minimal displacements in the x and y axis directions on atomic chains over the computational time interval. (b) Maximal energy in chains of atoms over the computational time interval.

Figure 8: Contour plots of the breather energy on atomic chains in time. (a) Breather energy on the main chain y_m. (b) Breather energy on the adjacent chain y_{m+1}. (c) Breather energy on the adjacent chain y_{m+2} (note the different colour scales).

Figure 9: Propagating breather frequency spectrum. (a) Amplitude squared of the time series of the displacement function ∆x_m(t) on the main lattice chain y_m. (b) Ten cross-sections of plot (a).

Figure 10: Spreading of the propagating breather solution in time, as demonstrated by the displacement function ∆x_m(t) at three locations in the spatial domain on the chain y_m. Computational times are normalized to the same time axis.

Figure 11: Long strip lattice simulation. (a) Normalized breather velocity in time. (b) Normalized breather energy in time.

Figure 12: Frequency spectrum of a propagating breather in adjacent chains (m+1 and m+2). (a) Amplitude squared of the time series of the displacement function ∆x_{m+1}(t) on the lattice chain y_{m+1}. (b) Amplitude squared of ∆y_{m+1}(t) on the chain y_{m+1}. (c) Amplitude squared of ∆x_{m+2}(t) on the chain y_{m+2}. (d) Amplitude squared of ∆y_{m+2}(t) on the chain y_{m+2}.

Figure 13: 2D displacements of atoms in lattice chains y_{m+1} and y_{m+2} over the time when the breather passes through. (a) Phase portrait of ∆x_{m+1}(t) and ∆y_{m+1}(t). (b) Phase portrait of ∆x_{m+2}(t) and ∆y_{m+2}(t).

Figures 14(b) and 14(c) show the relative saturation in breather velocity and frequency spectrum over time. Small-scale variations are attributed to the presence of the weak phonon background.

Figure 14: Long-lived breather simulation, N_x = 200 and N_y = 16. (a) Number of sites the breather has passed versus time. (b) Normalized breather velocity in time. (c) Breather frequency spectrum of the displacement function ∆x_m(t) on the y_m chain.

Figure 15: Kink solution, position trace of the maximal energy density function. N_x = 160, N_y = 16, T_end = 10^4, r_cut = √3, ǭ = 0.01, v_0 = 4 and τ = 0.01.
Acknowledgements

JB and BJL acknowledge the support of the Engineering and Physical Sciences Research Council, which has funded this work as part of the Numerical Algorithms and Intelligent Software Centre under Grant EP/G036136/1.

A Cut-off coefficients

In this Appendix we derive a linear system of equations, and its solution, to find the cut-off coefficients A_j for the interaction potential (6). The cut-off coefficients are determined from the conditions (7), which form a linear system of equations (16), where Ṽ = V(r_c)/ǫ and Ṽ_r = ∂_r V(r_c)/ǫ. In addition we require that Ṽ → 0 and r_c Ṽ_r → 0 when r_c → ∞. The system of equations (16) can be solved analytically. The coefficients A_j are shown in non-dimensionless form; to obtain them in dimensionless form, we set σ = 1 and use the dimensionalized functions Ṽ and Ṽ_r.

B Linear system and outer products of direction cosines

In this appendix, we give explicit expressions for the direction cosine vectors, their outer products, and the linear system (10). Applying these matrices to (10), we derive the linear equations in component-wise form:

\ddot{u}_{l,m} = −3 u_{l,m} + (1/4)(u_{l+1,m+1} + u_{l+1,m−1} + u_{l−1,m+1} + u_{l−1,m−1}) + (u_{l+2,m} + u_{l−2,m}) + (√3/4)(v_{l+1,m+1} − v_{l+1,m−1} − v_{l−1,m+1} + v_{l−1,m−1}) − κ u_{l,m} ,

\ddot{v}_{l,m} = −3 v_{l,m} + (3/4)(v_{l+1,m+1} + v_{l+1,m−1} + v_{l−1,m+1} + v_{l−1,m−1}) + (√3/4)(u_{l+1,m+1} − u_{l+1,m−1} − u_{l−1,m+1} + u_{l−1,m−1}) − κ v_{l,m} ,

where w_{l,m} = (u_{l,m}, v_{l,m}), that is, the displacements of atom (l, m) from its equilibrium position in the x and y directions.
[1] M. J. Ablowitz and J. F. Ladik. Nonlinear differential-difference equations and Fourier analysis. Journal of Mathematical Physics, 17:1011, 1976.
[2] M. P. Allen and D. J. Tildesley. Computer Simulation of Liquids. Oxford Science Publications. Oxford University Press, USA, 1989.
[3] J. Archilla, editor. Proceedings of "Quodons in Mica: nonlinear localized travelling excitations in crystals", meeting in honour of Prof. Mike Russell, Altea, September 18-21, 2013. Springer Material Science. Springer-Verlag, in press.
[4] S. Aubry. Discrete breathers: Localization and transfer of energy in discrete Hamiltonian nonlinear systems. Physica D: Nonlinear Phenomena, 216(1):1-30, 2006.
[5] S. Aubry and T. Cretegny. Mobility and reactivity of discrete breathers. Physica D: Nonlinear Phenomena, 119(1-2):34-46, 1998.
[6] J. Bajars, J. C. Eilbeck, and B. Leimkuhler. Numerical simulations of nonlinear modes in mica: past, present and future. In J. Archilla, editor, Quodons in Mica: nonlinear localized travelling excitations in crystals, Springer Material Science. Springer-Verlag, in press. (ArXiv 1408.6853).
[7] T. Cretegny, S. Aubry, and S. Flach. 1D phonon scattering by discrete breathers. Physica D: Nonlinear Phenomena, 119(1-2):73-87, 1998.
[8] Q. Dou, J. Cuevas, J. C. Eilbeck, and F. M. Russell. Breathers and kinks in a simulated crystal experiment. Discrete and Continuous Dynamical Systems - Series S, 4:1107-1118, 2011.
[9] S. Flach and A. V. Gorbach. Discrete breathers - Advances in theory and applications. Physics Reports, 467(1-3):1-116, 2008.
[10] S. Flach and K. Kladko. Moving discrete breathers? Physica D: Nonlinear Phenomena, 127(1-2):61-72, 1999.
[11] A. K. Geim. Graphene: Status and prospects. Science, 324:1530-1534, 2009.
[12] A. K. Geim and I. V. Grigorieva. Van der Waals heterostructures. Nature, 499:419-425, 2013.
[13] K. Ikeda, Y. Doi, B. F. Feng, and T. Kawahara. Chaotic breathers of two types in a two-dimensional Morse lattice with an on-site harmonic potential. Physica D, 225:184-196, 2007.
[14] B. Leimkuhler and S. Reich. Simulating Hamiltonian Dynamics. Cambridge University Press, 2005.
[15] R. S. MacKay and S. Aubry. Proof of existence of breathers for time-reversible or Hamiltonian networks of weakly coupled oscillators. Nonlinearity, 7(6):1623, 1994.
[16] R. S. MacKay and J.-A. Sepulchre. Effective Hamiltonian for travelling discrete breathers. Journal of Physics A: Mathematical and General, 35(18):3985, 2002.
[17] J. L. Marín, J. C. Eilbeck, and F. M. Russell. Localized moving breathers in a 2D hexagonal lattice. Physics Letters A, 248:225-229, 1998.
[18] J. L. Marín, J. C. Eilbeck, and F. M. Russell. 2-D breathers and applications. In P. L. Christiansen, M. P. Søerensen, and A. C. Scott, editors, Nonlinear Science at the Dawn of the 21st Century, pages 293-306. Springer, Berlin, 2000.
[19] J. L. Marín, F. M. Russell, and J. C. Eilbeck. Breathers in cuprate-like lattices. Physics Letters A, 281:21-25, 2001.
[20] F. M. Russell. Identification and selection criteria for charged lepton tracks in mica. International Journal of Radiation Applications and Instrumentation. Part D. Nuclear Tracks and Radiation Measurements, 15:41-44, 1988.
[21] F. M. Russell. Decorated track recording mechanisms in muscovite mica. International Journal of Radiation Applications and Instrumentation. Part D. Nuclear Tracks and Radiation Measurements, 19:109-114, 1991.
[22] F. M. Russell and D. R. Collins. Lattice-solitons and non-linear phenomena in track formation. Radiation Measurements, 25:67-70, 1995.
[23] F. M. Russell and D. R. Collins. Anharmonic excitations in high T_c materials. Physics Letters A, 216:197-202, 1996.
[24] F. M. Russell and J. C. Eilbeck. Evidence for moving breathers in a layered crystal insulator at 300K. Europhysics Letters, 78:10004, 2007.
[25] F. M. Russell and J. C. Eilbeck. Persistent mobile lattice excitations in a crystalline insulator. Discrete and Continuous Dynamical Systems - Series S, 4:1267-1285, 2011.
[26] P. Sen, J. Akhtar, and F. M. Russell. MeV ion-induced movement of lattice disorder in single crystalline silicon. Europhysics Letters, 51:401, 2000.
[27] S. D. Stoddard and J. Ford. Numerical experiments on the stochastic behavior of a Lennard-Jones gas system. Physical Review A, 8(3):1504-1512, 1973.
[28] B. I. Swanson, J. A. Brozik, S. P. Love, G. F. Strouse, A. P. Shreve, A. R. Bishop, W.-Z. Wang, and M. I. Salkola. Observation of intrinsically localized modes in a discrete low-dimensional material. Physical Review Letters, 82:3288-3291, 1999.
[29] Y. Yang, W. S. Duan, L. Yang, J. M. Chen, and M. M. Lin. Rectification and phase locking in overdamped two-dimensional Frenkel-Kontorova model. Europhysics Letters, 93(1):16001, 2011.
| []
|
[
"A revised density split statistic model for general filters",
"A revised density split statistic model for general filters"
]
| [
"Pierre Burger [email protected] \nArgelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany\n",
"Oliver Friedrich \nKavli Institute for Cosmology\nUniversity of Cambridge\nCB3 0HACambridgeUK\n\nChurchill College\nUniversity of Cambridge\nCB3 0DSCambridgeUK\n",
"Joachim Harnois-Déraps \nSchool of Mathematics, Statistics and Physics\nNewcastle University\nNE1 7RUNewcastle upon TyneUK\n\nAstrophysics Research Institute\nLiverpool John Moores University\n146 Brownlow HillL3 5RFLiverpoolUK\n",
"Peter Schneider \nArgelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany\n"
]
| [
"Argelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany",
"Kavli Institute for Cosmology\nUniversity of Cambridge\nCB3 0HACambridgeUK",
"Churchill College\nUniversity of Cambridge\nCB3 0DSCambridgeUK",
"School of Mathematics, Statistics and Physics\nNewcastle University\nNE1 7RUNewcastle upon TyneUK",
"Astrophysics Research Institute\nLiverpool John Moores University\n146 Brownlow HillL3 5RFLiverpoolUK",
"Argelander-Institut für Astronomie\nAuf dem Hügel 7153121BonnGermany"
]
| []
| Context. Studying the statistical properties of the large-scale structure in the Universe with weak gravitational lensing is a prime goal of several current and forthcoming galaxy surveys. The power that weak lensing has to constrain cosmological parameters can be enhanced by considering statistics beyond second-order shear correlation functions or power spectra. One such higher-order probe that has proven successful in observational data is density split statistics (DSS), in which one analyses the mean shear profiles around points that are classified according to their foreground galaxy density. Aims. In this paper, we generalise the most accurate DSS model to allow for a broad class of angular filter functions used for the classification of the different local density regions. This approach is motivated by earlier findings showing that an optimised filter can provide tighter constraints on model parameters compared to the standard top-hat case. Methods. As in the previous DSS model we built on large deviation theory approaches and approximations thereof to model the matter density probability distribution function, and on perturbative calculations of higher-order moments of the density field. The novel addition relies on the generalisation of these previously employed calculations to allow for general filter functions and is validated on several sets of numerical simulations. Results. It is shown that the revised model fits the simulation measurements well for many filter choices, with a residual systematic offset that is small compared to the statistical accuracy of current weak lensing surveys. However, by use of a simple calibration method and a Markov chain Monte Carlo analysis, we studied the expected sensitivity of the DSS to cosmological parameters and find unbiased results and constraints comparable to the commonly used two-point cosmic shear measures. Hence, our DSS model can be used in competitive analyses of current cosmic shear data, while it may need refinements for forthcoming lensing surveys. | 10.1051/0004-6361/202141628 | [
"https://arxiv.org/pdf/2106.13214v2.pdf"
]
| 235,623,979 | 2106.13214 | 52a766a90053aa0e8dd507a7f97dc433d4cab234 |
A revised density split statistic model for general filters
May 23, 2022
Pierre Burger [email protected]
Argelander-Institut für Astronomie
Auf dem Hügel 7153121BonnGermany
Oliver Friedrich
Kavli Institute for Cosmology
University of Cambridge
CB3 0HACambridgeUK
Churchill College
University of Cambridge
CB3 0DSCambridgeUK
Joachim Harnois-Déraps
School of Mathematics, Statistics and Physics
Newcastle University
NE1 7RUNewcastle upon TyneUK
Astrophysics Research Institute
Liverpool John Moores University
146 Brownlow HillL3 5RFLiverpoolUK
Peter Schneider
Argelander-Institut für Astronomie
Auf dem Hügel 7153121BonnGermany
A revised density split statistic model for general filters
May 23, 2022
Received 24 June 2021 / Accepted 02 March 2022
Astronomy & Astrophysics manuscript no. new_DSS_model
Key words: gravitational lensing: weak - methods: statistical - surveys - Galaxy: abundances - (cosmology:) large-scale structure of Universe
Context. Studying the statistical properties of the large-scale structure in the Universe with weak gravitational lensing is a prime goal of several current and forthcoming galaxy surveys. The power that weak lensing has to constrain cosmological parameters can be enhanced by considering statistics beyond second-order shear correlation functions or power spectra. One such higher-order probe that has proven successful in observational data is density split statistics (DSS), in which one analyses the mean shear profiles around points that are classified according to their foreground galaxy density. Aims. In this paper, we generalise the most accurate DSS model to allow for a broad class of angular filter functions used for the classification of the different local density regions. This approach is motivated by earlier findings showing that an optimised filter can provide tighter constraints on model parameters compared to the standard top-hat case. Methods. As in the previous DSS model we built on large deviation theory approaches and approximations thereof to model the matter density probability distribution function, and on perturbative calculations of higher-order moments of the density field. The novel addition relies on the generalisation of these previously employed calculations to allow for general filter functions and is validated on several sets of numerical simulations. Results. It is shown that the revised model fits the simulation measurements well for many filter choices, with a residual systematic offset that is small compared to the statistical accuracy of current weak lensing surveys. However, by use of a simple calibration method and a Markov chain Monte Carlo analysis, we studied the expected sensitivity of the DSS to cosmological parameters and find unbiased results and constraints comparable to the commonly used two-point cosmic shear measures. Hence, our DSS model can be used in competitive analyses of current cosmic shear data, while it may need refinements for forthcoming lensing surveys.
Introduction
Studying the matter distribution of the present large-scale structure reveals a wealth of information about the evolution of the Universe. In particular, its distorting effect on the propagation of light from distant galaxies, known as cosmic shear, can be captured by analysing weak lensing surveys. By comparing the results of cosmological models with the observed signal, one can constrain cosmological parameters (see e.g. Asgari et al. 2021;Abbott et al. 2022;Hamana et al. 2020).
The preferred methods used to infer statistical properties of the matter and galaxy distribution concentrate on second-order statistics, such as the two-point correlation functions or their Fourier counterparts, the power spectra. Although these statistics describe with impressive accuracy, for instance, the primordial perturbations visible in the cosmic microwave background (CMB; e.g. Planck Collaboration et al. 2020), they probe only the Gaussian information present in the density fluctuations. However, these initially Gaussian fluctuations develop significant non-Gaussian features through non-linear gravitational instability, which can only be investigated with higher-order statistics. Although they are typically more time-consuming to model and measure, these higher-order statistics scale differently with cosmological parameters and are not affected in the same way by residual systematics. Hence, by jointly investigating second- and higher-order statistics, the constraining power on cosmological parameters increases (see e.g. Bergé et al. 2010; Pyne & Joachimi 2021; Pires et al. 2012; Fu et al. 2014; Kilbinger & Schneider 2005).
A large number of analytical models for the two-point statistics exists in the literature (Takahashi et al. 2012;Heitmann et al. 2014;Euclid Collaboration et al. 2021;Mead et al. 2020;Nishimichi et al. 2019); however, the analysis of higher-order statistics is usually based on simulations. Analytical models for higher-order lensing statistics are rare, although they are important not only for scientists to understand physical processes, but also to cross-check simulations, which are usually only tested against Gaussian statistics. For example, Reimberg & Bernardeau (2018) and Barthelemy et al. (2021) used large deviation theory (LDT) to compute the reduced-shear correction to the aperture mass probability distribution function (PDF); Munshi et al. (2020) and Halder et al. (2021) analytically modelled the integrated shear three-point function; the lensing peak count function was modelled in Fan et al. (2010); Lin & Kilbinger (2015) and Shan et al. (2018), while the lensing PDF is modelled in Boyle et al. (2021).
The examples mentioned above all pertain to the analysis of cosmic shear data. However, it has been established in recent analyses that the addition of foreground clustering data, and their cross-correlation with the background source galaxies, yields significantly better constraints (Abbott et al. 2018; Heymans et al. 2021). While the central analyses focused again on two-point statistics, Friedrich et al. (2018, hereafter F18) developed a competitive model based on density split statistics (hereafter DSS). The idea is to measure the mean tangential shear around small sub-areas of the survey, and to stack the signal according to the foreground galaxy density in these sub-areas. We expect the tangential shear to be larger around points with a high density of foreground galaxies, given that they correspond to a large matter overdensity on average. The model derived in F18 is based on non-perturbative calculations of the matter density PDF, and predicts the shear profiles and the probability density of galaxy counts in the sub-areas for a given cosmological model, a redshift distribution for the source and lens galaxy populations, and a mean galaxy density. In Gruen et al. (2018, hereafter G18), the F18 model is used to constrain cosmological parameters from DSS measurements on the Dark Energy Survey (DES) First Year and Sloan Digital Sky Survey (SDSS) data, which yields constraints on the matter density Ω_m = 0.26^{+0.04}_{−0.03} that agree with, and are competitive with, the DES analysis of galaxy and shear two-point functions (see Abbott et al. 2018).
One of the motivations for this work comes from Burger et al. (2020, hereafter B20), who used a suite of numerical simulations to show that using matched filter functions for searching peaks and troughs in the galaxy and matter density contrast has clear advantages compared to the top-hat filter used in the F18 model, both in terms of the overall signal-to-noise ratio and in recovering accurately the galaxy bias term. Another motivation for using compensated filters is that these filters are more confined in Fourier space and are therefore better at smoothing out large ℓ-modes, where baryonic effects play an important role (Asgari et al. 2020). Therefore, it is of interest to generalise the DSS to a broader set of filter functions. Smoothing cosmic density fields with filters other than top-hat ones significantly complicates the LDT-like calculations used by F18 and G18 (cf. Barthelemy et al. 2021), because for top-hat filters the Lagrangian-to-Eulerian mapping inherent in LDT is particularly simple. However, we find here that density split statistics with non-top-hat filters that are sufficiently concentrated around their centres can still be accurately modelled with computationally feasible extensions of the approximations made by F18. This paper describes our modifications to the F18 model that will allow us to optimise filtering strategies when applying density split statistics to Stage III weak lensing surveys such as KiDS. Throughout this paper, if not otherwise stated, we assume a spatially flat universe.
This work is structured as follows. In Sect. 2 we review the basics of the aperture statistics; we then detail our changes to the F18 model in Sect. 3. In Sect. 4 we describe the simulations, and the construction of our mock data used to validate the revised model. In Sect. 5 we compare the model predictions with simulations, and establish the model's limitations. We summarise our work in Sect. 6.
Aperture statistics
The lensing convergence κ and shear γ are related via the lensing potential ψ (Schneider et al. 1992) as

κ(θ) = (1/2) (∂_1² + ∂_2²) ψ(θ) ,   γ(θ) = (1/2) (∂_1² − ∂_2² + 2i ∂_1 ∂_2) ψ(θ) ,  (1)

with ∂_i = ∂/∂θ_i and θ the angular position on the sky; we employ the flat-sky approximation. Given a reference point in a Cartesian coordinate system on the sky and a second point whose separation to the first is oriented at an angle φ with respect to that coordinate system, we can express the shear at the second point in terms of the tangential and cross-shear with respect to the first point as

γ_t = −Re(γ e^{−2iφ}) ,   γ_× = −Im(γ e^{−2iφ}) ,  (2)
where the factor 2 in the exponent is due to the polar nature of the shear. Given a convergence field κ(θ), the aperture mass at position θ is defined as
M_ap(θ) ≡ ∫ d²θ' κ(θ + θ') U(|θ'|) ,  (3)

where U(ϑ) is a compensated axisymmetric filter function, such that ∫ dϑ ϑ U(ϑ) = 0. As shown in Schneider (1996), if U is compensated, M_ap can also be expressed in terms of the tangential shear γ_t and a related filter function Q as

M_ap(θ) = ∫ d²θ' γ_t(θ + θ') Q(|θ'|) ,  (4)

where

Q(ϑ) = (2/ϑ²) ∫_0^ϑ dϑ' ϑ' U(ϑ') − U(ϑ) ,  (5)

which can be inverted, yielding

U(ϑ) = 2 ∫_ϑ^∞ dϑ' Q(ϑ')/ϑ' − Q(ϑ) .  (6)
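As an illustration of Eq. (5), the following sketch evaluates Q(ϑ) numerically for a given radial filter U(ϑ). The Gaussian example filter, the grid size, and the trapezoidal quadrature are our own illustrative assumptions and are not prescribed by the text.

```python
import numpy as np

def Q_from_U(U, theta, n_grid=2000):
    """Evaluate Q(theta) = (2/theta^2) * int_0^theta dt t U(t) - U(theta), cf. Eq. (5).

    U     : callable, the aperture filter U(theta)
    theta : array of (positive) angular radii, same units as U expects
    """
    theta = np.atleast_1d(theta).astype(float)
    Q = np.empty_like(theta)
    for i, th in enumerate(theta):
        t = np.linspace(0.0, th, n_grid)
        Q[i] = 2.0 / th**2 * np.trapz(t * U(t), t) - U(th)
    return Q

# Example: a Gaussian filter of scale 20 arcmin (illustrative choice only)
U_gauss = lambda t: np.exp(-t**2 / (2 * 20.0**2)) / (2 * np.pi * 20.0**2)
print(Q_from_U(U_gauss, [5.0, 20.0, 60.0]))
```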
In analogy to M_ap, we define, as done in B20, the aperture number counts (Schneider 1998), or aperture number, as

N_ap(θ) ≡ ∫ d²θ' n(θ + θ') U(|θ'|) ,  (7)

where U(ϑ) is the same filter function as in Eq. (3) and n(ϑ) is the (foreground) galaxy number density on the sky. This definition of the aperture number is equivalent to the 'Counts-in-Cell' (CiC) from Gruen et al. (2016) if a top-hat filter of the form

U_th(ϑ) = (1/A) H(ϑ_th − ϑ) ,  (8)
is used, where H is the Heaviside step function and A is the area of the filter. However, B20 demonstrated that top-hat filters are not optimal, and that a better performance is achieved by an adapted filter in terms of signal-to-noise-ratio (S/N) and in recovering accurately the galaxy bias term. In this paper we compute aperture mass statistics with Eq. (4) using simulated weak lensing catalogues of background source galaxies, notably regarding positions and ellipticities, and aperture number statistics with Eq. (7) from the position of simulated foreground lens galaxies (see Sect. 4).
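On gridded (flat-sky) data, Eq. (7) amounts to a convolution of the foreground galaxy count map with the filter kernel. The sketch below illustrates this with an FFT-based convolution; the grid resolution, the kernel truncation radius, the use of scipy, and all names are illustrative assumptions rather than the pipeline used in this paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def aperture_number_map(gal_counts, pix_arcmin, U, r_max_arcmin):
    """Compute an N_ap map from a 2D galaxy count grid, cf. Eq. (7).

    gal_counts   : 2D array, number of lens galaxies per pixel
    pix_arcmin   : pixel side length in arcmin
    U            : callable radial filter U(theta) in 1/arcmin^2, accepting arrays
    r_max_arcmin : radius beyond which U is treated as zero
    """
    n_ker = int(np.ceil(r_max_arcmin / pix_arcmin))
    x = np.arange(-n_ker, n_ker + 1) * pix_arcmin
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    # discretised filter weights; the pixel area converts the sum into the integral
    kernel = np.where(r <= r_max_arcmin, U(r), 0.0) * pix_arcmin**2
    return fftconvolve(gal_counts, kernel, mode='same')
```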
Revised model
In this section we describe our modifications of the original F18 model. Although the derivations shown here are self-contained, we recommend that the interested reader consult the original F18 paper. In particular, it is shown there that the full non-perturbative calculation of the PDF within large deviation theory (LDT) can be well approximated with a log-normal PDF that matches the variance and skewness of the LDT result. This allowed F18 and G18 to replace the expensive LDT computation with a faster one, hence making full Markov chain Monte Carlo (MCMC) analyses computationally feasible. The reason why this approximation works well is that, for top-hat filters, the scaling of variance and higher-order cumulants in LDT is similar to that found in log-normal distributions. This cannot be expected a priori for other filter functions. However, through comparison with N-body simulations we find here (cf. Sect. 5) that either a simple log-normal or a combination of two log-normal distributions still accurately describes the density PDFs required to analyse density split statistics with more general classes of filters. The following section describes these calculations. In order to reduce the mathematical content of this section, some derivations are detailed in Appendix A. We start by defining the line-of-sight projection of the 3D matter density contrast δ_{m,3D}, weighted by a foreground (lens) galaxy redshift probability distribution n_f(z), as
δ_{m,2D}(θ) = ∫ dχ q_f(χ) δ_{m,3D}(χθ, χ) ,  (9)

where χ is the co-moving distance and the projection kernel q_f(χ) is

q_f(χ) = n_f(z[χ]) dz[χ]/dχ .  (10)

This 2D matter density contrast can then be used together with a linear bias term to represent a tracer density contrast (see Sect. 3.3 or Sect. 4). Following F18, the next step consists of smoothing the results with a filter U of size Θ:

δ_{m,U}(θ) ≡ ∫_{|θ'|<Θ} d²θ' δ_{m,2D}(θ + θ') U(|θ'|) .  (11)

This simplifies in the case of a top-hat filter of size Θ to

δ^Θ_{m,th}(θ) = (1/A) ∫_{|θ'|<Θ} d²θ' δ_{m,2D}(θ + θ') .  (12)

Similar to the 2D density contrast, the convergence, which is needed to describe the DSS signal, is given by

κ(θ) = ∫ dχ W_s(χ) δ_{m,3D}(χθ, χ) ,  (13)

where W_s(χ) is the lensing efficiency defined as

W_s(χ) = (3 Ω_m H_0²)/(2c²) ∫_χ^∞ dχ' [χ (χ' − χ)/(χ' a(χ))] q_s(χ') ,  (14)

with q_s(χ) = n_s(z[χ]) dz[χ]/dχ being the line-of-sight probability density of the sources, Ω_m the matter density parameter, H_0 the Hubble parameter, and c the speed of light. The mean convergence inside an angular separation ϑ, κ_{<ϑ}, then follows in analogy to Eq. (12) by substituting δ_{m,2D}(θ) with κ(θ).
The aim of our model is to predict the tangential shear profiles γ_t given a quantile Q of the foreground aperture number N_ap, ⟨γ_t|Q⟩, where for instance the highest quantile is the set of lines of sight of the sky that have the highest values of N_ap. Therefore, to determine ⟨γ_t|Q⟩ the model calculates ⟨γ_t|N_ap⟩ and sums up the contributions of all N_ap that belong to the corresponding quantile Q. The expectation value of γ_t given N_ap is computed from the convergence profile as

⟨γ_t(ϑ)|N_ap⟩ = ⟨κ_{<ϑ}|N_ap⟩ − ⟨κ_ϑ|N_ap⟩ = −(ϑ/2) d/dϑ ⟨κ_{<ϑ}|N_ap⟩ ,  (15)

where κ_ϑ is the azimuthally averaged convergence at angular separation ϑ from the centre of the filter, and κ_{<ϑ} is the average convergence inside that radius. The latter quantity, conditioned on a given N_ap, can be specified by

⟨κ_{<ϑ}|N_ap⟩ = ∫ dδ_{m,U} ⟨κ_{<ϑ}|δ_{m,U}, N_ap⟩ p(δ_{m,U}|N_ap)  (16)
            ≈ ∫ dδ_{m,U} ⟨κ_{<ϑ}|δ_{m,U}⟩ p(δ_{m,U}|N_ap) ,  (17)

where in the second step we assumed that the expected convergence within ϑ only depends on the projected matter density contrast δ_{m,U} and not on the particular realisation of shot-noise in N_ap found within that fixed matter density contrast. By use of Bayes' theorem, we can express the conditional PDF as

p(δ_{m,U}|N_ap) = p(N_ap|δ_{m,U}) p(δ_{m,U}) / p(N_ap) ,  (18)

where p(N_ap|δ_{m,U}) is the probability of finding N_ap given the smoothed density contrast δ_{m,U}. The normalisation in the denominator of Eq. (18) follows by integrating over the numerator,

p(N_ap) = ∫ dδ_{m,U} p(δ_{m,U}) p(N_ap|δ_{m,U}) .  (19)
As seen in the derivation above, we are left with three ingredients in order to calculate the tangential shear profiles given a quantile Q of the aperture number γ t (ϑ)|N ap :
(I) the PDF of the matter density contrast smoothed with the filter function U (used in Eqs. 18,19) p(δ m,U ) ;
(II) the expectation value of the convergence inside a radius ϑ given the smoothed density contrast (used in Eq. 17) κ <ϑ |δ m,U ;
(III) the distribution of N ap for the given filter function U given the smoothed density contrast (used in Eqs. 18,19) p(N ap |δ m,U ) .
Since all three ingredients are sensitive to the filter U, we need to adjust all of them coherently with respect to the top-hat case.
3.1. (I): p(δ_{m,U})
As shown by F18 the full LDT computation of the matter density PDF can accurately approximated by a shifted log-normal distribution with vanishing mean (Hilbert et al. 2011), which is fully characterised by two parameters, σ and δ 0 , as
p(δ_{m,U}) = 1/[√(2π) σ (δ_{m,U} + δ_0)] exp{ −[ln(δ_{m,U}/δ_0 + 1) + σ²/2]² / (2σ²) } .  (23)
The two free parameters can be determined by specifying the variance δ 2 m,U and skewness δ 3 m,U of the PDF as (Hilbert et al. 2011)
⟨δ²_{m,U}⟩ = δ_0² [exp(σ²) − 1] ,  (24)
⟨δ³_{m,U}⟩ = (3/δ_0) ⟨δ²_{m,U}⟩² + (1/δ_0³) ⟨δ²_{m,U}⟩³ ;  (25)
we derive the expression of δ 2 m,U and δ 3 m,U in Appendix A (see Eq. A.28).
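Once the variance and skewness are available, Eqs. (24)-(25) can be inverted for the two log-normal parameters. The sketch below does this by treating Eq. (25) as a cubic in 1/δ_0; the root-finding strategy, the function name, and the example numbers are our own illustrative choices.

```python
import numpy as np

def lognormal_params(var, skew):
    """Solve Eqs. (24)-(25) for (delta_0, sigma) of the shifted log-normal PDF.

    var  : <delta^2_{m,U}>, the variance of the smoothed density contrast
    skew : <delta^3_{m,U}>, its third moment
    """
    # Eq. (25) is a cubic in x = 1/delta_0:  var^3 x^3 + 3 var^2 x - skew = 0
    roots = np.roots([var**3, 0.0, 3.0 * var**2, -skew])
    real = roots[np.isreal(roots)].real
    x = real[real > 0][0]                              # unique positive real root
    delta0 = 1.0 / x
    sigma = np.sqrt(np.log(1.0 + var / delta0**2))     # invert Eq. (24)
    return delta0, sigma

print(lognormal_params(0.01, 3e-4))   # illustrative numbers only
```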
As we show later, this approximation works well for nonnegative filter functions like top-hat or Gaussian filters. However, the log-normal PDF approximation becomes less accurate for compensated filters that include negative weights. In these cases we instead divide U into U > (ϑ) = U(ϑ)H(ϑ ts − ϑ) and U < (ϑ) = −U(ϑ)H(ϑ − ϑ ts ), where ϑ ts is the transition scale from positive to negative filter weights. As a consequence, we obtain two correlated log-normal random variables, δ m,U > and δ m,U < , whose joint distribution can be represented by a bi-variate log-normal distribution as
p(δ_{m,U_>}, δ_{m,U_<}) = 1/[2π σ_> (δ_{m,U_>} + δ_{0,>}) σ_< (δ_{m,U_<} + δ_{0,<}) √(1 − ρ²)]
  × exp{ −[Δ_>² + Δ_<² − 2ρ Δ_> Δ_<] / [2(1 − ρ²)] } ,  (26)

where we defined

Δ_> = [ln(δ_{m,U_>}/δ_{0,>} + 1) + σ_>²/2] / σ_> ,  (27)

and similarly for Δ_<. The correlation coefficient ρ is determined by

ρ = ln( ⟨δ_{m,U_>} δ_{m,U_<}⟩ / (δ_{0,>} δ_{0,<}) + 1 ) / (σ_> σ_<) ,  (28)

and to calculate the PDF of the difference of the two random variables, δ_{m,U} = δ_{m,U_>} − δ_{m,U_<}, we can use the convolution theorem (Arfken & Weber 2008) to get

p(δ_{m,U}) = ∫_{−∞}^{∞} dδ_{m,U_>} p(δ_{m,U_>}, δ_{m,U_>} − δ_{m,U}) .  (29)
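The integral in Eq. (29) is straightforward to evaluate numerically. The following sketch implements the bi-variate PDF of Eq. (26) and the convolution of Eq. (29) on a finite grid; the grid limits, sample size, and all names are illustrative assumptions, not the implementation used for the results in this paper.

```python
import numpy as np

def bivariate_lognormal_pdf(d_pos, d_neg, d0_pos, d0_neg, s_pos, s_neg, rho):
    """Joint shifted log-normal PDF of Eq. (26); returns 0 outside the support."""
    d_pos, d_neg = np.broadcast_arrays(np.asarray(d_pos, float),
                                       np.asarray(d_neg, float))
    pdf = np.zeros(d_pos.shape)
    ok = (d_pos > -d0_pos) & (d_neg > -d0_neg)
    D1 = (np.log(d_pos[ok] / d0_pos + 1.0) + 0.5 * s_pos**2) / s_pos   # Eq. (27)
    D2 = (np.log(d_neg[ok] / d0_neg + 1.0) + 0.5 * s_neg**2) / s_neg
    norm = (2.0 * np.pi * s_pos * s_neg * (d_pos[ok] + d0_pos)
            * (d_neg[ok] + d0_neg) * np.sqrt(1.0 - rho**2))
    pdf[ok] = np.exp(-(D1**2 + D2**2 - 2.0 * rho * D1 * D2)
                     / (2.0 * (1.0 - rho**2))) / norm
    return pdf

def pdf_delta_U(delta, d0_pos, d0_neg, s_pos, s_neg, rho, n_grid=4000, upper=20.0):
    """p(delta_{m,U}) from Eq. (29) by direct numerical integration."""
    d_pos = np.linspace(-d0_pos + 1e-6, upper, n_grid)
    integrand = bivariate_lognormal_pdf(d_pos, d_pos - delta,
                                        d0_pos, d0_neg, s_pos, s_neg, rho)
    return np.trapz(integrand, d_pos)
```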
3.2. (II): ⟨κ_{<ϑ}|δ_{m,U}⟩

In order to calculate the expectation value of the mean convergence inside an angular radius ϑ, κ_{<ϑ}, given the matter density contrast δ_{m,U}, we assume that both follow a joint log-normal distribution (see e.g. the discussion in Appendix B of G18). In this case, the expectation value can be written as
⟨κ_{<ϑ}|δ_{m,U}⟩ / κ_0 = exp{ C [2 ln(δ_{m,U}/δ_0 + 1) + V − C] / (2V) } − 1 ,  (30)

where δ_0 is determined with Eq. (25) and the three variables C, V, and κ_0 can be calculated from the moments ⟨δ²_{m,U}⟩, ⟨κ_{<ϑ} δ_{m,U}⟩, and ⟨κ_{<ϑ} δ²_{m,U}⟩, which follow from the derivation in Appendix A (see Eq. A.29):

V = ln(1 + ⟨δ²_{m,U}⟩/δ_0²) ,  (31)
C = ln(1 + ⟨κ_{<ϑ} δ_{m,U}⟩/(δ_0 κ_0)) ,  (32)
κ_0 = ⟨κ_{<ϑ} δ_{m,U}⟩² e^V / (⟨κ_{<ϑ} δ²_{m,U}⟩ − 2 ⟨κ_{<ϑ} δ_{m,U}⟩ ⟨δ²_{m,U}⟩/δ_0) .  (33)
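The sketch below transcribes Eqs. (30)-(33) directly into code, assuming that the required joint moments (in the paper obtained from the Appendix A integrals) are supplied as plain arguments; the function and argument names are our own.

```python
import numpy as np

def kappa_given_delta(delta, delta0, var_d, m_kd, m_kdd):
    """Conditional mean <kappa_<theta | delta_{m,U}>, Eqs. (30)-(33).

    delta  : value(s) of the smoothed density contrast
    delta0 : shift parameter of the log-normal PDF, from Eq. (25)
    var_d  : <delta^2_{m,U}>
    m_kd   : <kappa_<theta delta_{m,U}>
    m_kdd  : <kappa_<theta delta^2_{m,U}>
    """
    V = np.log(1.0 + var_d / delta0**2)                                    # Eq. (31)
    kappa0 = m_kd**2 * np.exp(V) / (m_kdd - 2.0 * m_kd * var_d / delta0)   # Eq. (33)
    C = np.log(1.0 + m_kd / (delta0 * kappa0))                             # Eq. (32)
    arg = C * (2.0 * np.log(delta / delta0 + 1.0) + V - C) / (2.0 * V)
    return kappa0 * (np.exp(arg) - 1.0)                                    # Eq. (30)
```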
We note that the assumption that δ m,U is log-normal distributed is not well justified for filters with negative weights as we mentioned in the previous section. A possible improvement could be done for instance by assuming again that δ m,U is made up of two log-normal random variables, and we would need to calculate conditional moments like κ <ϑ |δ m,U > − δ m,U < . This would significantly increase the amount of joint moments needed in our calculation and would render fast modelling unfeasible. However, an improved modelling is also unnecessary at present, given the statistical uncertainties we expect for Stage III weak lensing surveys such as KiDS-1000. We demonstrate this empirically in Sect. 5 by comparison to N-body simulated data, but we also want to give a brief theoretical motivation. The average value of κ <ϑ , given that δ m,U lies within the range [δ min , δ max ], is given by
⟨κ_{<ϑ}|δ_{m,U} ∈ [δ_min, δ_max]⟩ = ∫_{δ_min}^{δ_max} dδ_{m,U} p(δ_{m,U}) ⟨κ_{<ϑ}|δ_{m,U}⟩ / ∫_{δ_min}^{δ_max} dδ_{m,U} p(δ_{m,U}) .  (34)
If κ_{<ϑ} and δ_{m,U} were joint Gaussian random variables, then p(δ_{m,U}) would be a Gaussian PDF and we would have ⟨κ_{<ϑ}|δ_{m,U}⟩ = δ_{m,U} ⟨δ_{m,U} κ_{<ϑ}⟩ / ⟨δ²_{m,U}⟩. We now argue that the leading-order correction to this Gaussian approximation consists of replacing p(δ_{m,U}) by our full non-Gaussian model, without changing ⟨κ_{<ϑ}|δ_{m,U}⟩, since this would be exactly correct in the limit of strong correlation between the two variables. Our log-normal approximation to ⟨κ_{<ϑ}|δ_{m,U}⟩ is then already a next-to-leading-order correction, and a bi-variate log-normal approximation for ⟨κ_{<ϑ}|δ_{m,U}⟩ would be of even higher order. While this reasoning is admittedly only heuristic, it is proven correct by the accuracy of our model predictions for the lensing signals in Sect. 5.

3.3. (III): p(N_ap|δ_{m,U})

The third basic ingredient is the PDF of N_ap given the projected matter density contrast smoothed with the filter U. Assuming a Poisson distribution for N_ap, which is the most straightforward ansatz, is unfortunately not possible because negative values are expected with a compensated filter (i.e. in some of the U_< contributions). We use instead a completely new approach compared to F18, and derive an expression for p(N_ap|δ_{m,U}) by use of the characteristic function (Papoulis & Pillai 1991, hereafter CF), which is an alternative representation of a probability distribution, similar to the moment generating function, but based on the Fourier transform of the PDF. Of interest to us, the n-th derivative of the CF can be used to calculate the n-th moment of the PDF. The CF corresponding to p(N_ap|δ_{m,U}) is defined as
Ψ(t) = ⟨e^{itN_ap}⟩_{δ_{m,U}} = ∫_ℝ dN_ap p(N_ap|δ_{m,U}) e^{itN_ap} ,  (35)

where in our particular case, we derive in Appendix A.4 a closed expression as

Ψ(t) = exp{ 2π n_0 ∫_0^∞ dϑ ϑ [1 + b ⟨w_ϑ|δ_{m,U}⟩] (e^{itU(ϑ)} − 1) } ,  (36)

with n_0 being the mean number density of foreground galaxies on the sky. The assumption of linear galaxy bias enters here by the term b ⟨w_ϑ|δ_{m,U}⟩, with

w_ϑ = (1/2π) ∫_0^{2π} dφ δ_{m,2D}(ϑ, φ) .  (37)
Hence, n 0 (1 + b w ϑ |δ m,U ) is the effective number density at ϑ given δ m,U . The conditional expectation value w ϑ |δ m,U is given in analogy to Eq. (30), but replacing κ <ϑ δ k m,U → w <ϑ δ k m,U in Eqs. (31-33) for k = 1, 2 and using that
⟨w_ϑ|δ_{m,U}⟩ = ⟨w_{<ϑ}|δ_{m,U}⟩ + (ϑ/2) d/dϑ ⟨w_{<ϑ}|δ_{m,U}⟩ ,  (38)

where the joint moments ⟨w_{<ϑ} δ^k_{m,U}⟩ are also derived in Appendix A (see Eq. A.30). Next, we re-express Eq. (36) as the product of two terms,

Ψ(t) = exp[p(t)] exp[iq(t)] ,  (39)

where

p(t) = 2π n_0 ∫_0^{R_max} dϑ ϑ [1 + b ⟨w_ϑ|δ_{m,U}⟩] (cos[tU(ϑ)] − 1) ,  (40)
q(t) = 2π n_0 ∫_0^{R_max} dϑ ϑ [1 + b ⟨w_ϑ|δ_{m,U}⟩] sin[tU(ϑ)] ,  (41)
and R_max is the angular radius beyond which U vanishes. We note that G18 and F18 found super-Poisson shot-noise in their work. They interpret these deviations from Poisson noise as having more than one galaxy per Poisson halo. This would suggest that we could incorporate non-Poissonian behaviour by replacing n_0 with an effective density of Poisson halos and making this a free parameter of our model. However, more recent investigations (e.g. Friedrich et al. in prep.) cast doubt on the simplified interpretation of F18 and G18. A proper investigation of the problem of non-Poissonian shot-noise is beyond the scope of this work, and we will address it in future investigations. Finally, the probability density function p(N_ap|δ_{m,U}) follows from the inverse Fourier transform of the CF,

p(N_ap|δ_{m,U}) = (1/2π) ∫_ℝ dt exp(−itN_ap) Ψ(t) = (1/2π) ∫_ℝ dt cos[q(t) − tN_ap] exp[p(t)] ,  (42)
where the second step follows from the fact that the imaginary part cancels out. In Appendix A.4 we discuss a similar approach, where we assume that p(N ap |δ m,U ) is log-normal distributed. In that case, to specify the PDF, only the first three moments are needed, which follow from derivatives of the CF. As shown in Appendix A.4 both methods yield almost identical results, and since the lognormal approach is significantly faster, we use it hereafter, unless otherwise stated.
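For concreteness, the sketch below evaluates Eqs. (39)-(42) numerically: p(t) and q(t) are obtained by radial quadrature and the inverse transform is computed on a finite, truncated t grid. The truncation values, grid sizes, and the way the conditional profile ⟨w_ϑ|δ_{m,U}⟩ is passed in are illustrative assumptions.

```python
import numpy as np

def p_Nap_given_delta(N_ap, U, w_of_theta, n0, bias, R_max,
                      n_theta=512, t_max=400.0, n_t=4096):
    """Shot-noise PDF p(N_ap | delta_{m,U}) via Eqs. (39)-(42).

    N_ap       : value(s) at which to evaluate the PDF
    U          : callable aperture filter U(theta) [1/arcmin^2]
    w_of_theta : callable giving <w_theta | delta_{m,U}> (conditional density profile)
    n0         : mean lens number density [1/arcmin^2];  bias : linear galaxy bias
    R_max      : radius beyond which U vanishes [arcmin]
    """
    theta = np.linspace(1e-4, R_max, n_theta)
    weight = 2.0 * np.pi * n0 * theta * (1.0 + bias * w_of_theta(theta))
    t = np.linspace(-t_max, t_max, n_t)
    tU = np.outer(t, U(theta))
    p_t = np.trapz(weight * (np.cos(tU) - 1.0), theta, axis=1)   # Eq. (40)
    q_t = np.trapz(weight * np.sin(tU), theta, axis=1)           # Eq. (41)
    # inverse transform, Eq. (42), truncated to |t| < t_max
    pdf = [np.trapz(np.cos(q_t - t * N) * np.exp(p_t), t) / (2.0 * np.pi)
           for N in np.atleast_1d(N_ap)]
    return np.array(pdf)
```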
To summarise, the major changes compared to the F18 model are the following:
1. To determine p(δ_{m,U}), we updated the calculation of the variance ⟨δ²_{m,U}⟩ and of the skewness ⟨δ³_{m,U}⟩ in Appendix A to general filter functions, and for compensated filters we combine in Eqs. (26-29) two log-normal random variables for the positive and negative filter parts to obtain the final expression for any filter shape.
2. To determine p(N_ap|δ_{m,U}), we calculate the characteristic function of galaxy shot-noise around a given matter density profile via Eq. (35), and use the log-normal approximation or the inverse Fourier transform of Eq. (42) to obtain the PDF of shot-noise from its characteristic function.
3. To determine ⟨κ_{<ϑ}|δ_{m,U}⟩, we updated the calculations of ⟨κ_{<ϑ} δ_{m,U}⟩ and ⟨κ_{<ϑ} δ²_{m,U}⟩ to general filter functions (see Appendix A).
Simulation data
Before using our revised model in data analyses, it is mandatory to quantify its precision and range of validity. We use for this validation exercise three simulation suites:
- the full-sky gravitational lensing simulations described in Takahashi et al. (2017, hereafter T17), with which we carry out a detailed investigation of the model in a simple survey configuration;
- the cosmo-SLICS simulations, described in Harnois-Déraps et al. (2019), with which we validate our model on an independent simulation suite;
- the SLICS simulations, described in Harnois-Déraps et al. (2018), with which we construct a KiDS-1000-like covariance matrix.
T17 simulations
The T17 simulations are constructed from a series of nested cubic boxes with side lengths of L, 2L, 3L, ... placed around a fixed vertex representing the observer's position, with L = 450 Mpc/h. Each box is replicated eight times and placed around the observer using periodic boundary conditions. The number of particles per box is fixed to 2048³, which results in higher mass and spatial resolutions at lower redshifts. Within each box three spherical lens shells are constructed, each with a width of 150 Mpc/h, which are then used by the public code GRayTrix to trace the light-ray trajectories from the observer to the last scattering surface. With the N-body code gadget2 (Springel et al. 2001) the gravitational evolution of dark matter particles without baryonic processes is followed from the initial conditions, which in turn are determined by use of second-order Lagrangian perturbation theory. The initial linear power spectrum follows from the Code for Anisotropies in the Microwave Background (CAMB; Lewis et al. 2000) with Ω_m = 1 − Ω_Λ = 0.279, Ω_b = 0.046, h = 0.7, σ_8 = 0.82, and n_s = 0.97. The matter power spectrum agrees with theoretical predictions of the revised Halofit (Takahashi et al. 2012) within 5% (10%) for k < 5 (6) h Mpc⁻¹ at z < 1. In order to account for the finite shell thickness and angular resolution, T17 provide correction formulae, which we repeat in Appendix B. Although various resolution options are available, for our purpose the realisations with a resolution of nside = 4096 are sufficient. We use the publicly available matter density contrast maps to create a realistic lens galaxy catalogue that mimics the second and third redshift bins of the luminous red galaxy (LRG) sample constructed from the KiDS-1000 data (Vakili et al. 2019), as shown by the solid lines in Fig. 1. The reason to mock the LRG sample is that the galaxy bias for this kind of galaxy can be roughly described with a constant linear bias, which is needed for the analytical model. We excluded the lowest-redshift lens bin, first because of its low galaxy number density (n_0 = 0.012 gal/arcmin²), for which the shot-noise level is significant, and second because the density field is more non-linear, so that we expect the log-normal approximation to break down. Since there is a significant overlap between the KiDS-1000 sources and the lenses in the fourth LRG redshift bin, we reject it as well. To create our lens galaxy samples we first project the T17 3D density maps δ_{m,3D} following the n(z) shown as the step functions in Fig. 1, to get two δ_{m,2D} maps. For both maps we then distribute galaxies following a Poisson distribution with parameter λ = n(1 + b δ_{m,2D}), where b is a constant linear galaxy bias and n is chosen such that the galaxy number density is n_0 = 0.028 gal/arcmin² for the second bin (hereafter the low-redshift bin z_l^low) and n_0 = 0.046 gal/arcmin² for the third lens bin (hereafter the high-redshift bin z_l^high). Since our method requires a constant linear galaxy bias, we specify a bias of 1.72 for lens bin two and 1.74 for lens bin three, similar to those reported in Vakili et al. (2019). F18 found this linear bias assumption to be accurate enough for Year 1 data of the Dark Energy Survey, which is similar in constraining power to our target KiDS data (but we note that an investigation of higher-order biasing is underway in Friedrich et al., in prep.).
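The Poisson sampling step described above can be illustrated with the short sketch below: each pixel of a projected density-contrast map is sampled with mean λ = n(1 + b δ). The clipping of negative means, the random seed, and all names are our own illustrative choices.

```python
import numpy as np

def sample_lens_counts(delta_2d, n_per_pixel, bias, seed=42):
    """Draw lens galaxy counts per pixel with lambda = n * (1 + b * delta).

    delta_2d    : projected matter density contrast per pixel
    n_per_pixel : mean number of galaxies per pixel (sets the target n_0)
    bias        : constant linear galaxy bias
    """
    rng = np.random.default_rng(seed)
    lam = n_per_pixel * (1.0 + bias * delta_2d)
    lam = np.clip(lam, 0.0, None)      # a Poisson mean cannot be negative
    return rng.poisson(lam)
```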
In our validation test, we use a shear grid at a single source plane located at z = 0.8664, indicated by the black dashed line in Fig. 1. F18 showed that the model works for realistic redshift distributions, and this choice simplifies the generation of our source catalogues. Furthermore, in order to determine a realistic covariance matrix, we transform the shear field into an observed ellipticity field by adding shape noise to the shear grid as
ε^obs = (ε^s + g) / (1 + ε^s g*) ,  (43)

where ε^obs, ε^s, and γ are complex numbers, and the asterisk (*) indicates complex conjugation. The source ellipticities ε^s per pixel are generated by drawing random numbers from a Gaussian distribution with width

σ_pix = σ_ε / √(n_gal A_pix) ≈ 0.29 ,  (44)

where A_pix is the pixel area of the shear grid, and the effective number density n_gal and σ_ε are chosen such that they are consistent with the KiDS data. While this transformation is valid in terms of the reduced shear g = γ/(1 − κ), we use throughout this paper the approximation γ ≈ g, as the typical values for the convergence are small, |κ| ≪ 1. We neglect the intrinsic alignment of galaxies in this work.
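A minimal sketch of Eqs. (43)-(44) is given below: intrinsic ellipticities are drawn per pixel and combined with the shear, using γ ≈ g as in the text. The function name, seed, and interface are our own illustrative assumptions.

```python
import numpy as np

def add_shape_noise(gamma, sigma_eps, n_gal, A_pix, seed=1):
    """Combine shear with random intrinsic shapes, cf. Eqs. (43)-(44).

    gamma     : complex array of (reduced) shear values per pixel
    sigma_eps : intrinsic ellipticity dispersion per component
    n_gal     : effective galaxy number density [1/arcmin^2]
    A_pix     : pixel area [arcmin^2]
    """
    rng = np.random.default_rng(seed)
    sigma_pix = sigma_eps / np.sqrt(n_gal * A_pix)            # Eq. (44)
    eps_s = (rng.normal(0.0, sigma_pix, gamma.shape)
             + 1j * rng.normal(0.0, sigma_pix, gamma.shape))
    return (eps_s + gamma) / (1.0 + eps_s * np.conj(gamma))   # Eq. (43)
```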
Extracting the model components from the T17 simulations
In order to validate the different components of our model, we need to extract p(δ_{m,U}), p(N_ap), and ⟨γ_t|Q⟩ from the simulation. The first two follow directly by smoothing the maps of the projected density contrast and of the lens galaxies with the corresponding filters. This smoothing can be performed in two different ways. The first is to use the healpy function query_disc, which finds all pixel centres that are located within a given radius, whereas the second approach uses the healpy function smoothing, with a given beam window function created by the function beam2bl. The two approaches result in PDFs that differ slightly, since query_disc does not reproduce an exact top-hat, while the smoothing approach covers only a finite ℓ-range. Nevertheless, we found that both approaches are consistent for nside = 4096 well within the uncertainty we estimate from 48 sub-patches (see discussion below), and hence we use the second approach, which is significantly faster.
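The harmonic-space smoothing approach could look roughly as sketched below; the exact healpy keyword names may differ between versions, and the grid for tabulating the radial profile is an arbitrary choice, so this is an assumption-laden illustration rather than the pipeline actually used here.

```python
import numpy as np
import healpy as hp

def smooth_with_filter(density_map, U, theta_max_rad, lmax=3000):
    """Smooth a HEALPix map with an axisymmetric filter U(theta) in harmonic space."""
    theta = np.linspace(0.0, theta_max_rad, 2048)   # radial support of the filter [rad]
    beam = U(theta)                                  # tabulated radial filter profile
    bl = hp.beam2bl(beam, theta, lmax)               # corresponding window function b_ell
    return hp.smoothing(density_map, beam_window=bl, lmax=lmax)
```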
The tangential shear information ⟨γ_t|Q⟩ is measured for each quantile Q by the software treecorr (Jarvis et al. 2004) in 15 log-spaced bins with angular separation Θ/20 < ϑ < Θ, where Θ is the size of the filter being used. For the top-hat filter we measured the shear profiles from 6' to 120', corresponding to a filter with a size of 120'. We note here that for all measured shear profiles the shear around random points is always subtracted, which ensures that the shear averaged over all quantiles for one realisation vanishes by definition.
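A measurement of this kind can be sketched with treecorr's count-shear correlation, as below; the catalogue construction, units, and bin choices are illustrative, and the random-point subtraction mentioned in the text is omitted from this minimal example.

```python
import treecorr

def shear_profile(ra_q, dec_q, ra_s, dec_s, g1, g2, theta_min, theta_max):
    """Mean tangential shear around quantile centres in 15 log-spaced angular bins."""
    cat_q = treecorr.Catalog(ra=ra_q, dec=dec_q, ra_units='deg', dec_units='deg')
    cat_s = treecorr.Catalog(ra=ra_s, dec=dec_s, g1=g1, g2=g2,
                             ra_units='deg', dec_units='deg')
    ng = treecorr.NGCorrelation(min_sep=theta_min, max_sep=theta_max,
                                nbins=15, sep_units='arcmin')
    ng.process(cat_q, cat_s)
    return ng.meanr, ng.xi   # mean separation per bin and tangential shear estimate
```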
In order to have an uncertainty for all three model quantities, we divide the full-sky map into 48 sub-patches, such that each patch has a size of approximately 859.4 deg 2 . For p(δ m,U ) and p(N ap ) we determined for each sub-patch one distribution, such that we were able to calculate a standard deviation from 48 values for each bin in the PDF. For the covariance matrix we use 10 out of the 108 realisations and divide each full-sky map in 48 sub-patches, which then results in a covariance matrix measured from 480 fields. Furthermore, both for the covariance and for the error bars in the plotted shear profiles we use Eq. (43) to create noisy shear profiles for each sub-patch, which are then re-scaled to the effective KiDS-1000 area (see Giblin et al. 2021).
Cosmo-SLICS
We use the cosmo-SLICS simulations described in Harnois-Déraps et al. (2019) to determine the validity regime of our revised model for different cosmologies. These are a suite of weak lensing simulations sampling 26 points (listed in Table B.1) in a broad cold dark matter (CDM) parameter space, distributed in a Latin hypercube to minimise interpolation errors. Specifically, the matter density Ω m , the dimensionless Hubble parameter h, the normalisation of the matter power spectrum σ 8 , and the timeindependent equation-of-state parameter of dark energy w 0 are varied over a range that is large enough to complement the analysis of current weak lensing data (see e.g. Harnois-Déraps et al. 2021). Each simulation follows 1536 3 particles inside a cube of co-moving side length L box = 505 h −1 Mpc and n c = 3072 grid cells on the side, starting with initial conditions produced with the Zel'dovich approximation. Moreover, the cosmo-SLICS evolve a pair of simulations at each node, designed to suppress the sampling variance (see Harnois-Déraps et al. 2019, for more details). Each cosmological model is ray-traced multiple times to produce 50 pseudo-independent light cones of size 100 deg 2 .
For each realisation, we create KiDS-1000-like sources and KiDS-LRG-like lens catalogues, following the pipeline described in Harnois-Déraps et al. (2018); notably, we reproduce exactly the source galaxy number density and n(z) that is used in Asgari et al. (2021), who report a total number density n gal = 6.93/arcmin 2 and a redshift distribution estimated from selforganising maps (see Wright et al. 2020). These mock galaxies are then placed at random angular coordinates on 100 deg 2 light cones. In contrast to the T17 mocks, we test our model with two source redshift bins, corresponding to the KiDS-1000 fourth and fifth tomographic bins (hereafter z low s and z high s ). The source galaxies are assigned a shear signal γ from a series of lensing maps, following the linear interpolation algorithm described in Sect. 2 in Harnois-Déraps et al. (2018). For our lens sample we opted to include the second and third tomographic bin of the LRG galaxies described in Vakili et al. (2019)
SLICS
In total the SLICS are a set of over 800 fully independent realisations similar to the fiducial ΛCDM cosmo-SLICS model. The underlying cosmological parameters for each run are the same, fixed to Ω_m = 0.2905, Ω_Λ = 0.7095, Ω_b = 0.0473, h = 0.6898, σ_8 = 0.826 and n_s = 0.969 (see Hinshaw et al. 2013). For Fourier modes k < 2.0 h Mpc⁻¹, the SLICS and cosmo-SLICS three-dimensional dark matter power spectrum P(k) agrees within 2% with the predictions from the Extended Cosmic Emulator (see Heitmann et al. 2014), followed by a progressive deviation for higher k-modes (Harnois-Déraps et al. 2018). We use the SLICS to estimate a reliable covariance matrix, which, combined with the cosmo-SLICS, allows us to test our model on a simulation that is independent of T17. Similar to the T17 simulations, the shear signal of the SLICS is combined with randomly oriented intrinsic shapes ε^s to create ellipticities, where ε^s is drawn directly from a Gaussian distribution with width σ_ε, since the shear information is given here per galaxy. We added an additional layer of realism and used a redshift-dependent shape noise that better reproduces the data properties. Specifically, we used σ_ε = 0.25 and 0.27 for the source bins, as reported in Giblin et al. (2021).
Extracting the SLICS and cosmo-SLICS data vector
The extraction of the data vector for the SLICS and cosmo-SLICS analyses is similar to the T17 case, where shape noise was not included for the cosmo-SLICS data vector to better capture the cosmological signal. Another slight difference is that the light cones are now square, which accentuates the edge effects when the aperture filter overlaps with the light-cone boundaries. In principle, it is possible to weight the outer rims for each N ap map, so that the whole map can be used. Although this would increase our statistical power, it could also introduce a systematic offset. We opted instead to exclude the outer rim for each realisation resulting in an effective area of (10 − 2Θ) 2 deg 2 with Θ the size of the corresponding filter. This procedure also ensures that roughly the same number of background galaxies are used to calculate the shear profile around each pixel.
Testing the revised model
We used the simulations described in Sect. 4 to test our revised model and its accuracy in predicting shear profiles. Following the results of F18 we chose a top-hat filter of 20′ as our starting point and we considered a number of more general filters with a similar angular extent, shown in Fig. 3. Our motivation for studying these filters is as follows: We use a Gaussian filter to test whether the model performs well for non-constant but positive filters; the 'adapted' filter is the filter that results from B20; the 'Mexican' filter removes the local minimum at ϑ ~ 40′; the 'broad Mexican' has a larger width; finally, the 'wide Mexican' suppresses the negative tail. In order to lower the amplitude of the negative part while keeping a similar width, we adjusted the upper bound of the wide-Mexican filter to conserve the compensation to 150′, which makes it better suited to large contiguous survey areas. Before comparing our model to the simulations, we note that we are using here the revised model even for the top-hat filter, for which we could instead use the F18 model directly. Notably, the derivations of δ²_m,U and δ³_m,U are identical in the revised model, and we show in the following plots for the top-hat filter that both models yield almost identical results in predicting the shear profiles. Therefore, from here on, we only show results from the revised model. In the following three sections, we validate the key model ingredients introduced in Sect. 3.1-3.3.
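For illustration, the sketch below implements a normalised top-hat aperture filter and a generic compensated filter whose weighted integral over the aperture vanishes; the exact 'adapted' and 'Mexican' filter shapes used in this work are not reproduced here, so the functional form is only a stand-in.

```python
import numpy as np

def U_tophat(theta, theta_ap=20.0):
    """Normalised top-hat filter of radius theta_ap (arcmin)."""
    return np.where(theta <= theta_ap, 1.0 / (np.pi * theta_ap**2), 0.0)

def U_compensated(theta, theta_ap=40.0):
    """A simple compensated ('Mexican-hat'-like) filter: the integral of
    2*pi*theta*U(theta) over theta vanishes, as required for compensated
    aperture filters."""
    x2 = (theta / theta_ap) ** 2
    return (1.0 - x2) * np.exp(-x2) / (np.pi * theta_ap**2)
```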
Validating p(δ m,U )
We show in Fig. 4 the PDF of the smoothed two-dimensional density contrast for all six filters, and for the two lens bins. We see by inspecting the different panels that the predictions agree with the simulations for the two lens bins within the 1 σ cosmic variance expected for KiDS-1000. We note here that this PDF cannot be measured in real data, and that the real test for the accuracy of our model is the shear signals, which have larger uncertainties due to shape noise. Nevertheless, for the top-hat and the Gaussian we have an agreement between model and simulation well within the 1 σ, which indicates that the log-normal approximation for these filters is good. The other filters show stronger deviations when using a log-normal approximation, but these are weaker when the negative part of the filter approaches zero (wide Mexican) or when the width of the filter increases (broad Mexican), although the negative part of the broad Mexican is stronger than for the Mexican filter. This indicates that when probing larger scales, either with a broader or a wider filter, the log-normal approximation is more accurate. Furthermore, when using the bi-variate log-normal approach discussed in Sect. 3.1, the residuals are even more suppressed, and thus we cannot recognise differences in the match between predicted and measured PDF for all compensated filters. Although the model for the compensated filters is not as good as for the non-negative filters (top-hat and Gaussian), the revised model remains consistent throughout with the T17 simulations.
Validating p(N ap )
We show in Fig. 5 how well the model can predict p(N_ap) given the galaxy distributions described in Sect. 4. As for p(δ_m,U), the best matches are observed for the non-negative filters, where the simple log-normal PDF is used. For the compensated filters with the bi-variate log-normal p(δ_m,U) we note a slight deviation in the skewness of p(N_ap). These discrepancies are not seen when placing galaxies at random positions regardless of any underlying matter density field, as shown in Fig. A.1, which indicates that they must originate either from p(δ_m,U) or from the ⟨w_ϑ|δ_m,U⟩ term (we set the latter to 0 for uniform random fields). It might be that the deviations seen in p(N_ap) are exclusively caused by the deviations in p(δ_m,U), but since the latter are much smaller, we expect that the assumptions made in computing ⟨w_ϑ|δ_m,U⟩ induce additional inaccuracies. Nevertheless, we show next that these deviations result in shear signals whose residuals are well within the statistical uncertainties of Stage III weak lensing surveys such as KiDS-1000. However, the accuracy of the ⟨w_ϑ|δ_m,U⟩ term will likely need to be improved for future surveys like Euclid, as discussed in Sect. 3.2.
Validating γ t |Q
Having quantified the accuracy of the basic ingredients of our model, we are now in a position to compare the predicted and measured shear profiles. This is a major result of our paper, which is shown in Fig. 6. Following G18, we used five quantiles and we measured the shear profiles up to 120′ (or 150′ for the wide Mexican case). For the top-hat, Gaussian, and wide Mexican filters we see no significant deviations between the model and the simulations. For the adapted and the smaller Mexicans the shear profiles show minor discrepancies in some quantiles and at large angular scales, but are always consistent within the KiDS-1000 accuracy. The shapes of the signals are affected by the choice of the filter. We can observe shifts in the peak positions and changes in the slope of the signals, especially at small scales. This will allow us in the future to select filters that optimise the signal-to-noise ratio of the measurement, while being clean of systematics related to small-scale inaccuracies. Finally, we show in Fig. B.2 that for the compensated filters the proposed bi-variate log-normal approach is slightly more accurate than a plain log-normal. Although the difference does not change the final results noticeably, and although it introduces some inconsistency in the sense that we use a bi-variate approach for p(δ_m,U) but not for ⟨κ_<ϑ|δ_m,U⟩, we decided to stay with the proposed ansatz because it is slightly more accurate, and we plan to use p(N_ap) in future analyses.
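A schematic of the quantile step discussed above: lines of sight are split into quantiles of N_ap and the tangential shear around the pixels of each quantile is stacked. In practice this measurement is done with dedicated correlation-function codes; the binned estimator and array layout below are purely illustrative.

```python
import numpy as np

def split_into_quantiles(nap_values, n_quantiles=5):
    """Assign each line of sight to one of n_quantiles bins of N_ap."""
    edges = np.quantile(nap_values, np.linspace(0.0, 1.0, n_quantiles + 1))
    # digitize with the inner edges gives labels 0..n_quantiles-1
    return np.digitize(nap_values, edges[1:-1])

def stacked_gamma_t(sep, gamma_t, labels, quantile, bins):
    """Average tangential shear around all lines of sight of one quantile,
    given per-pair separations sep and tangential shears gamma_t."""
    mask = labels == quantile
    prof, _ = np.histogram(sep[mask], bins=bins, weights=gamma_t[mask])
    counts, _ = np.histogram(sep[mask], bins=bins)
    return prof / np.maximum(counts, 1)
```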
In order to check whether the discrepancies seen for some compensated filters yield biased results, we performed an MCMC analysis. As our data vector we used the T17 shear profiles shown in Fig. 6, where we made a conservative cut and included only scales above 14′, since, as shown in F18, the model is not fully accurate at small angular scales. For the comparison we decided to use the adapted filter and the top-hat filter, to have one analysis with and one without these discrepancies. Furthermore, since the mean aperture mass summed over all quantiles vanishes per definition, one of the five shear signals is fully determined by the others, and so we discarded in all cases the middle quantile with the lowest signal. Thus, we ended up with data and model vectors of size 88. As explained previously, we measured our covariance matrix from ten T17 simulations, each divided into 48 sub-patches, for a total of 480 sub-patches. We note here that the galaxy number density can deviate slightly between the different realisations due to the Poisson sampling. Given the amplitude of these small fluctuations, they can be safely neglected. Next we de-biased the inverse covariance matrix C^-1 following Hartlap et al. (2007),
C^{-1} = \frac{n - p - 2}{n - 1}\,\hat{C}^{-1} \,, \quad (45)
where n is the number of simulations (480) and p the size of the data vector (88). Finally, given our data d measured from only one noise-free T17 realisation, and our model vector m, we measured the χ 2 statistics as
\chi^2 = [m - d]^{\rm T}\, C^{-1}\, [m - d] \,. \quad (46)
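A compact sketch of Eqs. (45) and (46); the values n = 480 and p = 88 quoted in the text would be passed as arguments.

```python
import numpy as np

def debiased_inv_cov(cov_hat, n_sim, p):
    """Hartlap et al. (2007) de-biasing of the inverse sample covariance, Eq. (45)."""
    return (n_sim - p - 2) / (n_sim - 1) * np.linalg.inv(cov_hat)

def chi2(model, data, inv_cov):
    """Gaussian chi^2 of Eq. (46)."""
    r = model - data
    return r @ inv_cov @ r
```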
Given this set-up we ran an MCMC varying the matter density parameter Ω_m and the normalisation of the power spectrum σ_8 for the adapted and the top-hat filters, where we marginalised over the biases of the lens samples. As shown in Fig. 7, the analysis with the adapted filter results in a biased inference in the Ω_m-σ_8 plane (although still within 1 σ); this is not the case for the top-hat filter. We note here that this bias is due to the systematic offset in the slope of the highest quantile, which in turn is highly sensitive to Ω_m. Since the amplitudes of the shear profiles are correct and these are highly correlated with the S_8 = σ_8 √(Ω_m/0.3) parameter, the contours shift to smaller σ_8 values in order to compensate for the bigger Ω_m value. In the next section we calibrate the model to investigate whether this systematic bias can be corrected.
Calibrating the model
In this section we calibrate the remaining small inaccuracies of the analytical model seen in Fig. 6, which result in the systematic bias we observed in the parameter constraints shown in Fig. 7. For this we decided to divide out, for each quantile, the residuals between the model γ^M evaluated at the T17 cosmological parameters p_T17 and the noiseless shear profiles measured from the T17 simulations, γ^T17, such that the calibrated model at parameters p is defined as
\gamma^{\rm M,cal}(p) = \gamma^{\rm M}(p)\,\frac{\gamma^{\rm T17}}{\gamma^{\rm M}(p_{\rm T17})} \,. \quad (47)
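The calibration of Eq. (47) amounts to an element-wise rescaling per quantile and angular bin; a minimal sketch with array-valued inputs (e.g. numpy arrays) assumed:

```python
def calibrate_model(gamma_model, gamma_model_T17, gamma_T17):
    """Multiplicative calibration of Eq. (47): rescale the model prediction at
    parameters p by the residual between the noiseless T17 measurement and the
    model evaluated at the T17 cosmology."""
    return gamma_model * gamma_T17 / gamma_model_T17
```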
Since we used the n(z) combinations of the fiducial cosmo-SLICS shown in Fig. 2 to validate the calibration, we decided to use the n(z) shown in Fig. 1, and, in order to have the source n(z) as close as possible to the one of the cosmo-SLICS, we averaged several T17 shear grids at different redshifts for the same realisation, weighted by the source n(z) shown in Fig. 2. In Fig. B.4 we show the calibration vectors for the highest and lowest quantile for the top-hat and adapted filter, where it can be seen that the different lens n(z) is more important than the source n(z). Next, in order to investigate whether the calibration decreases the systematic biases, we performed another MCMC analysis on independent simulations, where our data vector is the fiducial cosmology from the cosmo-SLICS shear profiles shown in Fig. 8, with the original model in red and the calibrated one in black. As before we used the adapted filter and the top-hat filter. The match between the predicted and measured shear profiles is slightly degraded compared to the T17 simulations, which could be caused by edge effects (contiguous full sky vs 100 deg^2 patches), the smaller statistic (41 253 deg^2 vs 5000 deg^2), or differences in the underlying matter power spectrum P(k) that is used in the model. We use the Takahashi et al. (2012) Halofit function throughout this paper, which is calibrated on the same N-body code that is used to create the T17 simulations (Springel et al. 2001, Gadget2), and which is known to have an excess power of 5-8% in the mildly non-linear regime (Heitmann et al. 2014). The cosmo-SLICS, in contrast, are produced with CUBEP3M (Harnois-Déraps et al. 2013), whose P(k) agrees better with the Cosmic Emulator of Heitmann et al. (2014). We tested different choices of power spectrum models calculated with the pyccl package (Chisari et al. 2019), but found differences in the predicted shear profiles that are negligible compared to the expected KiDS-1000 uncertainties.
Discarding again the middle quantile with the lowest signal, using four different redshift combinations (Sect. 4.3) and the signal at all scales, because the model is calibrated at all scales, we have data and model vectors of size 160. In this scenario we calculated our covariance matrix from 614 SLICS simulations with shape noise that mimics the KiDS-1000 data. After de-biasing the inverse covariance matrix C^-1 with Eq. (45) we calculated the χ² with Eq. (46). Given this set-up we ran multiple MCMC, where we used the original model and the calibrated model. As shown in Fig. 9, the calibrated model for the adapted filter results in a less biased inference compared to the original model. Interestingly, the results for the top-hat filter seen in Fig. B.3 are slightly more biased than the calibrated model for the adapted filter. Since this offset is still inside 1 σ, it is likely to be only a statistical fluke due to the remaining residual between model and cosmo-SLICS simulations. The constraining powers of the top-hat and adapted filters are different because the smoothing scales of the two filters were not adjusted as in Burger et al. (2020), and are here sensitive to different physical scales. Nevertheless, we show in Table 1 the resulting constraints for both filters, where it is seen that the calibration moves the results, also for the top-hat filter, closer to the truth.

Table 1: Overview of the maximum-posterior cosmologies with the constraining power that we obtain for the original and calibrated model. For all results we marginalise over the biases b of the lenses. The input cosmology of the fiducial cosmo-SLICS is Ω_m = 0.2905, σ_8 = 0.8364 and thus S_8 = σ_8 √(Ω_m/0.3) = 0.8231. We fixed the time-independent equation-of-state parameter of dark energy w_0 = -1.0 and the Hubble parameter h = 0.6898 to their true values. We note that the parameter uncertainties increase slightly if we also vary parameters like w_0, h, or the scalar spectral index n_s.

In order to compare our results with those of G18, who derived constraints of Ω_m = 0.26^{+0.04}_{-0.03} and S_8 = 0.90^{+0.10}_{-0.08} with their fiducial analysis, we need to multiply our uncertainty intervals by √(777.4/1321) to account for the smaller area of KiDS-1000 (777.4 deg^2) compared to the DES Y1 area (1321 deg^2). Furthermore, we exclusively used information about the shear profiles, whereas G18 also used the mean aperture number in each quantile. For this work we were sceptical about using the aperture number for the compensated filters, because we have significant residual discrepancies between model and simulation, which would affect our analysis. The match of the shear profiles, in turn, is very accurate in our simulations, which shows that they are robust against uncertainties in P(N_ap). For instance, if one monotonically transforms the N_ap values, the predicted P(N_ap) changes, but the segmentation into quantiles is not affected, hence the shear profiles would remain the same. In order to use P(N_ap) in future analyses we need to model shot noise in the galaxy distribution and investigate whether the residuals between model and simulations result in systematic biases, but we keep this for future work. Nevertheless, we see that our constraints from using only information about the shear profiles can be similar to the ones in G18. In addition, due to the calibration method used here, smaller smoothing scales are available than those recommended in F18, where even the top-hat filter has significant deviations.
This could allow us to further improve the significance for future DSS analyses or to investigate effects such as baryonic feedback and intrinsic alignments, which are typically relevant on scales < 10Mpc/h.
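For reference, a small sketch of the S_8 definition and of the survey-area rescaling used in the comparison with G18 above; the √(777.4/1321) factor follows the scaling quoted in the text.

```python
import numpy as np

def S8(sigma8, omega_m):
    """Lensing amplitude parameter S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * np.sqrt(omega_m / 0.3)

# rescale a KiDS-1000-area uncertainty to the DES Y1 footprint for comparison,
# assuming errors scale roughly as 1/sqrt(area)
area_scale = np.sqrt(777.4 / 1321.0)
```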
Summary and conclusion
In our previous work (Burger et al. 2020) we showed that using compensated filters in the density split statistic (DSS) to quantify over- and underdense regions on the sky has advantages compared to the top-hat filter, both in terms of the overall S/N and of recovering accurately the galaxy bias term. Furthermore, we expect that compensated filters are less influenced by baryonic effects, since they are more confined in Fourier space and are therefore better at smoothing out large ℓ-modes where baryonic effects play an important role. This will be investigated in more detail in a follow-up paper, when we start dealing with real data. Gruen et al. (2018) demonstrated that the DSS is a powerful cosmological tool by constraining cosmological parameters with DSS measurements from the Dark Energy Survey (DES) First Year and Sloan Digital Sky Survey (SDSS) data, using the DSS model derived in Friedrich et al. (2018), which uses a top-hat filter. They found for the matter density parameter Ω_m = 0.26^{+0.04}_{-0.03}, a constraint that agrees with and is competitive with the DES analysis of galaxy and shear two-point functions (see Abbott et al. 2018).
Following these works, we modify the model of Friedrich et al. (2018) in such a way that it can predict the shear profiles ⟨γ_t|Q⟩ for a given quantile Q of the aperture number N_ap for general filters (Gaussian and also compensated filters). This is achieved by recalculating the three basic ingredients, which are the PDF of the projected matter density contrast smoothed with the filter function, p(δ_m,U); the expectation value of the convergence inside a radius ϑ for a fixed smoothed matter density contrast, ⟨κ_<ϑ|δ_m,U⟩; and the distribution of N_ap for the given filter function U given the smoothed matter density contrast, p(N_ap|δ_m,U). For ⟨κ_<ϑ|δ_m,U⟩ we modified the calculation of the moments for general filters, while we introduced new approaches to calculate p(N_ap|δ_m,U) and p(δ_m,U) for compensated filters. For non-negative filters, δ_m,U is well described by a log-normal PDF, while we found significant deviations for compensated filters. To solve this issue we used a bi-variate log-normal ansatz, where we assumed that δ_m,U can be divided into two random variables that each separately follow a log-normal distribution. For the calculation of p(N_ap|δ_m,U) we derived an expression for the corresponding characteristic function, which can be used either directly to calculate p(N_ap|δ_m,U) by inverse Fourier transformation or by calculating the first three moments, which then specify a log-normal distribution for p(N_ap|δ_m,U). The differences between these two approaches are considerably smaller than the statistical uncertainty, and so we used the latter approach because of its smaller computational time.
In order to validate the revised model, we compared it to the Takahashi et al. (2017) simulations. For non-negative filters like a top-hat or a Gaussian, no significant differences between the model and simulations were detected for the PDF or the tangential shear profiles. For compensated filters, however, we found some discrepancies in the predicted PDF of N_ap and shear signals, which result in a biased inference, although still inside 1 σ. To correct this biased result, we calibrated the model to match the noiseless Takahashi et al. (2017) simulations and tested the calibrated model with the independent fiducial cosmology of the cosmo-SLICS (Harnois-Déraps et al. 2019). With the calibration applied, all systematic biases are removed, so we are confident that we can apply the model to Stage III surveys such as KiDS-1000. Although this calibration is less important for the top-hat and Gaussian filters, it is still an interesting approach because it allows even smaller scales to be used, for both the shear profiles and the filter scales. The use of smaller scales, where the original models fail, makes it possible to increase the constraining power or to study baryonic effects that normally play an important role only at small scales.
After passing all these tests, we are confident that the revised model can be readily applied to Stage III lensing data. We note that a number of systematic effects related to weak lensing analyses will require external simulations, notably regarding the inclusion of secondary signal from the intrinsic alignments of galaxies, or from the impact of baryonic feedback on the matter distribution. However, our model is able to capture the uncertainty on the lens and source redshift distribution, the shape calibration bias, or the galaxy bias at a low computational cost, and is therefore ideally suited to perform competitive weak lensing analyses in the future.
\frac{1}{(2\pi)^3}\,\frac{\sin(Lk_\parallel/2)}{Lk_\parallel/2}\,\frac{2J_1(k_\perp R)}{k_\perp R} \equiv \frac{1}{(2\pi)^3}\,\frac{\sin(Lk_\parallel/2)}{Lk_\parallel/2}\, W^{\rm th}_R(k_\perp) \,, \quad (A.1)
where J 1 is the first Bessel function, and k || and k ⊥ are the components of k parallel and orthogonal to the cylinder, respectively. The variance of the matter contrast within such a cylinder is given at leading order by
\langle \delta^2_{R,L}\rangle(\chi) = D_+^2 \int dk_\parallel \int d^2k_\perp \left[\frac{\sin(Lk_\parallel/2)}{Lk_\parallel/2}\right]^2 \left[W^{\rm th}_R(k_\perp)\right]^2 P_{\rm lin,0}(k_\perp) \approx \frac{2\pi D_+^2}{L} \int dk\, k \left[W^{\rm th}_R(k)\right]^2 P_{\rm lin,0}(k) \,, \quad (A.2)
where the last expression follows from L ≫ R, and since the integration depends from now on only on k_⊥ we write the orthogonal component as k. The linear matter power spectrum P_lin,0 is calculated using Eisenstein & Hu (1998), and D_+ is the growth factor, which depends on the conformal time. We note that the factor 1/L cancels out when projecting the moments in Eqs. (A.28-A.30) using the Limber approximation (Limber 1953). In analogy to this derivation for a top-hat filter, we get for a general filter U that
W_{U_\chi}(k) = \int_0^{2\pi} d\vartheta \int_0^{\infty} dr\, r\, U_\chi(r)\, {\rm e}^{-{\rm i}kr\cos\vartheta} = 2\pi \int_0^{\infty} dr\, J_0(kr)\, r\, U_\chi(r) \,, \quad (A.3)
where U_χ(r) = U(r/χ)/χ² = U(ϑ)/χ², with U(ϑ) being the filter measured in angular coordinates (see Fig. 3). Correspondingly, the variance of the matter density contrast for a general filter U in the flat-sky approximation is
\langle \delta^2_{U,L}\rangle(\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W^2_{U_\chi}(k)\, P_{\rm lin,0}(k) \,. \quad (A.4)
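Equation (A.3) is an order-zero Hankel transform; a simple numerical sketch using direct quadrature follows (a dedicated Hankel-transform routine would be preferable in production; the grid choices are illustrative).

```python
import numpy as np
from scipy.special import j0

def W_U(k, U_of_r, r_max, n_r=4096):
    """2D Fourier transform of an axially symmetric filter U(r),
    W_U(k) = 2*pi * int_0^inf dr r U(r) J_0(k r), cf. Eq. (A.3)."""
    r = np.linspace(0.0, r_max, n_r)
    integrand = r * U_of_r(r) * j0(k * r)
    return 2.0 * np.pi * np.trapz(integrand, r)
```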
Following lines similar to those of Appendix B.4 of F18, the leading-order contribution to the skewness of the matter density contrast for the general filter U can be calculated as
\langle \delta^3_{U,L}\rangle(\chi) = 3\hat{c}\,\pi^{-1} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2) \int d\phi\, W_{U_\chi}\!\left(\sqrt{q_1^2+q_2^2+2q_1q_2\cos\phi}\right) F_2(q_1,q_2,\phi)
\equiv 3\hat{c}\,\pi^{-1} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\, \Phi_{U_\chi}(q_1,q_2) \,, \quad (A.5)
where \hat{c} = 4\pi^2 D_+^4 / L^2. The function F_2 in a general ΛCDM universe is given by
F_2(q_1,q_2,\phi) = \frac{1}{2}\left[2 + \frac{q_1}{q_2}\cos\phi + \frac{q_2}{q_1}\cos\phi\right] + (1-\mu)\left(\cos^2\phi - 1\right) = 1 + \frac{1}{2}\cos\phi\left(\frac{q_1}{q_2} + \frac{q_2}{q_1}\right) - (1-\mu)\sin^2\phi \,, \quad (A.6)
where µ results from perturbation theory and is a function of the growth factor D + (see Appendix B.1 in F18 for more details) 9 , and φ is the angle between the vectors with absolute values q 1 and q 2 . Given the definition of W U χ in Eq. (A.3), Φ U χ can be written as
\Phi_{U_\chi}(q_1,q_2) = 2\pi \int_0^{\infty} dr\, r\, U_\chi(r) \int d\phi\, J_0\!\left(r\sqrt{q_1^2+q_2^2+2q_1q_2\cos\phi}\right) F_2(q_1,q_2,\phi) \,. \quad (A.7)
Next, we use Graf's addition theorem (see e.g. Abramowitz & Stegun 1972), which states that
J_0\!\left(\sqrt{q_1^2+q_2^2+2q_1q_2\cos\phi}\right) = \sum_{m=-\infty}^{\infty} (-1)^m J_m(q_1) J_m(q_2)\, {\rm e}^{{\rm i}m\phi} = J_0(q_1)J_0(q_2) + 2\sum_{m=1}^{\infty} (-1)^m J_m(q_1)J_m(q_2)\cos(m\phi) \,, \quad (A.8)
such that \Phi_{U_\chi}(q_1,q_2) becomes
2\pi \int_0^{\infty} r\,U_\chi(r)\,dr \int_0^{2\pi} d\phi \left[ J_0(rq_1)J_0(rq_2) + 2\sum_{m=1}^{\infty}(-1)^m J_m(rq_1)J_m(rq_2)\cos(m\phi)\right] \left[1 + \frac{1}{2}\cos\phi\left(\frac{q_1}{q_2}+\frac{q_2}{q_1}\right) - (1-\mu)\sin^2\phi\right]
= \underbrace{2\pi^2(1+\mu) \int_0^{\infty} r\,U_\chi(r)\,dr\, J_0(rq_1)J_0(rq_2)}_{A} - \underbrace{2\pi^2 \int_0^{\infty} r\,U_\chi(r)\,dr\, J_1(rq_1)J_1(rq_2)\left(\frac{q_1}{q_2}+\frac{q_2}{q_1}\right)}_{B} + \underbrace{2\pi^2(1-\mu) \int_0^{\infty} r\,U_\chi(r)\,dr\, J_2(rq_1)J_2(rq_2)}_{C} \,, \quad (A.9)
where we made use of the orthogonality of the trigonometric functions. Plugging \Phi_{U_\chi}(q_1,q_2) back into Eq. (A.5) and considering each term separately we get
A: \; 3\hat{c}\,\pi^{-1} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\; 2\pi \int_0^{\infty} dr\, r U_\chi(r)\, \pi(1+\mu)\, J_0(rq_1) J_0(rq_2)
= 6\pi\hat{c}(1+\mu) \int_0^{\infty} dr\, r U_\chi(r) \left[\int dq\, q\, W_{U_\chi}(q) P_{\rm lin,0}(q) J_0(rq)\right]^2 \,, \quad (A.10)
and by analogy
C: \; 3\hat{c}\,\pi^{-1} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\; 2\pi \int_0^{\infty} dr\, r U_\chi(r)\, \pi(1-\mu)\, J_2(rq_1) J_2(rq_2)
= 6\pi\hat{c}(1-\mu) \int_0^{\infty} dr\, r U_\chi(r) \left[\int dq\, q\, W_{U_\chi}(q) P_{\rm lin,0}(q) J_2(rq)\right]^2 \,, \quad (A.11)
and finally
B: \; -3\hat{c}\,\pi^{-1} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\; 2\pi \int_0^{\infty} dr\, r U_\chi(r)\, \pi\, J_1(rq_1) J_1(rq_2) \left(\frac{q_1}{q_2}+\frac{q_2}{q_1}\right) .
To simplify these terms we rewrite the Bessel functions in terms of the top-hat window W^{\rm th}_r(q) = 2J_1(rq)/(rq) and its derivatives, for example
rq\,J_1(rq) = -\frac{3}{2}\,\frac{d}{d\ln r} W^{\rm th}_r(q) - \frac{r^2}{2}\,\frac{d^2}{dr^2} W^{\rm th}_r(q) \,. \quad (A.16)
Using these relations together with the following notation,
Q_1(r,\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W_{U_\chi}(k)\, W^{\rm th}_r(k)\, P_{\rm lin,0}(k) \,, \quad (A.17)
Q_2(r,\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W_{U_\chi}(k)\, \frac{d}{d\ln r} W^{\rm th}_r(k)\, P_{\rm lin,0}(k) \,, \quad (A.18)
Q_3(r,\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W_{U_\chi}(k)\, \frac{d^2}{dr^2} W^{\rm th}_r(k)\, P_{\rm lin,0}(k) \,, \quad (A.19)
the three terms become
A: \; 6\pi\hat{c}(1+\mu) \int_0^{\infty} dr\, r\,U_\chi(r) \left[\int dq\, q\, W_{U_\chi}(q) P_{\rm lin,0}(q) J_0(rq)\right]^2 = 6\pi(1+\mu) \int_0^{\infty} dr\, r\,U_\chi(r) \left[Q_1(r,\chi) + \tfrac{1}{2} Q_2(r,\chi)\right]^2 \,, \quad (A.20)
C: \; 6\pi\hat{c}(1-\mu) \int_0^{\infty} dr\, r\,U_\chi(r) \left[\int dq\, q\, W_{U_\chi}(q) P_{\rm lin,0}(q) J_2(rq)\right]^2 = 6\pi(1-\mu) \int_0^{\infty} dr\, r\,U_\chi(r) \left[-\tfrac{1}{2} Q_2(r,\chi)\right]^2 \,, \quad (A.21)
and
B: \; -12\pi\hat{c} \int_0^{\infty} dr\, r\,U_\chi(r) \left[\int dq_1\, q_1\, W_{U_\chi}(q_1) P_{\rm lin,0}(q_1)\, rq_1 J_1(rq_1)\right] \left[\int dq_2\, q_2\, W_{U_\chi}(q_2) P_{\rm lin,0}(q_2)\, \frac{1}{rq_2} J_1(rq_2)\right]
= -12\pi \int_0^{\infty} dr\, r\,U_\chi(r) \left[-\frac{3}{2} Q_2(r,\chi) - \frac{r^2}{2} Q_3(r,\chi)\right] \frac{1}{2} Q_1(r,\chi) \,. \quad (A.22)
Finally, combining A, B, and C, the skewness of \delta_{U_\chi,L} simplifies to
\langle \delta^3_{U,L}\rangle(\chi) = 6\pi \int_0^{\infty} dr\, r\,U_\chi(r) \left\{ (1+\mu)\left[Q_1(r,\chi) + \tfrac{1}{2}Q_2(r,\chi)\right]^2 + (1-\mu)\,\tfrac{1}{4}Q_2^2(r,\chi) + \tfrac{3}{2} Q_1(r,\chi)Q_2(r,\chi) + \tfrac{r^2}{2} Q_1(r,\chi) Q_3(r,\chi) \right\}
= 3\pi \int_0^{\infty} dr\, r\,U_\chi(r) \left[ 2(1+\mu)\left(Q_1^2(r,\chi) + Q_1(r,\chi)Q_2(r,\chi)\right) + 3Q_1(r,\chi)Q_2(r,\chi) + Q_2^2(r,\chi) + r^2 Q_1(r,\chi)Q_3(r,\chi) \right]
= 3\pi \int dr\, U_\chi(r)\, \frac{d}{dr}\left\{ r^2 \left[ (1+\mu)\, Q_1^2(r,\chi) + Q_1(r,\chi)\, Q_2(r,\chi) \right] \right\} \,, \quad (A.23)
where it is seen that for a top-hat of size ϑ, with U_χ(r) ∝ H(χϑ - r), the result in Eq. (B.35) of F18 immediately follows. Although all necessary ingredients for specifying the PDF of δ_m,U are derived already, we still need moments like ⟨δ_ϑ,L δ²_U,L⟩(χ) to compute quantities like ⟨κ_<ϑ|δ_m,U⟩ or ⟨w_<ϑ δ^k_m,U⟩. With the definitions of two further integrals,
Q_4(r,\chi\vartheta) = \frac{2\pi D_+^2}{L} \int dq\, q\, W^{\rm th}_{\chi\vartheta}(q)\, P_{\rm lin}(q)\, W^{\rm th}_r(q) \,, \quad (A.24)
Q_5(r,\chi\vartheta) = \frac{2\pi D_+^2}{L} \int dq\, q\, W^{\rm th}_{\chi\vartheta}(q)\, P_{\rm lin}(q)\, \frac{d}{d\ln r} W^{\rm th}_r(q) \,, \quad (A.25)
and using the result of F18 that for a top-hat filter of size R
\Phi^{\rm th}_R(q_1,q_2) = \int d\phi\, W^{\rm th}_R\!\left(\sqrt{q_1^2+q_2^2+2q_1q_2\cos\phi}\right) F_2(q_1,q_2,\phi) = \pi(1+\mu)\, W^{\rm th}_R(q_1) W^{\rm th}_R(q_2) + \frac{\pi}{2}\,\frac{d}{d\ln R}\left[W^{\rm th}_R(q_1) W^{\rm th}_R(q_2)\right] \,, \quad (A.26)
the joint filter moment between the matter density contrast smoothed with the general filter and the matter density contrast smoothed with a top-hat of size ϑ follows analogously to the skewness, and is given by
\langle \delta_{\vartheta,L}\, \delta^2_{U,L}\rangle(\chi) = \frac{\hat{c}}{\pi} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W_{U_\chi}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\, \Phi^{\rm th}_{\chi\vartheta}(q_1,q_2) + \frac{2\hat{c}}{\pi} \int dq_1 \int dq_2\, q_1 q_2\, W_{U_\chi}(q_1) W^{\rm th}_{\chi\vartheta}(q_2) P_{\rm lin,0}(q_1) P_{\rm lin,0}(q_2)\, \Phi_{U_\chi}(q_1,q_2)
= (1+\mu)\, Q_1^2(\chi\vartheta,\chi) + Q_1(\chi\vartheta,\chi)\, Q_2(\chi\vartheta,\chi) + 2\pi \int dr\, U_\chi(r)\, \frac{d}{dr}\left\{ r^2 (1+\mu)\, Q_1(r,\chi)\, Q_4(r,\chi\vartheta) + \frac{r^2}{2}\left[ Q_1(r,\chi)\, Q_5(r,\chi\vartheta) + Q_2(r,\chi)\, Q_4(r,\chi\vartheta)\right] \right\} \,. \quad (A.27)
In order to go to the non-linear regime for second-order moments, we replace the linear power spectrum in the above calculations with the non-linear power spectrum, which in turn is determined with the halofit model from Takahashi et al. (2012) using an analytic approximation for the transfer function (Eisenstein & Hu 1998).
For the third-order moments we use that for a top-hat filter of size R the filter simplifies to U_χ(r) = H(R - r)/(πR²), such that
Q_1(R,\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W_{U_\chi}(k)\, W^{\rm th}_R(k)\, P_{\rm lin,0}(k) = \frac{2\pi D_+^2}{L} \int dk\, k \left[W^{\rm th}_R(k)\right]^2 P_{\rm lin,0}(k) = \langle \delta^2_{R,L}\rangle(\chi) \,, \quad (A.31)
and
Q_2(R,\chi) = \frac{2\pi D_+^2}{L} \int dk\, k\, W_{U_\chi}(k)\, \frac{d}{d\ln r} W^{\rm th}_R(k)\, P_{\rm lin,0}(k) = \frac{2\pi D_+^2}{L} \int dk\, k\, W^{\rm th}_R(k)\, \frac{d}{d\ln R} W^{\rm th}_R(k)\, P_{\rm lin,0}(k) = \frac{1}{2}\,\frac{d}{d\ln R}\langle \delta^2_{R,L}\rangle(\chi) \,. \quad (A.32)
For the general filter we use that the numerical integration over r in ⟨δ³_{U_χ,L}⟩(χ) results basically in a sum of top-hat filters, such that we make use of S_3 to scale each term individually to the non-linear regime. For the joint filter moment ⟨δ_{χϑ,L} δ²_{U_χ,L}⟩(χ) we use a generalised version of S_3, which states that for two different top-hat filters of sizes R_1 and R_2
\langle \delta^2_{R_1,L}\, \delta_{R_2,L}\rangle(\chi) \propto \langle \delta_{R_1,L}\, \delta_{R_2,L}\rangle(\chi)\, \langle \delta^2_{R_1,L}\rangle(\chi) \,. \quad (A.36)
Using again that the r-integration results in a sum of top-hat filters and factoring out the non-derivative terms similar to Eq. (A.34), we scale all the non-derivative terms individually to the non-linear regime.
Appendix A.4: Characteristic function
We consider a large circle of radius R, inside of which there are N = n 0 πR 2 galaxies, where n 0 is the galaxy number density. The probability of finding a galaxy at separation ϑ is
p(\vartheta; \delta_{m,U}) = \frac{2\vartheta}{R^2 \eta}\left(1 + b\,\langle w_\vartheta|\delta_{m,U}\rangle\right) \,, \quad (A.37)
where w ϑ |δ m,U is the expectation of the mean 2D density contrast on a circle at ϑ (see Eq. 37) given the smoothed density contrast defined in Eq. (38). The assumption of linear galaxy bias enters here by the term b w ϑ |δ m,U . The normalisation is
\eta = \int_0^R \frac{2\vartheta}{R^2}\left(1 + b\,\langle w_\vartheta|\delta_{m,U}\rangle\right) d\vartheta \,, \quad (A.38)
which goes to unity for R → ∞. The characteristic function (CF) of the aperture number N ap , given the smoothed 2D density contrast δ m,U , is given by
\Psi(t) = \langle {\rm e}^{{\rm i}tN_{\rm ap}}\rangle_{\delta_{m,U}} = \int dN_{\rm ap}\, p(N_{\rm ap}|\delta_{m,U})\, {\rm e}^{{\rm i}tN_{\rm ap}} = \prod_{i=1}^{N} \int_0^R d\vartheta_i\, p(\vartheta_i; \delta_{m,U})\, {\rm e}^{{\rm i}t\sum_j U(\vartheta_j)} = \left[ \int_0^R d\vartheta\, \frac{2\vartheta}{R^2\eta}\left(1 + b\langle w_\vartheta|\delta_{m,U}\rangle\right) {\rm e}^{{\rm i}tU(\vartheta)} \right]^N
= \left[ \int_0^R d\vartheta\, \frac{2\vartheta}{R^2\eta}\left(1 + b\langle w_\vartheta|\delta_{m,U}\rangle\right)\left({\rm e}^{{\rm i}tU(\vartheta)} - 1 + 1\right) \right]^N = \left[ 1 + \frac{\pi n_0}{N\eta} \int_0^R d\vartheta\, 2\vartheta\left(1 + b\langle w_\vartheta|\delta_{m,U}\rangle\right)\left({\rm e}^{{\rm i}tU(\vartheta)} - 1\right) \right]^N
\to \exp\left\{ 2\pi n_0 \int_0^{\infty} d\vartheta\, \vartheta\left(1 + b\langle w_\vartheta|\delta_{m,U}\rangle\right)\left({\rm e}^{{\rm i}tU(\vartheta)} - 1\right) \right\} \quad (N, R \to \infty) \,,
where we used in the second line that
N_{\rm ap} = \int d^2\vartheta\; U(|\boldsymbol{\vartheta}|)\, n(\boldsymbol{\vartheta}) = \sum_j U(\vartheta_j) \,,
with n(\boldsymbol{\vartheta}) = \sum_j \delta_{\rm D}(\boldsymbol{\vartheta} - \boldsymbol{\vartheta}_j), and that the galaxy positions ϑ_i independently trace the density profile ⟨w_ϑ|δ_m,U⟩. As discussed in Sect. 3.3, the exact approach is to transform the CF to the probability density function p(N_ap|δ_m,U) by use of the inverse Fourier transformation. Alternatively, we assume that the PDF is well approximated by a log-normal distribution. Expanding the CF yields the raw moments in terms of the integrals E_n given below, µ_1 = E_1, µ_2 = (E_1)² + E_2 and µ_3 = (E_1)³ + 3E_1E_2 + E_3 (A.45), and so the central moments are
\mu_2^{\rm c} = \mu_2 - (\mu_1)^2 = (E_1)^2 + E_2 - (E_1)^2 = E_2 \,, \quad (A.46)
\mu_3^{\rm c} = \mu_3 - 3\mu_1\mu_2 + 2(\mu_1)^3 = E_3 \,. \quad (A.47)
To check this derivation and compare it to the direct approach of using the inverse Fourier transform, we created an idealised case of a full-sky uniform random field with n_side = 4096 and a number density n_0 ≈ 0.034 arcmin^-2. Next we calculated, by use of the healpy internal smoothing function, N_ap for the top-hat filter of size 20′ and for the adapted filter. In the determination of the predicted PDF we set ⟨w_ϑ|δ_m,U⟩ = 0, so p(N_ap) follows immediately with Eq. (A.39) or Eq. (42). It is clearly seen in Fig. A.1 that the model for both filters is an excellent fit to the measured PDF of the aperture number. Additionally, we show in Fig. A.2 a comparison between the predicted p(N_ap) using the full characteristic function Eq. (36) and the log-normal approach Eq. (A.39) for the low-redshift bin z_l^low from the Takahashi set-up. In the lower panel the residual difference between the two methods is three orders of magnitude smaller than the signal itself, which shows that the two approaches are identical given the uncertainties we expect for Stage III surveys. Since the log-normal approach is faster to compute, we can use it in future analyses where computational speed is essential. To account for the finite angular resolution, T17 suggested a simple damping factor at small scales,
C_\kappa(\ell) \to \frac{C_\kappa(\ell)}{1 + (\ell/\ell_{\rm res})^2} \,, \quad (B.1)
where ℓ_res = 1.6 × N_side. Additionally, to take the shell thickness into account they constructed a simple fitting formula by which the matter power spectrum should be modified to
P_\delta(k) \to P^{\rm W}_\delta(k) = \frac{(1 + c_1 k^{-\alpha_1})^{\alpha_1}}{(1 + c_2 k^{-\alpha_2})^{\alpha_3}}\, P_\delta(k) \,, \quad (B.2)
where the parameters are simulation specific and are c_1 = 9.5171 × 10^-4, c_2 = 5.1543 × 10^-3, α_1 = 1.3063, α_2 = 1.1475, α_3 = 0.62793, and the wavenumber k is in units of h/Mpc. We note that although we incorporated these corrections in the following, they have very little effect on the scales we are considering.
Fig. B.2: Comparison between the uncalibrated shear profiles for the adapted filter with and without using the bi-variate log-normal approach discussed in Sect. 3.1. The ratio is calculated between the measured shear profiles from T17 for the lower LRG bin and for sources where several T17 shear grids were averaged, weighted by the n(z) given in Fig. 2. The bi-variate log-normal shear profiles are more consistent with the measured shear profiles and thus (although the shear signals were calibrated) the more accurate model was chosen. Here only the highest and lowest two quantiles are shown because the middle one is too close to zero.
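A small sketch applying the two T17 resolution corrections, Eqs. (B.1) and (B.2), with the coefficients quoted above; the function names and argument conventions are our own.

```python
import numpy as np

def damp_convergence_cl(cl, ell, nside=4096):
    """Resolution damping of the convergence power spectrum, Eq. (B.1)."""
    ell_res = 1.6 * nside
    return cl / (1.0 + (ell / ell_res) ** 2)

def t17_shell_thickness_correction(pk, k):
    """Finite shell-thickness correction of Eq. (B.2); k in h/Mpc,
    with the simulation-specific coefficients quoted in the text."""
    c1, c2 = 9.5171e-4, 5.1543e-3
    a1, a2, a3 = 1.3063, 1.1475, 0.62793
    return (1.0 + c1 * k ** (-a1)) ** a1 / (1.0 + c2 * k ** (-a2)) ** a3 * pk
```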
3.3. (III): p(N_ap|δ_m,U)
Fig. 1: Lens galaxy redshift distribution constructed from the T17 simulation given the true n(z) of the second LRG lens bin (Vakili et al. 2019). The black dashed line shows the redshift of the source galaxies.
Compared to the T17 values, the n(z) of the cosmo-SLICS LRG mocks have a coarser resolution, owing to the redshift resolution of the simulations. Moreover, the n(z) vary slightly for different underlying cosmologies, due to variations in the relation between comoving distance and redshift. Following Vakili et al. (2019), we generate our LRG catalogues assuming a constant linear galaxy bias of 1.72 and 1.74, with galaxy number densities of n_0 = 0.028 gal/arcmin^2 and n_0 = 0.046 gal/arcmin^2.
Fig. 2: Redshift distributions of the second and third LRG (lens) bins and the last two KiDS-1000 (source) bins of the SLICS simulations. The n(z) are scaled such that a comparison is possible.
Fig. 3: Different filters U used in this work to verify the new model. For all filters we scaled the first bin value to 1 arcmin^-2 for comparison. The corresponding Q-filters are shown in Fig. B.1. The wide Mexican filter extends up to 150′.
Fig. 4: PDF of δ_m,U smoothed with the filters shown in Fig. 3. The orange shaded region is the standard deviation of 48 sub-patches scaled by √(777.4/859.4), where 777.4 deg^2 is the effective survey area of KiDS-1000 (see Giblin et al. 2021) and 859.4 deg^2 is the area of one sub-patch. The red dashed curve corresponds to a log-normal PDF with the measured moments δ²_m,U and δ³_m,U from the smoothed T17 density maps, and indicates the accuracy of using a log-normal PDF. The green and the black dashed lines are both from the model; the green corresponds to the PDF of δ_m,U when using the log-normal and the black when using the bi-variate approach described in Eq. (26). The lower panels show the residuals ∆p(δ_m,U) of all lines with respect to the simulations.
Fig. 5: PDF of N_ap calculated with the filters U in Fig. 3. The orange lines are determined with the simulations and the orange shaded region is the standard deviation from 48 sub-patches. The black dashed lines correspond to the results from the new model, and for comparison the red dashed line in the upper left panel is from the old model. The lower panels show the residuals ∆p(N_ap) of all lines with respect to the simulations.
Fig. 6: Predicted shear profiles for the two lens samples (dashed black line) and measured shear profiles (in orange) for the new model with filter U. The orange shaded region is the standard deviation on the mean from 48 sub-patches, scaled to the KiDS-1000 area. The residuals between model and simulations were tested to determine whether they can be erased when the PDF of the aperture number is fixed to the measured value from T17, but the same discrepancies were present.
Fig. 7: MCMC results for the top-hat and adapted filter using the model and the T17 simulations as our data vector and a covariance matrix calculated from ten T17 realisations, each divided into 48 sub-patches. For the adapted filter a systematic bias for σ_8 and Ω_m is found, although it cancels out for the S_8 = σ_8 √(Ω_m/0.3) parameter. The contours here are marginalised over the lens galaxy bias parameters.
Fig. 8: Shear profiles for the top-hat filter (left) and for the adapted filter (right) for the fiducial cosmology of cosmo-SLICS. The orange lines are the mean shear profiles and the orange shaded region is the expected KiDS-1000 uncertainty. The red dashed line corresponds to the original model and the black to the calibrated model.
Fig. 9: MCMC results for the adapted filter using the original and calibrated model. The data vector is calculated from the fiducial cosmology of cosmo-SLICS and a covariance matrix from 614 SLICS realisations. It is clearly seen that the calibrated model is less biased than the original one. The contours are marginalised over the lens galaxy bias parameters.
p(N_{\rm ap}|\delta_{m,U}) = \frac{1}{\sqrt{2\pi}\,S\,(N_{\rm ap}+L)} \exp\left[ -\frac{\left(\ln(N_{\rm ap}+L) - M\right)^2}{2S^2} \right] \,, \quad (A.39)
whose parameters S, M, L are fixed with the first raw moment and the second and third central moments,
\mu_1 = \langle N_{\rm ap}|\delta_{m,U}\rangle = \exp\!\left(M + \tfrac{S^2}{2}\right) - L \,, \quad (A.40)
\mu_2^{\rm c} = \langle (N_{\rm ap} - \langle N_{\rm ap}\rangle)^2|\delta_{m,U}\rangle = \exp\!\left(2M + S^2\right)\left({\rm e}^{S^2} - 1\right) \,, \quad (A.41)
\mu_3^{\rm c} = \langle (N_{\rm ap} - \langle N_{\rm ap}\rangle)^3|\delta_{m,U}\rangle = \exp\!\left(3M + \tfrac{3S^2}{2}\right)\left({\rm e}^{S^2} - 1\right)^2\left({\rm e}^{S^2} + 2\right) \,, \quad (A.42)
which follow from the derivatives of the CF through the integrals
E_n = 2\pi n_0 \int d\vartheta\, \vartheta\, \left(1 + b\,\langle w_\vartheta|\delta_{m,U}\rangle\right) U^n(\vartheta) \,.
In this way we can fix the parameters of the log-normal distribution Eq. (A.39) by use of the raw and central moments in Eqs. (A.45-A.47). The reduced skewness gives
\gamma \equiv \frac{\mu_3^{\rm c}}{(\mu_2^{\rm c})^{3/2}} = \sqrt{{\rm e}^{S^2} - 1}\,\left(2 + {\rm e}^{S^2}\right) = \sqrt{q-1}\,(2+q) \,, \quad (A.48)
where we defined in the last step q = exp(S²). Modifying γ we get
0 = q^3 + 3q^2 - 4 - \gamma^2 \,, \quad (A.49)
which always has one real solution q_0, and so the parameters follow to
S = \sqrt{\ln q_0} \,, \quad (A.50)
with M and L then fixed by Eqs. (A.40) and (A.41).
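The moment matching of Eqs. (A.48)-(A.49) can be implemented in a few lines; the sketch below recovers (S, M, L) from the first raw moment, the variance, and the third central moment, selecting the real cubic root numerically (variable and function names are our own).

```python
import numpy as np

def shifted_lognormal_params(mu1, var, mu3c):
    """Solve for (S, M, L) of a shifted log-normal from mu1, the variance and
    the third central moment: the reduced skewness gamma fixes q = exp(S^2)
    via the cubic q^3 + 3q^2 - 4 - gamma^2 = 0, cf. Eqs. (A.48)-(A.49)."""
    gamma = mu3c / var ** 1.5
    roots = np.roots([1.0, 3.0, 0.0, -4.0 - gamma ** 2])
    q0 = roots[np.argmin(np.abs(roots.imag))].real   # the (unique) real root, q0 > 1
    S = np.sqrt(np.log(q0))
    # invert var = exp(2M + S^2)(exp(S^2) - 1) and mu1 = exp(M + S^2/2) - L
    M = 0.5 * np.log(var / (q0 * (q0 - 1.0)))
    L = np.exp(M + 0.5 * S ** 2) - mu1
    return S, M, L
```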
Fig. A.1: Probability distribution of the aperture number resulting from a uniform random field smoothed with the top-hat filter of size 20′ in the upper panel and with the adapted filter U of size 120′ in the lower panel. The orange shaded region is the standard deviation determined from 48 sub-patches.
Fig. A.2: Comparison between the two approaches to calculate p(N_ap). It is clearly seen that both methods yield almost the same result.
Fig. B.3: MCMC results for the top-hat filter using the original and calibrated model. The data vector is calculated from the fiducial cosmology of cosmo-SLICS and a covariance matrix from 614 SLICS realisations. The systematic biases are likely to be statistical flukes due to the noise in the data vector. The contours are marginalised over the lens galaxy bias parameters.
Table B.1: Overview of all the different cosmological parameters for the 26 cosmo-SLICS models, which are used in Sect. 5 for the cosmological analysis.

model   Ω_m      h        w_0       σ_8      S_8
fid     0.2905   0.6898   -1.0000   0.8364   0.8231

Fig. B.1: Different filters Q resulting from the corresponding U filters shown in Fig. 3 used in this work to verify the new model. (The panel shows Q(ϑ) in arcmin^-2 against ϑ in arcmin for the adapted, Mexican, broad Mexican, and wide Mexican filters.)

Fig. B.2 (see caption above): the panels show the ratio of measured to predicted shear profiles, ⟨γ_t|Q⟩_meas / ⟨γ_t|Q⟩_pred, against ϑ for the adapted filter with z_s^low + z_l^low and z_s^high + z_l^low, for quantiles Q1, Q2, Q4 and Q5 of the bi-variate and plain log-normal models.
This assumption is not evident per se, since via mode coupling the large-scale profile of a given density perturbation may well be correlated to the shot-noise (i.e. small-scale fluctuations) of galaxy formation in the centre of that perturbation. F18 have found the approximation κ <ϑ |δ m,U , N ap ≈ κ <ϑ |δ m,U to be accurate in the Buzzard N-body simulations (DeRose et al. 2019), but a more stringent investigation of this assumption is left for future work.
http://th.nao.ac.jp/MEMBER/hamanatk/GRayTrix/ 3 These maps are freely available for download at http://cosmo. phys.hirosaki-u.ac.jp/takahasi/allsky_raytracing/
The SLICS are made publicly available on the SLICS portal at https://slics.roe.ac.uk/.
Since the impact is already quite small when adjusting p(N ap ), we are confident that also using a bi-variate approach for κ <θ |δ m,U would result in even smaller improvements as discussed in greater detail at the end of Sect. 3.2.Article number, page 9 of 20 A&A proofs: manuscript no. new_DSS_model
The calibration of the residual in the highest quantile alone led to an unbiased result.
For the remaining ∼ 200 realisations we have no corresponding lens galaxy mocks.
The predicted shear profiles do not change significantly even if the predicted P(N ap ) is substituted with the measured P(N ap )
For an Einstein-de Sitter universe µ = 5/7. Article number, page 14 of 20 Pierre Burger et al.: A revised density split statistic model for general filters
Table B.1 (extract, first three models):
1   0.3282   0.6766   -1.2376   0.6677   0.6984
2   0.1019   0.7104   -1.6154   1.3428   0.7826
3   0.2536   0.6238   -1.7698   0.6670   0.6133
Acknowledgements. We thank the anonymous referee for the very constructive and fruitful comments. This paper went through the whole KiDS review process, where we especially want to thank the KiDS-internal referee Benjamin Joachimi for his fruitful comments to improve this work. Further, we would like to thank Mike Jarvis for maintaining treecorr and Ryuichi Takahashi for making his simulation suite publicly available. PB acknowledges support by the Deutsche Forschungsgemeinschaft, project SCHN342-13. OF gratefully acknowledges support by the Kavli Foundation and the International Newton Trust through a Newton-Kavli-Junior Fellowship and by Churchill College Cambridge through a postdoctoral By-Fellowship. JHD is supported by a STFC Ernest Rutherford Fellowship (project reference ST/S004858/1). Author contributions: all authors contributed to the development and writing of this paper.

Appendix A: Detailed derivations for the new model

In this appendix we show more detailed derivations of the results than in the main text. We start with the calculation of the variances or covariances in the flat-sky approximation, continue with calculations of the third-order moments, and finish with the derivation of the PDF of the aperture number given a smoothed density contrast by use of the characteristic function.

Appendix A.1: Variance and skewness for general filters at leading order in perturbation theory

Although analytically possible, we decided against using the bi-spectrum to calculate third-order moments like the skewness. Instead we use a formalism where we calculate the second- and third-order moments of the smoothed density contrasts within a cylinder of physical radius R and physical length L using the flat-sky approximation shown in Appendix B of F18 for a top-hat filter, and apply it to our case with a general filter U. Numerically, this approach is faster since, as we will see below, it is possible to express the third-order moments in terms of second-order moments. Another advantage is that the projection is only along one dimension (the radius of the cylinder), compared to the bi-spectrum, where the projection is at least along a 2D grid. Following F18 we start by considering a cylinder of radius R and length L. In Fourier space the top-hat filter for such a cylinder is given by Eq. (A.1). The following transformations provide a more compressed expression for ⟨δ³_{U_χ,L}⟩(χ), which can then be used to verify our derivation by comparing it with the result from F18 for a top-hat filter. For this, we rewrite the Bessel functions in terms of W^th_r(q).

Appendix A.2: Limber projection

Given the moments of the smoothed density contrasts at comoving distance χ derived in the previous section, the moments in Eqs. (24, 25) and Eqs. (31-33) for k = 1, 2, or 3 follow (see e.g. Bernardeau & Valageas 2000), where q_f(χ) is the projection kernel defined in Eq. (10) and W_s(χ) the lensing efficiency defined in Eq. (14). We note that these three equations employ a Limber approximation, which consists of L → ∞ (Limber 1953), and that the physical radius r of filter U scales with χ as described below Eq. (A.3). We also note that these expectation values are independent of L.

Fig. B.4: Calibration vectors of Eq. (47), shown for the highest and lowest quantile for the adapted and top-hat filter. The corresponding redshift distributions of the lenses are given in Fig. 1, and for the sources several T17 shear grids are averaged, weighted by the n(z) given in Fig. 2.
Abbott, T. M. C., Abdalla, F. B., Alarcon, A., et al. 2018, Phys. Rev. D, 98, 043526
Abbott, T. M. C., Aguena, M., Alarcon, A., et al. 2022, Phys. Rev. D, 105, 023520
Abramowitz, M. & Stegun, I. A., eds. 1972, Applied mathematics series, Vol. 55, Handbook of mathematical functions with formulas, graphs, and mathematical tables, 10th edn. (Washington, DC: US Government Printing Office)
Arfken, G. & Weber, H. J. 2008, Mathematical methods for physicists, 6th edn. (Amsterdam, Heidelberg: Elsevier Academic Press)
Asgari, M., Lin, C.-A., Joachimi, B., et al. 2021, A&A, 645, A104
Asgari, M., Tröster, T., Heymans, C., et al. 2020, A&A, 634, A127
Barthelemy, A., Codis, S., & Bernardeau, F. 2021, MNRAS, 503, 5204
Bergé, J., Amara, A., & Réfrégier, A. 2010, ApJ, 712, 992
Bernardeau, F., Colombi, S., Gaztañaga, E., & Scoccimarro, R. 2002, Phys. Rep., 367, 1
Bernardeau, F. & Valageas, P. 2000, A&A, 364, 1
Boyle, A., Uhlemann, C., Friedrich, O., et al. 2021, MNRAS, 505, 2886
Burger, P., Schneider, P., Demchenko, V., et al. 2020, A&A, 642, A161
Chisari, N. E., Alonso, D., Krause, E., et al. 2019, ApJS, 242, 2
DeRose, J., Wechsler, R. H., Becker, M. R., et al. 2019, arXiv:1901.02401
Eisenstein, D. J. & Hu, W. 1998, ApJ, 496, 605
Euclid Collaboration, Knabenhans, M., Stadel, J., et al. 2021, MNRAS, 505, 2840
Fan, Z., Shan, H., & Liu, J. 2010, ApJ, 719, 1408
Friedrich, O., Gruen, D., DeRose, J., et al. 2018, Phys. Rev. D, 98, 023508
Fu, L., Kilbinger, M., Erben, T., et al. 2014, MNRAS, 441, 2725
Giblin, B., Heymans, C., Asgari, M., et al. 2021, A&A, 645, A105
Gruen, D., Friedrich, O., Amara, A., et al. 2016, MNRAS, 455, 3367
Gruen, D., Friedrich, O., Krause, E., et al. 2018, Phys. Rev. D, 98, 023507
Halder, A., Friedrich, O., Seitz, S., & Varga, T. N. 2021, MNRAS, 506, 2780
Hamana, T., Shirasaki, M., Miyazaki, S., et al. 2020, PASJ, 72, 16
Harnois-Déraps, J., Amon, A., Choi, A., et al. 2018, MNRAS, 481, 1337
Harnois-Déraps, J., Giblin, B., & Joachimi, B. 2019, A&A, 631, A160
Harnois-Déraps, J., Martinet, N., Castro, T., et al. 2021, MNRAS, 506, 1623
Harnois-Déraps, J., Pen, U.-L., Iliev, I. T., et al. 2013, MNRAS, 436, 540
Hartlap, J., Simon, P., & Schneider, P. 2007, A&A, 464, 399
Heitmann, K., Lawrence, E., Kwan, J., Habib, S., & Higdon, D. 2014, ApJ, 780, 111
Heymans, C., Tröster, T., Asgari, M., et al. 2021, A&A, 646, A140
Hilbert, S., Hartlap, J., & Schneider, P. 2011, A&A, 536, A85
Hinshaw, G., Larson, D., Komatsu, E., et al. 2013, ApJS, 208, 19
Jarvis, M., Bernstein, G., & Jain, B. 2004, MNRAS, 352, 338
Kilbinger, M. & Schneider, P. 2005, A&A, 442, 69
Lewis, A., Challinor, A., & Lasenby, A. 2000, ApJ, 538, 473
Limber, D. N. 1953, ApJ, 117, 134
Lin, C.-A. & Kilbinger, M. 2015, A&A, 576, A24
Mead, A. J., Tröster, T., Heymans, C., Van Waerbeke, L., & McCarthy, I. G. 2020, A&A, 641, A130
Munshi, D., McEwen, J. D., Kitching, T., et al. 2020, J. Cosmology Astropart. Phys., 2020, 043
Nishimichi, T., Takada, M., Takahashi, R., et al. 2019, ApJ, 884, 29
Papoulis, A. & Pillai, S. U. 1991, Probability, random variables, and stochastic processes, 3rd edn. (Boston: McGraw-Hill)
Pires, S., Leonard, A., & Starck, J.-L. 2012, MNRAS, 423, 983
Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A5
Pyne, S. & Joachimi, B. 2021, MNRAS, 503, 2300
Reimberg, P. & Bernardeau, F. 2018, Phys. Rev. D, 97, 023524
Schneider, P. 1996, MNRAS, 283, 837
Schneider, P. 1998, ApJ, 498, 43
Schneider, P., Ehlers, J., & Falco, E. E. 1992, Gravitational Lenses, Astronomy and Astrophysics Library (Berlin and Heidelberg: Springer)
Shan, H., Liu, X., Hildebrandt, H., et al. 2018, MNRAS, 474, 1116
Springel, V., Yoshida, N., & White, S. D. M. 2001, New A, 6, 79
Takahashi, R., Hamana, T., Shirasaki, M., et al. 2017, ApJ, 850, 24
Takahashi, R., Sato, M., Nishimichi, T., Taruya, A., & Oguri, M. 2012, ApJ, 761, 152
Vakili, M., Bilicki, M., Hoekstra, H., et al. 2019, MNRAS, 487, 3715
Wright, A. H., Hildebrandt, H., van den Busch, J. L., et al. 2020, A&A, 640, L14
| []
|
[
"Identity-based Trusted Authentication in Wireless Sensor Network",
"Identity-based Trusted Authentication in Wireless Sensor Network"
]
| [
"Yusnani Mohd Yussoff ",
"Habibah Hashim ",
"Mohd Dani Baba \nComputer Engineering Department\nUniversity Teknologi MARA\n40450Shah AlamSelangorMalaysia\n",
"\n1,2\n"
]
| [
"Computer Engineering Department\nUniversity Teknologi MARA\n40450Shah AlamSelangorMalaysia",
"1,2"
]
| []
| Secure communication mechanisms in Wireless SensorNetworks (WSNs) have been widely deployed to ensure confidentiality, authenticity and integrity of the nodes and data. Recently many WSNs applications rely on trusted communication to ensure large user acceptance. Indeed, the trusted relationship thus far can only be achieved through Trust Management System (TMS) or by adding external security chip on the WSN platform. In this study an alternative mechanism is proposed to accomplish trusted communication between sensors based on the principles defined by Trusted Computing Group (TCG). The results of other related study have also been analyzed to validate and support our findings. Finally the proposed trusted mechanism is evaluated for the potential application on resource constraint devices by quantifying their power consumption on selected major processes. The result proved the proposed scheme can establish trust in WSN with less computation and communication and most importantly eliminating the need for neighboring evaluation for TMS or relying on external security chip. | null | [
"https://arxiv.org/pdf/1207.6185v1.pdf"
]
| 9,569,872 | 1207.6185 | adce2e22df96060afc7f0a8389e5610c52f3e6b5 |
Identity-based Trusted Authentication in Wireless Sensor Network
Yusnani Mohd Yussoff
Habibah Hashim
Mohd Dani Baba
Computer Engineering Department
University Teknologi MARA
40450Shah AlamSelangorMalaysia
1,2
Identity-based Trusted Authentication in Wireless Sensor Network
TrustedSecurityAuthenticationWireless Sensor NetworkIdentity-based cryptography
Secure communication mechanisms in Wireless SensorNetworks (WSNs) have been widely deployed to ensure confidentiality, authenticity and integrity of the nodes and data. Recently many WSNs applications rely on trusted communication to ensure large user acceptance. Indeed, the trusted relationship thus far can only be achieved through Trust Management System (TMS) or by adding external security chip on the WSN platform. In this study an alternative mechanism is proposed to accomplish trusted communication between sensors based on the principles defined by Trusted Computing Group (TCG). The results of other related study have also been analyzed to validate and support our findings. Finally the proposed trusted mechanism is evaluated for the potential application on resource constraint devices by quantifying their power consumption on selected major processes. The result proved the proposed scheme can establish trust in WSN with less computation and communication and most importantly eliminating the need for neighboring evaluation for TMS or relying on external security chip.
Introduction
A Wireless Sensor Network (WSN) is a network consisting of sensor nodes, or motes, communicating wirelessly with each other. Advancements in sensor, low-power processor, and wireless communication technology have greatly contributed to the widespread use of WSN applications in contemporary living. Examples of these applications include environmental monitoring, disaster handling, traffic control and various ubiquitous convergence applications and services [1]. Low cost and the absence of cabling are two key motivations towards future WSN applications. These applications, however, demand consideration of security issues, especially those regarding node authentication, data integrity and confidentiality. Commonly, the sensor nodes are left unattended and are vulnerable to intruders. The situation becomes critical when the nodes are equipped with cryptographic material such as keys and other important data. Moreover, adversaries can introduce fake nodes similar to the nodes available in the network, which further leaves the sensor nodes as untrusted entities. Two approaches have been widely researched to ensure the validity of nodes in the network, which further confirms the need for trusted communication between nodes. The following paragraphs briefly discuss the two approaches.
TMS is one of the more widely used mechanisms for aiding WSN members (trustors) in dealing with the uncertainties in participants' (trustees') future actions [2]. It basically observes the behaviour of the nodes in the network for a certain period and calculates a trust value. However, TMS can only detect the existence of fake nodes in the network after a certain period. Hence, adversary nodes may have participated in the network and may have caused network disorders by the time TMS identifies them. Furthermore, since TMS is mathematically based, it indirectly imposes burdens on sensor nodes, such as extra processing power, memory requirements, and communication in the network.
The node's trustworthiness can also be achieved through the Trusted Platform Module (TPM) crypto-processor chip. In a recent work, Wen Hu [3,4] used the TPM hardware, which is based on a public key (PK) platform, to augment the security of the sensor nodes. It was claimed that the SecFleck architecture proposed in [3] provides internet-level PK services with reasonable energy consumption and financial overhead. Unfortunately, the drawbacks of the TPM chip, which include the extra hardware entailed on the platform and the large number of commands required to perform its functions, both contribute to higher energy utilization.
To avoid the infeasibility of deploying a TPM chip in wireless sensor nodes, this study proposes the use of the ARM1176JZF-S processor with TrustZone features as described in [5]. This paper proposes a secure mechanism to accomplish a trusted relationship between sensors in the wireless network according to TCG specifications. It first describes how the trusted platform is established, followed by the description of the trusted authentication protocol that confirms that only trusted nodes exist in the network. Finally, it presents an analysis of the energy consumption of the trusted platform and the authentication protocol.
The remainder of this paper is structured into six major parts: Section 2 addresses the current security challenges in WSNs followed with some introductory notes on trust as outlined by TCG and Identity Based Encryption (IBE) in section 3 and 4 respectively. Section 5 introduces the design for a trusted platform based on ARM1176JZF-S processor. Further, Section 5 describes the proposed IBE-Trust security framework. Then the analysis on the proposed scheme is discussed in Section 6. Finally, Section 7 concludes the paper.
Security in WSNs
Security mechanisms for WSNs can be divided into three related phases. The first phase is to secure the sensor node or the platform itself, so that the network originator can guarantee the integrity of the sensor nodes in the network. The next phase is the big challenge of securing the network infrastructure, or the wireless medium, to ensure reliable, secure and trusted communication. The final phase involves protecting the confidentiality and integrity of the data, since in wireless communication anyone can intercept the data. Hence these three components, namely the sensor node, the network infrastructure and the data, are the crucial entities that need to be protected in a wireless sensor network. They serve as the fundamental requirements in the design of a trusted wireless sensor framework. The following sub-sections present the proposed security goals and the simplified TCG specifications adopted as the basis in the design of the secured framework.
Proposed Security Goals
Acknowledging the various types of attacks on WSNs discussed in [6], the secured framework in this study provides the following security features.
Trusted Platform - The trusted platform is achieved through a chain of trust, with the image identified as "bootloader1" in the SoC ROM acting as the Root of Trust (ROT), and a secure boot process that measures the integrity of software images, applications, and components on the sensor node. The trusted platform also offers secure memory locations for sensitive credentials such as private keys.
Trusted Authentication - Verifies that a sender is a trusted user or node and will behave in a trusted manner within the network. The authentication protocol, built on identity-based cryptography, is identified as IBE-Trust. This protocol confirms the authenticity of nodes as well as the confidentiality and integrity of the exchanged messages.
TCG Specifications for Trust Establishment
According to [7], an entity can be trusted if it always behaves in the expected way for its intended functions. The basic properties of a trusted computer or system can be listed as follows:
• Isolation of programs - prevents program A from accessing the data of program B.
• Clear separation between user and supervisor processes - there should be a mechanism that prevents user applications from being interfered with by the operating system.
• Long-term protected storage - secret values are stored in a place that persists across power cycles and other events.
• Identification of the current configuration - provides the identity of the platform as well as the software and hardware executing on it.
• Verifiable report of the platform identity and current configuration - a way for other users to validate a platform.
• Hardware-based protection - protection through a combination of hardware and software.
According to the TCG definition, the basic building blocks of a trusted platform are properties, measurement, and reporting. Properties refer to values that are unique or unaltered over the life of the platform. Measurement is the process of obtaining the identity of platform functions and must begin at the ROT of the platform: the hash value of each platform component is measured before control is passed to the next process. This flow of measurements is called the chain of trust. The ROT is an entity that must be trusted and properly protected, as there is no mechanism available to measure it. Finally, reporting provides the evidence to those wishing to rely on this information and is established through a report or attestation.
Note that trust is established through two different processes: measurement and reporting (attestation). To ensure message integrity and confidentiality during the reporting process, the message is encrypted using the Identity-Based Encryption (IBE) algorithm. A brief discussion of IBE is presented in the subsequent section.
Identity-Based Encryption (IBE)
IBE was proposed by Adi Shamir in 1984, but only in 2001 did Boneh and Franklin [8] successfully implement a fully functional IBE scheme. IBE simplifies certificate-based public key encryption by deriving public keys from publicly known unique identifiers, eliminating the need for a certificate authority.
In IBE, an arbitrary string is used as a public key; it can be calculated from any string such as an e-mail address, a project name, or any other identifier. According to RFC 5408 [9], an IBE public key can be calculated by anyone who has the required public parameters, while calculating the corresponding IBE private key requires a cryptographic secret (the master key) and can therefore only be performed by a trusted server holding this secret. In a WSN, this trusted authority is the base station (BS), which must be placed in the most secure location and controlled directly by the network proprietor. In addition, the existence of a pre-deployment stage offers a better-controlled and more secure environment for the key distribution phase. This characteristic does not exist in other Public Key Cryptography (PKC) infrastructures.
Another characteristic that differentiates IBE from other server-based cryptography is that no communication with the server is required during the encryption operation: the sender only needs to know the recipient's ID to encrypt a message. Additionally, an IBE implementation consumes less memory for storing the public keys of other nodes. These factors support the use of IBC rather than conventional PKC in this implementation. Fig. 1 illustrates the conceptual difference between PKC and IBC; the four stages of a standard IBC implementation are described below.
Fig. 1 IBC and PKC standard implementation
Setup - This process is performed by a Trusted Agent (TA); in a WSN, the TA can be the BS. A security parameter k is provided as input, and the BS generates the public BF parameters (G1, GT, ê, n, P, sP, H1, H2, H3, H4) and its master key s. The parameters are pre-loaded onto all sensor nodes in the network. Interested readers can refer to the book by Luther Martin [10] for more details.
Extract - The extract process requires the public parameters and the master key produced in the setup stage. The public key associated with a sensor node ID is obtained by mapping the identity onto the elliptic curve E/F_q : y^2 = x^3 + 1 using Eq. (1). The output of the cryptographic hash function, Q_IDx, is then multiplied by the master key s to obtain the private key d_x:

Q_IDx = H_1(ID_x)    (1)
d_x = s Q_IDx    (2)
Encrypt - The inputs to this process are the common parameters, the recipient ID, and a message M ∈ M; the output is a ciphertext C ∈ C:

C = encrypt(params, ID, M)    (3)
Decrypt - The inputs to this process are the common parameters, the private key d_x, and C ∈ C; the output is the message M ∈ M:

M = decrypt(params, d_x, C)    (4)
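For orientation, the four BF-IBE operations described above can be summarised as the interface sketch below. The type and function names are illustrative assumptions; the actual implementation in this work builds on the MIRACL library and is not reproduced here.

```cpp
// Illustrative sketch of the four Boneh-Franklin IBE operations used in this
// framework. Type and function names are hypothetical; the real code is built
// on the MIRACL library (Tate pairing) as described in the text.
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

using Bytes = std::vector<std::uint8_t>;

struct PublicParams { Bytes encoded; };  // (G1, GT, e, n, P, sP, H1..H4), preloaded on nodes
struct MasterKey    { Bytes s; };        // master key s, known only to the BS
struct PrivateKey   { Bytes d_id; };     // d_x = s * Q_IDx
struct Ciphertext   { Bytes c; };

// Setup: run once by the BS; outputs the public BF parameters and master key s.
std::pair<PublicParams, MasterKey> ibe_setup(int security_parameter_k);

// Extract: Q_IDx = H1(ID_x), d_x = s * Q_IDx; run by the BS for each node identity.
PrivateKey ibe_extract(const PublicParams& params, const MasterKey& mk,
                       const std::string& id);

// Encrypt: only the recipient's ID and the public parameters are needed (Eq. 3).
Ciphertext ibe_encrypt(const PublicParams& params, const std::string& recipient_id,
                       const Bytes& message);

// Decrypt: requires the recipient's extracted private key (Eq. 4).
Bytes ibe_decrypt(const PublicParams& params, const PrivateKey& sk,
                  const Ciphertext& c);
```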
Framework of Trusted Sensor Node
This section discusses the methods used to realize the previously mentioned security features. It is divided into two major parts, identified for simplicity as the Trusted Platform and IBE-Trust.
Trusted Platform
The security provided by cryptography depends mainly on safeguarding the cryptographic keys from adversaries, so the keys must be adequately protected to ensure the confidentiality and integrity of sensitive data. This section discusses how this study exploits the ARM1176JZF-S security features to fulfil the TCG definition of a trusted platform. The ARM1176JZF-S features used to realize the basic properties of the trusted platform design are listed below; Fig. 3 correlates the TCG trust specifications with the proposed solution.
Secure world - Sensitive resources such as the encryption and decryption images are placed in secure-world memory locations. The TrustZone Address Space Controller (TZASC) is used to configure memory regions as either secure or non-secure, and all non-secure accesses to a secure region are rejected. This ensures the confidentiality of important data and images.

Single physical core - Allows safe and efficient execution of code from both the normal and the secure world. Secure monitor code is developed to switch from the normal world to the secure world and vice versa.

On-SoC RAM and ROM - Ensure that no highly sensitive data leaves the chip, thus reducing the possibility of physical attacks.

Secure boot - A process that ensures the integrity of the software images and devices on the platform and generates a management value as a unique platform identity.
IBE-Trust protocol - Confirms secure communication between the sensors and the BS. The overall process was developed on the ARM1176JZF-S development board. The code is written in assembly language to minimize memory size and speed up processing. At the time of writing, this study has successfully developed the secure boot process up to Level 2 (L2) of a total of three levels. The proposed chain of trust is best described as follows:
Level 1 (L1): The ROT, the entity that must be trusted, is located in the 16 KB on-SoC ROM of the ARM1176JZF-S processor. The integrity (I_1) of the image burned into it, the first boot-loader image, is assumed to be unmodifiable and therefore always TRUE, which assigns 1 to level 1 (L1):

ROT: Boot Loader (BL_1) [assumed trusted]; Integrity (I_1) = TRUE = 1, therefore L_1 = 1.

Level 2 (L2): Verifies the image of the second boot loader (BL_2), which resides in external storage, by measuring the hash value of the image. The reference value is predetermined and stored together with the first-level boot loader. If the integrity of the second boot loader (I_2) is verified, the BL_2 image is loaded and executed:

Hash(BL_2)' == hash value of BL_2 stored in BL_1; Integrity (I_2) = TRUE = 1, therefore L_2 = 1.
Here the prime symbol ( ' ) denotes a newly measured value. At this stage, if I_2 equals 0, the process halts; the sensor node can complete the secure boot process only if the integrity at every level is true. Once the boot succeeds, the unique value generated by the secure boot process is used to establish the trust relationship with the BS. Due to the limited register space of the ARM1176JZF-S, the secure boot design considers only eight hexadecimal characters as the comparison value. For validation, hundreds of different images were hashed with the SHA-2 algorithm, and none of the outputs produced an identical eight-character value at the chosen location; for security reasons, the location is undisclosed. The overall secure boot integrity (I) is checked using the Boolean expression in Eq. (5).
I = I_1 · I_2 · I_3 · ... · I_{N-1} · I_N    (5)
Here N denotes the number of levels in the secure boot process, i.e., the last entity in the chain of trust. The integrity checking is transitive from level 1 to 2 to 3 and so on up to N, and it does not invert: trusting entity 1 does not imply trusting entity N, while trusting entity N requires trusting entities 1 through N-1.
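A minimal sketch of the level-wise integrity check and the Boolean combination of Eq. (5) is given below. It assumes OpenSSL's one-shot SHA256() for hashing and an illustrative 4-byte (eight hexadecimal character) comparison window; the offset actually used on the board is undisclosed, so it is left as a parameter.

```cpp
// Sketch of the secure-boot integrity check (Eq. 5). Assumes OpenSSL for
// SHA-256 (link with -lcrypto); the 4-byte comparison offset is illustrative,
// since the real location is undisclosed. Requires offset + 4 <= 32.
#include <openssl/sha.h>
#include <array>
#include <cstdint>
#include <cstring>
#include <vector>

// Compare eight hex characters (4 bytes) of SHA-256(image) against the
// reference fragment stored with the previous boot-loader stage.
bool verify_stage(const std::vector<std::uint8_t>& image,
                  const std::array<std::uint8_t, 4>& reference,
                  std::size_t offset) {
    std::uint8_t digest[SHA256_DIGEST_LENGTH];
    SHA256(image.data(), image.size(), digest);
    return std::memcmp(digest + offset, reference.data(), reference.size()) == 0;
}

// I = I1 . I2 . ... . IN : the boot continues only while every level verifies.
bool secure_boot(const std::vector<bool>& level_integrity) {
    bool I = true;                     // I1 (ROT) is assumed trusted, i.e., true
    for (bool Ii : level_integrity) {  // transitive: level i is measured by i-1
        I = I && Ii;
        if (!I) break;                 // halt the boot process on the first failure
    }
    return I;
}
```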
SHA-256
SHA is a family of cryptographic hash functions used to guarantee integrity. Since the security-performance trade-off is relatively linear, two factors drove the selection of the hash algorithm: first, the algorithm must be small enough to fit into the secured location; second, it must be strong enough to resist attacks, with no known collisions [11]. This study uses the 256-bit SHA-2 variant as the hash algorithm.
Although there are several algorithms in the SHA family, SHA-2 has so far proven secure in the literature [12]. SHA-2 outputs 64 hexadecimal characters (a 256-bit hash value), which is considered sufficiently strong for the foreseeable future. Moreover, the run time necessary for a birthday attack is on the order of 2^128, so the function is currently assumed to be collision free.
Secure and Non-secure world
The TrustZone architecture of the ARM processor enables the construction of a programmable environment in which the confidentiality and integrity of almost any asset can be protected from specific attacks. In other words, the ARM processor has two different modes: the secure mode can access all system resources, while the normal mode is restricted from secure resources. The TrustZone state is controlled by the Secure Monitor Code (SMC), which handles switching between the secure and non-secure worlds. The SMC requires complex code to handle calls from a complex Real-Time Operating System (RTOS); an alternative is to specify the secure and non-secure regions in the scatter file. Sensitive processes such as SHA-2 hashing, encryption, and decryption are configured to run in the secure environment by calling the monitor switch function before the process starts. This is straightforward and sufficient for the currently proposed system.
Address Space Partitioning and Interrupts
TrustZone address spaces are divided into secure regions (accessible only from the secure world) and non-secure regions (accessible from both worlds). The TrustZone Protection Controller (TZPC) is one way to configure memory regions as secure or non-secure; however, this work defines the secure and non-secure regions in the page-table file, which was found to be simpler and less complex. The world in which the processor executes is indicated by the non-secure bit (NS-bit) in the Secure Configuration Register (CP15); a low NS-bit value indicates secure-world execution. IRQ and FIQ are the two interrupt vectors used to switch the processor into monitor mode.
The sections above have discussed the methods used to accomplish the first two basic building blocks of a trusted platform, namely properties and measurement. The following sections discuss the communication procedures for registering valid sensor nodes in the network using the unique value derived earlier, thereby fulfilling the third specification: reporting through attestation.
IBE-Trust Security Model
The typical WSN scenario adopted in the proposed framework assumes an uncontrolled environment, random node placement, and self-configuration. The network consists of several sensor nodes and a BS acting as the trusted agent. All sensor nodes communicate via bi-directional wireless links with equal transmission range. Each node has a unique, string-based, non-zero identity, and the nodes are loosely synchronized. During initial deployment, all sensors in the network report their IDs to the BS.
The four standard IBE stages have been reduced to three in this implementation: the first two stages, setup and extract, are combined, which is made possible by the proposed IBE deployment procedure. The implementation uses the Tate pairing algorithm of [13], obtained from the Shamus website [14]. The MIRACL library was then compiled into a single ARM image library and included in the executable images (ibe_gen, encrypt, and decrypt) to benchmark elliptic-curve point manipulation.
For deployment, this study defines four stages, ranging from the generation of keys and common parameters to on-line node registration. The scope of each stage is discussed in the following subsections.
Delivery phase (DP)
The DP stage is performed offline, with the intention of providing the network with complete information such as the identities of the sensor nodes, their private keys, and the BF public parameters (the master key never leaves the BS). All newly joining nodes must go through this stage, which allows the BS to maintain a list of the nodes present in the network.
Pre-Deployment (PDP)
Once configured with the necessary information, the sensor node goes through a boot-up process in a controlled environment to generate its unique management value. The generated value, together with the sensor node ID, is securely sent to the BS for later verification.
Deployment Stage (DY)
This stage takes place immediately after the node is deployed at its intended location. The sensor node boots up and goes through the secure boot process; the outcome of this stage is the same unique trust value.
Trusted Authentication (TA)
This stage registers the node's unique ID with the BS for further communication. A node that boots successfully reports its trust value to the trusted authority, in this case the BS. The BS decrypts the message and verifies the unique ID together with the trust value against its database. Upon successful authentication, the BS generates a new list containing the trusted nodes' identities (trustID). This list, which is smaller than the full trust list, is distributed to the sensors in the network to speed up verification between nodes. At this stage, the study has not yet finalized a secure method for distributing the trustID table to existing nodes in the network.
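The BS-side bookkeeping of the TA stage can be sketched as follows; the container types, names, and the 32-bit trust value are illustrative assumptions, not the actual implementation.

```cpp
// Illustrative sketch of the BS-side check during Trusted Authentication (TA):
// the decrypted (ID, trust value) pair reported by a node is compared against
// the value recorded at the pre-deployment (PDP) stage, and accepted nodes are
// added to the trustID list later distributed to the network.
#include <cstdint>
#include <map>
#include <set>
#include <string>

struct TrustRecord { std::uint32_t pdp_trust_value; };   // Hm_A recorded at PDP

class BaseStation {
    std::map<std::string, TrustRecord> registered_;      // filled at DP/PDP stages
    std::set<std::string> trust_ids_;                     // trusted node IDs (trustID)
public:
    void register_node(const std::string& id, std::uint32_t hm_pdp) {
        registered_[id] = TrustRecord{hm_pdp};
    }
    // Called after decrypting the node's TA report (ID, Hm_A').
    bool authenticate(const std::string& id, std::uint32_t hm_deploy) {
        auto it = registered_.find(id);
        if (it == registered_.end() || it->second.pdp_trust_value != hm_deploy) {
            trust_ids_.erase(id);       // failed nodes are removed from the list
            return false;
        }
        trust_ids_.insert(id);          // node joins the trusted list
        return true;
    }
    const std::set<std::string>& trust_list() const { return trust_ids_; }
};
```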
Packet format
According to the IEEE 802.15.4 radio transceiver standard, the maximum packet length is 127 bytes, and the maximum data payload for the CC2420 transceiver under the TinyOS packet format is 114 bytes. To carry the extra information needed by the IBE-Trust protocol, the maximum data payload is further reduced to 106 bytes. The IBE-Trust packet consists of a 2-byte sender ID, a 2-byte random nonce, the message payload, and a 4-byte truncated MAC (a structural sketch is given below). Fig. 4 depicts the packet format, starting with the raw message, followed by the payload data structure according to TinyOS, and finally the IBE-Trust packet format.

Trusted nodes in the network remain trusted as long as they stay in the ON state; once rebooted or shut down for any reason, a node must re-authenticate with the BS. Failure to authenticate leads to node termination, in which the node's ID is removed from the trust list. A formal analysis of the protocol will be presented in a later publication.

Because each sensor holds the trust list, subsequent communications between sensors are greatly simplified: the receiving sensor authenticates the sender based on its ID and, upon success, locally generates a session key by applying a pre-installed key derivation function (KDF) to the received value for subsequent secure communication. To make use of the installed parameters, the authenticated key exchange (AKE) is based on symmetric bilinear pairings.
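Returning to the packet format above, its layout can be sketched as the structure below. The 98-byte message field is inferred from 106 - 2 - 2 - 4 and is an assumption, since the exact message size is not stated in the text.

```cpp
// Sketch of the IBE-Trust payload carried inside a 127-byte IEEE 802.15.4
// frame (CC2420 / TinyOS). The 98-byte message field is an inference
// (106 - 2 - 2 - 4); all other sizes are taken from the text.
#include <cstddef>
#include <cstdint>

constexpr std::size_t kMaxPayload   = 106;                      // IBE-Trust payload budget
constexpr std::size_t kMessageBytes = kMaxPayload - 2 - 2 - 4;  // = 98 (inferred)

#pragma pack(push, 1)
struct IbeTrustPacket {
    std::uint16_t sender_id;               // 2-byte sender ID
    std::uint16_t nonce;                   // 2-byte random nonce
    std::uint8_t  message[kMessageBytes];  // encrypted message fragment
    std::uint8_t  mac[4];                  // 4-byte truncated MAC
};
#pragma pack(pop)

static_assert(sizeof(IbeTrustPacket) == kMaxPayload,
              "IBE-Trust payload must fit in 106 bytes");
```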
A: picks a random number r ∈ Z*_q and computes R = rQ_A, where Q_A is the public key of A, then sends R to B over the public channel using the packet format depicted in Fig. 5. This implementation is adopted from the ID-based one-pass AKE technique of [15]; the only difference lies in the authentication value: [15] authenticates using the sender's public key, whereas this work uses the sender ID.
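To illustrate that both parties obtain the same session key in this one-pass AKE, the toy simulation below replaces the elliptic-curve groups and the bilinear pairing with scalar arithmetic modulo a small prime, so that e(xP, yP) becomes x·y mod q. It is not secure and only demonstrates the derivation flow, with the shared secrets K_AB = ê((r+h)S_A, Q_B) and K_BA = ê(R + hQ_A, S_B) as given with Fig. 5; all constants and hash stand-ins are illustrative.

```cpp
// Toy numerical illustration (NOT secure) of the ID-based one-pass AKE flow:
// points xP are represented by their scalars x (mod q) and the pairing
// e(xP, yP) is replaced by x*y (mod q), which preserves the bilinearity needed
// to show that A and B derive the same key. Hashes are std::hash stand-ins.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

constexpr std::uint64_t q = 2147483647ULL;            // toy group order (prime)

std::uint64_t H(const std::string& s) {               // stand-in for H1 / H2
    return (std::hash<std::string>{}(s) % (q - 1)) + 1;
}
std::uint64_t pairing(std::uint64_t x, std::uint64_t y) {  // e(xP, yP) -> "g^(xy)"
    return (x % q) * (y % q) % q;
}

int main() {
    // Setup/extract (base station): master key s, identities mapped via H1.
    std::uint64_t s  = 123456789;                      // master key (toy value)
    std::uint64_t QA = H("node_A"), QB = H("node_B");  // Q_ID = H1(ID)
    std::uint64_t SA = s * QA % q,  SB = s * QB % q;   // private keys d = s*Q_ID

    // A -> B: pick r, send R = r*Q_A (r is a toy constant here, random in reality).
    std::uint64_t r = 987654321 % q;
    std::uint64_t R = r * QA % q;
    std::uint64_t h = H("R||node_A||node_B");          // h = H2(R, ID_A || ID_B)

    // Both sides compute the shared secret; the session key is kappa(K).
    std::uint64_t K_AB = pairing((r + h) % q * SA % q, QB);  // A: e((r+h)S_A, Q_B)
    std::uint64_t K_BA = pairing((R + h * QA) % q, SB);      // B: e(R + h*Q_A, S_B)

    std::cout << "K_AB == K_BA ? " << (K_AB == K_BA) << '\n';  // prints 1
    return 0;
}
```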
Suggested applications for the proposed scheme include health and medical monitoring, where nodes assigned to users must first register with the BS; once registered, they are free to move, and data can travel securely to the BS either directly or in a multi-hop manner.
Energy Equation
To confirm the practicability of the proposed IBE-Trust scheme in WSNs, this study also calculates the energy consumption of a newly joining sensor node. Because the ARM1176JZF-S switches between secure and non-secure modes, the switching energy is added to the energy consumption equation for more accurate values. The total energy for sending the encrypted data (the unique trust value) to the BS is calculated as in Eq. (6); all energy values are expressed in joules using Eq. (9).
E_T = E_Boot + E_SW + E_enc/bit × data(bits) + E_ta    (6)
E_ta = E_Tx × bytes transmitted + E_Rx × bytes received    (7)

Substituting (7) into (6) gives (8):

E_T = E_Boot + E_SW + E_enc × data(bits) + E_Tx × bytes transmitted + E_Rx × bytes received    (8)
E(J) = power (W) × time (s)    (9)

Table 1 describes the notations used in the proposed scheme.
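A direct transcription of Eqs. (6)-(9) is sketched below; the structure and field names are illustrative, and all quantities are assumed to be expressed in joules (or joules per bit/byte).

```cpp
// Illustrative transcription of the energy model in Eqs. (6)-(9).
// All inputs are in joules (or J/bit, J/byte), so the result is in joules.
struct EnergyModel {
    double e_boot;     // E_Boot : secure boot-up energy [J]
    double e_sw;       // E_SW   : secure/non-secure world switching energy [J]
    double e_enc_bit;  // E_enc  : encryption energy per bit [J/bit]
    double e_tx_byte;  // E_Tx   : transmit energy per byte [J/byte]
    double e_rx_byte;  // E_Rx   : receive energy per byte [J/byte]

    // Eq. (7): communication energy of the trusted-authentication exchange.
    double e_ta(double bytes_tx, double bytes_rx) const {
        return e_tx_byte * bytes_tx + e_rx_byte * bytes_rx;
    }
    // Eqs. (6)/(8): total energy for reporting the encrypted trust value to the BS.
    double e_total(double data_bits, double bytes_tx, double bytes_rx) const {
        return e_boot + e_sw + e_enc_bit * data_bits + e_ta(bytes_tx, bytes_rx);
    }
};

// Eq. (9): E [J] = power [W] * time [s], used to convert measured power draw.
inline double energy_joules(double power_watts, double time_seconds) {
    return power_watts * time_seconds;
}
```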
Analysis of the proposed scheme
This section presents the analysis of the proposed scheme.
Energy Utilization
The results were obtained on the ARM1176JZF-S development board; the processor runs at 20 mA and 3.6 V with a clock frequency of 667 MHz. Since the encryption and communication processes consume most of the energy [16], this study considers only the energy used by these processes. As part of the benchmarking, this study also compares the results with the secFleck [3] implementation, which uses a TPM chip to provide public key technology for WSNs.
The energy per bit used in the encryption process for secFleck (hardware/software) is 5.4/7030 µJ, while in this study the encryption, implemented fully in software, uses 22.5 µJ per bit. Although the energy used by the proposed scheme is higher than that of secFleck's hardware-based implementation, it requires no external crypto-processor chip on the sensor node platform.
Based on preliminary testing in this study, the world-switching process takes about 0.23 s and consumes around 16.56 mJ of energy, which is more than the energy required for the encryption process. This limits switching between normal and secure modes to important processes only. The delay can, however, be reduced in an actual implementation, where function calls to the clock and standard input-output routines can be eliminated.
The Tate pairing, as seen in Table 2, consumes the most energy due to its complex computation. Nevertheless, this study obtains a value 0.148 J lower than the result reported by Doyle et al. [17], who used an ARM7. This shows an indirect relationship between the processor specification and the sensor node lifetime, and it motivates the use of dedicated low-power processors for embedded applications.
To assess the computational cost of trusted authentication and authenticated key exchange, the energy utilization of these processes is calculated and presented. As the energy to transmit and receive is proportional to the message size, the total energy is calculated using Eq. (6). Assuming a 106-byte payload and a 21-byte header, a node performing the trusted authentication process needs to transmit an encrypted message of 280 bytes, consisting of the key file and the ciphertext, and to receive an acknowledgement packet containing the list of trusted node IDs (one time only, during node deployment).
Assuming 200 nodes, the size of the trustID list is around 400 bytes, and the total number of bytes received is around 480 bytes including packet headers ((400/106) × 127). For node-to-node authentication and key exchange, a node only needs to send a single message of 85 bytes (64 bytes of rQ_A plus a 21-byte header) to its neighbour. Table 3 lists the energy consumption based on the CC2420 transceiver used in this work. The total energy for a one-time trusted authentication process, calculated using Eq. (8), is 0.027 J; the energy of the fast Tate pairing is not included, as it can be performed offline. Assuming a node with a 1000 J battery capacity [17], the energy used by the above processes amounts to less than 1% of the budget. This is considered an acceptable cost for the one-time (or rare) distribution of trust management values needed to establish the trust relationship between the sensor nodes and the BS.
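As a rough cross-check, summing the values quoted above (and listed in Tables 2 and 3) reproduces the order of magnitude of the reported 0.027 J; the per-bit encryption and hashing contributions are left out because the exact number of encrypted bits is not stated. The snippet below only illustrates this arithmetic.

```cpp
// Plugging the measured values from Tables 2 and 3 into Eq. (8). The per-bit
// encryption (and hashing) contribution is left symbolic because the number of
// encrypted bits is not stated; the fixed terms already account for most of
// the reported ~0.027 J for one trusted-authentication round.
#include <cstdio>

int main() {
    const double e_boot = 4.24e-3;   // secure boot-up, first stage [J]
    const double e_sw   = 16.56e-3;  // one secure/normal world switch [J]
    const double e_tx   = 0.58e-3;   // transmit 319 bytes at 1.83 uJ/byte [J]
    const double e_rx   = 0.95e-3;   // receive 480 bytes at 1.98 uJ/byte [J]

    const double fixed_terms = e_boot + e_sw + e_tx + e_rx;   // ~22.3 mJ
    std::printf("fixed terms of Eq. (8): %.1f mJ (+ 22.5 uJ/bit for encryption)\n",
                fixed_terms * 1e3);
    return 0;
}
```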
Efficiency
To confirm the efficiency of the proposed scheme, a comparison of the energy utilization of user authentication schemes is given in Table 4; the data for the existing schemes are adopted from Rehana et al. [18]. The proposed mechanism clearly consumes the least energy among the schemes offering the same security features of user authentication and secure communication.
Physical Attacks
The use of the ARM1176JZF-S processor with its on-SoC memory helps protect important credentials such as the sensor node's private key. Moreover, in this scheme only part of the private key is stored in the sensor node's memory, which further protects the sensor nodes and the network. Images such as the encryption and decryption code are stored in the secured region of flash memory and are accessible only in the secure-mode environment. The impact of BSL attacks is also reduced by the secure boot process, where the integrity of loaded images is verified to prevent sensor nodes from running malicious code.
Node impersonation
Node impersonation happens when intruders manage to duplicate the unique identity that a sensor node uses during authentication. Because the unique trust value produced by the secure boot process cannot be regenerated elsewhere, the possibility of a masquerading node in the network is significantly reduced.
Typical wireless attacks
This study also confirms that the communication during trusted authentication is protected from active attacks such as message modification, replay, and false messages, through packet encryption, nonce values, and entity and data authentication. The confirmation was performed using a formal analysis method and is discussed in another paper.
Security of the proposed scheme
The security of the IBE-Trust protocol follows from the security of the full BF IBE scheme. In this scheme, the public key can be written as Q_ID = tP for some unknown t. Therefore ê(rQ_ID, sP) = ê(rtP, sP) = ê(P,P)^rst and the ciphertext is C = (rP, M ⊕ H_2(ê(P,P)^rst)). An adversary who obtains P and sP from the public parameters can compute Q_ID = tP from the receiver's identity and observe rP in the ciphertext. Moreover, if the adversary could compute ê(P,P)^rst from P, sP, rP, and tP, it would be able to recover the message M by computing M ⊕ H_2(ê(P,P)^rst) ⊕ H_2(ê(P,P)^rst) = M. Computing ê(P,P)^rst, however, amounts to solving the Bilinear Diffie-Hellman Problem (BDHP), which is very difficult [10]. This brief analysis confirms the confidentiality of each sensor node's unique trust value, which is sent to the BS encrypted with the IBE scheme. For node-to-node authentication, the proposed scheme uses the ID-based one-pass AKE. In the existing protocol of [15], authentication is established when both parties manage to generate the same shared key locally. In the proposed protocol, the beneficiary node first checks the identity of the node requesting authentication and computes the shared secret key only if the ID exists in the table provided by the BS. This provides a two-tier security mechanism and limits the expensive pairing operation to valid nodes only. The identity-based one-pass AKE is based on symmetric bilinear pairings and is secure under the hardness of the BDHP, with H_1, H_2 and κ modelled as random oracles; interested readers can find the proof in [15].
Conclusion
This paper has presented an alternative method of confirming the trustworthiness of nodes in a WSN. The proposed scheme involves designing a trusted platform and an energy-efficient authentication protocol. For the trusted platform, the ARM1176JZF-S processor and the CC2420 chip were chosen as the platform processor and transceiver, respectively; both greatly support the design of a low-energy trusted platform. Beyond low energy consumption, the proposed trusted platform fulfils the trust requirements outlined in the TCG documentation. Consequently, the proposed trust mechanism contributes to enhanced security in WSNs by reducing the probability of fake or cloned sensor nodes through a non-regenerable unique platform identity. Finally, this work opens a new research direction towards trusted sensor node platforms.
Fig. 2: Process flow of the framework.
Fig. 3: Correlation between the TCG specifications and the proposed solution.
Fig. 4: Packet structure.
Fig. 5: IBE-Trust authentication protocol.

ID-based one-pass Authenticated Key Exchange (AKE):
A → B: Snd(A.B.ID_A.R'.Na'.Mac(ID_A.R'.Na')), where h = H_2(R, ID_A||ID_B) is computed by both parties and S_A is the private key of node A, securely stored in the on-SoC ROM. Both parties A and B then compute the shared secret as K_AB = ê((r+h)S_A, Q_B) and K_BA = ê(R + hQ_A, S_B), and finally the session key is computed by A as κ(K_AB) and by B as κ(K_BA), where κ is the key derivation function.
Table 1: Notations used in the proposed scheme

Symbol    | Description
IDA       | Identifier of sensor node A
PvKA      | Private key of sensor node A
A.S       | Sender ID . Receiver ID
Hm_A'     | New trust value (DY stage)
Hm_A      | Trust value at PDP stage
Na', Nb'  | Random nonces
Ks, KA    | Public keys of the BS and of A
Snd       | Send packet
_Ks       | Packet encrypted with the Ks public key
Mac       | Hash function
Table 2: Energy consumption for major processes

Process                          | Delay (s) | Energy
Secure boot-up (1st stage only)  | 0.059     | 4.24 mJ
Encryption (C++)                 | 0.05      | 22.5 µJ/bit
SHA-2 (asm)                      | 0.05      | 3.6 mJ
Switching (asm + C)              | 0.23      | 16.56 mJ
Fast Tate pairing                | 4.05      | 0.292 J
Table 3: Communication overhead of the trusted authentication scheme (proposed work) based on the CC2420 transceiver

Process                                         | Bytes (data + header) | Energy
Trusted authentication, transmit (1.83 µJ/byte) | 319                   | 0.58 mJ
Trusted authentication, receive (1.98 µJ/byte)  | 480                   | 0.95 mJ
Key exchange, transmit (1.83 µJ/byte)           | 85                    | 0.15 mJ
Key exchange, receive (1.98 µJ/byte)            | 0                     | 0
Table 4: Energy comparison of user authentication schemes with the proposed scheme

Scheme           | Authentication scheme   | Energy cost (mJ) | Storage overhead (bytes) | Session key
RRUAN            | ECDSA                   | 106.84           | 0                        | No
DP2AC            | RSA                     | 14.05 + TE       | 10N                      | No
Rehana [18]      | IBS                     | 72.90            | 0                        | Yes
Proposed scheme  | IBE-Trust + one-way AKE | 26.9             | 2N                       | Yes

** N = number of nodes

6.3 Security Analysis
This section demonstrates how the proposed protocol can prevent typical attacks on sensor networks.
Acknowledgments
[1] K. Han, K. Kim, J. Park et al., "Efficient sensor node authentication in third generation-wireless sensor networks integrated networks," IET Communications, vol. 5, no. 12, pp. 1744-1754.
[2] J. Lopez, R. Roman, I. Agudo et al., "Trust management systems for wireless sensor networks: Best practices," Computer Communications, 2010.
[3] W. Hu, P. Corke, W. C. Shih et al., "secFleck: A public key technology platform for wireless sensor networks," Lecture Notes in Computer Science, Springer Verlag, 2009, pp. 296-311.
[4] W. Hu, H. Tan, P. Corke et al., "Toward trusted wireless sensor networks," ACM Transactions on Sensor Networks, vol. 7, no. 1, pp. 1-25, 2010.
[5] Y. M. Yussoff and H. Hashim, "Trusted wireless sensor node platform," in Proceedings of The World Congress on Engineering 2010, London, United Kingdom, 2010, pp. 774-779.
[6] W. Znaidi, M. Minier, and J.-P. Babau, "An ontology for attacks in wireless sensor networks," Institut National de Recherche en Informatique et en Automatique, Montbonnot Saint Ismier, 2008.
[7] D. Grawrock, Dynamics of a Trusted Platform, Intel Press, 2009.
[8] D. Boneh and M. Franklin, "Identity-based encryption from the Weil pairing," Advances in Cryptology - CRYPTO, vol. 2139, p. 29, 2001.
[9] L. Martin, G. Appenzeller, and M. Schertler, "RFC 5408 - Identity-based encryption architecture and supporting data structures," Network Working Group, 2009.
[10] L. Martin, Introduction to Identity-Based Encryption, Norwood: Artech House, 2008.
[11] B. Roy, W. Meier, P. Rogaway et al., "Cryptographic hash-function basics: Definitions, implications, and separations for preimage resistance, second-preimage resistance, and collision resistance," Fast Software Encryption, Lecture Notes in Computer Science, pp. 371-388, Springer Berlin/Heidelberg, 2004.
[12] V. Shoup, X. Wang, Y. Yin et al., "Finding collisions in the full SHA-1," Advances in Cryptology - CRYPTO 2005, Lecture Notes in Computer Science, pp. 17-36, Springer Berlin/Heidelberg, 2005.
[13] P. S. Barreto, S. D. Galbraith et al., "Efficient pairing computation on supersingular Abelian varieties," Designs, Codes and Cryptography, vol. 42, no. 3, pp. 239-271, 2007.
[14] M. Scott, "MIRACL - Multiprecision Integer and Rational Arithmetic C/C++ Library," Shamus Software Ltd., 2010.
[15] M. C. Gorantla, C. Boyd, and J. M. G. Nieto, "ID-based one-pass authenticated key establishment," in Australasian Information Security Conference (AISC'08), Australia, 2008, pp. 39-46.
[16] S. W. Arvinderpal, G. Nils, E. Hans et al., "Energy analysis of public-key cryptography for wireless sensor networks," 2008.
[17] B. Doyle, S. Bell, A. F. Smeaton et al., "Security considerations and key negotiation techniques for power constrained sensor networks," The Computer Journal, vol. 49, no. 4, p. 11, 2006.
[18] R. Yasmin, E. Ritter, and G. Wang, "An authentication framework for wireless sensor networks using identity-based signatures," in Proceedings of the 2010 10th IEEE International Conference on Computer and Information Technology.
| []
|
[
"Fractional Quantum Hall Effect at High Fillings in a Two-subband Electron System",
"Fractional Quantum Hall Effect at High Fillings in a Two-subband Electron System"
]
| [
"J Shabani \nDepartment of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA\n",
"Y Liu \nDepartment of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA\n",
"M Shayegan \nDepartment of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA\n"
]
| [
"Department of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA",
"Department of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA",
"Department of Electrical Engineering\nPrinceton University\n08544PrincetonNJUSA"
]
| []
| Magneto-transport measurements in a clean two-dimensional electron system confined to a wide GaAs quantum well reveal that, when the electrons occupy two electric subbands, the sequences of fractional quantum Hall states observed at high fillings (ν > 2) are distinctly different from those of a single-subband system. Notably, when the Fermi energy lies in the ground state Landau level of either of the subbands, no quantum Hall states are seen at the even-denominator ν = 5/2 and 7/2 fillings; instead the observed states are at ν = (i + p/(2p ± 1)) where i = 2, 3, and p = 1, 2, 3, and include several new states at ν = 13/5, 17/5, 18/5, and 25/7. | 10.1103/physrevlett.105.246805 | [
"https://arxiv.org/pdf/1004.0979v2.pdf"
]
| 38,239,006 | 1004.0979 | b72a332c9eb3b4d3f2fee4dbbab81e487b007f72 |
Fractional Quantum Hall Effect at High Fillings in a Two-subband Electron System
4 Nov 2010
J Shabani
Department of Electrical Engineering
Princeton University
08544PrincetonNJUSA
Y Liu
Department of Electrical Engineering
Princeton University
08544PrincetonNJUSA
M Shayegan
Department of Electrical Engineering
Princeton University
08544PrincetonNJUSA
Fractional Quantum Hall Effect at High Fillings in a Two-subband Electron System
4 Nov 2010(Dated: November 5, 2010)
Magneto-transport measurements in a clean two-dimensional electron system confined to a wide GaAs quantum well reveal that, when the electrons occupy two electric subbands, the sequences of fractional quantum Hall states observed at high fillings (ν > 2) are distinctly different from those of a single-subband system. Notably, when the Fermi energy lies in the ground state Landau level of either of the subbands, no quantum Hall states are seen at the even-denominator ν = 5/2 and 7/2 fillings; instead the observed states are at ν = (i + p/(2p ± 1)) where i = 2, 3, and p = 1, 2, 3, and include several new states at ν = 13/5, 17/5, 18/5, and 25/7.
PACS numbers:
The ground states of low-disorder two-dimensional electron systems (2DESs) at high Landau level (LL) fillings (ν > 2) have been enigmatic. Early experiments provided evidence for a unique fractional quantum Hall state (FQHS) at the even-denominator filling ν = 5/2 [1]. More recent measurements on the highest quality 2DESs have revealed a plethora of additional ground states including insulating and density-modulated phases [2][3][4][5][6][7][8]. But absent are clear sequences of odd-denominator FQHSs at ν = i + p/(2p ± 1) (where p = 1, 2, 3, ...) that are typically observed at lower fillings (i.e., when i = 0 or 1) [9]. It is believed that, in the higher LLs, the larger extent of the electron wavefunction (in the 2D plane), combined with the presence of extra nodes, leads to a modification of the (exchange-correlation) interaction effects and stabilizes the non-FQHSs at the expense of FQHSs.
Meanwhile, the origin and the stability of the FQHSs at high fillings, especially those at ν = 5/2 and 12/5, have become the focus of renewed interest since these states might obey non-Abelian statistics and be useful for topological quantum computing [10]. In particular, it has been proposed that the ν =5/2 FQHS should be particularly stable in a "thick" 2DES confined to a relatively wide quantum well (QW) [11]. In a realistic, experimentally achievable system, of course, the electrons in a wide QW typically occupy two (or more) electric subbands [12]. Here we report measurements in such a system. Figure 1 highlights our main observations. In contrast to data taken in a narrow (30 nm) GaAs QW where only one electric subband is occupied ( Fig. 1(a)), data for the wider (56 nm) well ( Figs. 1(b,c)) [13] do not exhibit even-denominator states at ν = 5/2 and 7/2. Instead, we observe FQHS sequences at ν = 2 + p/(2p ± 1) and 3+p/(2p±1), reminiscent of the usual composite Fermion (CF) sequences observed at lower ν around 1/2 and 3/2 (i.e., at ν = 0 + p/(2p ± 1) and ν = 1 + p/(2p ± 1)) [9]. The FQHSs we observe include states at ν = 7/3, 8/3, 12/5, 13/5, 10/3, 11/3, 17/5, 18/5, and 25/7, some of which have not been previously seen [6].
Our samples were grown by molecular beam epitaxy and consist of GaAs QWs bounded on each side by un-doped Al 0.24 Ga 0.76 As spacer layers and Si δ-doped layers. We studied several samples with well widths (w) ranging from 30 to 80 nm. Here we focus on data from two samples; a narrow QW (w = 30 nm) in which the electrons occupy one electric subband, and a wide (w = 56 nm) QW where two subbands are occupied. The lowtemperature mobility in our single-subband samples is in excess of ≃ 1000 m 2 /Vs, while the two-subband samples have mobilities which are typically about two to three times smaller. Since our samples were grown under very similar conditions it appears that the lower mobility in the wider samples is a consequence of the occupancy of the second subband. We used an evaporated Ti/Au front-gate and an In back-gate to change the 2DES density n and tune the charge distribution symmetry. The transport traces reported here were all measured in a dilution refrigerator at a temperature of ≃ 30 mK.
In wide QW samples, the electrons typically occupy two electric subbands, separated in energy by an amount which we denote ∆. When the QW is "balanced," i.e., the charge distribution is symmetric, the occupied subbands are the symmetric (S) and anti-symmetric (AS) states. When the QW is "imbalanced," the two occupied subbands are no longer symmetric or anti-symmetric; nevertheless, for brevity, we still refer to these as S (ground state) and AS (the excited state). In our experiments, we carefully control the electron density (n) and charge distribution symmetry in the wide QW via applying back and front gate biases [14,15]. For each pair of gate biases, we measure the occupied subband electron densities from the Fourier transforms of the lowfield (B ≤ 0.4 T) magneto-resistance oscillations. These Fourier transforms exhibit two peaks whose frequencies are directly proportional to the densities of the two occupied subbands (see, e.g., Fig. 1 in Ref. [15]). The difference between these frequencies is therefore a direct measure of ∆. Note that, at a fixed n, ∆ is smallest when the charge distribution is balanced and it increases as the QW is imbalanced. By monitoring the evolution of these frequencies as a function of n and, at a fixed n, as a function of the back-and front-gate biases, we can tune the symmetry of the charge distribution [14,15] and also precisely determine the value of ∆. Throughout this Letter, we quote the experimentally measured values of ∆. Another experimentally determined relevant parameter is the charge imbalance δn, defined as the amount of charge transferred from the back side of the QW to the front side. We note that our measured ∆ for given values of n and δn are in very good agreement with the results of our self-consistent calculations of charge distribution and energy levels in our wide QW. We show examples of such calculations for a balanced and an imbalanced charge distribution in Fig. 2(a) insets. 2(a) captures the evolution of R xx traces taken for the 56 nm-wide QW sample as the charge distribution is imbalanced and ∆ is increased. All the traces were taken at a fixed n = 2.90 × 10 11 cm −2 while the charge distribution was made increasingly more asymmetric so that more charge resided near the front interface of the QW. Traces taken for the opposite direction of imbalance, i.e., when the electrons were pushed toward the back interface, show a very similar behavior. We emphasize that the 2DES mobility decreases by less than 10% as the charge is imbalanced so changes in disorder cannot explain the evolution of FQHSs seen in Fig. 2.
The most striking features of Fig. 2(a) data are the sequences of FQHSs observed at ν = i + p/(2p ± 1) for i = 2 and 3. Also remarkable is the evolution of these FQHSs as a function of increasing the charge imbalance (and therefore ∆) at fixed n (Figs. 2(a,b) and 3), or changing the total density (Fig. 2(c)), as we discuss later in the paper. Note also the absence of FQHSs at ν = 5/2 and 7/2, typically seen in very high mobility 2DESs confined to narrower GaAs QWs (e.g., see Fig. 1(a)).
To discuss these observations, we consider two other relevant energies, the cyclotron energy (E C = eB/m * ) and the Zeeman energy (E Z = µ B |g * |B), where m * and g * are the effective mass and Lande g-factor. Assuming the GaAs band values (m * = 0.067m 0 and g * = −0.44), we have E C = 20 × B and E Z = 0.30 × B in units of K, where B is in T. In a typical (narrow) GaAs QW, ∆ > E C > E Z , so that for 2 < ν < 4 the Fermi energy (E F ) lies in the excited orbital LL of the lowest electric subband (i.e., S1; see Fig. 1(a) inset). In our wide QW, however, ∆ and E Z are both smaller than E C in the range 2 < ν < 4, so that a situation like the one shown in Fig. 1(b) ensues, where E F lies in the lowest (orbital) LLs of the two electric subbands (i.e., S0 and AS0). It is not a priori obvious whether ∆ is smaller or larger than E Z in our sample for 2 < ν < 4 since both ∆ and E Z can be re-normalized because of interaction. If we use the band value of the g-factor, E Z < ∆. However, at n = 2.90 × 10 11 cm −2 , we observe a disappearance of the integer quantum Hall state at ν = 4 when ∆ ≃ 40 K (top trace in Fig. 2(a)). Associating this disappearance with the coincidence of the AS0↓ and S1↑ LLs ( Fig. 1(b)), expected when E C = E Z + ∆, we find E Z + ∆ = 60 K at the field position of ν = 4 (B ≃ 3 T), implying that E Z and/or ∆ are enhanced compared to their low field values; such enhancements have been previously reported for 2DESs in wide GaAs QWs [16,17].
Regardless of the relative magnitudes of E Z and ∆, it is clear that for the bottom three traces of Fig. 2(a), for 2 < ν < 4, E F lies in the lowest orbital LLs of the two electric subbands (i.e., S0 and AS0, see Fig. 1(b)). We believe this is the reason why our wide QW sample does not exhibit even-denominator FQHSs at ν = 5/2 and 7/2, and instead shows FQHS sequences that are typically seen in a narrow QW at lower fillings (ν < 2) when E F also lies in the lowest orbital LL [9]. Strong evidence for this conjecture is provided in Fig. 3, where we present data at a higher density n = 3.65 × 10 11 cm −2 and as the QW is made extremely imbalanced. At low and moderate imbalances (∆ < 30 K), for 3 < ν < 4, E F lies in the AS0↓ level (see Fig. 3(d)), and the data are qualitatively similar to those in Fig. 2(b), i.e., the ν = (3 + p/(2p ± 1)) FQHSs are observed. For ∆ ≃ 42 K, the ν = 4 R xx minimum completely disappears, signaling a coincidence of the S1↑ and AS0↓ LLs at this filling (see Fig. 3(c)). As we further imbalance the QW past the coincidence (top two traces in Fig. 3(a)), E F for 3 < ν < 4 lies in the excited LL of the symmetric subband (i.e., S1↑, see Fig. 3(b)). Consistent with our conjecture, in this case there is no strong FQHS sequence at ν = (3 + p/(2p ± 1)) and instead the even-denominator ν = 7/2 state is observed, similar to the data of Fig. 1(a) for a narrow QW.
It is worth emphasizing that the FQHSs we observe cannot be viewed as simple combinations of two FQHSs (each at ν/2) in two parallel layers; this is obvious for the odd-numerator states at ν = 7/3 and 11/3, as there are no FQHSs at 7/6 or 11/6 fillings [18]. We add that, as qualitatively clear from the data of Figs. 1-3, the energy gaps for the odd-numerator FQHSs we observe are typically much larger when these states are formed in the upper LLs. For example, from the temperature dependence of the R xx minima at ν = 10/3 and 11/3 states in ∆ = 24.6 K trace of Fig. 2(b), we measure a gap of ≃ 0.8 K, clearly much larger than in the top trace where these states are barely developed [19].
Data of Figs. 2 and 3 further demonstrate that, even before the ν = 4 electron LL coincidence occurs, the ν = i + p/(2p ± 1) FQHSs exhibit a subtle evolution as ∆ is increased. In the 3 < ν < 4 range, e.g., the ν = 11/3 FQHS is always present and relatively strong (except at and past the ν = 4 coincidence), while the strengths of the 10/3 state, as well as the weaker 17/5, 18/5 and 25/7 states, critically depend on ∆. A qualitatively similar evolution is also observed when the charge distribution is kept symmetric but n is varied (Fig. 2(c)). These evolutions resemble those seen for the FQHSs in the 1 < ν < 2 range in a narrow GaAs QW as a function of spin polarization [9,20,21], or in an AlAs QW as a function of valley polarization [22]. In those cases, the observations can be explained in terms of LLs for two-component CFs which have either a spin or valley degree of freedom, and the coincidences of these LLs as the degree of CFs' spin or valley polarization is tuned.
We believe the evolutions seen in Figs. 2 and 3 likely have a similar origin. One possibility is to interpret the data in terms of two-component CFs which are formed in the AS0 LL and have a spin degree of freedom. In such a picture, the lowest (S0) LLs are completely filled and inert, so that the FQHSs in the filling range 3 < ν < 4 correspond to integer quantum Hall states of CFs with filling p, with the expression ν = 2 + (2 − p/(2p ± 1)) giving the relation between ν and p. This expression, in which the first term 2 accounts for the two S0 LLs being inert and the second term 2 takes into account particle-hole symmetry, maps the ν = 11/3 FQHS to p = 1, the ν = 10/3 and 18/5 states to p = 2, and the ν = 17/5 and 25/7 states to p = 3. The evolution of the FQHSs can then be explained as coincidences of the CF LLs, similar to what has been reported for electrons in the S0 LLs in singlesubband 2D systems [20,21,23]. Another possibility is that the evolution we observe stems from an interplay between the CFs' spin and subband degrees of freedom which leads to four-component CFs. In this scenario, all the four lowest levels (S0↑, S0↓, AS0↑, and AS0↓) would be relevant for the formation of the CFs, and the mapping of the FQHS fillings in the range 3 < ν < 4 and the corresponding CF integer fillings p is given through the expression ν = (4 − p/(2p ± 1)). This expression, which takes into account the particle-hole symmetry in a fourcomponent CF system, maps the FQHSs in the range 3 < ν < 4 to the same p as in the above two-component picture. We note that our data are qualitatively consistent with either of these CF LL pictures; we defer a detailed comparison of the data with the predictions of these models to a future communication. The results presented here demonstrate that the stability of the FQHSs in the filling range 2 < ν < 4 crucially depends on whether E F resides in the lowest or the excited orbital LLs. When E F lies in an excited LL, as is the case in the standard (narrow) QWs, the even-denominator states are stable. But if two electric subbands are occupied so that E F resides in the ground state LLs of these subbands, then the even-denominator states are absent and instead FQHSs are seen at the CF filling sequence ν = (i + p/(2p ± 1)). Our data also reveal a subtle evolution of the FQHSs in the range 2 < ν < 4 with changes in density and/or subband separation, suggesting coincidences of CF LLs.
FIG. 1: Longitudinal magneto-resistance (Rxx) traces, showing FQHSs in the range 2 < ν < 4 for: (a) a 30 nm-wide, and (b) a 56 nm-wide QW sample. The insets schematically show the positions of the spin-split LLs of the lowest (S) and the second (AS) electric subbands, as well as the position of the Fermi energy (EF) at ν = 3; the indices 0 and 1 indicate the lowest and the excited LLs, respectively. The vertical lines mark the expected field positions of various fractional fillings. (c) Hall resistance corresponding to the data of (b).
FIG. 2: (Color online) (a) and (b) Evolution of Rxx vs B traces and the FQHSs for the 56 nm-wide QW sample at a fixed density n = 2.90 × 10^11 cm^-2. The bottom (red) trace is for the balanced case (δn = 0), and the other traces are for increasingly imbalanced charge distributions. The measured subband separation (∆) for each trace is indicated on the left. Insets: Calculated charge distribution and potential (at zero magnetic field) at n = 2.90 × 10^11 cm^-2 for δn/n = 0 and δn/n = 0.16. (c) Evolution of FQHSs in the range 3 < ν < 4 with density. For each trace the charge distribution is kept symmetric; the densities are indicated on the left and the measured values of ∆ on the right.
FIG. 3: (Color online) (a) Evolution of the FQHSs in the range 3 < ν < 4 at a fixed density of n = 3.65 × 10^11 cm^-2 with charge imbalance. The bottom (red) trace is for the balanced case, and the other traces are for increasingly imbalanced charge distributions. The measured ∆ for each trace is indicated on the right. (b-d) Landau level diagrams schematically showing the position of EF for three values of ∆ as indicated.
We thank J.K. Jain and C. Toke for illuminating discussion. We acknowledge support through the NSF (DMR-0904117 and MRSEC DMR-0819860) for sample fabrication and characterization, and the DOE BES (DE-FG02-00-ER45841) for measurements.
[1] R. L. Willett et al., Phys. Rev. Lett. 59, 1776 (1987).
[2] M. P. Lilly et al., Phys. Rev. Lett. 82, 394 (1999).
[3] W. Pan et al., Phys. Rev. Lett. 83, 3530 (1999).
[4] J. P. Eisenstein et al., Phys. Rev. Lett. 88, 076801 (2002).
[5] J. S. Xia et al., Phys. Rev. Lett. 93, 176809 (2004).
[6] W. Pan et al., Phys. Rev. B 77, 075307 (2008).
[7] H. C. Choi et al., Phys. Rev. B 77, 081301(R) (2008).
[8] C. R. Dean et al., Phys. Rev. Lett. 101, 186806 (2008).
[9] J. K. Jain, Composite Fermions (Cambridge University Press, New York, 2007).
[10] C. Nayak et al., Rev. Mod. Phys. 80, 1083 (2008).
[11] M. R. Peterson, Th. Jolicoeur, and S. Das Sarma, Phys. Rev. B 78, 155308 (2008).
[12] The stability of the 5/2 state in such QWs has also been theoretically discussed [M. R. Peterson and S. Das Sarma, Phys. Rev. B 81, 165304 (2010)]. However, the relevance of this work to ours is unclear since it assumes a fully spin-polarized 2DES.
[13] The Figure 1(b) trace is for an imbalanced charge distribution with a subband separation of 24.6 K (see Fig. 2).
[14] Y. W. Suen et al., Phys. Rev. Lett. 72, 3405 (1994).
[15] J. Shabani et al., Phys. Rev. Lett. 103, 256802 (2009).
[16] K. Muraki et al., Phys. Rev. Lett. 87, 196801 (2001).
[17] V. V. Solovyev et al., Phys. Rev. B 80, 241310 (2009).
[18] A close examination of the evolution of the FQHSs we observe rules out that their origin is a magnetic-field-induced electron redistribution in the QW as discussed, e.g., in Ref. [17].
[19] Similarly, we measure a gap of ≃ 1.2 K for the 7/3 state in Fig. 1(b) compared to only ≃ 0.1 K in Fig. 1(a).
[20] R. R. Du et al., Phys. Rev. Lett. 75, 3926 (1995).
[21] K. Park and J. K. Jain, Phys. Rev. Lett. 80, 4237 (1998).
[22] M. Padmanabhan et al., Phys. Rev. B 80, 035423 (2009).
[23] Note that changes in the wavefunction shape can modify the Coulomb energy and hence tune the spin polarization of the CFs [C. Toke and J. K. Jain, unpublished; also see S. Kraus et al., Phys. Rev. Lett. 89, 266801 (2002)].
| []
|
[
"CORSIKA 8 -Towards a modern framework for the simulation of extensive air showers on behalf of the CORSIKA 8 developers",
"CORSIKA 8 -Towards a modern framework for the simulation of extensive air showers on behalf of the CORSIKA 8 developers"
]
| [
"Maximilian Reininghaus \nInstitut für Kernphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany\n\nInstitut für Experimentelle Teilchenphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany\n",
"Ralf Ulrich \nInstitut für Kernphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany\n"
]
| [
"Institut für Kernphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany",
"Institut für Experimentelle Teilchenphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany",
"Institut für Kernphysik\nInstitut für Technologie (KIT)\nKarlsruher, KarlsruheGermany"
]
| []
| Current and future challenges in astroparticle physics require novel simulation tools to achieve higher precision and more flexibility. For three decades the FORTRAN version of CORSIKA served the community in an excellent way. However, the effort to maintain and further develop this complex package is getting increasingly difficult. To overcome existing limitations, and designed as a very open platform for all particle cascade simulations in astroparticle physics, we are developing CORSIKA 8 based on modern C++ and Python concepts. Here, we give a brief status report of the project. | 10.1051/epjconf/201921002011 | [
"https://www.epj-conferences.org/articles/epjconf/pdf/2019/15/epjconf_uhecr18_02011.pdf"
]
| 86,585,648 | 1902.02822 | c3c941e2bb7841e9080de02f534209af8c448447 |
CORSIKA 8 -Towards a modern framework for the simulation of extensive air showers on behalf of the CORSIKA 8 developers
Maximilian Reininghaus
Institut für Kernphysik
Institut für Technologie (KIT)
Karlsruher, KarlsruheGermany
Institut für Experimentelle Teilchenphysik
Institut für Technologie (KIT)
Karlsruher, KarlsruheGermany
Ralf Ulrich
Institut für Kernphysik
Institut für Technologie (KIT)
Karlsruher, KarlsruheGermany
CORSIKA 8 -Towards a modern framework for the simulation of extensive air showers on behalf of the CORSIKA 8 developers
10.1051/epjconf/201921002011
Current and future challenges in astroparticle physics require novel simulation tools to achieve higher precision and more flexibility. For three decades the FORTRAN version of CORSIKA served the community in an excellent way. However, the effort to maintain and further develop this complex package is getting increasingly difficult. To overcome existing limitations, and designed as a very open platform for all particle cascade simulations in astroparticle physics, we are developing CORSIKA 8 based on modern C++ and Python concepts. Here, we give a brief status report of the project.
Introduction
CORSIKA [1,2] is the most widely used, actively maintained code for Monte Carlo air shower simulation currently available. In spite of its development having started almost 30 years ago [3], it is still frequently extended and improved, with major updates released roughly once per year consisting mostly of improvements in the various interaction models shipped with CORSIKA. Completely new features, however, are developed only rarely nowadays, also due to the complexity of the code posing a major obstacle to their implementation: Originally written and optimized to be used only in simulations for the KAS-CADE experiment [4], and therefore designed to meet the corresponding requirements, it was not intended to serve as the general purpose tool into which it has evolved. Its monolithic structure makes modifications or extensions of the code very difficult. Besides that, CORSIKA is written in FORTRAN 77, which can no longer be considered the lingua franca within the domains of high energy and astroparticle physics, and suffers from a number of restrictions, e.g. the lack of dynamic memory or object orientation, and is therefore unattractive to learn, causing a lack of qualified and motivated contributors.
Although the C++ add-ons COAST [5] or recently dynstack [6] help to remedy parts of these issues to a certain degree, there are still many wishes by users for extensions that simply cannot be accommodated for with reasonable effort. It is clearly a disadvantage to wrap modern extensions around the existing "dinosource" [7] code in comparison to fundamentally re-designing the whole framework in a consistent way.
For that reason, we reached the decision that the time has come to start a project to develop a next-generation code, with the focus on the aspects modularity, flexibility, ease of use and extensibility, efficiency, and reliability from the beginning. Of course, a key element of the new project is to keep the expertise gained and include all the lessons learned from the last decades. While the name is chosen to abide, the distinction between the legacy and next-generation CORSIKA is made through the version number, initially CORSIKA 8 for the latter. We consider CORSIKA 8 to be more of a framework for simulating particle cascades rather than an air-shower-only tool, therefore extending the applicability to wider domains of research.
To a large extent, our goals and plans are outlined in ref. [8] and their implementation is currently ongoing work. Here, we present an overview of the most important aspects of the design.
Building blocks
CORSIKA 8 is developed using modern C++ accompanied by Python tools. The main building blocks of CORSIKA 8 are displayed in fig. 1 together with their relations to each other.
Particle stack
The particle stack contains the particles in memory which are currently in the course of being propagated. In its most basic incarnation the stack provides access to the particles' four-momenta, four-positions, and particle codes, but an easy extension of additional properties like statistical weight, as necessary e.g. for thinning algorithms, is straightforward. It is envisaged to provide optional access to the history of the particle offering a much deeper insight into its "ancestor" generations than it is currently possible with the corresponding feature [9] of legacy CORSIKA. The particle stack is read from and written to by the process sequence, as well as the transport procedure. Each of these major blocks is by itself a modular system of algorithms. Basically all functionality can be replaced by alternative modules/implementation in a very straightforward way. This also means future extensions can be included easily.
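To make the stack idea concrete, the following is a minimal Python sketch of such a data structure; it is not the actual CORSIKA 8 C++ interface, and all class, field, and method names are illustrative. Optional per-particle properties (e.g. a thinning weight or history information) are kept in an extensible dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class StackParticle:
    pid: int                  # particle code (e.g. a PDG id)
    momentum: tuple           # four-momentum (E, px, py, pz)
    position: tuple           # four-position (t, x, y, z)
    extra: dict = field(default_factory=dict)  # optional extensions, e.g. {"weight": 1.0}

class ParticleStack:
    """Holds the particles that still have to be propagated."""
    def __init__(self):
        self._entries = []

    def push(self, particle):
        self._entries.append(particle)

    def pop(self):
        return self._entries.pop()   # next particle handed to the transport code

    def __len__(self):
        return len(self._entries)

# usage: the simulation ends, by construction, once the stack is empty
stack = ParticleStack()
stack.push(StackParticle(pid=2212, momentum=(1e6, 0, 0, 1e6), position=(0, 0, 0, 0)))
while len(stack):
    particle = stack.pop()
    # ... hand the particle to the process sequence, push secondaries ...
```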
Process sequence
The process sequence represents the physical processes modeled in the simulation and is composed of all the physics modules which the user chooses to enable (e.g. hadronic and electromagnetic interaction models, emission of Cherenkov light or radio). All these modules must conform to the same interfaces and are treated on the same level. We distinguish mainly between continuous processes and discrete processes (see fig. 2). While the first ones are meant to model effects which happen along the trajectory between two steps of the particle (like energy losses or Cherenkov light emission), the second type of processes typically represents interactions and decays. Furthermore, a special class of processes is reserved for the cases in which a particle transits the boundary between two media. Discrete processes need to provide interaction lengths or decay times as a function of the particle currently being propagated (implicitly having access to information about the local environment of the particle through its location). In addition, in case one of the discrete processes is chosen to be performed, that specific process can then modify the particle stack, typically by deleting the projectile from the stack and placing new secondaries of the interaction onto the stack.
Instead of providing an interaction length, continuous processes are individually required to provide a maximum step-size. This is useful, inter alia, to limit energy losses to a tolerable amount between two steps, which would otherwise invalidate the interaction length calculated previously using the particle energy at the beginning of the step. Continuous processes are provided with the current particle together with the trajectory up to its endpoint determined from a number of criteria (see below) including the abovementioned limited step-size. In contrast to the random nature of discrete processes, they are always performed.
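The split into discrete and continuous modules can be sketched as two small interfaces plus a container that selects the closest discrete candidate; this is an illustrative Python sketch under the assumptions described above, not the real CORSIKA 8 API, and all names are invented for the example.

```python
import abc
import math
import random

class DiscreteProcess(abc.ABC):
    """Interactions and decays: localized, selected stochastically."""
    @abc.abstractmethod
    def interaction_length(self, particle):
        """Mean free path (or mean decay length) for this particle."""
    @abc.abstractmethod
    def do_interaction(self, particle, stack):
        """Remove the projectile from the stack and push the secondaries."""

class ContinuousProcess(abc.ABC):
    """Energy loss, Cherenkov/radio emission, ...: applied along every step."""
    @abc.abstractmethod
    def max_step_length(self, particle):
        """Step limit keeping the continuous effect small within one step."""
    @abc.abstractmethod
    def do_continuous(self, particle, track):
        """Apply the effect along the finished piece of trajectory."""

class ProcessSequence:
    def __init__(self, discrete, continuous):
        self.discrete = list(discrete)
        self.continuous = list(continuous)

    def sample_next_discrete(self, particle):
        """Draw a candidate location for every discrete process, keep the closest."""
        best = (math.inf, None)
        for proc in self.discrete:
            length = random.expovariate(1.0 / proc.interaction_length(particle))
            best = min(best, (length, proc), key=lambda item: item[0])
        return best   # (distance, process) of the winning discrete process
```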
Environment
One of the most prominent features of CORSIKA 8 is the flexible definition of the medium and its properties in which the particles propagate. In particular, it will be possible to simulate not only pure air showers but also showers penetrating the ground and propagating further through ice, water, rock, or other media. A key premise of this endeavour is the ability to compose the environment out of several different (sub-)volumina with different physical properties. In this regard, CORSIKA 8 follows similar concepts as the well-known Geant4 toolkit [10][11][12], with the major difference perhaps being that we do not limit ourselves to homogeneous media. Figure 3 illustrates this idea together with a sketch of the current implementation. We provide very simple volumina, in the beginning only spheres and cuboids, which the user has to furnish with models of its physical properties (in the figure symbolized by the different colors) and then assemble them into the volume tree. The structure of the volume tree represents geometrical containment, i.e., volumes fully contained by a bigger volume are child nodes of the bigger volume. The root node is always the Universe volume which is equivalent to a sphere with infinite radius. By relaxing the condition of full containment, it is possible to cut the child volume along the boundary of its parent. Furthermore, it is necessary to treat cases of overlapping nodes specially in order to avoid ambiguities. We achieve this by having references to other volumes that are to be excluded from a given volume node, in the figure indicated by the dashed arrows. The tree structure allows relatively fast queries of which actual volume contains a given point.
As a second element of the environment, it is foreseen in the design to conveniently change and extend the number of physical properties represented in the medium model. As a first step, we provide interfaces for query-discrete processes continuous processes medium transition Figure 2. Discrete (blue dots) and continuous (green lines) processes during particle transport. The sampled random locations of discrete processes, which can be interactions, decays or boundary crossings (red square), determines the regime for continuous processes.
ing mass density, fractional elementary composition, and magnetic field only. As soon as physics modules e.g. for Cherenkov or radio emission requiring the index of refraction are added to CORSIKA 8, this additional property can then easily be included. For runs without these processes enabled, however, a definition will not be required.
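Putting the two environment elements together, the following Python sketch shows a volume-tree node with a containment query, an exclusion list for overlapping nodes, and a dictionary of medium properties. It is only an illustration of the concept described above, assuming spheres as the shape primitive; none of the names correspond to the actual CORSIKA 8 classes.

```python
class VolumeNode:
    """Node of the environment tree: a shape, its medium properties,
    child volumes, and references to volumes excluded from this node."""
    def __init__(self, shape, medium, children=(), excluded=()):
        self.shape = shape            # object providing contains(point)
        self.medium = medium          # e.g. {"density": ..., "composition": ..., "B_field": ...}
        self.children = list(children)
        self.excluded = list(excluded)

    def locate(self, point):
        """Return the deepest node containing `point`, or None."""
        if not self.shape.contains(point):
            return None
        if any(other.shape.contains(point) for other in self.excluded):
            return None
        for child in self.children:
            found = child.locate(point)
            if found is not None:
                return found
        return self

class Sphere:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius
    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

# the root node is the "Universe", a sphere of (effectively) infinite radius
universe = VolumeNode(Sphere((0.0, 0.0, 0.0), float("inf")), medium={"density": 0.0})
```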
Transport
At the heart of CORSIKA 8 lies the transport code which, making use of the aforementioned building blocks, propagates the particles one by one, most likely producing secondaries, until the simulation finishes -by construction as soon as no particles are left.
The first step consists of proposing a trajectory, starting at the current position of the particle and initially extending to the next point of intersection with a volume boundary. We currently restrict ourselves to linear trajectories since in that case the calculations of intersections with spheres and cuboids reduce to solving polynomial equations of at most second order. For helices, which would be a natural choice for trajectories of charged particles in slowly varying magnetic fields, already the calculation of intersections with planes requires a non-trivial numerical treatment [13].
As second step, the maximum step-size is then further limited by, depending on the environment, up to two conditions concerning the numerical accuracy of the procedure. One limit regards the accuracy of integrating the equations of motion in the magnetic field to make sure that the trajectory will not deviate too much from the true, helix-like solution. This is obviously superfluous in the absence of a magnetic field or for neutral particles. The second limit pertains to the calculation of grammage X along the trajectory within the medium with a given density distribution (x), i.e.
X = \int_{\text{trajectory}} \rho(x)\, \mathrm{d}s .   (1)
This can be done analytically exact only for very specific density distributions, e.g. a homogeneous one. For the general case, one needs to deal with either numerical integration or approximations: a suitable approach can be to approximate the density distribution in the vicinity of the starting point x 0 of the trajectory using Taylor's expansion, say to second order,
\rho(x_0 + \delta x) = \rho(x_0) + \nabla\rho|_{x_0} \cdot \delta x + \tfrac{1}{2}\, \delta x^{T} H|_{x_0}\, \delta x + \mathcal{O}(\delta x^3) ,   (2)
where H denotes the Hesse matrix of ρ and δx is a small piece along the trajectory. Then, the problem reduces to the integration of a polynomial and δx would be limited to a certain length by requiring the estimated error of the approximation to be smaller than a given value. The next step consists of randomly sampling the next location of the discrete processes. Decay points are sampled from an exponential distribution in length, whereas for interaction points the exponentially distributed variable is grammage. To determine the location of the interaction, grammage needs to be converted back to length. Hence, an accurate conversion between these two variables is required. Afterwards, continuous processes are performed along the trajectory up to either its endpoint given by the limiting conditions described above, or the interaction point of the closest discrete process, which will be performed subsequently.
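A compressed version of this step logic is sketched below in Python. It assumes, for simplicity, a locally constant density over the step (so that X = ρ·s), and all function and parameter names are illustrative rather than taken from the framework.

```python
import random

def transport_step(density, lambda_int, decay_length, max_cont_step):
    """Choose the length of the next straight step and classify its outcome.

    density       : local mass density [g/cm^3], assumed constant over the step
    lambda_int    : interaction mean free path expressed in grammage [g/cm^2]
    decay_length  : mean decay length [cm] (gamma * beta * c * tau)
    max_cont_step : smallest step limit requested by the continuous processes [cm]
    """
    grammage_to_interaction = random.expovariate(1.0 / lambda_int)   # [g/cm^2]
    length_to_interaction = grammage_to_interaction / density        # X = rho * s
    length_to_decay = random.expovariate(1.0 / decay_length)

    step = min(length_to_interaction, length_to_decay, max_cont_step)
    if step == max_cont_step:
        outcome = "continuous-only step"
    elif step == length_to_decay:
        outcome = "decay"
    else:
        outcome = "interaction"
    # continuous processes are applied over `step` in every case
    return step, outcome

print(transport_step(density=1.2e-3, lambda_int=90.0,
                     decay_length=6.0e5, max_cont_step=1.0e5))
```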
Conclusions and outlook
The development of CORSIKA 8 is in a very active stage. The project is completely open to input from the community. Any participation and collaboration will lead to a better tool for astroparticle physics for the next decades.
We are committed to provide a reliable, stable, accurate and flexible framework. The design as a framework, in contrast to a single-purpose program, makes it clear that the range of future applications could be far beyond just simulating extensive air shower cascades. It is also up to the community to define what is needed and what is scientifically useful. The inherent complexity of particle shower development in materials requires a very careful validation of each ingredient and input model, best with dedicated data. We aim to facilitate a better understanding and study of the relationship between these fundamental ingredients and the final physics observables. The first intermediate development snapshots of COR-SIKA 8 are already available on our gitlab server [14] and can be obtained freely from there. We welcome any comments or, even better, participation/discussion in further developing this project. It is our plan to have a first version suitable for limited and specialized physics studies available already in 2019.
Figure 1. Main building blocks of CORSIKA 8. The relationships and dependencies between the blocks are shown as arrows.
Figure 3. An example environment composed of different volumes with different physical properties indicated by color (left). In the implementation, these are assembled in a tree structure (right).
Acknowledgements. M.R. acknowledges support by the DFG-funded Doctoral School "Karlsruhe School of Elementary and Astroparticle Physics: Science and Technology".
J. N. Capdevielle et al., Tech. Rep. KfK-4998, Kernforschungszentrum Karlsruhe (1992), doi:10.5445/IR/270033168
D. Heck, J. Knapp, J. N. Capdevielle, G. Schatz, T. Thouw, Tech. Rep. FZKA-6019, Forschungszentrum Karlsruhe (1998), https://publikationen.bibliothek.kit.edu/270043064
H. J. Gils, D. Heck, J. Oehlschlaeger, G. Schatz, T. Thouw, A. Merkel, Comput. Phys. Commun. 56, 105 (1989)
H. O. Klages et al., Nucl. Phys. B Proc. Suppl. 52, 92 (1997)
R. Ulrich, COAST (2006), https://web.ikp.kit.edu/rulrich/coast.html
D. Baack, Tech. Rep., Technische Universität Dortmund (2016), doi:10.17877/DE290R-19158
S. P. Zwart, Science 361, 979 (2018), 1809.02600
R. Engel, D. Heck, T. Huege, T. Pierog, M. Reininghaus, F. Riehn, R. Ulrich, M. Unger, D. Veberič, Comput. Softw. Big Sci. 3, 2 (2019), 1808.08226
D. Heck, R. Engel, Tech. Rep. FZKA-7495, Forschungszentrum Karlsruhe (2009), https://publikationen.bibliothek.kit.edu/270078292
S. Agostinelli et al. (GEANT4 Collaboration), Nucl. Instrum. Meth. A 506, 250 (2003)
J. Allison et al., IEEE Trans. Nucl. Sci. 53, 270 (2006)
J. Allison et al., Nucl. Instrum. Meth. A 835, 186 (2016)
Y. Nievergelt, SIAM Rev. 38, 136 (1996)
| []
|
[
"Precise determination of the Higgs mass in supersymmetric models with vectorlike tops and the impact on naturalness in minimal GMSB",
"Precise determination of the Higgs mass in supersymmetric models with vectorlike tops and the impact on naturalness in minimal GMSB"
]
| [
"Kilian Nickel *[email protected]†[email protected] \nBethe Center for Theoretical Physics & Physikalisches Institut\nTheory Division, CERN\nUniversität Bonn\n53115, 1211Bonn, Geneva 23Germany, Switzerland\n",
"Florian Staub \nBethe Center for Theoretical Physics & Physikalisches Institut\nTheory Division, CERN\nUniversität Bonn\n53115, 1211Bonn, Geneva 23Germany, Switzerland\n"
]
| [
"Bethe Center for Theoretical Physics & Physikalisches Institut\nTheory Division, CERN\nUniversität Bonn\n53115, 1211Bonn, Geneva 23Germany, Switzerland",
"Bethe Center for Theoretical Physics & Physikalisches Institut\nTheory Division, CERN\nUniversität Bonn\n53115, 1211Bonn, Geneva 23Germany, Switzerland"
]
| []
| We present a precise analysis of the Higgs mass corrections stemming from vectorlike top partners in supersymmetric models. We reduce the theoretical uncertainty compared to previous studies in the following aspects: (i) including the one-loop threshold corrections to SM gauge and Yukawa couplings due to the presence of the new states to obtain the DR parameters entering all loop calculations, (ii) including the full momentum dependence at one-loop, and (iii) including all twoloop corrections but the ones involving g 1 and g 2 . We find that the additional threshold corrections are very important and can give the largest effect on the Higgs mass. However, we identify also parameter regions where the new two-loop effects can be more important than the ones of the MSSM and change the Higgs mass prediction by up to 10 GeV. This is for instance the case in the low tan β, small M A regime. We use these results to calculate the electroweak fine-tuning of an UV complete variant of this model. For this purpose, we add a complete 10 and 10 of SU (5) to the MSSM particle content. We embed this model in minimal Gauge Mediated Supersymmetry Breaking and calculate the electroweak fine-tuning with respect to all important parameters. It turns out that the limit on the gluino mass becomes more important for the fine-tuning than the Higgs mass measurements which is easily to satisfy in this setup. | 10.1007/jhep07(2015)139 | [
"https://arxiv.org/pdf/1505.06077v1.pdf"
]
| 2,015,676 | 1505.06077 | 6b6500f6db1ef97daca4be41cc0d9fcceb2bac6b |
Precise determination of the Higgs mass in supersymmetric models with vectorlike tops and the impact on naturalness in minimal GMSB
22 May 2015
Kilian Nickel *[email protected]†[email protected]
Bethe Center for Theoretical Physics & Physikalisches Institut
Theory Division, CERN
Universität Bonn
53115, 1211Bonn, Geneva 23Germany, Switzerland
Florian Staub
Bethe Center for Theoretical Physics & Physikalisches Institut
Theory Division, CERN
Universität Bonn
53115, 1211Bonn, Geneva 23Germany, Switzerland
Precise determination of the Higgs mass in supersymmetric models with vectorlike tops and the impact on naturalness in minimal GMSB
22 May 20151
We present a precise analysis of the Higgs mass corrections stemming from vectorlike top partners in supersymmetric models. We reduce the theoretical uncertainty compared to previous studies in the following aspects: (i) including the one-loop threshold corrections to SM gauge and Yukawa couplings due to the presence of the new states to obtain the DR parameters entering all loop calculations, (ii) including the full momentum dependence at one-loop, and (iii) including all twoloop corrections but the ones involving g 1 and g 2 . We find that the additional threshold corrections are very important and can give the largest effect on the Higgs mass. However, we identify also parameter regions where the new two-loop effects can be more important than the ones of the MSSM and change the Higgs mass prediction by up to 10 GeV. This is for instance the case in the low tan β, small M A regime. We use these results to calculate the electroweak fine-tuning of an UV complete variant of this model. For this purpose, we add a complete 10 and 10 of SU (5) to the MSSM particle content. We embed this model in minimal Gauge Mediated Supersymmetry Breaking and calculate the electroweak fine-tuning with respect to all important parameters. It turns out that the limit on the gluino mass becomes more important for the fine-tuning than the Higgs mass measurements which is easily to satisfy in this setup.
I. INTRODUCTION
The discovery of the Higgs boson with a mass of about 125 GeV [1,2] has a strong impact on the parameter range of supersymmetric (SUSY) models, especially as its mass value is turning into a precision observable with an uncertainty below 1%. In particular, in constrained versions of the Minimal Supersymmetric Standard Model (MSSM) large regions of the parameter space are not consistent with this mass range [3]. This is in particular the case for models where SUSY breaking is assumed to be transmitted from the hidden to the visible sector via gauge interactions like in minimal Gauge Mediated SUSY Breaking (GMSB). Even relaxing the predictive boundary conditions of a constrained model and considering the phenomenological MSSM with many more parameters at the SUSY scale, it is still rather difficult to find regions with the correct Higgs mass. Either, a very large mixing in the stop sector or heavy stop masses are needed to push the Higgs mass to the desired range [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. However, the large stop mixing with light stops turns out to be dangerous because of charge and colour breaking minima [19][20][21][22][23]. On the other side, very heavy stops introduce again a hierarchy problem which SUSY was supposed to solve. The question about naturalness and fine-tuning is even more pronounced in regions the small tan β region which recently gained some interest because of Higgs fits [15][16][17]: in these regions the tree-level Higgs mass is suppressed by a factor cos(2β) and even much bigger loop corrections are needed than for larger values of tan β.
A widely studied ansatz to solve this tension and to reduce the necessary fine-tuning in SUSY models is to enhance the Higgs mass already at tree-level. For this purpose models are considered which give new F - [24][25][26][27][28][29][30][31][32] or D-term contributions to the Higgs mass [33][34][35][36][37][38][39].
The fine-tuning in these models is often better by a few orders compared to the MSSM.
Alternatively, one can also consider models which give new loop-corrections due to the presence of additional large couplings to push the Higgs mass. This happens for instance in inverse-seesaw models [40,41] or models with vector-like quarks [42][43][44][45][46][47][48][49][50][51][52][53] at the one-loop level, or in models with trilinear R-parity violation at the two-loop level [54]. We are going to concentrate here on models with vectorlike tops partners. In these models, the effects on the Higgs mass have been so far just studied in the effective potential approach at one-loop.
Also a careful analysis of the threshold corrections to the standard model (SM) gauge and Yukawa couplings has been not performed to our knowledge so far. However, it is well known from the MSSM that the SUSY threshold corrections and one-loop momentum dependent effects can alter the Higgs mass by several GeV [55]. Of course, also two-loop corrections involving coloured states are crucial in the MSSM and it wouldn't be possible to reach a mass of 125 GeV without them [56][57][58][59][60][61][62][63][64][65][66][67][68][69]. As soon as the Yukawa-like interactions of the new (s)tops become large, one should expect that effects of a similar size than in the MSSM sector appear. Therefore, we make a careful analysis of all three effects: we calculate the full one-loop threshold corrections to get an accurate prediction of the running gauge and Yukawa couplings at the SUSY scale, we include the entire dependence of external momenta at the one-loop level, and we add the all two-loop corrections which are independent of electroweak gauge couplings. In this context, all calculations are performed within the SARAH [70][71][72][73][74][75] -SPheno [76,77] framework which allows for two-loop calculations in SUSY models beyond the MSSM [78,79]. The obtained precision is comparable to the standard calculations usually employed for the MSSM based on the results of Refs. [65][66][67][68][69].
Finally, we extend the particle content to have a complete 10 and 10 of SU (5) in addition to the MSSM particle content to get a model which is consistent with gauge coupling unification. This model has already been studied to some extent after embedding it in minimal supergravity or GMSB [49,80,81]. We choose here the variant where SUSY breaking is transmitted via gauge mediation and check for the first time for the fine-tuning in regions which are consistent with the Higgs measurements. We show that this gives usually a fine-tuning which can easily compete with other attempts to resurrect natural GMSB by including non-gauge interactions between the messenger particles and MSSM states [82][83][84][85][86][87][88][89][90][91][92].
This manuscript is organized as follows. We first introduce the minimal SUSY model with vectorlike top partners as well as the UV complete variant embedded in GMSB in sec. II.
In sec. III we briefly summarize the main features of the tree-level masses before we explain in detail the calculation of the one- and two-loop corrections. The numerical results are given in secs. IV and V. In sec. IV we discuss the impact of the different corrections at one- and two-loop on the SM-like Higgs mass using a SUSY scale input, before we analyse in sec. V the fine-tuning of the GMSB embedding. We conclude in sec. VI.
II. THE MSSM WITH VECTORLIKE TOPS
A. The minimal model
SF            | Spin 0              | Spin 1/2        | Generations | (U(1)_Y ⊗ SU(2)_L ⊗ SU(3)_C)
\hat{Q}       | \tilde{q}           | q               | 3           | (1/6, 2, 3)
\hat{L}       | \tilde{l}           | l               | 3           | (-1/2, 2, 1)
\hat{H}_d     | H_d                 | \tilde{H}_d     | 1           | (-1/2, 2, 1)
\hat{H}_u     | H_u                 | \tilde{H}_u     | 1           | (1/2, 2, 1)
\hat{D}       | \tilde{d}^*_R       | d^*_R           | 3           | (1/3, 1, 3)
\hat{U}       | \tilde{u}^*_R       | u^*_R           | 3           | (-2/3, 1, 3)
\hat{E}       | \tilde{e}^*_R       | e^*_R           | 3           | (1, 1, 1)
\hat{T}       | \tilde{t}'^*        | t'^*            | 1           | (-2/3, 1, 3)
\hat{\bar{T}} | \tilde{\bar{t}}'^*  | \bar{t}'^*      | 1           | (2/3, 1, 3)
We extend the particle content of the MSSM by a pair of right-handed vectorlike quark superfields \hat{T} and \hat{\bar{T}}. The particle content of the model and the naming conventions for all chiral superfields and their spin-0 as well as spin-1/2 components are summarized in Tab. II A. In addition, we have the usual vector superfields \hat{B}, \hat{W}, \hat{G} which carry the gauge bosons for U(1)_Y × SU(2)_L × SU(3)_C as well as the gauginos λ_B, λ_W, λ_G. The full superpotential for the model reads:

W = Y_e^{ij} \hat{L}_i \hat{E}_j \hat{H}_d + Y_d^{ij} \hat{Q}_i \hat{D}_j \hat{H}_d + Y_u^{ij} \hat{Q}_i \hat{U}_j \hat{H}_u + \mu \hat{H}_u \hat{H}_d + Y_{t'}^{i} \hat{Q}_i \hat{T} \hat{H}_u + M_T \hat{T} \hat{\bar{T}} + m_{t'}^{i} \hat{U}_i \hat{\bar{T}}   (1)
Here, we skipped colour and isospin indices. The Yukawa couplings Y_e, Y_d and Y_u are in general complex 3 × 3 matrices. The new interaction Y_{t'} is a vector, but we concentrate only on cases where the third component Y_{t'}^3 has non-vanishing values. To simplify the notation, we define therefore

Y_{t'}^{3} \equiv Y_{t'}   (2)

When we speak about the top-Yukawa coupling Y_t, we refer to Y_u^{33}.
The soft-SUSY breaking terms for the model are
-L = \Big( T_e^{ij} \tilde{l}_i \tilde{e}_j H_d + T_d^{ij} \tilde{q}_i \tilde{d}_j H_d + T_u^{ij} \tilde{q}_i \tilde{u}_j H_u + B_\mu H_d H_u + T_{t'}^{i} \tilde{q}_i \tilde{t}' H_u + B_T \tilde{t}' \tilde{\bar{t}}' + B_{t'}^{i} \tilde{u}_i \tilde{\bar{t}}' + \text{h.c.} \Big)
 + m^2_{u,ij} \tilde{u}_i^* \tilde{u}_j + m^2_{d,ij} \tilde{d}_i^* \tilde{d}_j + m^2_{q,ij} \tilde{q}_i^* \tilde{q}_j + m^2_{e,ij} \tilde{e}_i^* \tilde{e}_j + m^2_{l,ij} \tilde{l}_i^* \tilde{l}_j
 + m^2_{H_d} |H_d|^2 + m^2_{H_u} |H_u|^2 + m^2_{t'} |\tilde{t}'|^2 + m^2_{\bar{t}'} |\tilde{\bar{t}}'|^2 + \big( m^2_{u t'} \tilde{u}_i^* \tilde{t}' + \text{h.c.} \big)
 + \big( M_1 \lambda_B \lambda_B + M_2 \lambda_W \lambda_W + M_3 \lambda_G \lambda_G + \text{h.c.} \big)   (3)

In general, the T- and B-parameters are complex tensors of appropriate dimension, while the soft mass terms for scalars are hermitian matrices, vectors or scalars. The gaugino mass terms are complex scalars. However, we are going to neglect CP violation in the soft sector, i.e. all parameters are taken to be real. For the trilinear soft-term of Y_{t'} we use a similar short-hand notation T_{t'}^{3} \equiv T_{t'} in the following.
B. UV completion and fine-tuning
Gauge coupling unification
If we just include the right-handed top superfields, the model is not consistent with gauge coupling unification. To cure this problem, additional fields have to be added. The minimal choice is to add a pair of complete 10-plets under SU (5) which contain the states we are interested in, but also vectorlike left-handed quarks (Q ,Q ) and vector-like right-handed leptons (E ,Ē ). To generate mass terms for all components of the 10 and 10, the following extension of the superpotential is needed:
\Delta W = M_{Q'} \hat{Q}' \hat{\bar{Q}}' + M_{E'} \hat{E}' \hat{\bar{E}}' .   (4)

Here, the Q'-fields have quantum numbers (1/6, 2, 3) and (-1/6, 2, 3), while the vector-like leptons \hat{E}', \hat{\bar{E}}' carry quantum numbers (±1, 1, 1) with respect to U(1)_Y × SU(2)_L × SU(3)_C. We are going to assume that no further interactions between these additional states and the MSSM sector are present, i.e. these particles are only spectators when calculating the SUSY mass corrections. Nevertheless, because of their impact on the SUSY RGEs and also on the threshold corrections to the SM gauge couplings they can play an important role. We can see this already at the one-loop RGEs of the gauge couplings for the minimal model and the UV complete version:
\beta^{(1)}_{g_1} = \left( \tfrac{41}{5} + \tfrac{7}{5}\,\delta_{UV} \right) g_1^3   (5)
\beta^{(1)}_{g_2} = \left( 1 + 3\,\delta_{UV} \right) g_2^3   (6)
\beta^{(1)}_{g_3} = \left( -2 + 2\,\delta_{UV} \right) g_3^3 ,   (7)
where we parametrized the β function as
\beta_{g_i} \equiv \frac{1}{16\pi^2}\,\beta^{(1)}_{g_i} + \frac{1}{(16\pi^2)^2}\,\beta^{(2)}_{g_i} + \dots   (8)
For δ U V = 0 we obtain the minimal model, while δ U V = 1 describes the UV complete version.
In Fig. 1 the re-established gauge unification can be observed. The one-loop β functions of the Yukawa couplings are the same in both model variants and are given in eqs. (9)-(12). These effects will be included in our numerical analysis. Nevertheless, one can already see in Fig. 2 that the cut-off scale M_C at which the Landau pole arises, given as a function of Y_{t'}, is pushed towards higher scales in the UV complete version.
β (1) Y d = Y d 3Y † d Y d + Y † u Y u + 3Tr Y d Y † d − 16 3 g 2 3 − 3g 2 2 − 7 15 g 2 1 + Tr Y e Y † e + Y t ,i 2 Y d Y * t i 1 (9) β (1) Ye = 3Y e Y † e Y e + Y e 3Tr Y d Y † d − 3g 2 2 − 9 5 g 2 1 + Tr Y e Y † e(10)β (1) Y t ,i 1 = 3Y T u Y * u + 3Tr 3Y u Y † u + Y T d Y * d + 6 Y t Y * t − 13 15 g 2 1 − 3g 2 2 − 16 3 g 2 3 Y t ,i 1 (11) β (1) Yu = 3Y t ,i 2 Y u Y * t i 1 + Y u 3Y † u Y u + Y † d Y d + 3Tr Y u Y † u + 3 Y t Y * t − 13 15 g 2 1 − 3g 2 2 − 16 3 g 2 3(12)
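As an illustration of the gauge-sector running in eqs. (5)-(8), the following Python sketch integrates the one-loop RGEs for both δ_UV = 0 and δ_UV = 1 and prints the inverse couplings at a high scale. The starting values at M_Z are rough, GUT-normalized illustrative numbers and not the precise threshold-corrected inputs used in the paper.

```python
import math

def run_gauge_couplings(g, b, t_start, t_end, steps=2000):
    """Integrate dg_i/dt = b_i * g_i^3 / (16 pi^2), with t = ln(Q), by Euler steps."""
    g = list(g)
    dt = (t_end - t_start) / steps
    for _ in range(steps):
        g = [gi + bi * gi ** 3 / (16 * math.pi ** 2) * dt for gi, bi in zip(g, b)]
    return g

def coefficients(delta_UV):
    # one-loop coefficients from eqs. (5)-(7)
    return [41 / 5 + 7 / 5 * delta_UV, 1 + 3 * delta_UV, -2 + 2 * delta_UV]

g_MZ = [0.46, 0.65, 1.22]                      # rough GUT-normalized values at M_Z
t_MZ, t_high = math.log(91.2), math.log(2.0e16)

for delta_UV in (0, 1):
    g_high = run_gauge_couplings(g_MZ, coefficients(delta_UV), t_MZ, t_high)
    inv_alpha = [round(4 * math.pi / gi ** 2, 1) for gi in g_high]
    print(f"delta_UV = {delta_UV}:  1/alpha_i at 2e16 GeV =", inv_alpha)
```

With δ_UV = 1 the three inverse couplings come out close to each other at the high scale, while for δ_UV = 0 they stay clearly separated, reproducing the qualitative behaviour of Fig. 1.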
The additional soft-terms which appear because of the extended particle content are the following:
-\Delta L = m^2_{e'} |\tilde{e}'|^2 + m^2_{\bar{e}'} |\tilde{\bar{e}}'|^2 + m^2_{q'} |\tilde{q}'|^2 + m^2_{\bar{q}'} |\tilde{\bar{q}}'|^2 + \big( m^2_{e\tilde{e}'}\, \tilde{e}_i^* \tilde{e}' + m^2_{q\tilde{q}'}\, \tilde{q}_i^* \tilde{q}' + \text{h.c.} \big)   (13)
We can now embed the UV complete version in a constrained setup to relate the SUSY breaking parameters. We are going to choose the setup of gauge mediated SUSY breaking (GMSB) which we introduce now briefly.
Gauge mediated SUSY breaking and boundary conditions
The mediation of the SUSY breaking from the secluded to the visible sector happens in GMSB by messenger particles charged under SM gauge groups. The minimal model provides a pair of 5-plets under SU (5) which don't have any interaction with the MSSM sector but due to the gauge couplings. The necessary ingredients to break SUSY are the interaction of the messengers, called Φ,Φ, and a spurion field S described by
W = \lambda\, S\, \Phi \bar{\Phi} .   (14)
S is a gauge singlet and acquires a vacuum expectation value (VEV) along its scalar and auxiliary component due to hidden sector interactions, which we leave here unspecified
\langle S \rangle = M + \Theta^2 F .   (15)
The coupling λ of eq. (14) can be absorbed into the redefinitions of M ≡ λM and F ≡ λF .
With these conventions, we find that the fermionic components of the messengers have a mass M , while the scalars get masses
\phi_{\pm} = \frac{1}{\sqrt{2}} \left( \phi_M \pm \bar{\phi}_M \right) , \qquad m_{\pm} = \sqrt{M^2 \pm F} .   (16)
This gives the condition M 2 > F . The soft breaking masses of the MSSM fields are generated via loop diagrams involving the messenger particles. The gauginos receive masses Mλ at one-loop level while the scalar masses m 2 f are generated at the two-loop. The leading approximations for the soft breaking masses are
M_{\lambda_i}(t) = \frac{\alpha_i(t)}{4\pi}\, \Lambda_G , \qquad m^2_{\tilde{f}_i}(t) = 2 \sum_{r=1}^{3} C_r(\tilde{f})\, \frac{\alpha_r(t)^2}{16\pi^2}\, \Lambda_S^2   (17)
α i (t) = g 2 i /(4π) are the running coupling constants at the scale t and C r is the Casimir of the representation r. The SUSY soft breaking scales Λ G and Λ S depend on F and M as follows:
\Lambda_G = \frac{F}{M}\, g\!\left(\frac{F}{M^2}\right) , \qquad \Lambda_S^2 = \frac{F^2}{M^2}\, f\!\left(\frac{F}{M^2}\right)   (18)

with

g(x) \simeq 1 + \frac{x^2}{6} + \frac{x^4}{15} + \frac{x^6}{28} + \mathcal{O}(x^8) , \qquad f(x) \simeq 1 + \frac{x^2}{36} - \frac{11 x^4}{450} - \frac{319 x^6}{11760} + \mathcal{O}(x^8) .   (19)
It is convenient to define
\Lambda \equiv \frac{F}{M}   (20)
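A direct transcription of eqs. (18)-(20) into Python is given below; it is only a small helper sketch, with the series of eq. (19) truncated exactly as printed, and the function name is illustrative.

```python
def g_series(x):
    # eq. (19), truncated expansion
    return 1 + x**2 / 6 + x**4 / 15 + x**6 / 28

def f_series(x):
    return 1 + x**2 / 36 - 11 * x**4 / 450 - 319 * x**6 / 11760

def messenger_scales(F, M):
    """Return (Lambda_G, Lambda_S) for given F [GeV^2] and messenger mass M [GeV], F < M^2."""
    x = F / M**2
    Lambda_G = F / M * g_series(x)
    Lambda_S = F / M * f_series(x) ** 0.5      # from Lambda_S^2 = F^2/M^2 * f(x)
    return Lambda_G, Lambda_S

# for F << M^2 both scales reduce to Lambda = F/M
print(messenger_scales(F=1e11, M=1e7))
```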
For F << M^2 this leads to Λ_G = Λ_S = Λ. Applying the general results to our (UV complete) model, we have the following boundary conditions at the messenger scale M for the scalar soft masses

m^2_{l,jj} = m^2_{H_u} = m^2_{H_d} = \left( \tfrac{3}{10} g_1^4 + \tfrac{3}{2} g_2^4 \right) \Lambda_S^2   (21)
m^2_{q,jj} = m^2_{q'} = m^2_{\bar{q}'} = \left( \tfrac{1}{30} g_1^4 + \tfrac{3}{2} g_2^4 + \tfrac{8}{3} g_3^4 \right) \Lambda_S^2   (22)
m^2_{u,jj} = m^2_{t'} = m^2_{\bar{t}'} = \left( \tfrac{8}{15} g_1^4 + \tfrac{8}{3} g_3^4 \right) \Lambda_S^2   (23)
m^2_{e,jj} = m^2_{e'} = m^2_{\bar{e}'} = \tfrac{6}{5} g_1^4\, \Lambda_S^2   (24)
m^2_{d,jj} = \left( \tfrac{2}{15} g_1^4 + \tfrac{8}{3} g_3^4 \right) \Lambda_S^2   (25)
with j = 1, 2, 3. All off-diagonal entries stay zero at the messenger scale. For the gaugino mass terms, we have the MSSM results

M_i = g_i^2\, \Lambda_G   (26)
while all other soft-terms vanish up to two-loop
T_x = 0 , \quad x = d, u, e, t'   (27)
B_X = 0 , \quad X = Q', E', T   (28)
m^2_{u t'} = m^2_{q\tilde{q}'} = m^2_{e\tilde{e}'} = B_{t'} = 0   (29)-(30)
Furthermore, we assume that the bilinear mass terms for the vector states unify at the messenger scale
M_T = M_{Q'} = M_{E'} \equiv M_V   (31)
We make no attempt to explain the size of µ or B µ in this setup. There are several proposals how these parameters receive numerical values needed for phenomenological reasons [93][94][95].
We take it as given that one of these ideas is working and calculate the µ and B µ from the vacuum conditions. Similarly, we are also agnostic concerning the cosmological gravitino problem usually introduced in GMSB by the Gravitino LSP and possible solutions for it [96][97][98][99][100][101][102].
Thus, our full set of input parameters in this setup is
M ,\ \Lambda ,\ \tan\beta ,\ M_V ,\ Y_{t'} .   (32)

The degree of electroweak fine-tuning is quantified via the measure

\Delta_{FT} \equiv \max\, \mathrm{Abs}\, \Delta_\alpha , \qquad \Delta_\alpha \equiv \frac{\partial \ln M_Z^2}{\partial \ln \alpha} = \frac{\alpha}{M_Z^2} \frac{\partial M_Z^2}{\partial \alpha} .   (33)
In this setup, the sensitivity of the Z mass on the fundamental parameters at the UV scale is calculated. α is a set of independent parameters at this scale and ∆ −1 α gives an estimate of the accuracy to which the parameter α must be tuned to get the correct electroweak breaking scale [105]. The smaller ∆ F T , the more natural is the model under consideration.
We use the messenger scale M in GMSB as a reference scale and calculate the FT with respect to
\alpha = \{ \Lambda ,\ M_V ,\ Y_t ,\ Y_{t'} ,\ g_3 ,\ \mu ,\ B_\mu \} .   (34)
The practical calculation of the FT in our numerical calculation works as follows: we vary these parameters at the messenger scale M and run the two-loop RGEs down to the SUSY scale. At the SUSY scale, the electroweak VEVs are calculated numerically using the minimization conditions of the potential and the resulting variation in the Z mass is derived.
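The numerical evaluation of eq. (33) can be sketched as a simple central finite difference, as below. This is a generic illustration only: the function `mz_squared_of` stands in for the full RGE running plus EWSB machinery performed by SPheno and is replaced here by a toy model.

```python
import math

def fine_tuning(params, mz_squared_of, rel_step=1e-3):
    """Delta_FT = max_alpha |d ln M_Z^2 / d ln alpha| via central differences.

    params        : dict of high-scale input parameters alpha
    mz_squared_of : callable dict -> M_Z^2 after running and EWSB (placeholder)
    """
    deltas = {}
    for name, value in params.items():
        up, down = dict(params), dict(params)
        up[name] = value * (1 + rel_step)
        down[name] = value * (1 - rel_step)
        dln_mz2 = math.log(mz_squared_of(up)) - math.log(mz_squared_of(down))
        deltas[name] = abs(dln_mz2 / (2 * rel_step))   # d ln alpha ~ 2*rel_step
    return max(deltas.values()), deltas

# toy example: M_Z^2 depending quadratically on a single parameter gives Delta = 2
toy = lambda p: p["Lambda"] ** 2 * 1e-6
print(fine_tuning({"Lambda": 1.0e5}, toy))
```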
III. THE MASS SPECTRUM OF THE MINIMAL MODEL
To get a good estimate of the fine-tuning by including the Higgs constraint, it is necessary to reduce the theoretical uncertainty of the Higgs mass prediction. Our aim is to get the same uncertainty as for the MSSM, namely to consider the Higgs mass in the range
m h = (125 ± 3) GeV(35)
This precision can only be reached if a full one-loop calculation is done, and the dominant two-loop corrections are included. Since this has not been done before in literature for the considered model, we discuss our calculation of the mass spectrum, in particular of the threshold corrections and two-loop Higgs corrections, in detail.
A. Tree-level properties
When electroweak symmetry gets broken, the neutral Higgs states receive VEVs v d and v u and split in their CP even and odd components:
H_d^0 \to \frac{1}{\sqrt{2}} \left( \phi_d + i \sigma_d + v_d \right) , \qquad H_u^0 \to \frac{1}{\sqrt{2}} \left( \phi_u + i \sigma_u + v_u \right) .   (36)
We have \tan\beta = v_u / v_d and v = \sqrt{v_d^2 + v_u^2} \simeq 246 GeV. Using these conventions, the tree-level mass matrix squared for the scalar Higgs particles is the same as in the MSSM. It reads in the basis (\phi_d, \phi_u)

m_h^{2,(T)} = \begin{pmatrix} \tfrac{1}{8}\left(g_1^2 + g_2^2\right)\left(3 v_d^2 - v_u^2\right) + m_{H_d}^2 + |\mu|^2 & -\tfrac{1}{4}\left(g_1^2 + g_2^2\right) v_d v_u - B_\mu \\ -\tfrac{1}{4}\left(g_1^2 + g_2^2\right) v_d v_u - B_\mu & \tfrac{1}{8}\left(g_1^2 + g_2^2\right)\left(3 v_u^2 - v_d^2\right) + m_{H_u}^2 + |\mu|^2 \end{pmatrix}   (37)
This matrix is diagonalized by Z H :
Z^H\, m_h^2\, Z^{H,\dagger} = m_{h,\mathrm{dia}}^{2}   (38)
Two of the parameters in this matrix can be eliminated by the tadpole conditions for EWSB:
T_d \equiv \frac{\partial V}{\partial \phi_d} = -\frac{1}{2} v_u \left( B_\mu + B_\mu^* \right) + \frac{1}{8} \left( g_1^2 + g_2^2 \right) v_d \left( v_d^2 - v_u^2 \right) + v_d \left( m_{H_d}^2 + |\mu|^2 \right) = 0   (39)
T_u \equiv \frac{\partial V}{\partial \phi_u} = \frac{1}{8} \left( g_1^2 + g_2^2 \right) v_u \left( v_u^2 - v_d^2 \right) - v_d B_\mu + v_u \left( m_{H_u}^2 + |\mu|^2 \right) = 0   (40)
We are going to solve these equations for the squared soft-masses m 2 H d and m 2 Hu when we consider a SUSY scale input. That leaves three free parameters in the Higgs sector at treelevel: tan β, µ and B µ . The last one is related to the tree-level mass squared M 2 A of the physical pseudo-scalar via
B_\mu = \frac{M_A^2}{\tan\beta + 1/\tan\beta}   (41)
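For the SUSY-scale input, solving the tadpole conditions (39)-(40) for the two soft Higgs masses is a one-line exercise. The sketch below assumes real parameters and uses illustrative numerical values; here g1 denotes the hypercharge coupling in the normalization of eqs. (37)-(40).

```python
import math

def soft_higgs_masses(g1, g2, vd, vu, mu, Bmu):
    """Solve eqs. (39)-(40) for m_Hd^2 and m_Hu^2 (all parameters real)."""
    D = (g1**2 + g2**2) / 8.0 * (vd**2 - vu**2)   # D-term piece
    m2_Hd = Bmu * vu / vd - D - mu**2
    m2_Hu = Bmu * vd / vu + D - mu**2
    return m2_Hd, m2_Hu

# example with tan(beta) = 3 and v = 246 GeV
v, tanb = 246.0, 3.0
vu = v * tanb / math.sqrt(1 + tanb**2)
vd = v / math.sqrt(1 + tanb**2)
print(soft_higgs_masses(g1=0.36, g2=0.65, vd=vd, vu=vu, mu=500.0, Bmu=2.0e5))
```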
However, when we consider the UV completion, m 2 H d and m 2 Hu are fixed at the SUSY scale and we are going to solve the above equations (39) and (40) for µ and B µ . Also, the mass matrices for the CP-odd and charged Higgs bosons, for down (s)quarks, charged and neutral (s)leptons, as well as for neutralino and charginos are identical to the MSSM. Only in the up (s)quark sector things change because of the additional top-like states. The scalar mass matrix that links the left-and right-handed MSSM up-squarks and the new vector-like states is given in the basis of ũ L,i ,ũ R,i ,t ,t * by
m 2 u = mũ Lũ * L · · · 1 √ 2 v u T u − v d Y u µ * mũ Rũ * R · · 1 √ 2 v u T t − v d µ * Y t 1 2 2 M T m * t + m 2 ut + v 2 u Y * u Y t mt t * · 1 √ 2 v u M * T Y t + Y T u m * t B * t B * T mt * t .(42)
with the diagonal entries
mũ Lũ * L = − 1 24 − 3g 2 2 + g 2 1 1 − v 2 u + v 2 d + 1 2 2m 2 q + v 2 u Y * t Y t + Y † u Y u (43) mũ Rũ * R = 1 2 2 m * t m t + m 2 u + v 2 u Y u Y † u + 1 6 g 2 1 1 − v 2 u + v 2 d (44) mt t * = 1 2 2 m 2 t + |M T | 2 + v 2 u |Y t | 2 + 1 6 g 2 1 − v 2 u + v 2 d(45)mt * t = m 2 t + |M T | 2 + |m t | 2 + 1 6 g 2 1 − v 2 d + v 2 u(46)
This matrix is diagonalized by Z U :
Z^U\, m_{\tilde{u}}^2\, Z^{U,\dagger} = m_{\tilde{u},\mathrm{dia}}^{2}   (47)
and we have eight mass eigenstates calledũ i in the following. Similarly, in the fermionic counterpart we choose the basis (u L,i ,t * ) / u * R,i , t * β 2 . The mass matrix in this basis reads
m_u = \begin{pmatrix} \frac{1}{\sqrt{2}} v_u Y_u^T & \frac{1}{\sqrt{2}} v_u Y_{t'} \\ m_{t'} & M_T \end{pmatrix} .   (48)
Here, we need two rotation matrices U u L and U u R to diagonalize this matrix,
U_L^{u,*}\, m_u\, U_R^{u,\dagger} = m_u^{\mathrm{dia}} .   (49)
The four generations of mass eigenstates are called u_i where the first three generations correspond to the up, charm and top quark.

2. The calculation of the one-loop corrections to the masses and mixings is again a generalization of the renormalization procedure presented in Ref. [55]. We explain this calculation and the difference to the MSSM in more detail in sec. III B 2.
3. At the two-loop level, new corrections O(α t (α S + α t + α b + α t )) arise. The importance of these corrections was unknown up to now. However, with the generic results of Ref. [106] for the two-loop effective potential implemented into SARAH [78], a numerical derivation in analogy to Ref. [107] allows to obtain the two-loop self-energies at vanishing external momentum for the scalars which get a VEV. Moreover, since
Ref. [79], a fully equivalent and diagrammatic calculation in the limit p 2 = 0 can also be performed by SARAH and SPheno. Both approaches are used to cross-check the two-loop results. We give more details about this calculation in sec. III B 3.
Threshold corrections
The presence of additional vectorlike states change the relations between the running DR parameters and the measured SM parameters. In the gauge sector, the relation between the SM couplings (MS scheme with five flavours) and the DR ones are
\alpha^{\overline{DR}}(M_Z) = \frac{\alpha^{(5),\overline{MS}}(M_Z)}{1 - \Delta\alpha(M_Z)} , \qquad (50) \qquad \alpha_S^{\overline{DR}}(M_Z) = \frac{\alpha_S^{(5),\overline{MS}}(M_Z)}{1 - \Delta\alpha_S(M_Z)}   (51)
Here, \alpha^{(5),\overline{MS}}(M_Z) and \alpha_S^{(5),\overline{MS}}(M_Z) are the couplings of the SM with five active flavours, and the shifts read

\Delta\alpha(\mu) = \frac{\alpha}{2\pi} \left[ \frac{1}{3} - \frac{16}{9} \sum_{i=3}^{4} \log\frac{m_{u_i}}{\mu} - \frac{4}{9} \sum_{i=1}^{8} \log\frac{m_{\tilde{u}_i}}{\mu} \right] + \Delta\alpha^{\rm MSSM}(\mu)   (52)
\Delta\alpha_S(\mu) = \frac{\alpha_S}{2\pi} \left[ -\frac{2}{3} \sum_{i=3}^{4} \log\frac{m_{u_i}}{\mu} - \frac{1}{6} \sum_{i=1}^{8} \log\frac{m_{\tilde{u}_i}}{\mu} \right] + \Delta\alpha_S^{\rm MSSM}(\mu)   (53)
We absorbed all corrections which don't change with respect to the MSSM in \Delta\alpha_S^{\rm MSSM}(\mu) and \Delta\alpha^{\rm MSSM}(\mu). Note that this does not include the up-squark sector, now consisting of 8 squarks, to prevent double counting. In the case of the UV complete model, additional terms of the same form show up.
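The new logarithmic pieces of eq. (53) (and, analogously, eq. (52)) are simple enough to evaluate directly; the Python sketch below does so for illustrative mass values, with the remaining MSSM-like contribution left as a placeholder argument rather than computed.

```python
import math

def delta_alpha_s(alpha_s, mu_scale, m_quarks, m_squarks, delta_mssm=0.0):
    """New heavy-(s)quark contributions to Delta alpha_S of eq. (53).

    m_quarks  : masses of the heavy up-type quarks in the sum (m_t, m_t')
    m_squarks : masses of the 8 up-type squarks
    delta_mssm: remaining MSSM-like pieces (placeholder, not computed here)
    """
    logs = (-2.0 / 3.0 * sum(math.log(m / mu_scale) for m in m_quarks)
            - 1.0 / 6.0 * sum(math.log(m / mu_scale) for m in m_squarks))
    return alpha_s / (2 * math.pi) * logs + delta_mssm

d = delta_alpha_s(alpha_s=0.118, mu_scale=91.2,
                  m_quarks=[173.0, 1000.0], m_squarks=[1500.0] * 8)
print("Delta alpha_S =", round(d, 4), " -> alpha_S^DR =", round(0.118 / (1 - d), 4))
```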
To relate α to the running couplings g 1 and g 2 , the running Weinberg angle sin Θ and the electroweak VEV in DR scheme are needed. Also here the vector-like tops enter because of the new loop corrections to the mass shifts δM 2 Z and δM 2 W of the gauge bosons. The corrections from the extended (s)top sector to the transversal self-energies are
∆Π T,Z (p 2 ) = +3 8 a=1 A 0 m 2 ua Γ Z,Z,ũ * a ,ũa − 12 8 a=1 8 b=1 |Γ Z,ũ * a ,ũ b | 2 B 00 p 2 , m 2 ua , m 2 u b + 3 4 a=1 4 b=1 |Γ L Z,ūa,u b | 2 + |Γ R Z,ūa,u b | 2 H 0 p 2 , m 2 ua , m 2 u b + 4B 0 p 2 , m 2 ua , m 2 u b m ua m u b Γ L * Z,ūa,u b Γ R Z,ūa,u b (54) ∆Π W,T (p 2 ) = −12 8 a=1 6 b=1 |Γ W + ,ũ * a ,d b | 2 B 00 p 2 , m 2 d b , m 2 ua + 3 8 a=1 A 0 m 2 ua Γ W − ,W + ,ũ * a ,ũa + 3 4 a=1 3 b=1 |Γ L W + ,ūa,d b | 2 + |Γ R W + ,ūa,d b | 2 H 0 p 2 , m 2 ua , m 2 d b + 4B 0 p 2 , m 2 ua , m 2 d b m d b m ua Γ L * W + ,ūa,d b Γ R W + ,ūa,d b(55)
with
H_0(p, m_1, m_2) = 4 B_{22}(p, m_1, m_2) + G_0(p, m_1, m_2) ,   (56)
G_0(p, m_1, m_2) = (p^2 - m_1^2 - m_2^2)\, B_0(p, m_1, m_2) - A_0(m_1) - A_0(m_2) ,   (57)
B_{22}(p, m_1, m_2) = \frac{1}{6} \Bigg\{ \frac{1}{2} \big[ A_0(m_1) + A_0(m_2) \big] + \left( m_1^2 + m_2^2 - \frac{1}{2} p^2 \right) B_0(p, m_1, m_2) + \frac{m_2^2 - m_1^2}{2 p^2} \Big[ A_0(m_2) - A_0(m_1) - (m_2^2 - m_1^2)\, B_0(p, m_1, m_2) \Big] + m_1^2 + m_2^2 - \frac{1}{3} p^2 \Bigg\} .   (58)
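The combinations (56)-(58) are built from the standard one-loop functions A_0 and B_0. The following Python sketch implements them numerically (finite parts only, renormalization scale Q), evaluating B_0 by a simple Feynman-parameter quadrature; it is meant as an illustration of the definitions above, not as the SPheno implementation.

```python
import math

def A0(m2, Q2):
    return 0.0 if m2 == 0.0 else m2 * (1.0 - math.log(m2 / Q2))

def B0(p2, m12, m22, Q2, n=2000):
    """Finite part via -int_0^1 dx ln((x*m1^2 + (1-x)*m2^2 - x(1-x)*p^2)/Q^2)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        arg = x * m12 + (1 - x) * m22 - x * (1 - x) * p2
        total += math.log(abs(arg) / Q2)      # |..| : absorptive part ignored here
    return -total / n

def G0(p2, m12, m22, Q2):
    return (p2 - m12 - m22) * B0(p2, m12, m22, Q2) - A0(m12, Q2) - A0(m22, Q2)

def B22(p2, m12, m22, Q2):
    b0 = B0(p2, m12, m22, Q2)
    a1, a2 = A0(m12, Q2), A0(m22, Q2)
    return (0.5 * (a1 + a2) + (m12 + m22 - 0.5 * p2) * b0
            + (m22 - m12) / (2 * p2) * (a2 - a1 - (m22 - m12) * b0)
            + m12 + m22 - p2 / 3.0) / 6.0

def H0(p2, m12, m22, Q2):
    return 4.0 * B22(p2, m12, m22, Q2) + G0(p2, m12, m22, Q2)

print(H0(p2=91.2**2, m12=500.0**2, m22=700.0**2, Q2=1000.0**2))
```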
The appearing vertices are given in appendix A 1. All other contributions are identical to the MSSM and given for instance in Ref. [55]. With that information, v and sin 2 Θ DR W are calculated by
v^2 = \frac{\left( M_Z^2 + \delta M_Z^2 \right) \left( 1 - \sin^2\Theta_W^{\overline{DR}} \right) \sin^2\Theta_W^{\overline{DR}}}{\pi\, \alpha^{\overline{DR}}}   (59)
\sin^2\Theta_W^{\overline{DR}} = \frac{1}{2} - \sqrt{ \frac{1}{4} - \frac{\pi\, \alpha^{\overline{DR}}}{\sqrt{2}\, M_Z^2\, G_F\, (1 - \delta_r)} }   (60)
Here, G F is the Fermi constant and δ r doesn't receive new corrections compared to the MSSM (Expressions for δ r can be found in [108]). Also here the spectator fields in the UV complete version will show up in a similar way because their contributions don't vanish even in the limit that all superpotential and soft-breaking interactions of those are assumed to vanish.
The running Yukawa couplings are also calculated in an iterative way. We concentrate on the quark sector, because the leptons don't get new contributions from the new vector-like quarks at one-loop. This is also true for the UV complete model because these contributions are proportional to the superpotential interactions which we assume to vanish for the E and Q states. The starting point are the running fermion masses in DR obtained from the pole masses given as input:
m DR,SM d,s,b =m d,s,b 1 − α DR S 3π − 23α DR,2 S 72π 2 + 3 128π 2 g DR,2 2 − 13 1152π 2 g DR,2 1 (61) m DR,SM u,c =m u,c 1 − α DR S 3π − 23α DR,2 S 72π 2 + 3 128π 2 g DR,2 2 − 7 1152π 2 g DR,2 1 (62) m DR,SM t =m t 1 + 1 16π 2 ∆m (1),qcd t + ∆m (2),qcd t + ∆m (1),ew t (63) with ∆m (1),qcd t = − 16πα DR S 3 5 + 3 log M 2 Z m 2 t (64) ∆m (2),qcd t = − 64π 2 α DR,2 S 3 1 24 + 2011 384π 2 + ln 2 12 − ζ(3) 8π 2 + 123 32π 2 log M 2 Z m 2 t + 33 32π 2 log M 2 Z m 2 t 2 (65) ∆m (1),ew t = − 4 9 g DR,2 2 sin 2 Θ DR W 5 + 3 log M 2 Z m 2 t(66)
The two-loop parts are taken from Ref. [109,110]. The DR masses are matched to the eigenvalues of the loop-corrected fermion mass matrices calculated as
m_f^{(1L)}(p_i^2) = m_f^{(T)} - \Sigma_S(p_i^2) - \Sigma_R(p_i^2)\, m_f^{(T)} - m_f^{(T)}\, \Sigma_L(p_i^2)   (67)
Here, the pure QCD and QED corrections are dropped in the self-energiesΣ because they are already absorbed in the running DR masses. The self-energy contributions from the extended (s)top sector to down-quarks are
Σ d,S i,j (p 2 ) = 2 a=1 4 b=1 B 0 p 2 , m 2 u b , m 2 H − a Γ L * d j ,H − a ,u b m u b Γ Ř d i ,H − a ,u b + 8 a=1 2 b=1 B 0 p 2 , m 2 χ − b , m 2 ua Γ L * d j ,ũa,χ − b mχ− b Γ Ř d i ,ũa,χ − b − 4 4 b=1 + B 0 p 2 , m 2 u b , m 2 W − Γ R * d j ,W − ,u b m u b Γ Ľ d i ,W − ,u b (68) Σ d,R i,j (p 2 ) = − 1 2 2 a=1 4 b=1 B 1 p 2 , m 2 u b , m 2 H − a Γ R * d j ,H − a ,u b Γ Ř d i ,H − a ,u b − 1 2 8 a=1 2 b=1 B 1 p 2 , m 2 χ − b , m 2 ua Γ R * d j ,ũa,χ − b Γ Ř d i ,ũa,χ − b − 4 b=1 B 1 p 2 , m 2 u b , m 2 W − Γ L * d j ,W − ,u b Γ Ľ d i ,W − ,u b (69) Σ d,L i,j (p 2 ) = Σ d,R i,j (p 2 ) (L↔R)(70)
The full self-energies in the up-quark sector read now
Σ u,S i,j (p 2 ) = B 0 p 2 , m 2 d b , m 2 H − a Γ L * u j ,H + a ,d b m d b Γ Ř u i ,H + a ,d b + B 0 p 2 , m 2 u b , m 2 ha Γ L * u j ,ha,u b m u b Γ Ř u i ,ha,u b + mχ− a B 0 p 2 , m 2 χ − a , m 2 d b Γ L * u j ,χ − a ,d b Γ Ř u i ,χ − a ,d b + m ua B 0 p 2 , m 2 ua , m 2 A 0 b Γ L * u j ,ua,A 0 b Γ Ř u i ,ua,A 0 b + B 0 p 2 , m 2 χ 0 b , m 2 ua Γ L * u j ,ũa,χ 0 b mχ0 b Γ Ř u i ,ũa,χ 0 b + 4 3 mgB 0 p 2 , m 2 g , m 2 ua Γ L * u j ,ũa,g 1 Γ Ř u i ,ũa,g 1 − 4 B 0 p 2 , m 2 d b , m 2 W − Γ R * u j ,W + ,d b m d b Γ Ľ u i ,W + ,d b − 16 3 B 0 p 2 , m 2 u b , 0 Γ R * u j ,g,u b m u b Γ Ľ u i ,g,u b − 4B 0 p 2 , m 2 u b , 0 Γ R * u j ,γ,u b m u b Γ Ľ u i ,γ,u b − 4B 0 p 2 , m 2 u b , m 2 Z Γ R * u j ,Z,u b m u b Γ Ľ u i ,Z,u b (71) Σ u,R i,j (p 2 ) = − 1 2 B 1 p 2 , m 2 d b , m 2 H − a Γ R * u j ,H + a ,d b Γ Ř u i ,H + a ,d b − 1 2 B 1 p 2 , m 2 u b , m 2 ha Γ R * u j ,ha,u b Γ Ř u i ,ha,u b − 1 2 B 1 p 2 , m 2 χ − a , m 2 d b Γ R * u j ,¯− χa,d b Γ Ř u i ,¯− χa,d b − 1 2 B 1 p 2 , m 2 ua , m 2 A 0 b Γ R * u j ,ua,A 0 b Γ Ř u i ,ua,A 0 b − 1 2 B 1 p 2 , m 2 χ 0 b , m 2 ua Γ R * u j ,ũa,χ 0 b Γ Ř u i ,ũa,χ 0 b − 2 3 B 1 p 2 , m 2 g , m 2 ua Γ R * u j ,ũa,g 1 Γ Ř u i ,ũa,g 1 − B 1 p 2 , m 2 d b , m 2 W − Γ L * u j ,W + ,d b Γ Ľ u i ,W + ,d b − 4 3 B 1 p 2 , m 2 u b , 0 Γ L * u j ,g,u b Γ Ľ u i ,g,u b − B 1 p 2 , m 2 u b , 0 Γ L * u j ,γ,u b Γ Ľ u i ,γ,u b − B 1 p 2 , m 2 u b , m 2 Z Γ L * u j ,Z,u b Γ Ľ u i ,Z,u b (72) Σ u,L i,j (p 2 ) = Σ u,R i,j (p 2 ) (L↔R)(73)
Because of the length of the expressions eqs. (71)-(73), the sums over internal generation indices a and b are understood. All necessary vertices are listed in Appendix A 2. The
eigenvalues of m_f^{(1L)}(p_i^2) must fulfill

\mathrm{Eig}\big[ m_d^{(1L)}(p^2 = m_{d_i}^2) \big] = \big( m_d^{\overline{DR},\rm SM},\ m_s^{\overline{DR},\rm SM},\ m_b^{\overline{DR},\rm SM} \big)   (74)
\mathrm{Eig}\big[ m_u^{(1L)}(p^2 = m_{u_i}^2) \big] = \big( m_u^{\overline{DR},\rm SM},\ m_c^{\overline{DR},\rm SM},\ m_t^{\overline{DR},\rm SM},\ m_{t'}^{\overline{DR}} \big)   (75)
with the DR-masses taken from eqs. (61)(62)(63). In addition, the rotation matrices diagonalizing m values of M T : 1 and 3 TeV. In addition, we fixed tan β = 3 and all soft-masses to 1.5 TeV.
In total, this effect can be as large as a few percent and is larger for smaller M T because the t − t mixing becomes larger. This already gives an important change in the MSSM-like corrections to the Higgs states which turn out to be of order of a few GeV, as we will see. We fixed here M T = 1 TeV.
While a study of flavour physics in this model is beyond the scope of this paper, we want to briefly comment on the expected effects. The CKM matrix in this model is a 4 × 3 matrix and we adjust the Yukawa couplings Y d and Y u in our study in a way that the 3 × 3
sub-matrix assigning the couplings between SM-quarks is in agreement with measurements.
The last column of the CKM matrix carries the elements V t q which define the size of the flavour changing charged currents between the vectorlike top and the SM down-quarks. The size of |V t q | is constrained by the measurements of flavour violating processes which are known to a high precision and which are in agreement with SM predictions. In Ref. [111] the following limits were derived at 3σ:
|V_{t'd}| < 0.01 , \qquad |V_{t's}| < 0.01 , \qquad |V_{t'b}| < 0.27   (76)
We show the prediction of these elements as a function of Y t in Fig. 4 for M T = 1 TeV.
One can see that the obtained values are well below the current bounds. The main reason for this is that we assume Y 1 t and Y 2 t to vanish.
One-loop corrections
A generic one-loop calculation with SARAH and SPheno was introduced in Ref. [112]. The starting point is the determination of the running electroweak VEVs from the pole mass of the Z boson,

v_{\rm SUSY} = \sqrt{ \frac{M_Z^{2,\rm pole} - \delta M_Z^2}{(g_1^2 + g_2^2)/4} } , \qquad v_d = v_{\rm SUSY} \cos\beta , \qquad v_u = v_{\rm SUSY} \sin\beta   (77)
With these values the tree-level masses are re-calculated and the calculation of the one-loop corrections is started. Here, first the one-loop corrections \delta t_i^{(1)} to the tadpoles are computed. The contributions from the extended (s)top sector read

\delta t_i^{(1)} = + 6 \sum_{a=1}^{4} A_0\big(m_{u_a}^2\big)\, m_{u_a} \left( \Gamma^L_{\phi_i \bar{u}_a u_a} + \Gamma^R_{\phi_i \bar{u}_a u_a} \right) - 3 \sum_{a=1}^{8} A_0\big(m_{\tilde{u}_a}^2\big)\, \Gamma_{\phi_i \tilde{u}_a^* \tilde{u}_a}   (78)
with i = u, d. All other corrections are identical to the results of Ref. [55]. Afterwards, we need the one-loop corrections to the scalar Higgs mass matrix. Here, the vectorlike top quarks contribute to the scalar self-energy Π(p 2 )
Π u,ũ ij (p 2 ) = −6 4 a=1 m ua 4 b=1 B 0 p 2 , m 2 ua , m 2 u b m u b Γ L * φ i ,ūa,u b Γ R φ j ,ūa,u b + Γ R * φ i ,ūa,u b Γ L φ j ,ūa,u b − 3 8 a=1 A 0 m 2 ua Γ φ i ,φ j ,ũ * a ,ũa + 3 8 a=1 8 b=1 B 0 p 2 , m 2 ua , m 2 u b Γ * φ i ,ũ * a ,ũ b Γ φ j ,ũ * a ,ũ b(79)
The necessary vertices to calculate δ t,t t (1) and Π t,t (p 2 ) are given in Appendix A 3. We can now express the one-loop corrected mass matrix of the scalar Higgs by
m_h^{2,(1L)}(p^2) = m_h^{2,(T)} + \Pi^{u',\tilde{u}'}(p^2) - \begin{pmatrix} \frac{1}{v_d} \delta t_d^{(1),u',\tilde{u}'} & 0 \\ 0 & \frac{1}{v_u} \delta t_u^{(1),u',\tilde{u}'} \end{pmatrix} + \Pi^{\rm MSSM}_{\slashed{u},\slashed{\tilde{u}}}(p^2) - \begin{pmatrix} \frac{1}{v_d} \delta t_d^{(1),\rm MSSM}{}_{\slashed{u},\slashed{\tilde{u}}} & 0 \\ 0 & \frac{1}{v_u} \delta t_u^{(1),\rm MSSM}{}_{\slashed{u},\slashed{\tilde{u}}} \end{pmatrix}   (80)
Here, \Pi^{\rm MSSM} collects the remaining MSSM-like contributions, and eq. (80) is evaluated iteratively until the on-shell condition \mathrm{Eig}\big[ m_h^{2,(1L)}(p^2 = m_{h_i}^2) \big] = m_{h_i}^2 for each eigenvalue is found. Previously, the one-loop corrections in this model have been calculated in the effective potential approach [46]. This calculation is equivalent to ours in the limit p^2 \to 0. Thus, by checking this limit we can easily estimate the error introduced in these calculations by that approximation. Since the additional fermions and the scalars are usually heavier than the desired Higgs mass of 125 GeV, one can expect that the momentum effects are rather moderate. However, before we discuss this in detail, we go one step further to the two-loop corrections. The most important new contributions are those of O(\alpha_t \alpha_S) and O(\alpha_{t'} \alpha_S), with \alpha_t = (Y_u^{33})^2/4\pi and \alpha_{t'} = (Y_{t'}^3)^2/4\pi. The next important contributions from the MSSM are those of O(\alpha_t^2). These come from diagrams involving (s)tops and Higgs states respectively Higgsinos. Also here, the diagrams shown in Fig. 6 are the same as in the MSSM, but the sums over (s)fermion generations now also include the vectorlike states.
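Before moving to the two-loop pieces, the iterative determination of pole masses from a momentum-dependent mass matrix, i.e. solving Eig[m^2(p^2 = m^2)] = m^2 as described above, can be illustrated by a short Python sketch. The p^2-dependent matrix used here is a toy stand-in, not the actual self-energy.

```python
import numpy as np

def pole_masses(mass_matrix_of_p2, m2_start, tol=1e-8, max_iter=100):
    """Solve Eig[M^2(p^2 = m^2)] = m^2 for every eigenvalue by fixed-point iteration."""
    poles = []
    for i in range(len(m2_start)):
        m2 = m2_start[i]
        for _ in range(max_iter):
            eigvals = np.sort(np.linalg.eigvalsh(mass_matrix_of_p2(m2)))
            m2_new = eigvals[i]
            if abs(m2_new - m2) < tol * max(m2, 1.0):
                break
            m2 = m2_new
        poles.append(m2)
    return np.sqrt(poles)

# toy 2x2 example: a tree-level matrix plus a weakly p^2-dependent "self-energy"
def toy_matrix(p2):
    tree = np.array([[125.0**2, 2000.0], [2000.0, 800.0**2]])
    return tree - 0.01 * p2 * np.identity(2)

print(pole_masses(toy_matrix, m2_start=[125.0**2, 800.0**2]))
```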
Two-loop corrections
FIG. 6. Two-loop diagrams giving contributions to the effective potential of O(\alpha_t^2), O(\alpha_{t'}^2), and O(\alpha_t \alpha_{t'}). Here, \Phi^0 = \{h, H, G^0, A^0\}, \Phi^\pm = \{H^\pm, G^\pm\}, \Phi = \{\Phi^0, \Phi^\pm\}.
The index ranges are:
\Phi(1,2); \tilde{\chi}^0(1-4); \tilde{\chi}^\pm(1,2); u(1-4); d(1-3); \tilde{u}(1-8); \tilde{d}(1-6).

The two-loop tadpoles and self-energies are obtained from the effective potential as

\delta t_i^{(2L)} = \frac{\partial V_{\rm eff}^{(2L)}}{\partial v_i} , \qquad \Pi_{ij}^{(2L)} = \frac{\partial^2 V_{\rm eff}^{(2L)}}{\partial v_i\, \partial v_j}   (81)
However, this involves a numerical derivation which sometimes suffers from numerical problems and rather large uncertainties. Thus, the second method implemented in SARAH and SPheno, the fully diagrammatic calculation in the limit p^2 = 0 mentioned above, serves as a cross-check of the numerical results.
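The finite-difference evaluation of eq. (81) can be sketched as follows in Python; the effective potential used here is a toy placeholder with a known Hessian, purely to illustrate the numerical-derivative approach and its cross-check.

```python
def veff_derivatives(veff, vd, vu, h=1e-3):
    """delta t_i = dV/dv_i and Pi_ij = d^2 V / dv_i dv_j via central differences (eq. 81)."""
    def d1(i):
        vp, vm = [vd, vu], [vd, vu]
        vp[i] += h; vm[i] -= h
        return (veff(*vp) - veff(*vm)) / (2 * h)

    def d2(i, j):
        if i == j:
            v0, vp, vm = [vd, vu], [vd, vu], [vd, vu]
            vp[i] += h; vm[i] -= h
            return (veff(*vp) - 2 * veff(*v0) + veff(*vm)) / h**2
        vpp, vpm, vmp, vmm = ([vd, vu] for _ in range(4))
        vpp[i] += h; vpp[j] += h; vpm[i] += h; vpm[j] -= h
        vmp[i] -= h; vmp[j] += h; vmm[i] -= h; vmm[j] -= h
        return (veff(*vpp) - veff(*vpm) - veff(*vmp) + veff(*vmm)) / (4 * h**2)

    tadpoles = [d1(0), d1(1)]
    self_energy = [[d2(0, 0), d2(0, 1)], [d2(1, 0), d2(1, 1)]]
    return tadpoles, self_energy

# toy potential with an analytically known Hessian for cross-checking
veff_toy = lambda vd, vu: 0.1 * vd**4 + 0.2 * vu**4 + 0.05 * vd**2 * vu**2
print(veff_derivatives(veff_toy, vd=50.0, vu=150.0))
```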
m_h^{2,(2L)}(p^2) = m_h^{2,(T)} + \Pi^{(1L)}(p^2) + \Pi^{(2L)}(0) - \begin{pmatrix} \frac{1}{v_d}\left( \delta t_d^{(1L)} + \delta t_d^{(2L)} \right) & 0 \\ 0 & \frac{1}{v_u}\left( \delta t_u^{(1L)} + \delta t_u^{(2L)} \right) \end{pmatrix}   (82)
Here, we have no longer distinguished between corrections involving vectorlike tops or not, but used Π (XL) and δt (XL) for the sum of all contributions. The eigenvalues m 2 h i fulfilling Eig(m 2,(2L) h (m 2 h i )) = m 2 h i are associated with the scalar pole masses. In the following, the smaller value m 2 h 1 corresponds to the SM-like Higgs boson and we are going to use the short notation m h ≡ m 2 h 1 for it. Before we turn to the full calculation, we want to discuss briefly the importance of the different contributions at two-loop. For this purpose we depict in Fig. 7 the different two-loop contributions to the Higgs mass matrix: However, we will not go into details in these aspects of this model here. We are just using the FlavorKit results [113] to double check that all points are in agreement with current bounds from flavour observables. This is, of course, expected as we already discussed in sec. III B 1. The Fortran code written by SARAH was compiled together with SPheno version 3.3.6. For all parameter scans in the following we have used the Mathematica package SSP [114]. We have identified in sec. III B 3 two regions where the new two-loop corrections are expected to be even more important. The first region is the one with non-vanishing T t . This is studied in Fig. 9 where we set T t = 2000 GeV · Y t . In addition, we check also the The other region we identified where the two-loop corrections can be important is the one where the SM-like Higgs has a larger down-type fraction. This happens if M 2 A becomes small. We discuss this case in Fig. 10 for zero and non-zero B T again. In particular for the As a next step we want to understand the dependence of the loop corrections on the involved masses a bit more. We start with the dependence on the vectorlike mass parameter M T and B T and show in Fig. 11 the Higgs mass at the one-and two-loop level. At oneloop we have the well-known picture that the corrections quickly decrease with increasing There is also another, very interesting observation: even for Y t = 0 the fine-tuning in The running gaugino mass at the SUSY scale is related to the one at the messenger scale by the ratio of the corresponding gauge coupling at both scales:
\Pi_{ij} \equiv \Pi^{(2L)}_{ij} - \delta_{ij}\,\frac{1}{v_i}\,\delta t^{(2L)}_i \,, \qquad i = d, u \qquad (83)

M_i(Q) = M_i(M)\,\frac{g_i^2(Q)}{g_i^2(M)} = \frac{g_i^2(Q)}{16\pi^2}\,\Lambda \qquad (84)
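As a quick numerical illustration of eq. (84), with made-up coupling values rather than numbers from our scans (we take g_3²(M) = 0.88 at the messenger scale and g_3²(Q) = 1.10, roughly the size of the strong coupling at M_SUSY),

M_3(Q) = \frac{1.10}{0.88}\, M_3(M) \approx 1.25\, M_3(M) = \frac{1.10}{16\pi^2}\,\Lambda \approx 7.0\times 10^{-3}\,\Lambda \,,

so a gluino mass of about 1 TeV corresponds to Λ ≈ 1.4 × 10^5 GeV. In particular, the leading dependence on the messenger scale M drops out of the gaugino masses.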
We show the minimal fine-tuning in the (m_g̃, m_h) plane in Fig. 15. It is interesting that the fine-tuning for m_h = 125 GeV can be smaller than for m_h = 122 GeV and m_h = 128 GeV when the gluino mass is sufficiently large.

For very large Y_t', where the fine-tuning becomes best, the theory is not perturbative up to the GUT scale. Since there is a cut-off in the theory anyway, there is no real need to maintain gauge coupling unification by adding the spectator fields at the SUSY scale. Therefore, one might wonder what the fine-tuning of the minimal model is. This is depicted in Fig. 16. In this setup, the squarks are lighter for the same values of M and Λ because of the smaller strong coupling at the messenger scale. Thus, in general a larger Λ is needed to increase the Higgs mass, which also leads to larger gluino masses. This is shown in Fig. 17, where we compare the minimal value of Λ needed to get a Higgs mass larger than 122 GeV in the (tan β, Y_t') plane for a messenger scale of again 10^7 GeV, and the resulting stop and gluino masses triggered by this Λ.

We found that the fine-tuning can be reduced significantly compared to minimal GMSB with only the MSSM particle content. Often, the regions with the best fine-tuning which are in agreement with the Higgs mass measurement are ruled out by gluino searches. Interestingly, we find that for heavy gluino masses the fine-tuning for heavier Higgs masses can be even better. In particular, for m_g̃ ≳ 1400 GeV, the best fine-tuning is found for a Higgs mass of roughly 125 GeV.
ACKNOWLEDGEMENTS
We thank Mark D. Goodsell for a fruitful collaboration to automatize the two-loop calculations with SARAH and SPheno and many interesting discussions in this context. This has been crucial to facilitate this project.
Γũ iαd * jβ W − µ = − i 1 √ 2 g 2 δ αβ 3 a=1 Z U, * ia Z D ja (A1) Γũ iαũ * jβ Zµ = − i 6 δ αβ 3g 2 cos Θ W − g 1 sin Θ W 3 a=1 Z U, * ia Z U ja − 4g 1 sin Θ W Z U, * i7 Z U j7 + Z U, * i8 Z U j8 + 3 a=1 Z U, * i3+a Z U j3+a (A2) Γ Ld iα u jβ W − µ = − i 1 √ 2 g 2 δ αβ 3 a=1 U u, * L,ja U d L,ia (A3) Γ Rd iα u jβ W − µ = 0 (A4) Γ Lū iα u jβ Zµ = − i 6 δ αβ 3g 2 cos Θ W − g 1 sin Θ W 3 a=1 U u, * L,ja U u L,ia − 4g 1 U u, * L,j4 sin Θ W U u L,i4 (A5) Γ Rū iα u jβ Zµ = 2i 3 g 1 δ αβ sin Θ W U u, * R,i4 U u R,j4 + 3 a=1 U u, * R,ia U u R,ja(A6)Γ Lū iα d jβ W + µ = − i 1 √ 2 g 2 δ αβ 3 a=1 U d, * L,ja U u L,ia (A8) Γ Rū iα d jβ W + µ = 0 (A9) Γ Lū iα u jβ gγµ = − i 2 g 3 λ γ α,β U u, * L,j4 U u L,i4 + 3 a=1 U u, * L,ja U u L,ia (A10) Γ Rū iα u jβ gγµ = − i 2 g 3 λ γ α,β U u, * R,i4 U u R,j4 + 3 a=1 U u, * R,ia U u R,ja (A11) Γ Lū iα u jβ γµ = − i 6 δ αβ 3g 2 sin Θ W + g 1 cos Θ W 3 a=1 U u, * L,ja U u L,ia + 4g 1 U u, * L,j4 cos Θ W U u L,i4 (A12) Γ Rū iα u jβ γµ = − 2i 3 g 1 cos Θ W δ αβ U u, * R,i4 U u R,j4 + 3 a=1 U u, * R,ia U u R,ja (A13) Γ Lū iα u jβ Zµ = − i 6 δ αβ 3g 2 cos Θ W − g 1 sin Θ W 3 a=1 U u, * L,ja U u L,ia − 4g 1 U u, * L,j4 sin Θ W U u L,i4 (A14) Γ Rū iα u jβ Zµ = 2i 3 g 1 δ αβ sin Θ W U u, * R,i4 U u R,j4 + 3 a=1 U u, * R,ia U u R,ja (A15) Γ Lū iα u jβ A 0 k = 1 √ 2 δ αβ U u, * R,i4 3 a=1 U u, * L,ja Y t ,a + 3 b=1 U u, * L,jb 3 a=1 U u, * R,ia Y u,ab Z A k2 (A16) Γ Rū iα u jβ A 0 k = − 1 √ 2 δ αβ Z A k2 3 a=1 Y * t ,a U u L,ia U u R,j4 + 3 b=1 3 a=1 Y * u,ab U u R,ja U u L,ib (A17) Γ Ld iαχ − jũ kγ = iU * j2 δ αγ 3 b=1 Z U, * kb 3 a=1 U d, * R,ia Y d,ab (A18) Γ Rd iαχ − jũ kγ = − iδ αγ g 2 3 a=1 Z U, * ka U d L,ia V j1 − Z U, * k7 3 a=1 Y * t ,a U d L,ia + 3 b=1 3 a=1 Y * u,ab Z U, * k3+a U d L,ib V j2 (A19) Γ L χ 0 i u jβũ * kγ = − i 6 δ βγ 3 √ 2g 2 N * i2 3 a=1 U u, * L,ja Z U ka + 6N * i4 3 a=1 U u, * L,ja Y t ,a Z U k7 + 3 b=1 U u, * L,jb 3 a=1 Y u,ab Z U k3+a + √ 2g 1 N * i1 4U u, * L,j4 Z U k8 + 3 a=1 U u, * L,ja Z U ka (A20) Γ R χ 0 i u jβũ * kγ = i 3 δ βγ 2 √ 2g 1 3 a=1 Z U k3+a U u R,ja N i1 − 3 3 b=1 3 a=1 Y * u,ab U u R,ja Z U kb N i4 + 2 √ 2g 1 N i1 Z U k7 − 3 3 a=1 Y * t ,a Z U ka N i4 U u R,j4 (A21) Γ Lū iα d jβ H + k = iδ αβ U u, * R,i4 3 a=1 U d, * L,ja Y t ,a + 3 b=1 U d, * L,jb 3 a=1 U u, * R,ia Y u,ab Z + k2 (A22) Γ Rū iα d jβ H + k = iδ αβ 3 b=1 3 a=1 Y * d,ab U d R,ja U u L,ib Z + k1 (A23) Γ L gαu jβũ * kγ = − i 1 √ 2 g 3 φgλ α γ,β U u, * L,j4 Z U k8 + 3 a=1 U u, * L,ja Z U ka (A24) Γ R gαu jβũ * kγ = i 1 √ 2 g 3 φ * g λ α γ,β Z U k7 U u R,j4 + 3 a=1 Z U k3+a U u R,ja (A25) Γ Lū iα u jβ h k = − i 1 √ 2 δ αβ U u, * R,i4 3 a=1 U u, * L,ja Y t ,a + 3 b=1 U u, * L,jb 3 a=1 U u, * R,ia Y u,ab Z H k2 (A26) Γ Rū iα u jβ h k = − i 1 √ 2 δ αβ Z H k2 3 a=1 Y * t ,a U u L,ia U u R,j4 + 3 b=1 3 a=1 Y * u,ab U u R,ja U u L,ib(A27)
3. Higgs Vertices with vectorlike (s)tops

We give in the following the two-loop RGEs for the considered model. In general, the RGEs for a parameter X are defined by
Γ Lū iα u jβ h k = − i 1 √ 2 δ αβ U u, * R,i4 3 a=1 U u, * L,ja Y t ,a + 3 b=1 U u, * L,jb 3 a=1 U u, * R,ia Y u,ab Z H k2 (A28) Γ Rū iα u jβ h k = − i 1 √ 2 δ αβ Z H k2 3 a=1 Y * t ,a U u L,ia U u R,j4 + 3 b=1 3 a=1 Y * u,ab U u R,ja U u L,ib (A29) Γ h iũjβũ * kγ = i 12 δ βγ − 3g 2 2 + g 2 1 3 a=1 Z U, * ja Z U ka v d Z H i1 − v u Z H i2 + 2Z U, * j7 3 √ 2µ 3 a=1 Y * t ,a Z U ka Z H i1 − 3 √ 2 3 a=1 T * t ,a Z U ka Z H i2 − 6v u 3 b=1 3 a=1 Y * t ,a Y u,ba Z U k3+b Z H i2 − 2g 2 1 v d Z H i1 Z U k7 + 2g 2 1 v u Z H i2 Z U k7 − 6v u 3 a=1 |Y t ,a | 2 Z H i2 Z U k7 − 2 − 3 √ 2µ 3 b=1 3 a=1 Y * u,ab Z U, * j3+a Z U kb Z H i1 + 3 √ 2M T Z U, * j8 3 a=1 Y * t ,a Z U ka Z H i2 + 3 √ 2 3 b=1 Z U, * jb 3 a=1 Z U k3+a T u,ab Z H i2 + 6v u 3 a=1 Z U, * ja Y t ,a 3 b=1 Y * t ,b Z U kb Z H i2 + 3 √ 2 3 b=1 3 a=1 Z U, * j3+a T * u,ab Z U kb Z H i2 + 3 √ 2Z U, * j8 3 b=1 3 a=1 Y * u,ab m t ,a Z U kb Z H i2 + 6v u 3 c=1 Z U, * j3+c 3 b=1 3 a=1 Y * u,ca Y u,ba Z U k3+b Z H i2 + 6v u 3 c=1 3 b=1 Z U, * jb 3 a=1 Y * u,ac Y u,ab Z U kc Z H i2 + 2g 2 1 3 a=1 Z U, * j3+a Z U k3+a v d Z H i1 − v u Z H i2 + 3 √ 2 3 a=1 Z U, * ja T t ,a Z H i2 Z U k7 + 6v u 3 b=1 Z U, * j3+b 3 a=1 Y * u,ba Y t ,a Z H i2 Z U k7 − 3 √ 2µ * Z H i1 3 a=1 Z U, * ja Y t ,a Z U k7 + 3 b=1 Z U, * jb 3 a=1 Y u,ab Z U k3+a − 2g 2 1 v d Z U, * j8 Z H i1 Z U k8 + 2g 2 1 v u Z U, * j8 Z H i2 Z U k8 + 3 √ 2M * T 3 a=1 Z U, * ja Y t ,a Z H i2 Z U k8 + 3 √ 2 3 b=1 Z U, * jb 3 a=1 m * t ,a Y u,ab Z H i2 Z U k8 (A30) Γ h i h jũkγũ * lδ = i 12 δ γδ − 3g 2 2 + g 2 1 3 a=1 Z U, * ka Z U la Z H i1 Z H j1 − Z H i2 Z H j2 − 4 3 3 a=1 Z U, * ka Y t ,a 3 b=1 Y * t ,b Z U lb Z H i2 Z H j2 + 3Z U, * k7 3 b=1 3 a=1 Y * t ,a Y u,ba Z U l3+b Z H i2 Z H j2 + 3 3 c=1 Z U, * k3+c 3 b=1 3 a=1 Y * u,d dt X = 1 16π 2 β (1) X + 1 (16π 2 ) β∆ U V β (n) x ≡ β (n),U V x − β (n) x (B3)
The calculation of the RGEs in SARAH is based on the generic expressions given in Refs. [115-120].

1. Gauge Couplings

∆β^{(1)}_{g_1} = …
∆β (1) g 2 = 0 (B8) ∆ U V β (1) g 2 = 3g 3 2 (B9) ∆β (2) g 2 = −6g 3 2 Y t Y * t (B10) ∆ U V β (2) g 2 = 1 5 g 3 2 105g 2 2 + 80g 2 3 + g 2 1 (B11) ∆β (1) g 3 = g 3 3 (B12) ∆ U V β (1) g 3 = 2g 3 3 (B13) ∆β (2) g 3 = 2 15 g 3 3 − 30 Y t Y * t + 85g 2 3 + 8g 2 1 (B14) ∆ U V β (2) g 3 =M 2 = 0 (B20) ∆ U V β(1)M 2 = 6g 2 2 M 2 (B21) ∆β(2)M 2 = 12g 2 2 − M 2 Y t Y * t + Y * t T t (B22) ∆ U V β (2) M 2 = 2 5 g 2 2 10 21g 2 2 M 2 + 8g 2 3 M 3 + M 2 + g 2 1 M 1 + M 2 (B23) ∆β (1) M 3 = 2g 2 3 M 3 (B24) ∆ U V β(1)M 3 = 4g 2 3 M 3 (B25) ∆β(2)M 3 = 8 15 g 2 3 − 15M 3 Y t Y * t + 15 Y * t T t + 4g 2 1 M 1 + 4g 2 1 M 3 + 85g 2 3 M 3 (B26) ∆ U V β(2)Y d = Y t ,j Y d Y * t i (B28) ∆β (2) Y d = −3Y d Y † u Y u Y t Y * t + Y d − 3 Y t Y † d Y d Y * t + 8 75 50g 4 3 + 7g 4 1 − 2 Y d Y * t i Y T d Y * d Y t j − 2 Y d Y * t i Y T u Y * u Y t j + 1 5 Y t ,j − 10 Y d Y † u Y u Y * t i + − 15Tr Y u Y † u − 25 Y t Y * t + 4g 2 1 Y d Y * t i (B29) ∆ U V β (2) Y d = 1 75 49g 4 1 + 675g 4 2 + 800g 4 3 Y d (B30) ∆β (1) Ye = 0 (B31) ∆β (2) Ye = 3 25 Y e 24g 4 1 − 25 Y t Y † d Y d Y * t (B32) ∆ U V β (2) Ye = 9 25 25g 4 2 + 7g 4 1 Y e (B33) ∆β (1) Yu = 3 Y t ,j Y u Y * t i + Y u Y t Y * t (B34) ∆β (2) Yu = −9Y u Y † u Y u Y t Y * t + Y u − 18 Y t Y † u Y u Y * t − 3 Y t Y † d Y d Y * t − 9 Y t Y * t 2 + 104 75 g 4 1 + 16 3 g 4 3 + 4 5 20g 2 3 + g 2 1 Y t Y * t − 4 Y u Y * t i Y T u Y * u Y t j + 1 5 Y t ,j − 10 2 Y u Y † u Y u Y * t i + Y u Y † d Y d Y * t i + 2g 2 1 + 30g 2 2 − 45Tr Y u Y † u − 65 Y t Y * t Y u Y * t i (B35) ∆ U V β (2) Yu = 1 75 675g 4 2 + 800g 4 3 + 91g 4 1 Y u (B36) β (1) Y t ,i = − 3g 2 2 + 3Tr Y u Y † u + 6 Y t Y * t − 13 15 g 2 1 − 16 3 g 2 3 Y t ,i + 3 Y T u Y * u Y t i + Y T d Y * d Y t i (B37) β(2)
Y t ,i = + 3367 450 g 4 1 + g 2 1 g 2 2 + 15 2 g 4 2 + 136 45 g 2 1 g 2 3 + 8g 2 2 g 2 3 + 32 9
g 4 3 − 22 Y t Y * t 2 − 5 Y t Y † d Y d Y * t − 22 Y t Y † u Y u Y * t + Y t Y * t 16g 2 3 + 6g 2 2 − 9Tr Y u Y † u + 6 5 g 2 1 + 4 5 g 2 1 Tr Y u Y † u + 16g 2 3 Tr Y u Y † u − 3Tr Y d Y † u Y u Y † d − 9Tr Y u Y † u Y u Y † u Y t ,i + − 3Tr Y d Y † d + 2 5 g 2 1 − Tr Y e Y † e Y T d Y * d Y t i + 2 5 g 2 1 Y T u Y * u Y t i + 6g 2 2 Y T u Y * u Y t i − 13 Y t Y * t Y T u Y * u Y t i − 9Tr Y u Y † u Y T u Y * u Y t i − 2 Y T d Y * d Y T d Y * d Y t i − 2 Y T u Y * u Y T d Y * d Y t i − 4 Y T u Y * u Y T u Y * u Y t i (B38) ∆ U V β (2) Y t ,i = 1 75 675g 4 2 + 800g 4 3 + 91g 4 1 Y t ,i(B39)
Bilinear Superpotential Parameters
∆β (1) µ = 3µ Y t Y * t (B40) ∆β (2) µ = + 4 5 20g 2 3 + g 2 1 µ Y t Y * t − 9µ Y t Y * t 2 + 6 25 µ − 25 Y t Y † d Y d Y * t + 4g 4 1 − 75 Y t Y † u Y u Y * t (B41) ∆ U V β (2) µ = 9g 4 2 µ + 21 25 g 4 1 µ (B42) β (1) M T = 2 15 15M T Y t Y * t + 15 Y t Y † u m t − 8 5g 2 3 + g 2 1 M T (B43) β(2)M T − 8M T Y t Y * t 2 − 2M T Y t Y † d Y d Y * t − 2M T Y t Y † u Y u Y * t − 2 Y t Y † d Y d Y † u m t − 2 Y t Y † u Y u Y † u m t + Y t Y † u m t 6g 2 2 − 6Tr Y u Y † u − 2 5 g 2 1 + Y t Y * t 6g 2 2 M T − 6M T Tr Y u Y † u − 8 Y t Y † u m t − 2 5 g 2 1 M T (B44) ∆ U V β(2)M T = 16 75 50g 4 3 + 7g 4 1 M T (B45) β (1) m t ,i = 2 M T Y u Y * t i + Y u Y † u m t i − 16 15 5g 2 3 + g 2 1 m t ,i (B46) β (2) m t ,i = + 16 225 131g 4 1 + 50g 4 3 + 80g 2 1 g 2 3 m t ,i − 2 5 20M T Y t Y * t + 5 Y t Y † u m t + M T − 15g 2 2 + 15Tr Y u Y † u + g 2 1 Y u Y * t i + − 15g 2 2 + 15Tr Y u Y † u + 15 Y t Y * t + g 2 1 Y u Y † u m t i + 5 M T Y u Y † d Y d Y * t i + M T Y u Y † u Y u Y * t i + Y u Y † d Y d Y † u m t i + Y u Y † u Y u Y † u m t i (B47) ∆ U V β (2) m
FIG. 1. Running of the gauge couplings α_i^{-1}(Q), i = 1, 2, 3, at one loop. The dashed lines belong to the minimal vectorlike top model and the full lines to the UV-completed model. The dotted lines represent the SM-only running up to M_SUSY = 1500 GeV.
B. Calculation of the Higgs masses at one- and two-loop

In this section we give details about the calculation of the Higgs masses at the one- and two-loop level. We have performed all calculations with the combination of the software packages SARAH and SPheno, which automatize all relevant steps. There are three changes compared to the calculation of the Higgs masses in the MSSM:

1. The new vectorlike states change the threshold corrections at M_Z used to derive the gauge and Yukawa couplings in the DR-bar scheme from the measured SM couplings and fermion masses. SARAH and SPheno apply and generalize the procedure of Ref. [55] for this matching. We give more details about the main differences compared to the MSSM in sec. III B 1.

2. At the one-loop level new contributions of O(α_t') arise. These corrections are widely discussed in the literature and are known to be able to give a push of many GeV to the Higgs mass. While these corrections have so far only been calculated in the effective potential approach, SARAH and SPheno perform the full one-loop corrections diagrammatically, including the dependence on the external momenta. This calculation is described below.

3. New dominant two-loop corrections involving the vectorlike states are calculated; these are discussed in sec. III B 3.
The SM input value α^{(5),MS-bar} is taken as input and receives corrections from the top loops as well as from new physics. For the minimal model, the thresholds are those of the MSSM supplemented by the contributions of the vectorlike top and stops.
The quark mixing is fixed by the measurement of the CKM matrix. One can use these conditions and invert eq. (67) to get expressions for the tree-level mass matrices, which are then used to calculate Y_d^DR and Y_u^DR. Since the self-energies depend on the Yukawa matrices, the entire calculation has to be iterated numerically until a stable point is reached.

After the calculation of the gauge and Yukawa couplings at M_Z is finished, the two-loop RGEs shown in Appendix B are used to run the couplings up to M_SUSY. Since in all calculations the masses of the SUSY states at M_Z are needed, a two-loop running of all parameters from M_SUSY to M_Z is also performed to get the running tree-level masses at M_Z. The effect of the threshold corrections on the running value of the top Yukawa coupling (Y_u^33) at the SUSY scale as a function of Y_t' is shown in Fig. 3, where two different values of M_T are used.

FIG. 3. Running top Yukawa coupling (Y_u^33) at the SUSY scale as a function of Y_t' for two different values of M_T: 1.0 TeV (blue) and 3.0 TeV (dotted red).
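Schematically, the matching of the Yukawa couplings at M_Z described above amounts to the fixed-point condition (our own shorthand, not the explicit form of eq. (67))

m_f^{\rm tree}\big(Y^{\overline{\rm DR}}_{(n+1)}\big) + \Sigma_f\big(Y^{\overline{\rm DR}}_{(n)}\big) = m_f^{\rm pole}\,,

together with the analogous condition for the CKM matrix, iterated in n until Y_d^DR and Y_u^DR are stable.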
One might wonder why the values for the top Yukawa coupling do not agree for Y_t' = 0. The reason is that the threshold corrections to g_3 are always present and depend on M_T, even if the other couplings of the vectorlike states are absent. This changes the prediction for g_3, which enters (i) the SM and MSSM part of the threshold corrections, and (ii) the RGEs when running from M_Z to M_SUSY.

FIG. 4. Absolute size |V_t'q| of the CKM entries between the vectorlike top states and the SM down quarks q = d, s, b. The colour code is |V_t'd| (full blue), |V_t's| (dotted red), and |V_t'b| (dashed green).
The procedure for this is as follows. First, all running tree-level parameters are calculated at the SUSY scale. The g_i (i = 1, 2, 3) and Y_i (i = e, d, u) are obtained by running up the DR-bar values calculated at M_Z, while the Higgs soft masses m²_{H_d} and m²_{H_u} are derived from the tadpole equations eqs. (39)-(40). Using these values, all tree-level masses are obtained and δM_Z² is calculated. This quantity is needed to obtain the correct electroweak VEVs at the SUSY scale from the Z-boson pole mass M_Z^{2,pole} and tan β.

For the loop-corrected masses, the self-energies Σ(p²) and the one-loop corrections to the tadpole equations T_i are needed. The changes compared to the MSSM stemming from vectorlike tops are the additional contributions δt^{u,ũ} and Σ^{u,ũ}(p²), in which the sums run over the enlarged up-(s)quark spectrum; the remaining pieces are the MSSM results without any contributions from up (s)quarks. Since the self-energy is a function of the external momentum, this calculation is usually iterated until a stable solution for the eigenvalues m²_{h_i}, i.e. m_h^{2,(1L)}, is found.
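A minimal sketch of this fixed-point iteration in Python (the loop-corrected matrix below is a toy stand-in with invented numbers; it only mimics the structure of the actual SARAH/SPheno routines):

import numpy as np

def loop_corrected_matrix(p2):
    # Toy stand-in for the loop-corrected mass matrix m_h^2(p^2):
    # a tree-level matrix plus a momentum-dependent self-energy piece
    # (all numbers purely illustrative, in GeV^2).
    tree = np.array([[8.0e5, -1.0e5],
                     [-1.0e5, 2.0e4]])
    sigma = -np.array([[0.02, 0.01],
                       [0.01, 0.05]]) * p2
    return tree + sigma

def pole_mass_squared(index=0, tol=1e-8, max_iter=100):
    # Iterate m^2 -> eigenvalue of M^2(p^2 = m^2) until it no longer changes.
    m2 = np.linalg.eigvalsh(loop_corrected_matrix(0.0))[index]
    for _ in range(max_iter):
        m2_new = np.linalg.eigvalsh(loop_corrected_matrix(m2))[index]
        if abs(m2_new - m2) < tol * abs(m2_new):
            return m2_new
        m2 = m2_new
    return m2

print(np.sqrt(pole_mass_squared(0)))   # lightest eigenvalue -> pole mass in GeV

Replacing the toy matrix by the full loop-corrected matrix gives the pole masses in the same way.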
It is very well known that two-loop corrections in the MSSM are crucial: they can give a large push to the Higgs mass and are the only way to reconcile the Higgs mass in the MSSM for moderate SUSY masses (< 2 TeV) with the measured value of about 125 GeV; this mass is out of reach using only one-loop corrections. This is not necessarily the case for models with vectorlike quarks: if the new couplings to the SM-like Higgs are large enough, even the one-loop corrections might be sufficient to obtain a sufficiently large Higgs mass. Nevertheless, there are good reasons to consider the two-loop corrections as well: to be able to make a meaningful statement about whether a point of the considered model is excluded, the difference to the measurement must be larger than the theoretical uncertainty. At one loop the theoretical uncertainty of the Higgs mass prediction can easily be 10 GeV or more, i.e. it is hardly possible to exclude regions of the parameter space with a one-loop calculation. Of course, the opposite can also happen: points which are in good agreement at one loop can be ruled out by a two-loop calculation.

For this reason, we give details about a two-loop calculation including the dominant corrections. 'Dominant' in this context means all contributions excluding those of the electroweak gauge couplings g_1 and g_2; this is the same precision usually considered for the MSSM. The remaining electroweak corrections, together with the missing momentum dependence and the unknown higher-order corrections, are estimated to leave a remaining uncertainty of about 3 GeV. In the MSSM the most dominant two-loop corrections are those involving the strong coupling constant g_3 because of large colour factors. The diagrams which contribute in the MSSM are depicted in Fig. 5. In the model at hand with vectorlike tops, the diagrams are actually the same but with a sum over a larger number of (s)quark generations. The obtained corrections from these diagrams are of O(α_t α_s) and O(α_t' α_s).

FIG. 5. Two-loop diagrams giving contributions to the effective potential at O(α_t α_s) and O(α_t' α_s). Here, the indices of up-quark generations (u_i) run from 1 to 4, and those of up-squark generations (ũ_i) from 1 to 8.
These diagrams give contributions of the orders O(α_t²), O(α_t α_t') and O(α_t'²). The corrections O(α_t (α_b + α_τ)) with α_b = (Y_d^33)²/(4π), α_τ = (Y_e^33)²/(4π) are also known in the MSSM. Especially for moderate values of tan β these corrections are less important. Nevertheless, in our calculations these corrections are included together with their counterparts O(α_t' (α_b + α_τ)).

SARAH and SPheno offer two possibilities to calculate the two-loop corrections to the scalar Higgs masses. Either a purely effective potential calculation is performed; in that case, the diagrams shown in Figs. 5 and 6 are calculated to obtain V_eff^(2L), and the derivatives of the results with respect to the Higgs VEVs are taken to get the two-loop corrections to the tadpoles and self-energies, cf. eq. (81). The second method employs a diagrammatic calculation where the external Higgs legs explicitly show up. Even if this leads to a much bigger set of two-loop diagrams, the calculation is not necessarily slower. All diagrams are evaluated in the limit p² → 0, i.e. both approaches give equivalent results for δt^(2L) and Π^(2L)(0).
It turns out that the corrections O(α_t' α_b) are negligible. The corrections O(α_t' α_τ) are even much smaller and therefore not shown in Fig. 7. We consider here two different cases: vanishing T_t' and T_t' = 2000 GeV · Y_t'. In both cases we find that the most dominant contributions are those involving the strong interaction, which is similar to the MSSM. The next important ones are those of O(α_t α_t'), while the O(α_t'²) contributions are moderately small. The difference compared to the MSSM corrections O(α_s α_t) and O(α_t²), which often cancel to some extent, is that here the contributions come with the same sign. We also see that for most contributions the impact on the (1,1) element is the largest one, i.e. the dominant part of these contributions comes from the F-terms µY_t'. Thus, the new two-loop corrections are expected to be more important for parameter regions where the light Higgs has a larger H_d fraction. The main difference between the cases of vanishing and non-vanishing T_t' is that the corrections involving the strong interaction to the (1,1) element become smaller, while those to the (2,2) element increase. Also the O(α_t α_t') contributions to the (2,2) element are enhanced. Thus, another region where the new two-loop corrections are expected to become important is the one with large trilinear soft terms T_t'.
FIG. 7. Two-loop contributions to the Higgs mass matrix involving vectorlike (s)quarks. We used here M_T = 1.0 TeV and put all soft mass terms to 1.5 TeV. On the left, we set T_t' = 0, on the right T_t' = 2.0 TeV · Y_t'. Dashed lines are for the (1,1) element, full lines for the (2,2) element, and dotted lines for the off-diagonal contribution. In the first three rows we plot the individual contributions O(α_t'²), O(α_t α_t'), O(α_t' α_S), while the last row shows the sum of all contributions.

IV. RESULTS - PART I: THE HIGGS MASS

Before we turn to our main results, namely the discussion of the fine-tuning in the UV-complete model, we want to discuss the importance of the different Higgs mass corrections we have included. For this reason we first consider the minimal model with the MSSM extended by vectorlike tops only. To deal with the large number of free parameters at the SUSY scale when not considering a UV embedding, we make the following assumptions about the MSSM soft masses:

m²_u = m²_q = m²_d = m²_e = m²_l = 1 · (1.5 TeV)²
M_1 = 0.5 TeV, M_2 = 1.0 TeV, M_3 = 2.0 TeV
T_u = T_d = T_e = 0

Moreover, we usually fix the MSSM parameters

µ = 1.0 TeV, M²_A = (1 TeV)²

and for the new sector we assume, if not stated otherwise,

T_t' = m_t' = B_T = 0, m²_t' = m²_t'bar = (1.5 TeV)²

In addition, the most important SM parameters were chosen as

α_S^MS(M_Z) = 0.1180, m_b^MS(m_b) = 4.2 GeV, m_t^pole = 173.2 GeV

As already mentioned, we employ the combination of the computer tools SPheno and SARAH for all numerical calculations: we have implemented the minimal model with vectorlike tops as well as the UV-complete variant in SARAH version 4.5.3, and the model files will become public with the next release of SARAH. SARAH was used to generate Fortran code for SPheno. The obtained Fortran routines automatically include all new features from vectorlike stops discussed in the last sections which are necessary for the precise Higgs mass calculation. Routines for the calculation of flavour observables and decay widths are also generated by SARAH.
A. The difference between one-loop effective potential, full one-loop and two-loop

We check the importance of the corrections calculated here for the first time. For this purpose we compare in Figs. 8-10 the prediction for the Higgs mass calculated (i) at one loop with vanishing external momenta but including thresholds, (ii) at one loop with full momentum dependence but neglecting the threshold corrections to SM gauge and Yukawa couplings, (iii) at full one loop including the full momentum dependence and all threshold corrections, (iv) at full one loop with the dominant two-loop corrections. The one-loop calculation without external momenta is equivalent to the calculation performed in the effective potential approach. For all three figures we have put M_T = 1 TeV.

In Fig. 8 we compare the results for two different values of tan β: 2 and 10. While there is a large difference already at tree level, the impact of the loop corrections is similar for both values of tan β. Thus, we find that m_h ≈ 125 GeV is reached for Y_t' ∼ 0.9 (0.6) for tan β = 2 (10). Including the momentum dependence in the one-loop calculation of the vectorlike states can account for changes of up to 2 GeV for large Y_t', and these are negative. In contrast, for the considered scenario the two-loop corrections are of a similar size, but positive. However, the biggest differences are caused by the threshold corrections. Since these can have a large impact on the top Yukawa coupling, we find that the prediction of the SM-like Higgs mass can deviate by up to 5 GeV. This effect is more pronounced for smaller tan β. Note that even in the limit Y_t' → 0 we find a shift of about 1 GeV compared to the calculation using only MSSM results. The reason is that the threshold corrections to g_3 do not vanish even in this limit. Therefore, the running value of the top Yukawa coupling entering the loop calculations changes slightly, which still has a visible effect on the Higgs mass. The absolute size of the one-loop corrections can grow up to 30 GeV for both values of tan β, while the two-loop corrections are smaller by about a factor of 10. When we compare these numbers with the purely MSSM corrections, we see that the one-loop corrections can become as important as the MSSM ones, while the two-loop corrections can reach about half the size of the MSSM two-loop corrections.

FIG. 8. Top left: light Higgs mass as a function of Y_t'. The red line corresponds to the effective potential calculation at one loop, orange is the one-loop correction with external momenta but neglecting the new threshold corrections stemming from vectorlike states, blue is the full one-loop calculation including the momentum dependence and all thresholds, and green includes the dominant two-loop corrections together with the full one-loop correction. Top right: impact of the threshold corrections (red), the momentum dependence at one loop (orange) and the two-loop corrections (green), given as the difference ∆m_h = m_h − m_h(1L, p² = 0, all thresholds). Bottom left: absolute size of the one- (blue) and two-loop (green) corrections stemming from the vectorlike states. Note, for better readability we re-scaled the two-loop corrections by a factor of 10. Bottom right: relative importance of the one- (blue) and two-loop (green) corrections normalized to the size of the purely MSSM-like corrections. The full lines are for tan β = 10 and the dotted ones for tan β = 2. We used here M_T = 1.0 TeV, B_T = 0.

FIG. 9. The same results as in Fig. 8, but including non-vanishing T_t'. We used T_t' = 2.0 TeV · Y_t', tan β = 5 and T_u,33 = −2500 GeV. The full lines are for B_T = 0, while the dotted ones correspond to B_T = (1.5 TeV)².
For B_T = 0 the differences to the results with T_t' = 0 are not very large: the corrections from the momentum dependence and the two-loop terms are of the same size and come with different signs. The largest effect is again from the threshold corrections. However, if B_T becomes large and causes a mass splitting for the vectorlike stops, the picture changes. Now, the most important effect comes from the two-loop corrections, which can become as important as the MSSM ones. For Y_t' values of O(1) this can reduce the Higgs mass prediction by more than 10 GeV and easily over-compensate the two-loop corrections from the MSSM sector.
FIG. 10. The same results as in Fig. 8, but for smaller M²_A = 10^5 GeV². We put T_t' = T_u = 0 and tan β = 3. The dashed lines are for B_T = 0, while the full ones correspond to B_T = (1.5 TeV)².

In particular, for large B_T the two-loop contributions clearly make the biggest difference compared to the incomplete calculations used so far. These are again negative and can reduce the SM-like Higgs mass by up to 8 GeV. Thus, while it seems that one can reach the preferred mass of 125 GeV at one loop with Y_t' < 1, with the two-loop corrections this is not possible for the considered combination of parameters. Even if B_T is taken to be zero, the effect can still be large and the overall size of the new two-loop corrections is still in the ballpark of the MSSM corrections.

B. Dependence on the vectorlike masses, stop masses, and the gaugino mass
FIG. 11. Contour lines of constant m_h at one- (left) and two-loop (middle) in the (M_T, m_t̃) plane. The plots in the right column show the size of the two-loop corrections involving vectorlike states. The plots in the first row are for Y_t' = 1.0 with T_t' = 0 and in the second for Y_t' = 0.7 with T_t' = 1400 GeV.

The dependence on B_T is small and only shows up for smallish M_T of 1 TeV and below together with large |B_T| > 2.0 TeV², for Y_t' = 1.0 and T_t' = 0. This general picture does, of course, not change at two loop, but we find a shift by several GeV, usually dominated by the MSSM-like corrections. The two-loop corrections from the vectorlike states are singled out in the right column of Fig. 11. They do not show the strong M_T dependence of the one-loop corrections and actually slightly increase with larger M_T. Also the dependence on B_T is more pronounced at two loop. If we go to smaller Y_t' and turn on T_t', the one-loop corrections in total become smaller and are less dependent on B_T. However, the sensitivity to M_T and B_T at two loop is nearly the same; just the total size of the corrections decreases.

We have so far concentrated on the dependence of the Higgs mass corrections on the new parameters absent in the MSSM. We want to finalize our discussion of the loop corrections by also briefly commenting on the impact of at least two MSSM parameters: the gluino mass parameter M_3 and the soft mass of the left-handed stop, m_q,33. We start with the dependence on the gluino mass shown in Fig. 12. Here, we vary Y_t' and use gluino masses between 1 and 4 TeV. At the one-loop level there is of course just a tiny impact on the Higgs mass; the small difference comes from SUSY threshold corrections. For M_T = 1.5 TeV and 3.0 TeV we find that with increasing M_3 the two-loop corrections O(α_S α_t') become larger. Since they are negative, the prediction for m_h becomes smaller. However, for large M_T the dominance of the corrections O(α_t'²) is so large that this effect hardly plays any role. Finally, we check the impact of the soft masses of the left-handed stops. The one- and two-loop corrections as a function of Y_t' and m_q,33 = 1, 2, 3, 4 TeV are summarized in Fig. 13. We see that this parameter plays an important role at one and two loop: at one loop, the corrections increase by a factor of 2 when going from 1 to 4 TeV. At two loop this effect is even more important and the corrections change by nearly a factor of 3. Interestingly, the one-loop corrections are larger for larger squark soft terms, while the two-loop corrections increase with decreasing squark masses.

FIG. 12. On the left: the light Higgs mass m_h as a function of Y_t', for different values of M_3: 1 TeV (red), 2 TeV (blue), 3 TeV (green), 4 TeV (orange). The full lines are the two-loop results, the dotted ones the one-loop results. On the right: the absolute size of the one- (blue) and two-loop (green) corrections involving vectorlike states. The line coding is dashed, dotted, dot-dashed, full for increasing M_3.

FIG. 13. On the left: the light Higgs mass m_h as a function of Y_t', for different values of m_q,33: 1 TeV (red), 2 TeV (blue), 3 TeV (green), 4 TeV (orange). The full lines are the two-loop results, the dotted ones the one-loop results. On the right: the absolute size of the one- (blue) and two-loop (green) corrections involving vectorlike states. The line coding is dashed, dotted, dot-dashed, full for increasing m_q,33.
V. RESULTS - PART II: THE FINE-TUNING IN GAUGE MEDIATED SUSY BREAKING

We now turn to the consequences of the loop corrections for the fine-tuning in minimal GMSB. The intrinsic problem of minimal GMSB in the MSSM is that it predicts very small trilinear couplings. Thus, the only way to enhance the Higgs mass via loop corrections is to go to very large values of Λ and M to get sufficiently heavy stops. When calculating the fine-tuning for this setup and demanding m_h ≈ 125 GeV, one finds that the fine-tuning ∆ is well above 1000. Of course, in the presence of large loop corrections due to vectorlike states, the need for superheavy stops is relaxed and the fine-tuning is expected to improve. We show in Fig. 14 the fine-tuning in the (tan β, Y_t') plane for different constraints on the Higgs mass within the theoretical uncertainty: (i) m_h = 122 GeV, (ii) m_h = 125 GeV, (iii) m_h = 128 GeV. For the vectorlike states, masses of 500 and 1000 GeV were used at the messenger scale.

One finds that the fine-tuning quickly drops with increasing Y_t' because lighter SUSY states are sufficient to push the Higgs mass to the desired level. For very large Y_t' of O(1) and the looser constraint of m_h > 122 GeV, even a fine-tuning of about 100 seems possible.
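For orientation, the fine-tuning measure ∆ used in the following is assumed here to be the usual Barbieri-Giudice sensitivity,

\Delta = \max_p \left| \frac{\partial \ln m_Z^2}{\partial \ln p} \right| \,,

with p running over the fundamental parameters of the model; which parameters enter the maximization (in GMSB essentially Λ, M, µ and Bµ) is an assumption we make here for illustration.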
FIG. 14. Contours of the overall fine-tuning ∆ in the (tan β, Y_t') plane demanding a Higgs mass m_h = 128 GeV (top), m_h = 125 GeV (middle), and m_h = 122 GeV (bottom) for the UV-complete variant of the model. We fixed here M = 10^7 GeV and M_V = 0.5 TeV (left column), respectively M_V = 1.0 TeV (right column). The red dashed lines indicate the gluino mass in GeV.

The reason is that the strong interaction at the messenger scale is larger compared to MSSM expectations because of the different running. Therefore, for the same value of Λ, the squarks are already significantly heavier and lead to larger Higgs mass corrections.

FIG. 15. Minimal fine-tuning for given Higgs mass m_h and gluino mass m_g̃. We fixed here M = 10^7 GeV and M_T = 1 TeV and scanned over tan β, Y_t' and Λ.

However, including the bounds from direct SUSY searches has a large impact: the points with a small fine-tuning are excluded because of the light gluino mass. That is completely different from the GMSB variant of the MSSM, where the Higgs mass pushes the fine-tuning of the model to higher values. In this model, the vanishing trilinear couplings at the messenger scale play only a subdominant role for the fine-tuning, but the gluino mass demands a larger SUSY scale Λ, which increases the fine-tuning. The situation would not change if we went to larger messenger masses to increase the running, because the one-loop β function of M_3 vanishes in this model and the mass actually slightly decreases with increasing M. Moreover, it is a general feature of GMSB that the gaugino masses are not very sensitive to the messenger scale, because the leading dependence in the RGE running always drops out.
FIG. 16. Contours of the overall fine-tuning ∆ (left) and of the mass of the lightest up-squark (right, full blue lines) and the gluino (right, dashed red lines) in the (tan β, Y_t') plane, demanding a Higgs mass m_h > 122 GeV, for the variant of the model without spectator fields. We fixed here M = 10^7 GeV.
FIG. 17. Contours of constant Λ (black), of the lightest top-squark mass (full blue lines) and of the gluino mass (dashed red lines) in the (tan β, Y_t') plane, demanding a Higgs mass m_h > 122 GeV. All contours are given in units of TeV. On the left for the UV-complete model, on the right for the model with only vectorlike tops. We fixed here M = 10^7 GeV.

We find for the minimal model the following fine-tuning:

∆ ≈ (230, 275, 320, 380)   (85)

for m_g̃ > (1000, 1200, 1400, 1600) GeV and m_h > 122 GeV.

VI. CONCLUSION

We discussed the loop corrections to the light Higgs mass in the MSSM extended by a pair of vectorlike top quarks. We have improved previous calculations in the literature in three respects: (i) we included the additional threshold corrections from the vectorlike states to the SM gauge and Yukawa couplings, (ii) we added the full momentum dependence at the one-loop level, (iii) we calculated all dominant (i.e. excluding electroweak) two-loop corrections in the effective potential approach. It has been shown that the momentum effects can be sizeable and change the Higgs mass prediction by a few GeV. The effect of the threshold corrections often turns out to be even more important. The importance of the two-loop corrections strongly depends on the considered parameter point. They are often a bit smaller than the two-loop corrections known from the MSSM, but we also identified regions where they can be even larger. In these regions, the additional two-loop corrections can change the Higgs mass prediction by up to 10 GeV. We checked the impact of the presence of vectorlike states on the fine-tuning in GMSB. For this purpose, we extended the model by additional vectorlike quarks and leptons to have complete multiplets of SU(5).
Here t = log(Q/M), with Q the renormalization scale and M a reference scale. For a parameter x present in the MSSM we show only the difference with respect to the MSSM RGEs, ∆β^{(n)}_x ≡ β^{(n)}_x − β^{(n),MSSM}_x, in the minimal model with vectorlike top quarks discussed here. The additional difference of the UV-complete version of the model is given as ∆_UV β^{(n)}_x ≡ β^{(n),UV}_x − β^{(n)}_x, cf. eq. (B3).

∆_UV β^{(2)}_{M_3} = … + 9 g_2^2 (M_3 + M_2) + g_1^2 (M_1 + M_3)   (B27)
FIG. 2. This plot shows the scale M_C at which the Landau pole arises as a function of Y_t'. The red lines are for the minimal model, the blue lines for the UV-complete version. For the dotted lines we used tan β = 10, for the full ones tan β = 60.

We can use these RGEs to make a quick check of the cut-off scale of the theory in the limit of very large Y_t'. For this purpose, we fix at M_SUSY = 1.5 TeV the SM gauge couplings as g_i = (0.47, 0.64, 1.05) and consider only third-generation Yukawa couplings Y_j^33 = √2/246 · (1.8/cos β, 2.4/cos β, 160/sin β) with j = e, d, u. Of course, this is a very simplistic setup missing many details like two-loop effects in the running and threshold corrections.
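A minimal numerical sketch of this quick check (one-loop running only; the hypercharge shift and the Yukawa terms below are simplified, illustrative choices guided by eqs. (B12), (B34) and (B37), with all flavour mixing dropped):

import numpy as np

# Boundary values at M_SUSY = 1.5 TeV as quoted in the text
MSUSY = 1.5e3
tanb  = 10.0
beta  = np.arctan(tanb)
g     = np.array([0.47, 0.64, 1.05])                  # g1 (GUT normalized), g2, g3
yb    = np.sqrt(2.0)/246.0 * 2.4   / np.cos(beta)
ytop  = np.sqrt(2.0)/246.0 * 160.0 / np.sin(beta)
ytp   = 1.0                                           # vectorlike coupling Y_t'

# One-loop gauge coefficients: MSSM (33/5, 1, -3) shifted by the vectorlike pair.
# Delta b_3 = +1 follows eq. (B12) and Delta b_2 = 0 eq. (B8); Delta b_1 = 8/5 is our assumption.
b = np.array([33.0/5.0 + 8.0/5.0, 1.0, -3.0 + 1.0])

def derivs(g, yt, yb, ytp):
    k = 1.0/(16.0*np.pi**2)
    dg   = k * b * g**3
    dyt  = k * yt  * (6*yt**2 + yb**2 + 4*ytp**2          # +4 Y_t'^2 from eq. (B34), 3rd-gen alignment assumed
                      - 13/15*g[0]**2 - 3*g[1]**2 - 16/3*g[2]**2)
    dyb  = k * yb  * (6*yb**2 + yt**2
                      - 7/15*g[0]**2 - 3*g[1]**2 - 16/3*g[2]**2)
    dytp = k * ytp * (6*ytp**2 + 6*yt**2 + yb**2          # structure of eq. (B37), mixing dropped
                      - 13/15*g[0]**2 - 3*g[1]**2 - 16/3*g[2]**2)
    return dg, dyt, dyb, dytp

# Euler steps in t = log10(Q/GeV); stop when Y_t' runs into its Landau pole.
t, dt, ln10 = np.log10(MSUSY), 1.0e-3, np.log(10.0)
while t < 19.0 and ytp < 10.0:
    dg, dyt, dyb, dytp = derivs(g, ytop, yb, ytp)
    g, ytop, yb, ytp = g + dg*ln10*dt, ytop + dyt*ln10*dt, yb + dyb*ln10*dt, ytp + dytp*ln10*dt
    t += dt

print(f"Y_t' becomes non-perturbative near log10(M_C/GeV) ~ {t:.1f}")

This should reproduce the qualitative trend of Fig. 2, i.e. a cut-off scale that decreases quickly with growing Y_t', though of course not the precise two-loop numbers.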
Appendix A: Vertices

1. Vector boson Vertices with vectorlike (s)tops
\cdots Y^*_{u,ca} Y_{u,ba} Z^{U}_{l3+b} Z^{H}_{i2} Z^{H}_{j2} + 3 \sum_{c=1}^{3}\sum_{b=1}^{3} Z^{U,*}_{kb} \sum_{a=1}^{3} Y^*_{u,ac} Y_{u,ab} Z^{U}_{lc} Z^{H}_{i2} Z^{H}_{j2} + g_1^2 \sum_{a=1}^{3} Z^{U,*}_{k3+a} Z^{U}_{l3+a}\big(Z^{H}_{i1} Z^{H}_{j1} - Z^{H}_{i2} Z^{H}_{j2}\big) + g_1^2 Z^{U,*}_{k7} Z^{H}_{i1} Z^{H}_{j1} Z^{U}_{l7} - g_1^2 Z^{U,*}_{k7} Z^{H}_{i2} Z^{H}_{j2} Z^{U}_{l7} + 3 Z^{U,*}_{k7} \sum_{a=1}^{3} |Y_{t',a}|^2 Z^{H}_{i2} Z^{H}_{j2} Z^{U}_{l7} + 3 \sum_{b=1}^{3} Z^{U,*}_{k3+b} \sum_{a=1}^{3} Y^*_{u,ba} Y_{t',a} Z^{H}_{i2} Z^{H}_{j2} Z^{U}_{l7} - g_1^2 Z^{U,*}_{k8} Z^{H}_{i1} Z^{H}_{j1} Z^{U}_{l8} + g_1^2 Z^{U,*}_{k8} Z^{H}_{i2} Z^{H}_{j2} Z^{U}_{l8}   (A31)

Appendix B: Renormalization Group Equations
∆_UV β^{(2)}_{m_{t'},i} = (16/75) (50 g_3^4 + 7 g_1^4) m_{t',i}   (B48)

β^{(1)}_{m^2_{u t'},i} = 2 ( 2 m^2_{H_u} + m^2_{…} ) … − 12 g^2_{…} + 36 g^2_{…} − 4 m^2_{…} + 15 Y_{t'} Y^{†}_{…} …

Note that the rotation matrices of the external states (marked as x̂ in the expressions for Σ) have to be replaced by the identity matrix, since the corrections to the mass matrices are calculated.

…^{(1)}_{M_Q} = (1/450) [ 10 g_1^2 (16 g_3^2 + 9 g_2^2) + 25 (256 g_3^4 + 288 g_2^2 g_3^2 + 297 g_2^4) + 289 g_1^4 ] M_Q   (B52)

5. Trilinear Soft-Breaking Parameters

Soft-Breaking Scalar Masses

Traces:
CMS Collaboration, S. Chatrchyan et al., Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC, Phys.Lett. B716 (2012) 30-61 [1207.7235].
Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. G Aad, ATLAS CollaborationPhys.Lett. 7161207.7214ATLAS Collaboration, G. Aad et. al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys.Lett. B716 (2012) 1-29 [1207.7214].
A. Arbey, M. Battaglia, A. Djouadi, F. Mahmoudi and J. Quevillon, Implications of a 125 GeV Higgs for supersymmetric models, Phys.Lett. B708 (2012) 162-169 [1112.3028].
Interpreting the LHC Higgs Search Results in the MSSM. S Heinemeyer, O Stal, G Weiglein, Phys.Lett. 7101112.3026S. Heinemeyer, O. Stal and G. Weiglein, Interpreting the LHC Higgs Search Results in the MSSM, Phys.Lett. B710 (2012) 201-206 [1112.3026].
Implications of a 125 GeV Higgs for the MSSM and Low-Scale SUSY Breaking. P Draper, P Meade, M Reece, D Shih, Phys.Rev. 85950071112.3068P. Draper, P. Meade, M. Reece and D. Shih, Implications of a 125 GeV Higgs for the MSSM and Low-Scale SUSY Breaking, Phys.Rev. D85 (2012) 095007 [1112.3068].
A 125 GeV SM-like Higgs in the MSSM and the γγ rate. M Carena, S Gori, N R Shah, C E Wagner, JHEP. 1203141112.3336M. Carena, S. Gori, N. R. Shah and C. E. Wagner, A 125 GeV SM-like Higgs in the MSSM and the γγ rate, JHEP 1203 (2012) 014 [1112.3336].
Implications of the 125 GeV Higgs boson for scalar dark matter and for the CMSSM phenomenology. M Kadastik, K Kannike, A Racioppi, M , 1112.3647JHEP. 120561M. Kadastik, K. Kannike, A. Racioppi and M. Raidal, Implications of the 125 GeV Higgs boson for scalar dark matter and for the CMSSM phenomenology, JHEP 1205 (2012) 061 [1112.3647].
A Natural 125 GeV Higgs Boson in the MSSM from Focus Point Supersymmetry with A-Terms. J L Feng, D Sanford, 1205.2372Phys.Rev. 8655015J. L. Feng and D. Sanford, A Natural 125 GeV Higgs Boson in the MSSM from Focus Point Supersymmetry with A-Terms, Phys.Rev. D86 (2012) 055015 [1205.2372].
The CMSSM Favoring New Territories: The Impact of New LHC Limits and a 125 GeV Higgs. A Fowlie, M Kazana, K Kowalska, S Munir, L Roszkowski, Phys.Rev. 86750101206.0264A. Fowlie, M. Kazana, K. Kowalska, S. Munir, L. Roszkowski et. al., The CMSSM Favoring New Territories: The Impact of New LHC Limits and a 125 GeV Higgs, Phys.Rev. D86 (2012) 075010 [1206.0264].
The Higgs sector of the phenomenological MSSM in the light of the Higgs boson discovery. A Arbey, M Battaglia, A Djouadi, F Mahmoudi, JHEP. 12091071207.1348A. Arbey, M. Battaglia, A. Djouadi and F. Mahmoudi, The Higgs sector of the phenomenological MSSM in the light of the Higgs boson discovery, JHEP 1209 (2012) 107 [1207.1348].
MSSM Higgs Boson Searches at the LHC: Benchmark Scenarios after the Discovery of a Higgs-like Particle. M Carena, S Heinemeyer, O Stãěl, C Wagner, G Weiglein, 1302.7033Eur.Phys.J. 739M. Carena, S. Heinemeyer, O. StÃěl, C. Wagner and G. Weiglein, MSSM Higgs Boson Searches at the LHC: Benchmark Scenarios after the Discovery of a Higgs-like Particle, Eur.Phys.J. C73 (2013), no. 9 2552 [1302.7033].
Properties of 125 GeV Higgs boson in non-decoupling MSSM scenarios. K Hagiwara, J S Lee, J Nakamura, 1207.0802JHEP. 12102K. Hagiwara, J. S. Lee and J. Nakamura, Properties of 125 GeV Higgs boson in non-decoupling MSSM scenarios, JHEP 1210 (2012) 002 [1207.0802].
An update on the constraints on the phenomenological MSSM from the new LHC Higgs results. A Arbey, M Battaglia, A Djouadi, F Mahmoudi, Phys.Lett. 7201211.4004A. Arbey, M. Battaglia, A. Djouadi and F. Mahmoudi, An update on the constraints on the phenomenological MSSM from the new LHC Higgs results, Phys.Lett. B720 (2013) 153-160 [1211.4004].
Phenomenological MSSM in view of the 125 GeV Higgs data. B Dumont, J F Gunion, S Kraml, no. 5 055018 [1312.7027Phys.Rev. 89B. Dumont, J. F. Gunion and S. Kraml, Phenomenological MSSM in view of the 125 GeV Higgs data, Phys.Rev. D89 (2014), no. 5 055018 [1312.7027].
The post-Higgs MSSM scenario: Habemus MSSM?. A Djouadi, L Maiani, G Moreau, A Polosa, J Quevillon, Eur.Phys.J. 7326501307.5205A. Djouadi, L. Maiani, G. Moreau, A. Polosa, J. Quevillon et. al., The post-Higgs MSSM scenario: Habemus MSSM?, Eur.Phys.J. C73 (2013) 2650 [1307.5205].
The MSSM Higgs sector at a high M SU SY : reopening the low tanβ regime and heavy Higgs searches. A Djouadi, J Quevillon, 1304.1787JHEP. 131028A. Djouadi and J. Quevillon, The MSSM Higgs sector at a high M SU SY : reopening the low tanβ regime and heavy Higgs searches, JHEP 1310 (2013) 028 [1304.1787].
Implications of the Higgs discovery for the MSSM. A Djouadi, 1311.0720Eur.Phys.J. 742704A. Djouadi, Implications of the Higgs discovery for the MSSM, Eur.Phys.J. C74 (2014) 2704 [1311.0720].
Fully covering the MSSM Higgs sector at the LHC. A Djouadi, L Maiani, A Polosa, J Quevillon, V Riquer, 1502.05653A. Djouadi, L. Maiani, A. Polosa, J. Quevillon and V. Riquer, Fully covering the MSSM Higgs sector at the LHC, 1502.05653.
Stability of the CMSSM against sfermion VEVs. J Camargo-Molina, B O'leary, W Porod, F Staub, 1309.7212JHEP. 1312103J. Camargo-Molina, B. O'Leary, W. Porod and F. Staub, Stability of the CMSSM against sfermion VEVs, JHEP 1312 (2013) 103 [1309.7212].
Vacuum Stability and the MSSM Higgs Mass. N Blinov, D E Morrissey, JHEP. 14031061310.4174N. Blinov and D. E. Morrissey, Vacuum Stability and the MSSM Higgs Mass, JHEP 1403 (2014) 106 [1310.4174].
Charge and Color Breaking Constraints in MSSM after the Higgs Discovery at LHC. D Chowdhury, R M Godbole, K A Mohan, S K Vempati, JHEP. 14021101310.1932D. Chowdhury, R. M. Godbole, K. A. Mohan and S. K. Vempati, Charge and Color Breaking Constraints in MSSM after the Higgs Discovery at LHC, JHEP 1402 (2014) 110 [1310.1932].
Constraining the Natural MSSM through tunneling to color-breaking vacua at zero and non-zero temperature. J Camargo-Molina, B Garbrecht, B O'leary, W Porod, F Staub, 1405.7376Phys.Lett. 737J. Camargo-Molina, B. Garbrecht, B. O'Leary, W. Porod and F. Staub, Constraining the Natural MSSM through tunneling to color-breaking vacua at zero and non-zero temperature, Phys.Lett. B737 (2014) 156-161 [1405.7376].
Exploring MSSM for Charge and Color Breaking and Other Constraints in the Context of Higgs@125 GeV. U Chattopadhyay, A Dey, 1409.0611JHEP. 1411161U. Chattopadhyay and A. Dey, Exploring MSSM for Charge and Color Breaking and Other Constraints in the Context of Higgs@125 GeV, JHEP 1411 (2014) 161 [1409.0611].
The Next-to-Minimal Supersymmetric Standard Model. U Ellwanger, C Hugonie, A M Teixeira, Phys.Rept. 4960910.1785U. Ellwanger, C. Hugonie and A. M. Teixeira, The Next-to-Minimal Supersymmetric Standard Model, Phys.Rept. 496 (2010) 1-77 [0910.1785].
The Upper bound on the lightest Higgs mass in the NMSSM revisited. U Ellwanger, C Hugonie, hep-ph/0612133Mod.Phys.Lett. 22U. Ellwanger and C. Hugonie, The Upper bound on the lightest Higgs mass in the NMSSM revisited, Mod.Phys.Lett. A22 (2007) 1581-1590 [hep-ph/0612133].
The Fine-Tuning of the Generalised NMSSM. G G Ross, K Schmidt-Hoberg, Nucl.Phys. 8621108.1284G. G. Ross and K. Schmidt-Hoberg, The Fine-Tuning of the Generalised NMSSM, Nucl.Phys. B862 (2012) 710-719 [1108.1284].
The Generalised NMSSM at One Loop: Fine Tuning and Phenomenology. G G Ross, K Schmidt-Hoberg, F Staub, 1205.1509JHEP. 120874G. G. Ross, K. Schmidt-Hoberg and F. Staub, The Generalised NMSSM at One Loop: Fine Tuning and Phenomenology, JHEP 1208 (2012) 074 [1205.1509].
Non-universal gaugino masses and fine tuning implications for SUSY searches in the MSSM and the GNMSSM. A Kaminska, G G Ross, K Schmidt-Hoberg, JHEP. 13112091308.4168A. Kaminska, G. G. Ross and K. Schmidt-Hoberg, Non-universal gaugino masses and fine tuning implications for SUSY searches in the MSSM and the GNMSSM, JHEP 1311 (2013) 209 [1308.4168].
A Natural Higgs Mass in Supersymmetry from NonDecoupling Effects. X Lu, H Murayama, J T Ruderman, K Tobioka, 1308.0792Phys.Rev.Lett. 112191803X. Lu, H. Murayama, J. T. Ruderman and K. Tobioka, A Natural Higgs Mass in Supersymmetry from NonDecoupling Effects, Phys.Rev.Lett. 112 (2014) 191803 [1308.0792].
A precision study of the fine tuning in the DiracNMSSM. A Kaminska, G G Ross, K Schmidt-Hoberg, F Staub, JHEP. 14061531401.1816A. Kaminska, G. G. Ross, K. Schmidt-Hoberg and F. Staub, A precision study of the fine tuning in the DiracNMSSM, JHEP 1406 (2014) 153 [1401.1816].
The Supersymmetric Standard Models with a Pseudo-Dirac Gluino from Hybrid F − and D−Term Supersymmetry Breakings. R Ding, T Li, F Staub, C Tian, B Zhu, 1502.03614R. Ding, T. Li, F. Staub, C. Tian and B. Zhu, The Supersymmetric Standard Models with a Pseudo-Dirac Gluino from Hybrid F − and D−Term Supersymmetry Breakings, 1502.03614.
Dirac Gauginos and the 125 GeV Higgs. K Benakli, M D Goodsell, F Staub, 1211.0552JHEP. 130673K. Benakli, M. D. Goodsell and F. Staub, Dirac Gauginos and the 125 GeV Higgs, JHEP 1306 (2013) 073 [1211.0552].
Higgs Mass Bound in E(6) Based Supersymmetric Theories. H E Haber, M Sher, Phys.Rev. 352206H. E. Haber and M. Sher, Higgs Mass Bound in E(6) Based Supersymmetric Theories, Phys.Rev. D35 (1987) 2206.
Comment on 'Higgs Boson Mass Bound in E(6) Based Supersymmetric Theories. M Drees, Phys.Rev. 35M. Drees, Comment on 'Higgs Boson Mass Bound in E(6) Based Supersymmetric Theories.', Phys.Rev. D35 (1987) 2910-2913.
Electroweak breaking and the mu problem in supergravity models with an additional U(1). M Cvetic, D A Demir, J Espinosa, L Everett, P Langacker, hep-ph/9703317Phys.Rev. 562861M. Cvetic, D. A. Demir, J. Espinosa, L. Everett and P. Langacker, Electroweak breaking and the mu problem in supergravity models with an additional U(1), Phys.Rev. D56 (1997) 2861 [hep-ph/9703317].
Exceeding the MSSM Higgs Mass Bound in a Special Class of U(1) Gauge Models. E Ma, Phys.Lett. 7051108.4029E. Ma, Exceeding the MSSM Higgs Mass Bound in a Special Class of U(1) Gauge Models, Phys.Lett. B705 (2011) 320-323 [1108.4029].
Light Higgs Mass Bound in SUSY Left-Right Models. Y Zhang, H An, X Ji, R N Mohapatra, Phys.Rev. 78113020804.0268Y. Zhang, H. An, X.-d. Ji and R. N. Mohapatra, Light Higgs Mass Bound in SUSY Left-Right Models, Phys.Rev. D78 (2008) 011302 [0804.0268].
Hefty MSSM-like light Higgs in extended gauge models. M Hirsch, M Malinsky, W Porod, L Reichert, F Staub, 1110.3037JHEP. 120284M. Hirsch, M. Malinsky, W. Porod, L. Reichert and F. Staub, Hefty MSSM-like light Higgs in extended gauge models, JHEP 1202 (2012) 084 [1110.3037].
SO(10) inspired gauge-mediated supersymmetry breaking. M E Krauss, W Porod, F Staub, Phys.Rev. 881 015014 [1304.0769M. E. Krauss, W. Porod and F. Staub, SO(10) inspired gauge-mediated supersymmetry breaking, Phys.Rev. D88 (2013), no. 1 015014 [1304.0769].
Higgs Mass Corrections in the SUSY B-L Model with Inverse Seesaw. A Elsayed, S Khalil, S Moretti, 1106.2130Phys.Lett. 715A. Elsayed, S. Khalil and S. Moretti, Higgs Mass Corrections in the SUSY B-L Model with Inverse Seesaw, Phys.Lett. B715 (2012) 208-213 [1106.2130].
Anatomy of Higgs mass in Supersymmetric Inverse Seesaw Models. E J Chun, V S Mummidi, S K Vempati, Phys.Lett. 7361405.5478E. J. Chun, V. S. Mummidi and S. K. Vempati, Anatomy of Higgs mass in Supersymmetric Inverse Seesaw Models, Phys.Lett. B736 (2014) 470-477 [1405.5478].
T. Moroi and Y. Okada, Radiative corrections to Higgs masses in the supersymmetric model with an extra family and antifamily, Mod.Phys.Lett. A7 (1992) 187-200.
K Babu, I Gogoladze, C Kolda, hep-ph/0410085Perturbative unification and Higgs boson mass bounds. K. Babu, I. Gogoladze and C. Kolda, Perturbative unification and Higgs boson mass bounds, hep-ph/0410085.
Higgs Boson Mass, Sparticle Spectrum and Little Hierarchy Problem in Extended MSSM. K Babu, I Gogoladze, M U Rehman, Q Shafi, Phys.Rev. 78550170807.3055K. Babu, I. Gogoladze, M. U. Rehman and Q. Shafi, Higgs Boson Mass, Sparticle Spectrum and Little Hierarchy Problem in Extended MSSM, Phys.Rev. D78 (2008) 055017 [0807.3055].
Extra vector-like matter and the lightest Higgs scalar boson mass in low-energy supersymmetry. S P Martin, Phys.Rev. 81350040910.2732S. P. Martin, Extra vector-like matter and the lightest Higgs scalar boson mass in low-energy supersymmetry, Phys.Rev. D81 (2010) 035004 [0910.2732].
Raising the Higgs mass with Yukawa couplings for isotriplets in vector-like extensions of minimal supersymmetry. S P Martin, 1006.4186Phys.Rev. 8255019S. P. Martin, Raising the Higgs mass with Yukawa couplings for isotriplets in vector-like extensions of minimal supersymmetry, Phys.Rev. D82 (2010) 055019 [1006.4186].
A Little Solution to the Little Hierarchy Problem: A Vector-like Generation. P W Graham, A Ismail, S Rajendran, P Saraswat, Phys.Rev. 81550160910.3020P. W. Graham, A. Ismail, S. Rajendran and P. Saraswat, A Little Solution to the Little Hierarchy Problem: A Vector-like Generation, Phys.Rev. D81 (2010) 055016 [0910.3020].
Higgs Mass and Muon Anomalous Magnetic Moment in Supersymmetric Models with Vector-Like Matters. M Endo, K Hamaguchi, S Iwamoto, N Yokozaki, Phys.Rev. 84750171108.3071M. Endo, K. Hamaguchi, S. Iwamoto and N. Yokozaki, Higgs Mass and Muon Anomalous Magnetic Moment in Supersymmetric Models with Vector-Like Matters, Phys.Rev. D84 (2011) 075017 [1108.3071].
Implications of gauge-mediated supersymmetry breaking with vector-like quarks and a 125 GeV Higgs boson. S P Martin, J D Wells, Phys.Rev. 86350171206.2956S. P. Martin and J. D. Wells, Implications of gauge-mediated supersymmetry breaking with vector-like quarks and a 125 GeV Higgs boson, Phys.Rev. D86 (2012) 035017 [1206.2956].
Raising the Higgs mass in supersymmetry with t-tâĂš mixing. C Faroughy, K Grizzard, Phys.Rev. 903 035024 [1405.4116C. Faroughy and K. Grizzard, Raising the Higgs mass in supersymmetry with t-tâĂš mixing, Phys.Rev. D90 (2014), no. 3 035024 [1405.4116].
Survey of vector-like fermion extensions of the Standard Model and their phenomenological implications. S A Ellis, R M Godbole, S Gopalakrishna, J D Wells, JHEP. 14091301404.4398S. A. Ellis, R. M. Godbole, S. Gopalakrishna and J. D. Wells, Survey of vector-like fermion extensions of the Standard Model and their phenomenological implications, JHEP 1409 (2014) 130 [1404.4398].
Higgs boson mass and high-luminosity LHC probes of supersymmetry with vectorlike top quark. Z Lalak, M Lewicki, J D Wells, 1502.05702Z. Lalak, M. Lewicki and J. D. Wells, Higgs boson mass and high-luminosity LHC probes of supersymmetry with vectorlike top quark, 1502.05702.
On the two-loop corrections to the Higgs mass in trilinear R-parity violation. H K Dreiner, K Nickel, F Staub, Phys.Lett. 7421411.3731H. K. Dreiner, K. Nickel and F. Staub, On the two-loop corrections to the Higgs mass in trilinear R-parity violation, Phys.Lett. B742 (2015) 261-265 [1411.3731].
Precision corrections in the minimal supersymmetric standard model. D M Pierce, J A Bagger, K T Matchev, R.-J Zhang, hep-ph/9606211Nucl.Phys. 491D. M. Pierce, J. A. Bagger, K. T. Matchev and R.-j. Zhang, Precision corrections in the minimal supersymmetric standard model, Nucl.Phys. B491 (1997) 3-67 [hep-ph/9606211].
QCD corrections to the masses of the neutral CP -even Higgs bosons in the MSSM. S Heinemeyer, W Hollik, G Weiglein, hep-ph/9803277Phys.Rev. 5891701S. Heinemeyer, W. Hollik and G. Weiglein, QCD corrections to the masses of the neutral CP -even Higgs bosons in the MSSM, Phys.Rev. D58 (1998) 091701 [hep-ph/9803277].
| []
|
[
"Conservative-dissipative approximation schemes for a generalized Kramers equation",
"Conservative-dissipative approximation schemes for a generalized Kramers equation"
]
| [
"Hong Manh ",
"Duong ",
"Mark A Peletier ",
"Johannes Zimmer "
]
| []
| []
| We propose three new discrete variational schemes that capture the conservative-dissipative structure of a generalized Kramers equation. The first two schemes are single-step minimization schemes while the third one combines a streaming and a minimization step. The cost functionals in the schemes are inspired by the rate functional in the Freidlin-Wentzell theory of large deviations for the underlying stochastic system. We prove that all three schemes converge to the solution of the generalized Kramers equation. | 10.1002/mma.2994 | [
"https://arxiv.org/pdf/1206.2859v1.pdf"
]
| 53,500,012 | 1206.2859 | 00b68a8b47c13b34aa3d00e60ba3b5cc10fa51bb |
Conservative-dissipative approximation schemes for a generalized Kramers equation
13 Jun 2012 May 1, 2014
Hong Manh
Duong
Mark A Peletier
Johannes Zimmer
Conservative-dissipative approximation schemes for a generalized Kramers equation
13 Jun 2012, May 1, 2014. Key words and phrases: Kramers equation, gradient flows, Hamiltonian flows, variational principle, optimal transport
We propose three new discrete variational schemes that capture the conservative-dissipative structure of a generalized Kramers equation. The first two schemes are single-step minimization schemes while the third one combines a streaming and a minimization step. The cost functionals in the schemes are inspired by the rate functional in the Freidlin-Wentzell theory of large deviations for the underlying stochastic system. We prove that all three schemes converge to the solution of the generalized Kramers equation.
1 Introduction
The Kramers equation
In this paper we discuss the variational structure of a generalized Kramers equation,
$$\partial_t \rho = -\mathrm{div}_q\Big(\rho\,\frac{p}{m}\Big) + \mathrm{div}_p\big(\rho\,\nabla_q V\big) + \gamma\,\mathrm{div}_p\big(\rho\,\nabla_p F\big) + \gamma kT\,\Delta_p \rho, \qquad \text{in } \mathbb{R}^{2d}\times\mathbb{R}_+, \tag{1}$$
which is the Fokker-Planck or Forward Kolmogorov equation of the stochastic differential equation
$$dQ(t) = \frac{P(t)}{m}\,dt, \tag{2a}$$
$$dP(t) = -\nabla V(Q(t))\,dt - \gamma\nabla F(P(t))\,dt + \sqrt{2\gamma kT}\,dW(t). \tag{2b}$$
The system (2) describes the movement of a particle at position Q and with momentum P under the influence of three forces. One force is the derivative −∇V of a background potential V = V (Q), the second is a friction force −γ∇F (P ), and the third is a stochastic perturbation generated by a Wiener process W . The parameter m > 0 is the mass of the particle (so that the velocity is P/m), γ is a friction parameter, k is the Boltzmann constant, and T is the temperature of the noise. A common choice for F is F (P ) = P 2 /2m, which results in a linear friction force. For a stochastic particle given by (2), ρ = ρ(t, q, p) characterizes the probability of finding the particle at time t at position q and with momentum p. Equation (1) characterizes the evolution of this probability density over time. The three deterministic drift terms in (2) lead to convection terms in (1), and the noise results in the final term in (1). We use the notation div q and similar to indicate that the differential operator acts only on one variable.
Both equations describe the behaviour of a Brownian particle with inertia [Bro28]: a particle which is large enough to be distinguished from the molecules in the surrounding solvent, but small enough to show random behaviour arising from collisions with those same molecules. Both the friction force and the noise term arise from collisions with the solvent, and the parameter γ characterizes the intensity of these collisions. The parameter kT measures the mean kinetic energy of the solvent molecules, and therefore characterizes the magnitude of the collision noise. A major application of this system is as a simplified model for chemical reactions, and it is in this context that Kramers originally introduced it [Kra40].
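As an illustration (not part of the analysis in this paper), the following is a minimal Python sketch of an Euler-Maruyama discretization of the stochastic system (2), for the standard quadratic choice $F(P) = |P|^2/2m$ so that $\nabla F(P) = P/m$; all parameter values and the harmonic test potential are placeholder assumptions.

```python
import numpy as np

def simulate_kramers(q0, p0, grad_V, m=1.0, gamma=1.0, kT=1.0,
                     dt=1e-3, n_steps=10_000, rng=None):
    """Euler-Maruyama integration of the SDE (2) with F(P) = |P|^2/(2m),
    i.e. linear friction grad F(P) = P/m."""
    rng = np.random.default_rng() if rng is None else rng
    q, p = np.array(q0, float), np.array(p0, float)
    traj = np.empty((n_steps + 1, 2, q.size))
    traj[0] = (q, p)
    noise_scale = np.sqrt(2.0 * gamma * kT * dt)
    for k in range(n_steps):
        q_new = q + (p / m) * dt                                  # (2a)
        p_new = (p - grad_V(q) * dt - gamma * (p / m) * dt        # (2b), drift
                 + noise_scale * rng.standard_normal(p.shape))    # (2b), noise
        q, p = q_new, p_new
        traj[k + 1] = (q, p)
    return traj

# Example: harmonic potential V(q) = |q|^2 / 2, so grad V(q) = q.
traj = simulate_kramers(q0=[1.0], p0=[0.0], grad_V=lambda q: q)
```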
The aim of this paper is to discuss variational formulations for equation (1). The theory of such variational structures took off with the introduction of Wasserstein gradient flows by [JKO97,JKO98] and of the energetic approach to rate-independent processes [MTL02,Mie05]. Both have changed the theory of evolution equations in many ways. If a given evolution equation has such a variational structure, then this property gives strong restrictions on the type of behaviour of such a system, provides general methods for proving well-posedness [AGS08] and characterizing largetime behaviour (e.g., [CMV03]), gives rise to natural numerical discretizations (e.g., [DMM10]), and creates handles for the analysis of singular limits (e.g., [SS04, Ste08, AMP + 12]). Because of this wide range of tools, the study of variational structure has important consequences for the analysis of an evolution equation.
Remark 1.1. A brief word about dimensions. We make the unusual choice of preserving the dimensional form of the equations, because the explicit constants help in identifying the modelling origin and roles of the different terms and effects, and these aspects are central to this paper. Therefore $Q$ and $q$ are expressed in m, $P$ and $p$ in kg m/s, $m$ in kg, $V$, $F$, and $kT$ in J, and $\gamma$ in kg/s. The density $\rho$ has dimensions such that $\rho\,dq\,dp$ is dimensionless. This setup implies that the Wiener process has dimension $\sqrt{\mathrm{s}}$, in accordance with the formal property $dW^2 = dt$.
Variational evolution
To avoid confusion between the Boltzmann constant and the integer k, from now on we define β −1 := kT . The authors of [JKO98] studied an equation that can be seen as a simpler, spatially homogeneous case of (1), where ρ = ρ(t, p):
$$\partial_t \rho = \gamma\beta^{-1}\Delta_p\rho + \gamma\,\mathrm{div}_p\big(\rho\nabla_p F\big). \tag{3}$$
They showed that this equation is a gradient flow of the free energy $A_p(\rho) := \int_{\mathbb{R}^d}\big[\rho F + \beta^{-1}\rho\log\rho\big]\,dp$ with respect to the Wasserstein metric. This statement can be made precise in a variety of different ways (see [AGS08] for a thorough treatment of this subject); for the purpose of this paper the most useful one is that the solution $t\mapsto\rho(t,p)$ can be approximated by the time-discrete sequence $\rho_k$ defined recursively by
$$\rho_k \in \operatorname*{argmin}_{\rho} K_h(\rho,\rho_{k-1}), \qquad K_h(\rho,\rho_{k-1}) := \frac{1}{2h}\,\frac{1}{\gamma}\,d(\rho,\rho_{k-1})^2 + A_p(\rho). \tag{4}$$
Here $d$ is the Wasserstein distance between two probability measures $\rho_0(x)\,dx$ and $\rho(y)\,dy$ with finite second moment,
$$d(\rho_0,\rho)^2 := \inf_{P\in\Gamma(\rho_0,\rho)} \int_{\mathbb{R}^d\times\mathbb{R}^d} |x-y|^2\, P(dx\,dy),$$
where $\Gamma(\rho_0,\rho)$ is the set of all probability measures on $\mathbb{R}^d\times\mathbb{R}^d$ with marginals $\rho_0$ and $\rho$,
$$\Gamma(\rho_0,\rho) = \big\{P\in\mathcal{P}(\mathbb{R}^d\times\mathbb{R}^d) : P(A\times\mathbb{R}^d)=\rho_0(A),\ P(\mathbb{R}^d\times A)=\rho(A) \text{ for all Borel subsets } A\subset\mathbb{R}^d\big\}. \tag{5}$$
A consequence of this gradient-flow structure is that A p decreases along solutions of (3).
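For intuition, here is a small numerical sketch (ours, not from the paper) of the squared Wasserstein cost in (5) between two empirical measures with the same number of atoms; in one dimension the optimal plan is simply the monotone pairing of sorted samples.

```python
import numpy as np

def wasserstein2_squared_1d(x, y):
    """Squared 2-Wasserstein distance between two empirical measures
    (1/n) sum delta_{x_i} and (1/n) sum delta_{y_i} on the real line.
    In 1d the optimal coupling matches sorted samples monotonically."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape
    return np.mean((x - y) ** 2)

# Two Gaussian samples: d^2 should be close to (mu0-mu1)^2 + (s0-s1)^2 = 4.25.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50_000)
y = rng.normal(2.0, 1.5, 50_000)
print(wasserstein2_squared_1d(x, y))
```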
Unfortunately, a convincing generalization of this gradient-flow concept and corresponding theory to equations such as the Kramers equation is still lacking. This is related to the mixture of both dissipative and conservative effects in these equations, which we now explain.
A combination of conservative and dissipative effects
The full Kramers equation (1) is a mixture of the dissipative behaviour described by (3) and a Hamiltonian, conservative behaviour. The conservative behaviour can be recognized by setting γ = 0, thus discarding the last two terms in (2); what remains in (2) is a deterministic Hamiltonian system with Hamiltonian energy H(q, p) = p 2 /2m+V (q). The evolution of this system is reversible and conserves H. Correspondingly, the evolution of (1) with γ = 0 also is reversible and conserves the expectation of H,
$$H(\rho) := \int_{\mathbb{R}^{2d}} \rho(q,p)\, H(q,p)\, dq\,dp.$$
On the other hand, as suggested by the discussion in the previous section, the $\gamma$-dependent terms represent dissipative effects. In the variational schemes that we define below, a central role is played by the $(q,p)$-dependent analogue of $A_p$,
$$A(\rho) := \int_{\mathbb{R}^{2d}} \big[\rho(q,p)F(p) + \beta^{-1}\rho(q,p)\log\rho(q,p)\big]\, dq\,dp.$$
Because of the special structure of (1), the functional $A$ does not decrease along solutions, but in the particular case $F(p) := p^2/2m$, a 'total free energy' functional does: setting
$$E(\rho) := A(\rho) + \int \rho V\, dq\,dp = \int \big[H + \beta^{-1}\log\rho\big]\,\rho\, dq\,dp,$$
we calculate that
$$\partial_t E(\rho(t)) = -\gamma\int_{\mathbb{R}^{2d}} \frac{1}{\rho(t,q,p)}\Big|\rho(t,q,p)\frac{p}{m} + \beta^{-1}\nabla_p\rho(t,q,p)\Big|^2\, dq\,dp \;\le\; 0. \tag{6}$$
The choice F (p) = p 2 /2m is related to the fluctuation-dissipation theorem, and we comment on this in Section 1.7. Because of the conservative, Hamiltonian terms, equation (1) is not a gradient flow, and an approach such as [JKO98] is not possible. In 2000 Huang [Hua00] proposed a variational scheme that is inspired by [JKO98], but modified to account for the conservative effects, and in this paper we describe three more variational schemes for the same equation.
Huang's discrete schemes for the Kramers equation
The time-discrete variational schemes of Huang's and of this paper are best understood through the connection between gradient flows on one hand and large deviations on the other. We have recently shown this connection for a number of systems [ADPZ11, PR11, DLZ11, DLR12], including (3).
The philosophy can be formulated in a number of ways, and here we choose a perspective based on the behaviour of a single particle. We start with the simpler case of equation (3) and the discrete approximation (4). Let $\{X^\epsilon\}_{\epsilon>0}$ be a rescaled $d$-dimensional Wiener process,
$$dX^\epsilon(t) = \sqrt{2\sigma\epsilon}\, dW(t), \tag{7}$$
where $\sigma$ is a mobility coefficient. By Schilder's theorem, $X^\epsilon$ satisfies a large-deviation principle of the form
$$\mathrm{Prob}\big(X^\epsilon(\cdot)\approx\xi(\cdot)\big) \sim \exp\Big(-\frac{1}{\epsilon}I(\xi)\Big), \qquad \text{as } \epsilon\to 0,$$
where the rate functional $I : C([0,h];\mathbb{R}^d)\to\mathbb{R}\cup\{+\infty\}$ is given by
$$I(\xi) = \frac{1}{4\sigma}\int_0^h |\dot\xi(t)|^2\, dt.$$
The Wasserstein cost function $|x-y|^2$ can be written in terms of $I$ as
$$|x-y|^2 = 4h\sigma\,\inf\big\{ I(\xi) : \xi\in C^1([0,h],\mathbb{R}^d) \text{ such that } \xi(0)=x,\ \xi(h)=y\big\}. \tag{8}$$
Hence the cost $|x-y|^2$ can be interpreted as the probability that a Brownian particle goes from $x$ to $y$ in time $h$, in the sense of large deviations, and rescaled so as to be independent of the magnitude of the noise $\sigma$.
The results of [ADPZ11, PR11, DLR12] concern a similar large-deviation analysis, but now for the empirical measure of a large number n of particles. For this system the limit n → ∞ plays a role similar to ǫ → 0 in the example above. In [ADPZ11,PR11,DLR12], it is shown that this rate functional is very similar to the right-hand side of (4) in the limit h → 0. This result explains the strong connection between large deviations on one hand and the gradient-flow structure on the other.
However, the core of the argument of [ADPZ11, PR11,DLR12] is contained in the Schilder example (7) and its connection (8) to the Wasserstein cost. Hence we use this simpler point of view to generalize the approximation scheme (4) to the Kramers equation. There are two different ways of doing this.
Approach 1 [Hua00]. Instead of the inertia-less Brownian particle given by (7), we consider a particle with inertia satisfying
$$dQ^\epsilon(t) = \frac{P^\epsilon(t)}{m}\, dt, \tag{9a}$$
$$dP^\epsilon(t) = \sqrt{2\epsilon\gamma\beta^{-1}}\, dW(t), \tag{9b}$$
which can formally also be written as
$$m\frac{d^2}{dt^2}Q^\epsilon(t) = \sqrt{2\gamma\beta^{-1}\epsilon}\,\frac{dW}{dt}(t).$$
By the Freidlin-Wentzell theorem (e.g. [DZ87, Th. 5.6.3]), the process $Q^\epsilon(t)$ satisfies a similar large-deviation principle with rate functional $I : C([0,h],\mathbb{R}^d)\to\mathbb{R}\cup\{+\infty\}$ given by
$$I(\xi) = \frac{1}{4\gamma\beta^{-1}}\int_0^h |m\ddot\xi(t)|^2\, dt.$$
The comparison with (8) suggests to define a cost functional $C_h(q,p;q',p')$ in a similar way, i.e.,
$$C_h(q,p;q',p') := h\inf\Big\{\int_0^h |m\ddot\xi(t)|^2\,dt : \xi\in C^1([0,h],\mathbb{R}^d) \text{ such that } (\xi,m\dot\xi)(0)=(q,p),\ (\xi,m\dot\xi)(h)=(q',p')\Big\}$$
$$= |p'-p|^2 + 12\Big|\frac{m}{h}(q'-q) - \frac{p'+p}{2}\Big|^2. \tag{10}$$
The second formula follows from an explicit calculation of the minimizer. As above, the interpretation is that of the probabilistic 'cost', that is, the large-deviations characterization of the probability of a path of (9) connecting (q, p) to (q ′ , p ′ ) over time h. Note that C h is not a metric, since it is not symmetric, and also C h (q, p; q, p) = 12|p| 2 generally does not vanish. Therefore the Wasserstein 'distance' W h defined with C h as cost is not a metric, but only an optimal-transport cost (see [Vil03] for an exposition on the theory of optimal transportation).
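A quick numerical check (our sketch, not part of the paper's analysis): the closed-form expression in (10) can be compared against direct quadrature of $h\int_0^h |m\ddot\xi|^2\,dt$ along the cubic interpolating path that matches the endpoint positions and momenta (the explicit coefficients of this cubic appear later in the proof of Lemma 3.1). The sample arguments below are arbitrary.

```python
import numpy as np

def cost_C(h, q, p, q2, p2, m=1.0):
    """Closed-form cost (10)."""
    q, p, q2, p2 = map(np.atleast_1d, (q, p, q2, p2))
    mid = (m / h) * (q2 - q) - 0.5 * (p2 + p)
    return np.sum((p2 - p) ** 2) + 12.0 * np.sum(mid ** 2)

def cost_C_quadrature(h, q, p, q2, p2, m=1.0, n=20_000):
    """h * int_0^h |m xi''(t)|^2 dt along the optimal (cubic) path with
    xi(0)=q, m xi'(0)=p, xi(h)=q2, m xi'(h)=p2."""
    q, p, q2, p2 = map(np.atleast_1d, (q, p, q2, p2))
    b = 3.0 / h**2 * (q2 - q - p * h / m) - (p2 - p) / (m * h)
    c = (p2 + p) / (m * h**2) - 2.0 / h**3 * (q2 - q)
    t = np.linspace(0.0, h, n)[:, None]
    acc = m * (2.0 * b + 6.0 * c * t)          # m xi''(t)
    return h * np.trapz(np.sum(acc**2, axis=1), t[:, 0])

h = 0.3
print(cost_C(h, 0.0, 0.2, 0.5, -0.1))             # closed form
print(cost_C_quadrature(h, 0.0, 0.2, 0.5, -0.1))  # should agree
```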
Huang then defines the approximation scheme as
Scheme 1 [Hua00]. Given a previous state $\rho_{k-1}$, define $\rho_k$ as the solution of the minimization problem
$$\min_\rho\; \frac{1}{2h}\,\frac{1}{\gamma}\,W_h(\rho_{k-1},\rho) + A(\rho) + \frac{2m}{\gamma h}\int_{\mathbb{R}^{2d}}\rho(q,p)V(q)\,dq\,dp, \tag{11}$$
where $W_h$ is the optimal-transport cost on $\mathbb{R}^{2d}$ with cost function $C_h$.
Huang proves [Hua00,Hua11] that the approximations generated by this scheme indeed converge to the solution of (1) as h → 0.
Criticism
Although Scheme 1 is approximately of similar form to (4), there are in fact important issues with this scheme:
1. In (1), the dissipative effects are represented by the terms prefixed by $\gamma$, and the conservative effects by the Hamiltonian terms $\mathrm{div}_q(\rho p/m)$ and $\mathrm{div}_p(\rho\nabla V)$. It would be natural to see these effects play separate roles in the variational formulation. However, in Scheme 1 the effects are mixed, since the final term in (11) mixes conservative effects (represented by $V$ and $m$) with dissipative effects (the prefactor $\gamma$, and the role as driving force in a gradient-flow-type minimization).
2. The dependence on h of the final term in (11) adds to the confusion; since this parameter is an approximation parameter chosen independently from the actual system, the combination A + 2m/γh ρV can not be considered a single driving potential.
3. In fact, in the standard case F (p) = p 2 /2m the sum of A and ρV is a natural object, since it represents total free energy and decreases along solutions (see Section 1.3). Note how the coefficient in this sum is 1 instead of 2m/γh.
The way in which V appears in Scheme 1 can be traced back to the fact that of the two conservative terms in (1) and (2), only P/m is represented in the definition of the cost C h , in the right-hand side of (9a); the term ∇V is missing in (9). Therefore the scheme has to compensate for the other term ∇V in a different manner. These arguments lead us to pose the following question, which is the central topic of this paper:
Can we construct an approximation scheme that respects the conservative-dissipative split?
The answer is 'yes', and in the rest of this paper we explain how; in fact we detail three different schemes, corresponding to different ways of answering this question.
The schemes of this paper
We take a different approach than Huang did.
Approach 2. To set up a new cost functional, we first return to the single-particle point of view, as in (7) and (9). We now take a particle whose behaviour is a combination of the two Hamiltonian terms in (2) and a noise term:
$$dQ^\epsilon(t) = \frac{P^\epsilon(t)}{m}\, dt, \tag{12a}$$
$$dP^\epsilon(t) = -\nabla V(Q^\epsilon)\, dt + \sqrt{2\gamma\beta^{-1}\epsilon}\, dW(t), \tag{12b}$$
which again can formally be written as
$$m\frac{d^2}{dt^2}Q^\epsilon(t) + \nabla V(Q^\epsilon(t)) = \sqrt{2\gamma\beta^{-1}\epsilon}\,\frac{dW}{dt}(t).$$
Note how this system differs from (9) by the term involving $\nabla V$ in (12b).
A very similar application of the Freidlin-Wentzell theorem states that $Q^\epsilon$ satisfies a large-deviation principle as $\epsilon\to 0$ with rate function
$$I(\xi) = \frac{1}{4\gamma\beta^{-1}}\int_0^h \big|m\ddot\xi(t) + \nabla V(\xi(t))\big|^2\, dt.$$
This leads to the following scheme.
Scheme 2a. We define the cost to be
$$C_h(q,p;q',p') := h\inf\Big\{\int_0^h \big|m\ddot\xi(t) + \nabla V(\xi(t))\big|^2\, dt : \xi\in C^1([0,h],\mathbb{R}^d) \text{ such that } (\xi,m\dot\xi)(0)=(q,p),\ (\xi,m\dot\xi)(h)=(q',p')\Big\}. \tag{13}$$
Given a previous state $\rho_{k-1}$, define $\rho_k$ as the solution of the minimization problem
$$\min_\rho\; \frac{1}{2h}\,\frac{1}{\gamma}\,W_h(\rho_{k-1},\rho) + A(\rho), \tag{14}$$
where $W_h$ is the optimal-transport cost on $\mathbb{R}^{2d}$ with cost function $C_h$.
Note how now the term involving V has disappeared from the minimization problem (14). In Sections 4-6 we show that this approximation scheme converges to the solution of (1) as h → 0.
For practical purposes it is inconvenient that the cost C h in (13) has no explicit expression. It turns out that we may approximate C h with an explicit expression and obtain the same limiting behaviour.
Scheme 2b. Define
$$\tilde C_h(q,p;q',p') := h\inf\Big\{\int_0^h \big|m\ddot\xi(t) + \nabla V(q)\big|^2\, dt : (\xi,m\dot\xi)(0)=(q,p),\ (\xi,m\dot\xi)(h)=(q',p')\Big\}$$
$$\overset{(10)}{=} |p'-p|^2 + 12\Big|\frac{m}{h}(q'-q) - \frac{p'+p}{2}\Big|^2 + 2h(p'-p)\cdot\nabla V(q) + h^2|\nabla V(q)|^2$$
$$= |p'-p+h\nabla V(q)|^2 + 12\Big|\frac{m}{h}(q'-q) - \frac{p'+p}{2}\Big|^2. \tag{15}$$
Given a previous state $\rho_{k-1}$, define $\rho_k$ as the solution of the minimization problem
$$\min_\rho\; \frac{1}{2h}\,\frac{1}{\gamma}\,\tilde W_h(\rho_{k-1},\rho) + A(\rho), \tag{16}$$
where $\tilde W_h$ is the optimal-transport cost on $\mathbb{R}^{2d}$ with cost function $\tilde C_h$.
Note how $\tilde C_h$ differs from (13) in that $\xi(t)$ is replaced by $q$ in $\nabla V$. This approximation is exact when $V$ is linear. We prove the convergence of solutions of Scheme 2b in Sections 4-6.
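Since (15) is fully explicit, it is straightforward to evaluate; a small sketch (ours, with placeholder arguments and a quadratic test potential) follows.

```python
import numpy as np

def cost_C_tilde(h, q, p, q2, p2, grad_V, m=1.0):
    """Explicit Scheme 2b cost (15)."""
    q, p, q2, p2 = map(np.atleast_1d, (q, p, q2, p2))
    mid = (m / h) * (q2 - q) - 0.5 * (p2 + p)
    return np.sum((p2 - p + h * grad_V(q)) ** 2) + 12.0 * np.sum(mid ** 2)

# Example with V(q) = |q|^2 / 2, i.e. grad_V(q) = q.
print(cost_C_tilde(0.3, 0.0, 0.2, 0.5, -0.1, grad_V=lambda q: q))
```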
Neither of the costs $C_h$ and $\tilde C_h$ gives rise to a metric, since they are asymmetric and do not vanish when $(q',p')=(q,p)$. It is possible to construct a two-step scheme with a symmetric cost and a corresponding metric $\bar W_h$.
Scheme 2c. Define
$$\bar C_h(q,p;q',p') := |p'-p|^2 + 12\Big|\frac{m}{h}(q'-q) - \frac{p'-p}{2}\Big|^2 + 2m(q'-q)\cdot\big(\nabla V(q') - \nabla V(q)\big). \tag{17}$$
Assume $\rho^h_{k-1}$ is given, and define the single-step, backwards approximate streaming operator
$$\sigma_h(q,p) := \Big(q - h\frac{p}{m},\; p + h\nabla V(q)\Big). \tag{18}$$
Given a previous state $\rho^h_{k-1}$, define $\rho^h_k$ in two steps.
Hamiltonian step: First determine $\mu^h_k$ as the push-forward
$$\mu^h_k := \big(\sigma_h^{-1}\big)_\#\, \rho^h_{k-1}, \tag{19}$$
where $\#$ denotes the push-forward operator.
Gradient flow step: Then determine $\rho^h_k$ that minimizes
$$\min_\rho\; \frac{1}{2h}\,\frac{1}{\gamma}\,\bar W_h(\mu^h_k,\rho) + A(\rho), \tag{20}$$
where $\bar W_h$ is the metric on $\mathbb{R}^{2d}$ generated by the cost function $\bar C_h$.
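For a particle (empirical-measure) representation, the Hamiltonian step (19) amounts to applying $\sigma_h^{-1}$ to every particle. Since (18) is implicit to invert, the sketch below (our illustration, not from the paper) solves for the pre-image by a simple fixed-point iteration, which contracts for small $h$ when $\nabla V$ is Lipschitz as in (29b).

```python
import numpy as np

def inverse_streaming(qp, grad_V, h, m=1.0, n_iter=50):
    """Apply sigma_h^{-1} (with sigma_h from (18)) to particles qp of shape
    (N, 2, d): solve q = q' + h p/m, p = p' - h grad_V(q) by fixed point."""
    q_prime, p_prime = qp[:, 0], qp[:, 1]
    p = p_prime.copy()
    for _ in range(n_iter):
        q = q_prime + h * p / m
        p = p_prime - h * grad_V(q)
    q = q_prime + h * p / m
    return np.stack([q, p], axis=1)

# Hamiltonian step (19) for an empirical measure: map every particle.
rng = np.random.default_rng(1)
particles = rng.normal(size=(1000, 2, 1))          # rho_{k-1}
mu_k = inverse_streaming(particles, grad_V=lambda q: q, h=0.05)
```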
The main result and the relation to GENERIC
The main theorem of this paper, Theorem 2.3 below, states that the three new Schemes 2a-c are indeed approximation schemes for the Kramers equation (1): the discrete-time approximate solutions constructed using each of these three schemes converge, as h → 0, to the unique solution of (1).
This statement itself is a relatively uninteresting assertion: it states that the schemes are what we claim them to be, approximation schemes. The interest of this paper lies in the fact that these three schemes suggest a way towards a generalization of the theory of metric-space gradient flows, as developed in [AGS08], to equations like (1) that combine dissipative with conservative effects.
Indeed, the full class of equations and systems that combines dissipative and conservative effects is extremely large. It contains the Navier-Stokes-Fourier equations (which include heat generation and transport), systems modelling visco-elasto-plastic materials, relativistic hydrodynamics, many Boltzmann-type equations, and many other equations describing continuum-mechanical systems. In fact, the full class of systems covered by the GENERIC formalism [Ött05] is of this conservative-dissipative type, and indeed the Kramers equation is one of them.
The GENERIC class (General Equation for the Non-Equilibrium Reversible Irreversible Coupling) consists of equations for an unknown $x$ in a state space $\mathcal{X}$ that can be written as
$$\dot x(t) = J(x)E'(x) + K(x)S'(x).$$
Here $E, S : \mathcal{X}\to\mathbb{R}$ are functionals, and $J, K$ are operators. A GENERIC system is fully characterized by $\mathcal{X}$, $E$, $S$, $J$, and $K$. In addition, there are certain requirements on these elements, which include the symmetry conditions that $J$ is antisymmetric and $K$ is symmetric and nonnegative, and the degeneracy or non-interaction conditions
$$J(x)S'(x) = 0, \qquad K(x)E'(x) = 0, \qquad \text{for all } x\in\mathcal{X}.$$
Because of these properties, along a solution E is constant and S increases. In many systems the functionals E and S correspond to energy and entropy.
When F (p) = |p| 2 /2m, the Kramers equation (1) can be cast in this form. 1 Because of this, the results of this paper strongly suggest that similar schemes can be constructed for arbitrary GENERIC systems. We leave this for future study.
Conclusion and further discussion
We now make some further comments about the schemes of this paper.
Value of the three schemes. Scheme 2a is in our opinion interesting because 'it is the right thing to do'-it stays as close as possible to the underlying physics. However, its non-explicit nature makes it difficult to work with, as the calculations in the proof of Lemma 3.1 illustrate. Scheme 2b is therefore useful as an approximation of Scheme 2a. Scheme 2c has the advantage of being formulated in terms of a metric W h , which suggests applicability of metric-space theory.
The linear-friction case $F(p) = |p|^2/2m$. The coefficient $\gamma kT$ in (1) and the coefficient $\sigma := \sqrt{2\gamma kT}$ in (2b) are obviously related by $\sigma^2 = 2\gamma kT$. When $F(p) = |p|^2/2m$, the coefficient $\gamma$ is also the coefficient of linear friction, and this relationship between $\sigma$, $\gamma$, and temperature is the one given by the fluctuation-dissipation theorem. This guarantees that the Boltzmann distribution
$$\rho_\infty(q,p) = Z^{-1}\exp\Big(-\frac{1}{kT}H(q,p)\Big), \tag{21}$$
is the unique stationary solution of (1). Moreover, the total free energy $E$ is the relative entropy with respect to $\rho_\infty$, and it is a Lyapunov functional for the system, as is shown in (6). When $F$ does not have this specific form, but does have appropriate growth at infinity, then there still exists a unique stationary solution $\rho_\infty$, which however does not have the convenient characterization (21). The relative entropy with respect to $\rho_\infty$ is then again a Lyapunov functional.
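As a sanity check (our sketch, not part of the paper), one can verify symbolically, in one spatial dimension, with $F(p)=p^2/2m$ and a concrete (assumed) quadratic test potential, that the right-hand side of (1) vanishes at the Boltzmann density (21).

```python
import sympy as sp

q, p, m, gamma, kT = sp.symbols('q p m gamma kT', positive=True)

V = q**2 / 2                                   # assumed test potential
F = p**2 / (2 * m)
H = p**2 / (2 * m) + V
rho = sp.exp(-H / kT)                          # unnormalized Boltzmann density (21)

# Right-hand side of the Kramers equation (1), d = 1:
rhs = (-sp.diff(rho * p / m, q)
       + sp.diff(rho * sp.diff(V, q), p)
       + gamma * sp.diff(rho * sp.diff(F, p), p)
       + gamma * kT * sp.diff(rho, p, 2))

print(sp.simplify(rhs))   # 0, so (21) is stationary for this V
```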
Connection to ultra-parabolic equations. If $V$ is linear, $V(q) = c\cdot q$, where $c\in\mathbb{R}^d$ is a constant vector, then $C_h$ coincides with $\tilde C_h$. In this case, $C_h = \tilde C_h$ is closely related to the fundamental solution of the equation
$$\partial_t\rho(t,q,p) = -\frac{p}{m}\cdot\nabla_q\rho(t,q,p) + c\cdot\nabla_p\rho(t,q,p) + \frac{\sigma^2}{2}\Delta_p\rho(t,q,p). \tag{22}$$
Indeed, the fundamental solution $\Gamma_t(q,p;q',p')$ of (22) is given by
$$\Gamma_t(q,p;q',p') = \frac{\alpha_1}{t^{2d}}\exp\Big(-\frac{\gamma}{\sigma^2 t}\, C_t(q,p;q',p')\Big), \tag{23}$$
where $\alpha_1$ is a normalization constant depending only on $d$. This fact is true for a much more general linear system and is related to the controllability of the system [DM10]. The appearance of the rate functional from the Freidlin-Wentzell theory in (23) consolidates the connection to the large-deviation principle behind our approach.
Connection to the isentropic Euler equations. The cost function $C_h$ has been used in [GW09, Wes10] to study the system of isentropic Euler equations,
$$\partial_t\rho + \nabla\cdot(\rho u) = 0, \qquad \partial_t u + u\cdot\nabla u = -\nabla U'(\rho),$$
where $U : [0,\infty)\to\mathbb{R}$ is an internal energy density. We now formally show the relationship between the two systems. Suppose that $\rho(t,q,p)$ is a solution of the Kramers equation (1) with $F(p) = |p|^2/2m$. We define the macroscopic spatial density and the bulk velocity as
$$\hat\rho(t,q) = \int_{\mathbb{R}^d}\rho(t,q,p)\,dp, \tag{24}$$
$$u(t,q) = \frac{1}{\hat\rho(t,q)}\int_{\mathbb{R}^d}\frac{p}{m}\,\rho(t,q,p)\,dp. \tag{25}$$
Using the so-called moment method, we find that $(\hat\rho, u)$ satisfies the following damped Euler equations [CSR96, Cha03, CLL04],
$$\partial_t\hat\rho + \nabla\cdot(\hat\rho u) = 0, \tag{26}$$
$$\partial_t u + u\cdot\nabla u = -\frac{\beta^{-1}}{m}\,\frac{\nabla\hat\rho}{\hat\rho} - \frac{1}{m}\nabla V - \frac{\gamma}{m}u. \tag{27}$$
If $\gamma = 0$ and $V\equiv 0$, these are the isentropic Euler equations with internal energy $U(\rho) = \beta^{-1}\rho\log\rho$.
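The moments (24)-(25) are easy to evaluate numerically; below is a small sketch (ours) for a density given on a tensor grid in $(q,p)$ with $d=1$. The Maxwellian example and its bulk velocity 0.7 are illustrative assumptions.

```python
import numpy as np

def macroscopic_fields(rho, q_grid, p_grid, m=1.0):
    """Macroscopic density (24) and bulk velocity (25) for d = 1.
    rho has shape (len(q_grid), len(p_grid))."""
    dp = p_grid[1] - p_grid[0]
    rho_hat = np.sum(rho, axis=1) * dp                       # (24)
    flux = np.sum(rho * (p_grid / m)[None, :], axis=1) * dp
    u = flux / np.maximum(rho_hat, 1e-300)                   # (25)
    return rho_hat, u

# Example: a local Maxwellian with bulk velocity 0.7 everywhere.
q = np.linspace(-3, 3, 61)
p = np.linspace(-6, 6, 241)
Q, P = np.meshgrid(q, p, indexing='ij')
rho = np.exp(-Q**2 / 2) * np.exp(-(P - 0.7)**2 / 2)
rho_hat, u = macroscopic_fields(rho, q, p)
print(u[:3])   # close to 0.7
```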
In [GW09, Wes10], the authors showed that the isentropic Euler equations may be interpreted as a second-order differential equation in the space of probability measures. They introduced a discrete approximation scheme, which is similar to Schemes 2a-b, using the cost functional $C_h$. One future topic of research is to analyse whether one can approximate other second-order differential equations in the space of probability measures (e.g., the Schrödinger equation [vR11]), using the cost function $C_h$.

Connection to Ambrosio-Gangbo [AG08]. The Hamiltonian step in Scheme 2c is a generalization of the implicit Euler method for a finite-dimensional Hamiltonian system to an infinite-dimensional case. It is also compatible with the concept of Hamiltonian flows in the Wasserstein space of probability measures defined by Ambrosio and Gangbo in [AG08]. Let $H : \mathcal{P}_2(\mathbb{R}^{2d})\to(-\infty,+\infty]$ and $\mu\in\mathcal{P}_2(\mathbb{R}^{2d})$ be given. Then $\mu_t : [0,\infty)\to\mathcal{P}_2(\mathbb{R}^{2d})$ is called a Hamiltonian flow of $H$ with the initial measure $\mu$ if the following equation holds
$$\frac{d}{dt}\mu_t = \mathrm{div}_{qp}\big(\mu_t\, J\nabla H(\mu_t)\big), \qquad \mu_0 = \mu, \quad t\in(0,T),$$
where $J$ is a skew-symmetric matrix and $\nabla H(\mu_t)$ is the gradient of the Hamiltonian $H$ at $\mu_t$ (Definition 3.2 in [AG08]). In particular, when $H(\rho) = \int_{\mathbb{R}^{2d}}\big[\frac{p^2}{2m} + V(q)\big]\rho(q,p)\,dq\,dp$, then $\nabla H = \big(\nabla_q V(q), \frac{p}{m}\big)^T$.
According to Lemma 6.2 in [AG08] when µ is regular, a Hamiltonian flow in a small interval (0, h) is constructed by pushing forward the initial measure µ under the map Φ(t, ·) = (q(t), p(t)) which is the solution of the system (2) (with γ = 0). In the Hamiltonian step we approximate this system by the implicit Euler method and define µ h k to be the end point µ(h).
Overview of the paper
The paper is organized as follows. In Section 2, we describe our assumptions and state the main result. Section 3 establishes some properties of the three cost functions. The proof of the main theorem is given in Sections 4 to 6. In Section 4, we establish the Euler-Lagrange equations for the minimizers in three schemes. In Section 5, we prove the boundedness of the second moments and the entropy functional. Finally, the convergence result is given in Section 6.
Assumptions and main result
Throughout the paper we make the following assumptions.
$$V\in C^3(\mathbb{R}^d) \text{ and } F\in C^2(\mathbb{R}^d), \qquad F(x)\ge 0 \text{ for all } x\in\mathbb{R}^d. \tag{28}$$
There exists a constant $C>0$ such that for all $z_1, z_2\in\mathbb{R}^d$
$$\frac{1}{C}|z_1-z_2|^2 \le (z_1-z_2)\cdot\big(\nabla V(z_1) - \nabla V(z_2)\big), \tag{29a}$$
$$|\nabla V(z_1) - \nabla V(z_2)| \le C|z_1-z_2|, \tag{29b}$$
$$|\nabla F(z_1) - \nabla F(z_2)| \le C|z_1-z_2|, \tag{29c}$$
$$\big|\nabla^2 V(z_1)\big|,\ \big|\nabla^3 V(z_1)\big| \le C. \tag{29d}$$
Note that (29a) implies that $V$ increases quadratically at infinity, and therefore $V$ achieves its minimum. Without loss of generality we assume that this minimum is at the origin, which implies the estimate
$$|\nabla V(z)| \le C|z|. \tag{30}$$
As we remarked in the Introduction, we work in the dimensional setting, and keep all the physical constants in place, in order to make the physical background of the expressions clear. We make an important exception, however, for inequalities of the type above; here the constants C can have any dimension, and we will group terms on the right-hand side of such estimates without taking their dimensions into account. This can be done without loss of generality, since we do not specify the generic constant C, and this constant will be allowed to vary from one expression to the next.
We only consider probability measures on R 2d which have a Lebesgue density, and we often tacitly identify a probability measure with its density. We denote by P 2 (R 2d ) the set of all probability measures on R d × R d with finite second moment,
$$\mathcal{P}_2(\mathbb{R}^{2d}) := \Big\{\rho : \mathbb{R}^d\times\mathbb{R}^d\to[0,\infty) \text{ measurable},\ \int_{\mathbb{R}^{2d}}\rho(q,p)\,dq\,dp = 1,\ M_2(\rho) < \infty\Big\},$$
where
$$M_2(\rho) = \int_{\mathbb{R}^{2d}}\big(\gamma^2|q|^2 + |p|^2\big)\rho(q,p)\, dq\,dp. \tag{31}$$
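For completeness, a small numerical sketch (ours) of the functionals $M_2$ in (31) and $A$ on a tensor grid, again for $d=1$; the Gaussian test density and all parameter values are assumptions.

```python
import numpy as np

def second_moment_and_free_energy(rho, q_grid, p_grid, F, gamma=1.0, beta=1.0):
    """M_2(rho) from (31) and A(rho) = int [rho F + beta^{-1} rho log rho],
    for a density sampled on a (q, p) tensor grid, d = 1."""
    dq, dp = q_grid[1] - q_grid[0], p_grid[1] - p_grid[0]
    Q, P = np.meshgrid(q_grid, p_grid, indexing='ij')
    w = dq * dp
    M2 = np.sum((gamma**2 * Q**2 + P**2) * rho) * w
    with np.errstate(divide='ignore', invalid='ignore'):
        ent = np.where(rho > 0, rho * np.log(rho), 0.0)
    A = np.sum(rho * F(P) + ent / beta) * w
    return M2, A

q = np.linspace(-3, 3, 61)
p = np.linspace(-6, 6, 241)
Q, P = np.meshgrid(q, p, indexing='ij')
rho = np.exp(-(Q**2 + P**2) / 2)
rho /= rho.sum() * (q[1] - q[0]) * (p[1] - p[0])      # normalize
M2, A = second_moment_and_free_energy(rho, q, p, F=lambda p: p**2 / 2)
print(M2, A)
```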
With these assumptions, the functionals $A$ and $E$ introduced in the introduction are well-defined in $\mathcal{P}_2(\mathbb{R}^{2d})$. Moreover, the following two lemmas are now classical (see, e.g., […]). Let $C^*_h$ be one of $C_h$, $\tilde C_h$, or $\bar C_h$, defined in (13), (15), and (17), with corresponding optimal-transport cost functional $W^*_h$.
Lemma 2.1. Let $\rho_0, \rho\in\mathcal{P}_2(\mathbb{R}^{2d})$ be given. There exists a unique optimal plan $P^*_{\mathrm{opt}}\in\Gamma(\rho_0,\rho)$ such that
$$W^*_h(\rho_0,\rho) = \int_{\mathbb{R}^{4d}} C^*_h(q,p;q',p')\, P^*_{\mathrm{opt}}(dq\,dp\,dq'\,dp'). \tag{32}$$
Lemma 2.2. Let $\rho_0\in\mathcal{P}_2(\mathbb{R}^{2d})$ be given. If $h$ is small enough, then the minimization problem
$$\min_{\rho\in\mathcal{P}_2(\mathbb{R}^{2d})}\; \frac{1}{2h}\,\frac{1}{\gamma}\,W^*_h(\rho_0,\rho) + A(\rho), \tag{33}$$
has a unique solution.
These lemmas imply that Schemes 2a-c are well-defined.
Next, we make the definition of a weak solution precise. A function ρ ∈ L 1 (R + × R 2d ) is called a weak solution of equation (1) with initial datum ρ 0 ∈ P 2 (R 2d ) if it satisfies the following weak formulation of (1):
$$\int_0^\infty\!\!\int_{\mathbb{R}^{2d}} \Big[\partial_t\varphi + \frac{p}{m}\cdot\nabla_q\varphi - \big(\nabla_q V(q) + \gamma\nabla_p F(p)\big)\cdot\nabla_p\varphi + \gamma\beta^{-1}\Delta_p\varphi\Big]\rho\; dq\,dp\,dt = -\int_{\mathbb{R}^{2d}}\varphi(0,q,p)\rho_0(q,p)\, dq\,dp, \quad \text{for all } \varphi\in C^\infty_c(\mathbb{R}\times\mathbb{R}^{2d}). \tag{34}$$
The main result of the paper is the following.
Theorem 2.3. Let $\rho_0\in\mathcal{P}_2(\mathbb{R}^{2d})$ satisfy $A(\rho_0)<\infty$. For $h>0$, let $\{\rho^h_k\}_{k\ge 1}$ be the sequence generated by Scheme 2a, Scheme 2b, or Scheme 2c with initial datum $\rho_0$, and define the piecewise-constant time interpolation
$$\rho^h(t,q,p) = \rho^h_k(q,p) \qquad \text{for } (k-1)h < t \le kh. \tag{35}$$
Then for any $T>0$,
$$\rho^h \rightharpoonup \rho \quad \text{weakly in } L^1((0,T)\times\mathbb{R}^{2d}) \text{ as } h\to 0, \tag{36}$$
where $\rho$ is the unique weak solution of the Kramers equation with initial value $\rho_0$. Moreover
$$\rho^h(t)\to\rho(t) \quad \text{weakly in } L^1(\mathbb{R}^{2d}) \text{ as } h\to 0 \text{ for any } t>0, \tag{37}$$
and as $t\to 0$,
$$\rho(t)\to\rho_0 \quad \text{in } L^1(\mathbb{R}^{2d}). \tag{38}$$
Outline of the proof. The proof follows the procedure of [JKO98] (see also [Hua00,Hua11]) and is divided into three main steps, which are carried out in Sections 4, 5, and 6: establish the Euler-Lagrange equation for the minimizers, then estimate the second moments and entropy functionals, and finally pass to the limit h → 0. We start in Section 3 with some properties of the cost functions.
Properties of the three cost functions
Here we derive and summarize a number of properties of the three cost functions. Define the quadratic form N (q, p) := |γq| 2 + |p| 2 ,
so that M 2 (ρ) = R 2d N (q, p) ρ(q, p) dqdp. Lemma 3.1. 1. Let C * h be either C h or C h . There exists C > 0 such that |q − q ′ | 2 + |p − p ′ | 2 ≤ CC h (q, p; q ′ , p ′ ), (39a) |q − q ′ | 2 ≤ Ch 2 C * h (q, p; q ′ , p ′ ) + N (q, p) + N (q ′ , p ′ ) ,(39b)|p − p ′ | 2 ≤ C C * h (q, p; q ′ , p ′ ) + h 2 N (q, p) + h 2 N (q ′ , p ′ ) .(39c)
2. For the cost function C h of Scheme 2a we have
∇ q ′ C h (q, p; q ′ , p ′ ) = 24m h m h (q ′ − q) − p ′ + p 2 − 2h∇ 2 V (q ′ ) · p ′ + σ h (q, p; q ′ , p ′ ), (40a) ∇ p ′ C h (q, p; q ′ , p ′ ) = 2(p ′ − p) − 12 m h (q ′ − q) − p ′ + p 2 + 2h∇V (q ′ ) + τ h (q, p; q ′ , p ′ ),(40b)
where there exists C > 0 such that
|σ h (q, p; q ′ , p)|, 1 h |τ h (q, p; q ′ , p ′ )| ≤ Ch C h (q, p; q ′ , p ′ ) + N (q, p) + N (q ′ , p ′ ) + 1 . (41)
3. For the cost function C h of Scheme 2b we have
∇ q ′ C h (q, p; q ′ , p ′ ) = 24m h m h (q ′ − q) − p ′ + p 2 ,(42a)∇ p ′ C h (q, p; q ′ , p ′ ) = 2(p ′ − p) − 12 m h (q ′ − q) − p ′ + p 2 + 2h∇V (q). (42b)
4. For the cost function C h of Scheme 2c we have
∇ q ′ C h (q, p; q ′ , p ′ ) = 24m h m h (q ′ − q) − p ′ − p 2 + 4m(∇V (q ′ ) − ∇V (q)) + r(q, q ′ ), (43a) ∇ p ′ C h (q, p; q ′ , p ′ ) = 2(p ′ − p) − 12 m h (q ′ − q) − p ′ − p 2 , (43b) where |r(q, q ′ )| ≤ Ch 2 C h (q, p; q ′ , p ′ ) + N (q, p) + N (q ′ , p ′ ) .(44)
Proof. For the length of this proof we fix q, p, q ′ , p ′ , and h, and we abbreviate
C h := C h (q, p; q ′ , p ′ ), C h := C h (q, p; q ′ , p ′ ), and N := N (q, p)+N (q ′ , p ′ ) = |γq| 2 +|p| 2 +|γq ′ | 2 +|p ′ | 2 .
Let ξ(t) and ξ(t), respectively, be the optimal curves in the definition of C h in (10) and of C h in (15). We will need a number of properties of these two curves. All the statements below are of the following type: there exists C > 0 and 0 < h 0 < 1 such that the property holds for all h < h 0 . Here C is always independent of q, p, q ′ , p ′ , and h. The norm · p is the L p -norm on the interval (0, h).
The curve ξ satisfies˙˙˙ξ = 0, and hence it is a cubic polynomial
ξ(t) = q 0 + at + bt 2 + ct 3 ,(45)
where the coefficients can be calculated from the boundary conditions:
a = p m , b = 3 h 2 q ′ − q − ph m − p ′ − p mh , c = p ′ + p mh 2 − 2 h 3 (q ′ − q).
Explicit calculations give
ξ 2 2 ≤ h ξ 2 ∞ ≤ ChN,(46)ξ 2 2 ≤ h ξ 2 ∞ ≤ C h −3 |q − q ′ | 2 + h −1 |p − p ′ | 2 ,(47)ξ 1 ≤ h ξ ∞ ≤ C h −1 |q − q ′ | + |p − p ′ | .(48)
The curve ξ(t) satisfies the equation
N ( ξ)(t) := m 2˙˙˙˙ ξ (t) + 2m∇ 2 V ( ξ) ·¨ ξ(t) + m∇ 3 V ( ξ) ·˙ ξ ·˙ ξ(t) + ∇ 2 V ( ξ) · ∇V ( ξ)(t) = 0,(49)
( ξ, m˙ ξ)(0) = (q, p), ( ξ, m˙ ξ)(h) = (q ′ , p ′ ),
where ∇ 3 V is the third-order tensor of third derivatives of V . This is a relatively benign equation, but non-trivially nonlinear.
We will need the following four intermediate estimates:
ξ 2 2 ≤ ChN,(50)C h + h ¨ ξ 2 2 ≤ C C h + h 2 N ,(51)˙ ξ 2 2 ≤ Ch C h + N ,(52)˙˙˙u 1 ≤ C C h + N + 1 .(53)
We first prove (50). Since ξ is optimal in C h ,
m ¨ ξ 2 ≤ m¨ ξ + ∇V ( ξ) 2 + ∇V ( ξ) 2 (13) ≤ mξ + ∇V (ξ) 2 + ∇V ( ξ) 2 ≤ m ξ 2 + ∇V (ξ) 2 + ∇V ( ξ) 2 (30) ≤ m ξ 2 + C ξ 2 + ξ 2 ≤ m ξ 2 + C ξ 2 + h 1/2 ξ ∞ .(54)
Therefore
ξ ∞ ≤ | ξ(0)| + h|˙ ξ(0)| + h 3/2 ¨ ξ 2 ≤ |q| + h m |p| + Ch 3/2 ξ 2 + ξ 2 + h 1/2 ξ ∞ .
If h 0 is small enough, then Ch 2 < 1/2, so that
ξ ∞ (46),(47) ≤ 2|q| + 2h m |p| + C |q − q ′ | + h|p − p ′ | + h 2 √ N .
Therefore ξ 2 2 ≤ h ξ 2 ∞ ≤ ChN, which is (50). Similar to (54) it also follows, since ξ is admissible for C h , that
C h = m 2 h ξ 2 2 ≤ m 2 h ¨ ξ 2 2 ≤ 2h m¨ ξ + ∇V ( ξ) 2 2 + 2h ∇V ( ξ) 2 2 (13),(30) ≤ 2 C h + Ch ξ 2 2 (50) ≤ 2 C h + Ch 2 N,
which implies (51). We now can prove part 1 of the Lemma. (39a) is a direct consequence of (17) and (29a). The estimate for p follows from (15) and (30) for C h , and from (10) and (51) for C h :
|p ′ − p| 2 ≤ C |p ′ − p + h∇V (q)| 2 + h 2 |∇V (q)| 2 ≤ C C h (q, p; q ′ , p ′ ) + h 2 N , |p ′ − p| 2 ≤ C h ≤ C( C h + h 2 N ).
Similarly,
|q ′ − q| 2 = h 2 m 2 m h (q ′ − q) − p + p ′ 2 + p ′ + p 2 2 ≤ 3h 2 m 2 m h (q ′ − q) − p + p ′ 2 2 + |p| 2 4 + |p ′ | 2 4 ≤ Ch 2 (C h + N ) ≤ Ch 2 ( C h + N ),(55)
and also |q ′ − q| 2 ≤ Ch 2 ( C h + N ).
Using the Poincaré inequality v − − v 2 ≤ Ch v ′ 2 , the estimate (52) then follows by
˙ ξ 2 2 ≤ 2 − ˙ ξ 2 2 + Ch 2 ¨ ξ 2 2 (51) ≤ 2 h |q − q ′ | 2 + Ch C h + h 2 N (39b) ≤ Ch C h + N .
To prove the final of the four intermediate estimates, (53), we define u = ξ − ξ; remark that
m 2˙˙˙u = −2m∇ 2 V ( ξ) ·¨ ξ − m∇ 3 V ( ξ) ·˙ ξ ·˙ ξ − ∇ 2 V ( ξ) · ∇V ( ξ).(56)
Note that u =u = 0 at t = 0, h, so that we have u 1 ≤ Ch 4 ˙˙˙u 1 and ü 1 ≤ Ch 2 ˙˙˙u 1 . We then calculate ˙˙˙u 1 (56),(29)
≤ C ¨ ξ 1 + ˙ ξ 2 2 + ξ 1 + h ≤ C ξ 1 + ˙ ξ 2 2 + ξ 1 + ü 1 + u 1 ≤ C ξ 1 + ˙ ξ 2 2 + ξ 1 + h 2 ˙˙˙u 1 + h 4 ˙˙˙u 1 .
Again, taking h 0 sufficiently small, we have C(h 2 + h 4 ) < 1/2, and therefore
˙˙˙u 1 ≤ C ξ 1 + ˙ ξ 2 2 + ξ 1 (46),(48),(52) ≤ C |q − q ′ | h + |p − p ′ | + h C h + hN + h √ N (39b) ≤ C C h + N + h C h + N + 1 ≤ C C h + N + 1 .
We now continue with parts 2, 3, and 4. The derivatives of C h can be calculated directly using the explicit expression (15). The derivatives of C h can be calculated as follows. Let η ∈ C 2 ([0, h]; R 2d ) satisfy η(0) = 0. Then
lim ε→0 4γβ −1 h I( ξ + εη) = 2h h 0 m¨ ξ + ∇V ( ξ) · mη + ∇ 2 V ( ξ) · η (t) dt = 2h h 0 N ( ξ) · η(t) dt + 2h mη m¨ ξ + ∇V ( ξ) − mη m˙˙˙ ξ + ∇ 2 V ( ξ) ·˙ ξ (h).
Note that N ( ξ) ≡ 0 by the stationarity (49) of ξ. This expression is equal to
∇ q ′ C h (q, p; q ′ , p ′ ) · η(h) + ∇ p ′ C h (q, p; q ′ , p ′ ) · mη(h),
which allows us to identify the two derivatives in terms of ξ. Setting u = ξ − ξ, we rewrite these in terms of u:
∇ q ′ C h (q, p; q ′ , p ′ ) = −2hm 2˙˙˙ ξ(h) − 2hm∇ 2 V ( ξ(h)) ·˙ ξ(h) = −2hm 2˙˙ξ (h) − 2hm∇ 2 V ( ξ(h)) ·˙ ξ(h) − 2hm 2 ˙˙˙ ξ(h) −˙˙ξ(h) (45) = 24m h m h (q ′ − q) − p ′ + p 2 − 2h∇ 2 V (q ′ ) · p ′ − 2hm 2˙˙u (h), ∇ p ′ C h (q, p; q ′ , p ′ ) = 2hm¨ ξ(h) + 2h∇V ( ξ(h)) = 2hmξ(h) + 2h∇V ( ξ(h)) + 2hm ¨ ξ(h) −ξ(h) (45) = 2(p ′ − p) − 12 m h (q ′ − q) − p ′ + p 2 + 2h∇V (q ′ ) + 2hmü(h).
Therefore (40) holds with
σ h = −2hm 2˙˙u (h) and τ h = 2hmü(h).
The estimates (41) then follow from (53) and the inequalities
ü ∞ ≤ h ˙˙u ∞ ≤ Ch ˙˙˙u 1 ,
which hold since u =u = 0 at t = 0, h.
The derivatives of C h are given by (43), where
r(q, q ′ ) := 2m ∇ 2 V (q ′ ) · (q ′ − q) − ∇V (q ′ ) + ∇V (q) .
The estimate (44) on r follows from (29d), (55), and the fact that by (29a), C h ≤ C h .
The Euler-Lagrange equation for the minimization problem
Let C * h be one of C h , C h , or C h , defined in (13), (15), and (17), with corresponding optimaltransport cost functional W * h . Let ρ ∈ P 2 (R 2d ) be given and let ρ be the unique solution of the minimization problem
min µ∈P2(R 2d ) 1 2γh W * h (ρ, µ) + A(µ).
We now establish the Euler-Lagrange equation for ρ. Following the now well-established route (see e.g. [JKO98, Hua00]), we first define a perturbation of ρ by a push-forward under an appropriate flow. Let $\phi, \eta \in C^\infty_0(\mathbb{R}^{2d},\mathbb{R}^d)$. We define the flows Φ, Ψ :
[0, ∞) × R 2d → R d such that ∂Ψ s ∂s = φ(Ψ s , Φ s ), ∂Φ s ∂s = η(Ψ s , Φ s ), Ψ 0 (q, p) = q, Φ 0 (q, p) = p.
Let ρ s (q, p) be the push forward of ρ(q, p) under the flow (Ψ s , Φ s ), i.e., for any ϕ ∈ C ∞ 0 (R 2d , R) we have
R 2d ϕ(q, p)ρ s (q, p)dqdp = R 2d ϕ(Ψ s (q, p), Φ s (q, p))ρ(q, p)dqdp.(57)
Obviously ρ 0 (q, p) = ρ(q, p), and an explicit calculation gives ∂ s ρ s s=0 = −div q ρφ − div p ρη in the sense of distributions.
By following the calculations in e.g. [Hua00] we then compute the stationarity condition on ρ,
0 = 1 2γh R 4d [∇ q ′ C * h (q, p; q ′ , p ′ ) · φ(q ′ , p ′ ) + ∇ p ′ C * h (q, p; q ′ , p ′ ) · η(q ′ , p ′ )] P * opt (dqdpdq ′ dp ′ ) + R 2d ρ(q, p)∇ p F (p) · η(q, p)dqdp − β −1 R 2d ρ(q, p) [div q φ(q, p) + div p η(q, p)] dqdp,(59)
where P * opt is optimal in W * h (ρ, ρ). For any ϕ ∈ C ∞ 0 (R 2d , R), we choose
φ(q ′ , p ′ ) = − γh 2 6m 2 ∇ q ′ ϕ(q ′ , p ′ ) + γh 2m ∇ p ′ ϕ(q ′ , p ′ ), η(q ′ , p ′ ) = − γh 2m ∇ q ′ ϕ(q ′ , p ′ ) + γ∇ p ′ ϕ(q ′ , p ′ ).
i.e.,
φ η = − γh 2 6m 2 I γh 2m I − γh 2m I γI ∇ϕ(q ′ , p ′ ).(60)
Now the specific form of the cost functional C * h (q, p; q ′ , p ′ ) comes into play. We calculate the gradient expression in (59) for each scheme in the next subsections.
The matrix in (60) splits into a symmetric and an antisymmetric part,
$$\begin{pmatrix} -\frac{\gamma h^2}{6m^2} I & \frac{\gamma h}{2m} I \\ -\frac{\gamma h}{2m} I & \gamma I \end{pmatrix} = \underbrace{\begin{pmatrix} -\frac{\gamma h^2}{6m^2} I & 0 \\ 0 & \gamma I \end{pmatrix}}_{A} \;-\; \frac{\gamma h}{2m}\,\underbrace{\begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}}_{B}.$$
Note that A is symmetric and B is antisymmetric: this mirrors the conservative-dissipative structure of the Kramers equation.
The top-left block in A, which would correspond to diffusion in the spatial variable q, is of order O(h 2 ), and therefore vanishes when h → 0. The other block, which corresponds to diffusion in the momentum variable p, is of order O(1) and remains. This explains how in the limit h → 0 only diffusion in the momentum variable remains.
Schemes 2a and 2b
Lemma 4.2. Let h > 0 and let {ρ h k } be the sequence of the minimizers either for problem (14) in Scheme 2a or for problem (16) in Scheme 2b. Let W * h be W h for Scheme 2a and W h for Scheme 2b, and let P h * k be optimal in W * h (ρ h k−1 , ρ h k ). Then, for all ϕ ∈ C ∞ c (R 2d ), there holds
0 = 1 h R 4d [(q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ )] P h * k (dqdpdq ′ dp ′ ) − 1 m R 2d p ′ · ∇ q ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ + R 2d ∇V (q ′ ) · ∇ p ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ + γ R 2d ∇F (p ′ ) · ∇ p ′ ϕ(q ′ , p ′ ) − β −1 ∆ p ′ ϕ(q ′ , p ′ ) ρ h k (q ′ , p ′ )dq ′ dp ′ + ω h k ,(61)
where
|ω h k | ≤ Ch W * h (ρ h k−1 , ρ h k ) + M 2 (ρ h k−1 ) + M 2 (ρ h k ) + 1 .
The second moment M 2 is defined in (31).
Proof. For Scheme 2b we combine (60) with (42) to yield
∇ q ′ C h (q, p; q ′ , p ′ ) · φ(q ′ , p ′ ) + ∇ p ′ C h (q, p; q ′ , p ′ ) · η(q ′ , p ′ ) = 2γ (q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ ) − h m p ′ · ∇ q ′ ϕ(q ′ , p ′ ) + 2γ∇V (q) · − h 2 2m ∇ q ′ ϕ(q ′ , p ′ ) + h∇ p ′ ϕ(q ′ , p ′ ) .(62)
Substituting (60) and (62) into the Euler-Lagrange equation (59), we obtain
0 = 1 h R 4d [(q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ )] P h k (dqdpdq ′ dp ′ ) − 1 m R 2d p ′ · ∇ q ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ + R 4d ∇V (q) · ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) + γ R 2d ∇F (p ′ ) · ∇ p ′ ϕ(q ′ , p ′ ) + β −1 h 2 6m 2 ∆ q ′ ϕ(q ′ , p ′ ) − β −1 ∆ p ′ ϕ(q ′ , p ′ ) ρ h k (q ′ , p ′ )dq ′ dp ′ − h 2m R 4d ∇V (q) + γ∇F (p ′ ) · ∇ q ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ).(63)
Therefore (61) holds with
|ω h k | = R 4d (∇V (q) − ∇V (q ′ )) · ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ )dq ′ dp ′ +β −1 h 2 6m 2 R 2d ∆ q ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ − h 2m R 4d ∇V (q) + γ∇F (p ′ ) · ∇ q ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) (29b),(29c) ≤ C R 4d |q − q ′ | + h(|q| + |p ′ | + 1) P h k (dqdpdq ′ dp ′ ) ≤ C R 4d 1 h |q − q ′ | 2 + h(|q| 2 + |p ′ | 2 + 1) P h k (dqdpdq ′ dp ′ ) (39) ≤ Ch W h (ρ h k−1 , ρ h k ) + M 2 (ρ h k−1 ) + M 2 (ρ h k ) + 1 .
This proves Lemma 4.2 for Scheme 2b.
For Scheme 2a we obtain an identity similar to (62),
∇ q ′ C h (q, p; q ′ , p ′ ) · φ(q ′ , p ′ ) + ∇ p ′ C h (q, p; q ′ , p ′ ) · η(q ′ , p ′ ) = 2γ (q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ ) − h m p ′ · ∇ q ′ ϕ(q ′ , p ′ ) + 2γ h∇V (q ′ ) + 1 2 τ h (q, p; q ′ , p ′ ) · − h 2m ∇ q ′ ϕ(q ′ , p ′ ) + ∇ p ′ ϕ(q ′ , p ′ ) + 2γ −h∇ 2 V (q ′ ) · p ′ + 1 2 σ h (q, p ′ ; q ′ , p ′ ) · − h 2 6m 2 ∇ q ′ ϕ(q ′ , p ′ ) + h 2m ∇ p ′ ϕ(q ′ , p ′ ) .
This leads to the same equation as (61), but now with error term
ω h k = − h 2m R 4d ∇V (q ′ ) · ∇ q ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) + R 4d ∇ 2 V (q ′ ) · p ′ − 1 2h σ h (q, p; q ′ , p ′ ) · h 2 6m 2 ∇ q ′ ϕ(q ′ , p ′ ) − h 2m ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) + 1 2h R 4d τ h (q, p; q ′ , p ′ ) − h 2m ∇ q ′ ϕ(q ′ , p ′ ) + ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) − γh 2m R 4d ∇F (p ′ ) · ∇ q ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ + β −1 h 2 6m 2 R 2d ∆ q ′ ϕ(q ′ , p ′ )ρ h k (q ′ , p ′ )dq ′ dp ′ .
We estimate this error as follows, using the notation of the proof of Lemma 3.1:
|ω h k | ≤ C R 4d h(1 + |q ′ |) + h|p ′ | + |σ h | + 1 h |τ h | + h(1 + |p ′ |) + h 2 P h k ≤ C R 4d h(1 + |q ′ | 2 + |p ′ | 2 ) + h C h + N + 1 P h k ≤ Ch R 4d C h + N + 1 P h k ≤ Ch W h (ρ h k−1 , ρ k k ) + M 2 (ρ h k−1 ) + M 2 (ρ h k ) + 1 .
This concludes the proof of Lemma 4.2.
Scheme 2c
Lemma 4.3. Let h > 0 and let {µ h k } and {ρ h k } be the sequences constructed in Scheme 2c. Let P h k (dqdpdq ′ dp ′ ) be the optimal plan in the definition of
W h (µ h k , ρ h k ). Then, for all ϕ ∈ C ∞ c (R 2d ), there holds 0 = 1 h R 4d (q ′ − q + p m h) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p − h∇ q V (q)) · ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) − 1 m R 2d p · ∇ q ϕ(q, p)ρ h k (dqdp) + R 2d ∇V (q) · ∇ p ϕ(q, p)ρ h k (q, p)dqdp + γ R 2d ∇F (p) · ∇ p ϕ(q, p) − β −1 ∆ p ϕ(q, p) ρ h k (q, p)dqdp + ζ h k ,(64)
where
|ζ h k | ≤ Ch hW h (µ h k , ρ h k ) + M 2 (µ h k ) + M 2 (ρ h k ) + 1].
Proof. From (60) and (43) we obtain
∇ q ′ C h (q, p; q ′ , p ′ ) · φ(q ′ , p ′ ) + ∇ p ′ C h (q, p; q ′ , p ′ ) · η(q ′ , p ′ ) = 2γ (q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ ) − h m (p ′ − p) · ∇ q ′ ϕ(q ′ , p ′ ) + γ 4m(∇V (q ′ ) − ∇V (q)) + r(q, q ′ ) · − h 2 6m 2 ∇ q ′ ϕ(q ′ , p ′ ) + h 2m ∇ p ′ ϕ(q ′ , p ′ ) . (65)
Substituting (60) and (65) into the Euler-Lagrange equation (59), we obtain
0 = 1 h R 4d [(q ′ − q) · ∇ q ′ ϕ(q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(q ′ , p ′ )] P h k (dqdpdq ′ dp ′ ) − 1 m R 4d (p ′ − p) · ∇ q ′ ϕ(q ′ , p ′ )P h k (dqdpdq ′ dp ′ ) + R 4d (∇V (q ′ ) − ∇V (q)) · ∇ p ′ ϕ(q ′ , p ′ )P h k (dqdpdq ′ dp ′ ) + γ R 2d ∇F (p) · ∇ p ϕ(q, p) − β −1 ∆ p ϕ(q, p) ρ h k (q, p)dqdp + ζ h k ,(66)
where we estimate the remainder, again using the notation of the proof of Lemma 3.1,
|ζ h k | = − h 3m R 4d (∇V (q ′ ) − ∇V (q)) · ∇ q ′ ϕ(q ′ , p ′ )P h k (dqdpdq ′ dp ′ ) + 1 2 R 4d r(q, q ′ ) · − h 6m 2 ∇ q ′ ϕ(q ′ , p ′ ) + 1 2m ∇ p ′ ϕ(q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) − γh 2m R 2d ρ h k (q, p)∇F (p) · ∇ q ϕ(q, p)dqdp + β −1 γh 2 6m 2 R 2d ρ h k (q, p)∆ q ϕ(q, p)dqdp (29),(44) ≤ C R 4d h|q ′ − q| + h 2 (C h + N ) + h(1 + |p ′ |) + h 2 P h k (dqdpdq ′ dp ′ ) ≤ C R 4d h(|q| 2 + |q ′ | 2 ) + h 2 (C h + N ) + h(1 + |p ′ | 2 ) P h k (dqdpdq ′ dp ′ ) ≤ Ch hW h (µ h k , ρ h k ) + M 2 (µ h k ) + M 2 (ρ h k ) + 1].
This concludes the proof of Lemma 4.3.
A priori estimate: Boundedness of the second moment and entropy
This section includes some technical lemmas that are needed in order to prove the convergence result of Section 6.
Lemma 5.1. There exists $C>0$ such that for every $n\ge 1$, the iterates of Scheme 2a and Scheme 2b satisfy
$$\sum_{k=1}^n W^*_h(\rho^h_{k-1},\rho^h_k) \le 2\gamma h\big(A(\rho_0) - A(\rho^h_n)\big) + Ch^2\sum_{k=0}^n M_2(\rho^h_k) + Cnh^2, \tag{67}$$
and the iterates of Scheme 2c satisfy
$$\sum_{k=1}^n \bar W_h(\mu^h_k,\rho^h_k) \le 2\gamma h\big(A(\rho_0) - A(\rho^h_n)\big) + Ch^2\sum_{k=0}^n M_2(\rho^h_k) + Cnh^2.$$
Proof. We give the details for Scheme 2a and then comment on the differences for the other schemes. We first define the operator s h : R 2d → R 2d as the solution operator over time h for the Hamiltonian system
Q ′ = P m , P ′ = −∇V (Q),(68)
that is, s h (q, p) is the solution at time h given the initial datum (q, p) at time zero. The operator s h is bijective and volume-preserving. For any fixed k ≥ 1, ρ h k minimizes the functional (2hγ) −1 W h (ρ h k−1 , ρ)+A(ρ) over ρ ∈ P 2 (R 2d ), i.e.,
W h (ρ h k−1 , ρ h k ) + 2hγA(ρ h k ) ≤ W h (ρ h k−1 , ρ) + 2hγA(ρ),(69)
for every ρ ∈ P_2(R^{2d}). In particular, by taking ρ = (s_h^{−1})_♯ ρ^h_{k−1} =: ρ^h_*, for which W_h(ρ^h_{k−1}, ρ^h_*) = 0, it follows that
W_h(ρ^h_{k−1}, ρ^h_k) ≤ 2γh (A(ρ^h_*) − A(ρ^h_k)) = 2γh (F(ρ^h_*) − F(ρ^h_k)) + 2γh (S(ρ^h_*) − S(ρ^h_k)).   (70)
We now estimate each term on the right hand side. Write (q, p) = s h (q, p). Using equation (68), we readily estimate that the solution (Q(t), P (t)) starting at (q, p) and ending at (q, p) satisfies Q ∞ ≤ C (|q| + h|p|), and therefore
h 0 ∇V (Q(t))dt ≤ h sup t∈[0,h] |∇V (Q(t))| ≤ h Q ∞ ≤ Ch (|q| + h|p|) , so that F (p) = F p + h 0 ∇V (Q(t))dt (28),(29c) ≤ F (p) + C(|p| + 1) h 0 ∇V (Q(t))dt + C h 0 ∇V (Q(t))dt 2 ≤ F (p) + Ch(|p| + 1) (|q| + h|p|) + Ch 2 (|q| + h|p|) 2 ≤ F (p) + Ch N (q, p) + 1 . Therefore F (ρ h * ) = R 2d F (p)ρ h * (q, p)dqdp = R 2d F (p)ρ h k−1 (q, p)dqdp ≤ R 2d (F (p) + ChN (q, p) + Ch)ρ h k−1 (q, p)dqdp ≤ F (ρ h k−1 ) + ChM 2 (ρ h k−1 ) + Ch. (71)
For the entropy term, we have, since s h is volume-preserving and bijective,
S(ρ h * ) = β −1 R 2d ρ h * (q, p) log ρ h * (q, p)dqdp = β −1 R 2d ρ h k−1 (s h (q, p)) log ρ h k−1 (s h (q, p))dqdp = S(ρ h k−1 ).(72)
From (70), (71), and (72), we obtain
W h (ρ h k−1 , ρ h k ) ≤ 2γh(A(ρ h k−1 ) − A(ρ h k )) + Ch 2 M 2 (ρ h k−1 ) + Ch 2 .
Summing over k = 1 to n we obtain (67).
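As an illustration of the volume-preserving property of s_h used in (72), consider the quadratic potential V(q) = ½ mω²|q|² (chosen here only for concreteness; it is not used elsewhere in the paper). Then (68) integrates explicitly to
\[
s_h(q,p)=\Bigl(q\cos(\omega h)+\tfrac{p}{m\omega}\sin(\omega h),\; -\,m\omega\, q\sin(\omega h)+p\cos(\omega h)\Bigr),
\]
whose Jacobian determinant equals cos²(ωh) + sin²(ωh) = 1. For a general potential V, volume preservation follows from Liouville's theorem, since the Hamiltonian vector field (p/m, −∇V(q)) is divergence free.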
For Scheme 2b, equation (68) is only slightly modified, in that the acceleration becomes constant:
Q′ = P/m,   P′ = −∇V(q).
Similar estimates lead to the same result. For Scheme 2c, the proof is again similar, by taking ρ^h_* := µ^h_k and estimating the difference A(µ^h_k) − A(ρ^h_{k−1}) as is done above.

Lemma 5.2. There exist positive constants T_0, h_0, and C, independent of the initial data, such that for any 0 < h ≤ h_0, the solutions {ρ^h_k}_{k≥1} for Scheme 2a, Scheme 2b, or Scheme 2c, satisfy
M 2 (ρ h k ) ≤ C M 2 (ρ 0 ) + 1 and |S(ρ h k )| ≤ C S(ρ 0 ) + M 2 (ρ 0 ) + 1 for any k ≤ K 0 ,(73)
where K 0 = ⌈T 0 /h⌉.
|p| 2 ρ h i (q, p)dqdp 1 2 = R 4d |p ′ | 2 P h i (dqdpdq ′ dp ′ ) 1 2 ≤ R 4d |p ′ − p| 2 P h i (dqdpdq ′ dp ′ ) 1 2 + R 4d |p| 2 P h i (dqdpdq ′ dp ′ ) 1 2 By (39c), we estimate R 4d |p ′ − p| 2 P h i (dqdpdq ′ dp ′ ) 1 2 ≤ C W h (ρ h i−1 , ρ h i ) 1 2 + Ch M 2 (ρ h i ) 1 2 + M 2 (ρ h i−1 ) 1 2 ,
and hence,
R 2d |p| 2 ρ h i (q, p)dqdp 1 2 ≤ R 2d |p| 2 ρ h i−1 (q, p)dqdp 1 2 +C W h (ρ h i−1 , ρ h i ) 1 2 +Ch M 2 (ρ h i ) 1 2 +M 2 (ρ h i−1 ) 1 2 .
Summing over i from 1 to k we obtain
R 2d |p| 2 ρ h k (q, p)dqdp 1 2 ≤ C k i=1 W h (ρ h i−1 , ρ h i ) 1 2 + Ch k i=1 M 2 (ρ k i−1 ) 1 2 + R 2d |p| 2 ρ 0 (q, p)dqdp 1 2 ≤ C k i=1 W h (µ h i , ρ h i ) 1 2 + Ch k i=1 M 2 (ρ k i ) 1 2 + CM 2 (ρ 0 ) 1 2 . Therefore R 2d |p| 2 ρ h k (q, p)dqdp ≤ C k i=1 W h (µ h i , ρ h i ) 1 2 2 + Ch 2 k i=1 M 2 (ρ h i ) 1 2 2 + CM 2 (ρ 0 ) ≤ Ck k i=1 W h (µ h i , ρ h i ) + Ckh 2 k i=1 M 2 (ρ h i ) + CM 2 (ρ 0 ).(74)
Similarly, we use (55) and the fact that
q ′ = h 2m √ 3 2 √ 3 m h (q ′ − q) − p + p ′ 2 + h 2m (p ′ + p) + q to derive that R 2d |q| 2 ρ h i (q, p)dqdp 1 2 = R 4d |q ′ | 2 P h i (dqdpdq ′ dp ′ ) 1 2 ≤ h 2m √ 3 R 4d 12 m h (q ′ − q) − p ′ + p 2 2 P h i (dqdpdq ′ dp ′ ) 1 2 + h 2m R 4d |p ′ | 2 P h i (dqdpdq ′ dp ′ ) 1 2 + h 2m R 4d |p| 2 P h i (dqdpdq ′ dp ′ ) 1 2 + R 2d |q| 2 ρ h i−1 (q, p)dqdp 1 2 ≤ Ch W h (ρ h i−1 , ρ h i ) 1 2 + Ch M 2 (ρ h i−1 ) 1 2 + M 2 (ρ h i ) 1 2 + R 2d |q| 2 ρ h i−1 (q, p)dqdp 1 2 .
Summing over i from 1 to k, we obtain R 2d
|q| 2 ρ h k (q, p)dqdp 1 2 ≤ Ch k i=1 W h (ρ h i−1 , ρ h i ) 1 2 + Ch k i=1 M 2 (ρ h i ) 1 2 + CM 2 (ρ 0 ) 1 2
and therefore,
R 2d γ 2 |q| 2 ρ h k (q, p)dqdp ≤ Ckh 2 k i=1 W h (ρ h i−1 , ρ h i ) + Ckh 2 k i=1 M 2 (ρ h i ) + CM 2 (ρ 0 ).(75)
From (74) and (75) it holds that
M 2 (ρ h k ) = R 2d (|γq| 2 + |p| 2 )ρ h k (q, p)dqdp ≤ Ck k i=1 W h (ρ h i−1 , ρ h i ) + Ckh 2 k i=1 M 2 (ρ h i ) + CM 2 (ρ 0 ).
Applying Lemma 5.1 with n = k, it follows that
M 2 (ρ h k ) ≤ Ck h(A(ρ 0 ) − A(ρ h k )) + Ch 2 k i=0 M 2 (ρ h i ) + Ckh 2 + Ckh 2 k i=1 M 2 (ρ h i ) + CM 2 (ρ 0 ) ≤ −CkhS(ρ h k ) + Ckh 2 k i=1 M 2 (ρ k i ) + CM 2 (ρ 0 ) + CkhA(ρ 0 ) + Ck 2 h 2 .(76)
By inequality (29) in [JKO98], S(ρ h k ) is bounded from below by M 2 (ρ h k ),
S(ρ h k ) ≥ −C − CM 2 (ρ h k ).(77)
Substituting (77) into (76) we have
M 2 (ρ h k ) ≤ C 2 1 kh 2 k i=1 M 2 (ρ k i ) + C 1 khM 2 (ρ h k ) + C 1 (k 2 h 2 + 1) + C 1 M 2 (ρ 0 ),(78)
where we fix the constant C 1 , and use it to set the time horizon T 0 :
T 0 = 1 4C 1 , K 0 = T 0 h .(79)
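For later use, note the elementary consequence of this choice of T_0:
\[
4C_1^2T_0^2 \;=\; 4C_1^2\cdot\frac{1}{16\,C_1^2} \;=\; \frac14,
\]
so that, once K_0 h ≤ 2T_0 is arranged below, C_1^2K_0^2h^2 ≤ 4C_1^2T_0^2 = 1/4; this is exactly the bound used in (81).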
We emphasize that C 1 , and hence T 0 , is independent of the initial data. We now choose h 0 ≤ T 0 so small that for all h ≤ h 0 we have K 0 h ≤ 2T 0 and C 1 K 0 h ≤ 1 2 . Then it follows from (78) that, for any h ≤ h 0 , k ≤ K 0 ,
3 4 M 2 (ρ h k ) ≤ C 2 1 kh 2 k i=1 M 2 (ρ h i ) + C 1 (4T 2 0 + 1) + C 1 M 2 (ρ 0 ).(80)
Hence
3 4 K0 i=1 M 2 (ρ h i ) ≤ C 2 1 K 2 0 h 2 K0 i=1 M 2 (ρ h i ) + K 0 (T 0 + C 1 ) + C 1 M 2 (ρ 0 ) ≤ 4C 2 1 T 2 0 K0 i=1 M 2 (ρ h i ) + K 0 (T 0 + C 1 ) + C 1 M 2 (ρ 0 ) (81) ≤ 1 4 K0 i=1 M 2 (ρ h i ) + K 0 (T 0 + C 1 ) + C 1 M 2 (ρ 0 ). Consequently, K0 i=1 M 2 (ρ h i ) ≤ 2K 0 (T 0 + C 1 ) + 2C 1 M 2 (ρ 0 ).(82)
Substituting (82) into (80), we obtain
M_2(ρ^h_k) ≤ C (M_2(ρ_0) + 1).   (83)
This finishes the proof of the boundedness of M_2(ρ^h_k).
We now show that the entropy S(ρ h k ) is also bounded. From (77) and (83), it follows that S(ρ h k ) is bounded from below. It remains to find an upper bound. Applying Lemma 5.1 for n = k, and noting that
F (ρ h k ) ≥ 0, W h (ρ h i−1 , ρ h i ) ≥ 0 for all i, we have S(ρ h k ) ≤ A(ρ 0 ) + Ch k i=0 M 2 (ρ h i ) + Ckh ≤ Ch k i=1 M 2 (ρ h i ) + C S(ρ 0 ) + M 2 (ρ 0 ) + 2CT 0 . (84)
By combining with (82) we obtain the upper bound for the entropy. This completes the proof of the lemma.
The following lemma extends Lemma 5.2 to any T > 0. The proof is the same as Lemma 5.3 in [Hua00], and we omit it.
Lemma 5.3. Let {ρ h k } k≥1 be the sequence of the minimizers of Scheme 2a or Scheme 2b for fixed h > 0. For any T > 0, there exists a constant C > 0 depending on T and on the initial data such that
M 2 (ρ h k ) ≤ C,(85)k i=1 W * h (ρ h i−1 , ρ h i ) ≤ Ch,(86)R 2d max{ρ h k log ρ h k , 0} dqdp ≤ C,(87)
for any h ≤ h 0 and k ≤ K h , where
K h = T h .
For Scheme 2c the same inequalities hold, with (86) replaced by
k i=1 W h (µ h i , ρ h i ) ≤ Ch.
6 Proof of Theorem 2.3
In this section we bring all the parts together to prove Theorem 2.3. The structure of this proof is the same as that of e.g. [JKO98,Hua00], and we refer to those references for the parts that are very similar. The main difference lies in the convergence of the discrete Euler-Lagrange equations for each of the cases to the weak formulation of the Kramers equation as h → 0.
Throughout we fix T > 0 and for each h > 0 we set
K h := ⌈T /h⌉.
The proof of the space-time weak compactness (36) is the same for the three schemes. Let (ρ h k ) k be the sequence of minimizers constructed by any of the three schemes, and let t → ρ h (t) be the piecewise-constant interpolation (35). By Lemma 5.3 we have
M 2 (ρ h (t)) + R 2d max{ρ h (t) log ρ h (t), 0} dqdp ≤ C, for all 0 ≤ t ≤ T.(88)
Since the function z → max{z log z, 0} has super-linear growth, (88) guarantees that there exists a subsequence, denoted again by ρ h , and a function ρ ∈ L 1 ((0, T ) × R 2d ) such that
ρ h → ρ weakly in L 1 ((0, T ) × R 2d ).(89)
This proves (36).
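For completeness, we spell out the compactness criterion being used here: by the de la Vallée Poussin and Dunford–Pettis theorems, a uniform superlinear bound of the form
\[
\sup_{0\le t\le T}\int_{\mathbb{R}^{2d}}\max\{\rho^h(t)\log\rho^h(t),\,0\}\,dq\,dp\le C,
\]
combined with the uniform second-moment bound in (88) (which prevents mass from escaping to infinity), implies that the family {ρ^h} is uniformly integrable, and hence relatively compact in the weak topology of L¹((0,T) × R^{2d}).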
The proof of the stronger convergence (37) and of the continuity (38) at t = 0 follows the same lines as in [JKO98,Hua00]. The main estimate is the 'equi-near-continuity' estimate
d ρ h (t 1 ), ρ h (t 2 ) 2 ≤ C(|t 2 − t 1 | + h),
where d(ρ 0 , ρ 1 ) is the metric generated by the quadratic cost |q − q ′ | 2 + |p − p ′ | 2 . This estimate follows from the inequality (see (39)) |q − q ′ | 2 + |p − p ′ | 2 ≤ C C * h (q, p; q ′ , p ′ ) + h 2 N (q, p) + h 2 N (q ′ , p ′ ) , and the estimates (88) and (86); see [Hua00, Theorem 5.2]. The only remaining statement of Theorem 2.3 is the characterization of the limit in terms of the solution of the Kramers equation, and we now describe this.
Let ρ h be generated by one of the three schemes. We now prove that the limit ρ satisfies the weak version of the Kramers equation (34). Fix T > 0 and ϕ ∈ C ∞ c ((−∞, T ) × R 2d ); all constants C below depend on the parameters of the problem, on the initial datum ρ 0 , and on ϕ, but are independent of k and of h. We first discuss Schemes 2a and 2b.
Let P h * k ∈ Γ(ρ h k−1 , ρ h k ) be the optimal plan for W * h (ρ h k−1 , ρ h k ), where the star indicates the quantities associated with either Scheme 2a or Scheme 2b. For any 0 < t < T , we have
R 2d ρ h k (q, p) − ρ h k−1 (q, p) ϕ(t, q, p)dqdp = R 2d ρ h k (q ′ , p ′ )ϕ(t, q ′ , p ′ )dq ′ dp ′ − R 2d
ρ h k−1 (q, p)ϕ(t, q, p)dqdp = R 4d ϕ(t, q ′ , p ′ ) − ϕ(t, q, p) P h * k (dqdpdq ′ dp ′ ) = R 4d
(q ′ − q) · ∇ q ′ ϕ(t, q ′ , p ′ ) + (p ′ − p) · ∇ p ′ ϕ(t, q ′ , p ′ ) P h * k (dqdpdq ′ dp ′ ) + ε k ,
where |ε k | ≤ C R 4d |q ′ − q| 2 + |p ′ − p| 2 P h * k (dqdpdq ′ dp ′ )
(39) ≤ CW * h (ρ h k−1 , ρ h k ) + Ch 2 M 2 (ρ h k−1 ) + M 2 (ρ h k ) (88) ≤ CW * h (ρ h k−1 , ρ h k ) + Ch 2 .(91)
By combining (90) with (61) we find R 2d ρ h k (t, q, p) − ρ h k−1 (q, p) h ϕ(t, q, p)dqdp = R 2d p m · ∇ q ϕ(t, q, p) − (∇V (q) + γ∇F (p)) · ∇ p ϕ(t, q, p) + γβ −1 ∆ p ϕ(t, q, p) ρ h k (q, p)dqdp
+ θ k (t),(92)
where |θ k (t)| ≤ |ε k | h + Ch W * h (ρ h k−1 , ρ h k ) + M 2 (ρ h k−1 ) + M 2 (ρ h k ) + 1
(88),(91) ≤ C h W * h (ρ h k−1 , ρ h k ) + Ch.(93)
Note that θ k depends on t through the t-dependence of ϕ. Next, from (92), for k ≥ 1 we have
kh (k−1)h R 2d ρ h k (q, p) − ρ h k−1 (q, p) h ϕ(t, q, p)dqdpdt = kh (k−1)h R 2d
p m · ∇ q ϕ(t, q, p) − (∇V (q) + γ∇F (p)) · ∇ p ϕ(t, q, p) + γβ −1 ∆ p ϕ(t, q, p) ρ h k (q, p)dqdpdt
+ kh (k−1)h θ k (t)dt = kh (k−1)h R 2d
p m · ∇ q ϕ(t, q, p) − (∇V (q) + γ∇F (p)) · ∇ p ϕ(t, q, p) + γβ −1 ∆ p ϕ(t, q, p) ρ h (t, q, p)dqdpdt
+ kh (k−1)h θ k (t)dt.
Summing from k = 1 to K h we obtain
K h k=1 kh (k−1)h R 2d ρ h k (q, p) − ρ h k−1 (q, p) h ϕ(t, q, p)dqdpdt = T 0 R 2d
p m · ∇ q ϕ(t, q, p) − (∇V (q) + γ∇F (p)) · ∇ p ϕ(t, q, p) + γβ −1 ∆ p ϕ(t, q, p) ρ h (t, q, p)dqdpdt
+ R h ,(94)
where
R h = K h k=1 kh (k−1)h θ k (t)dt.(95)
By a discrete integration by parts, we can rewrite the left hand side of (94) as
− h 0 R 2d ρ 0 (q, p) ϕ(t, q, p) h dqdpdt + T 0 R 2d
ρ h (t, q, p) ϕ(t, q, p) − ϕ(t + h, q, p) h dqdpdt.
From (94) and (96) we obtain
∫_0^T ∫_{R^{2d}} ρ^h(t, q, p) [ϕ(t, q, p) − ϕ(t + h, q, p)]/h dq dp dt
= ∫_0^T ∫_{R^{2d}} [ (p/m)·∇_q ϕ(t, q, p) − (∇V(q) + γ∇F(p))·∇_p ϕ(t, q, p) + γβ^{−1} ∆_p ϕ(t, q, p) ] ρ^h(t, q, p) dq dp dt
+ ∫_0^h ∫_{R^{2d}} ρ_0(q, p) ϕ(t, q, p)/h dq dp dt + R_h.   (97)
Now R_h → 0 as h → 0, since
|R_h| ≤ ∑_{k=1}^{K_h} ∫_{(k−1)h}^{kh} |θ_k(t)| dt   (by (95))
≤ C ∑_{k=1}^{K_h} ∫_{(k−1)h}^{kh} [ (1/h) W*_h(ρ^h_{k−1}, ρ^h_k) + h ] dt   (by (93))
= C ∑_{k=1}^{K_h} ( W*_h(ρ^h_{k−1}, ρ^h_k) + Ch² ) ≤ Ch   (by (86)).
Taking the limit h → 0 in (97) yields equation (34).
For Scheme 2c, only (90) is different:
∫_{R^{2d}} [ρ^h_k(q, p) − ρ^h_{k−1}(q, p)] ϕ(t, q, p) dq dp
= ∫_{R^{2d}} ρ^h_k(q′, p′) ϕ(t, q′, p′) dq′ dp′ − ∫_{R^{2d}} ρ^h_{k−1}(q, p) ϕ(t, q, p) dq dp
= ∫_{R^{2d}} ρ^h_k(q′, p′) ϕ(t, q′, p′) dq′ dp′ − ∫_{R^{2d}} µ^h_k(q, p) ϕ(t, σ_h(q, p)) dq dp
= R 4d ϕ(t, q ′ , p ′ ) − ϕ t, q − p m h, p + ∇V (q)h P h k (dqdpdq ′ dp ′ ) = R 4d (q ′ − q + p m h) · ∇ q ′ ϕ(t, q ′ , p ′ ) + (p ′ − p − ∇V (q)h) · ∇ p ′ ϕ(t, q ′ , p ′ ) P h k (dqdpdq ′ dp ′ ) + ε k , where |ε k | ≤ C R 4d γ 2 q ′ − q + p m h 2 + |p ′ − p − ∇V (q)h| 2 P h k (dqdpdq ′ dp ′ )
with the constant C depending only on ϕ. Since |p ′ − p| 2 , |q ′ − q| 2 ≤ CC h (q, p; q ′ , p ′ ) and |∇V (q)| 2 ≤ C |q| 2 ,
γ 2 q ′ − q + p m h 2 + |p ′ − p − h∇V (q)| 2 ≤ 2 γ 2 |q − q ′ | 2 + γ 2 h 2 m 2 |p| 2 + |p − p ′ | 2 + h 2 |∇V (q)| 2
≤ CC h (q, p; q ′ , p ′ ) + Ch 2 N (q, p).
Therefore
|ε k | ≤ C R 4d
C h (q, p; q ′ , p ′ ) + h 2 N (q, p) + h 2 P h k (dqdpdq ′ dp ′ ) = CW h (µ h k , ρ h k ) + CM 2 (µ h k )h 2 + Ch 2 ≤ CW h (µ h k , ρ h k ) + Ch 2 .
The rest of the proof is the same.
For any h > 0 sufficiently small, let ρ^h_k be the sequence of the solutions of any of the three Schemes 2a-c. For any t ≥ 0, define the piecewise-constant time interpolation ρ^h(t) := ρ^h_k for t ∈ ((k − 1)h, kh], with ρ^h(0) := ρ_0.

In order to do this, the variable ρ needs to be supplemented with an additional energy variable, that compensates for the gain and loss in the energy H as a result of the dissipative effects.

Remark 4.1. The structure of the choice (60) can be understood in terms of the conservative-dissipative nature of the Kramers equation. The matrix in front of ∇ϕ(q′, p′) in (60) is of the form
Acknowledgement
The research of the paper has received funding from the ITN "FIRST" of the Seventh Framework Programme of the European Community (grant agreement number 238702).
From a large-deviations principle to the Wasserstein gradient flow: a new micro-macro passage. S Adams, N Dirr, M A Peletier, J Zimmer, Communications in Mathematical Physics. 307S. Adams, N. Dirr, M. A. Peletier, and J. Zimmer. From a large-deviations principle to the Wasserstein gradient flow: a new micro-macro passage. Communications in Mathematical Physics, 307:791-815, 2011.
Hamiltonian ODEs in the Wasserstein space of probability measures. L Ambrosio, W Gangbo, Comm. Pure Appl. Math. 611L. Ambrosio and W. Gangbo. Hamiltonian ODEs in the Wasserstein space of proba- bility measures. Comm. Pure Appl. Math., 61(1):18-53, 2008.
Gradient flows in metric spaces and in the space of probability measures. L Ambrosio, N Gigli, G Savaré, Lectures in Mathematics. ETH Zürich. Birkhauser. 2nd editionL. Ambrosio, N. Gigli, and G. Savaré. Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics. ETH Zürich. Birkhauser, Basel, 2nd edition, 2008.
Passing to the limit in a Wasserstein gradient flow: From diffusion to reaction. ] S + 12, A Arnrich, M Mielke, G Peletier, M Savaré, Veneroni, Calculus of Variations and Partial Differential Equations. 44+ 12] S. Arnrich, A. Mielke, M. Peletier, G. Savaré, and M. Veneroni. Passing to the limit in a Wasserstein gradient flow: From diffusion to reaction. Calculus of Variations and Partial Differential Equations, 44:419-454, 2012.
A brief account of microscopical observations made in the months of June, July and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. R Brown. Privately circulated in 1827. Reprinted in the Edinburgh New Philosophical Journal (pp. 358-371, July-September, 1828).
Generalized thermodynamics and fokker-planck equations: Applications to stellar dynamics and two-dimensional turbulence. P H Chavanis, Phys. Rev. E. 6836108P. H. Chavanis. Generalized thermodynamics and fokker-planck equations: Applica- tions to stellar dynamics and two-dimensional turbulence. Phys. Rev. E, 68:036108, Sep 2003.
Chapman-Enskog derivation of the generalized smoluchowski equation. P H Chavanis, P Laurençot, M Lemou, Physica A: Statistical Mechanics and its Applications. 341P. H. Chavanis, P. Laurençot, and M. Lemou. Chapman-Enskog derivation of the gen- eralized smoluchowski equation. Physica A: Statistical Mechanics and its Applications, 341:145-164, October 2004.
Kinetic equilibration rates for granular media and related equations: entropy dissipation and mass transportation estimates. J A Carrillo, R J Mccann, C Villani, Rev. Mat. Iberoamericana. 193J. A. Carrillo, R. J. McCann, and C. Villani. Kinetic equilibration rates for granular media and related equations: entropy dissipation and mass transportation estimates. Rev. Mat. Iberoamericana, 19(3):971-1018, 2003.
Statistical mechanics of two dimensional vortices and collisionless stellar systems. P H Chavanis, J Sommeria, R Robert, The Astrophysical Journal. 471P. H. Chavanis, J. Sommeria, and R. Robert. Statistical mechanics of two dimensional vortices and collisionless stellar systems. The Astrophysical Journal, 471:385-399, 1996.
Wasserstein gradient flows from large deviations of thermodynamic limits (submitted). M H Duong, V Laschos, D R M Renger, M. H. Duong, V. Laschos, and D.R.M. Renger. Wasserstein gradient flows from large deviations of thermodynamic limits (submitted). http://arxiv.org/abs/1203.0676, 2012.
Upscaling from particle models to entropic gradient flows. To appear in. N Dirr, V Laschos, J Zimmer, J. Math. Phys. N. Dirr, V. Laschos, and J. Zimmer. Upscaling from particle models to entropic gradient flows. To appear in J. Math. Phys, 2011.
Density estimates for a random noise propagating through a chain of differential equations. F Delarue, S Menozzi, J. Funct. Anal. 2596F. Delarue and S. Menozzi. Density estimates for a random noise propagating through a chain of differential equations. J. Funct. Anal., 259(6):1577-1630, 2010.
A gradient flow scheme for nonlinear fourth order equations. B Düring, D Matthes, J Milišić, Discrete Contin. Dyn. Syst. Ser. B. 143B. Düring, D. Matthes, and J. Milišić. A gradient flow scheme for nonlinear fourth order equations. Discrete Contin. Dyn. Syst. Ser. B, 14(3):935-959, 2010.
Large deviations techniques and applications, volume 38 of Stochastic modelling and applied probability. A Dembo, O Zeitouni, SpringerNew York, NY, USA2nd editionA. Dembo and O. Zeitouni. Large deviations techniques and applications, volume 38 of Stochastic modelling and applied probability. Springer, New York, NY, USA, 2nd edition, 1987.
Optimal transport for the system of isentropic Euler equations. W Gangbo, M Westdickenberg, Comm. Partial Differential Equations. 347-9W. Gangbo and M. Westdickenberg. Optimal transport for the system of isentropic Euler equations. Comm. Partial Differential Equations, 34(7-9):1041-1073, 2009.
A variational principle for the Kramers equation with unbounded external forces. C Huang, J. Math. Anal. Appl. 2501C. Huang. A variational principle for the Kramers equation with unbounded external forces. J. Math. Anal. Appl., 250(1):333-367, 2000.
A variational principle for a class of ultraparabolic equations. C Huang, C. Huang. A variational principle for a class of ultraparabolic equations. 2011.
Free energy and the Fokker-Planck equation. R Jordan, D Kinderlehrer, F Otto, Landscape paradigms in physics and biology. Los Alamos, NM107R. Jordan, D. Kinderlehrer, and F. Otto. Free energy and the Fokker-Planck equation. Phys. D, 107(2-4):265-271, 1997. Landscape paradigms in physics and biology (Los Alamos, NM, 1996).
The variational formulation of the Fokker-Planck equation. R Jordan, D Kinderlehrer, F Otto, SIAM Journal on Mathematical Analysis. 291R. Jordan, D. Kinderlehrer, and F. Otto. The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1-17, 1998.
Brownian motion in a field of force and the diffusion model of chemical reactions. H A Kramers, Physica. 7H. A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7:284-304, 1940.
Evolution of rate-independent systems. A Mielke, Evolutionary equations. AmsterdamElsevier/North-HollandIIA. Mielke. Evolution of rate-independent systems. In Evolutionary equations. Vol. II, Handb. Differ. Equ., pages 461-559. Elsevier/North-Holland, Amsterdam, 2005.
A variational formulation of rate-independent phase transformations using an extremum principle. A Mielke, F Theil, V I Levitas, Arch. Ration. Mech. Anal. 1622A. Mielke, F. Theil, and V. I. Levitas. A variational formulation of rate-independent phase transformations using an extremum principle. Arch. Ration. Mech. Anal., 162(2):137-177, 2002.
Beyond equilibrium thermodynamics. H C Öttinger, Wiley-Interscience, 1st edition. H. C. Öttinger. Beyond equilibrium thermodynamics. Wiley-Interscience, 1st edition, 2005.
Variational formulation of the Fokker-Planck equation with decay: a particle approach. M A Peletier, D R M Renger, submittedM.A. Peletier and D.R.M. Renger. Variational formulation of the Fokker-Planck equa- tion with decay: a particle approach (submitted). http://arxiv.org/abs/1108.3181, 2011.
Gamma-convergence of gradient flows with applications to Ginzburg-Landau. E Sandier, S Serfaty, Communications on Pure and Applied Mathematics. 5712E. Sandier and S. Serfaty. Gamma-convergence of gradient flows with applications to Ginzburg-Landau. Communications on Pure and Applied Mathematics, 57(12):1627- 1672, 2004.
The Brezis-Ekeland principle for doubly nonlinear equations. U Stefanelli, SIAM Journal on Control and Optimization. 471615U. Stefanelli. The Brezis-Ekeland principle for doubly nonlinear equations. SIAM Journal on Control and Optimization, 47:1615, 2008.
Topics in optimal transportation. C Villani, Graduate Studies in Mathematics. 58American Mathematical SocietyC. Villani. Topics in optimal transportation, volume 58 of Graduate Studies in Math- ematics. American Mathematical Society, Providence, RI, 2003.
On optimal transport view on Schrödinger's equation. M.-K Von Renesse, Canad. Math. Bull. To appear inM.-K. von Renesse. On optimal transport view on Schrödinger's equation. To appear in Canad. Math. Bull., 2011.
Projections onto the cone of optimal transport maps and compressible fluid flows. M Westdickenberg, J. Hyperbolic Differ. Equ. 74M. Westdickenberg. Projections onto the cone of optimal transport maps and com- pressible fluid flows. J. Hyperbolic Differ. Equ., 7(4):605-649, 2010.
| []
|
[
"Minimizing Cache Timing Attack Using Dynamic Cache Flushing (DCF) Algorithm",
"Minimizing Cache Timing Attack Using Dynamic Cache Flushing (DCF) Algorithm"
]
| [
"Jalpa Bani [email protected] \nComputer Science and Engineering Department\nComputer Science and Engineering Department\nUniversity of Bridgeport Bridgeport\nUniversity of Bridgeport\n06601, 06601BridgeportCT, CT\n",
"Syed S Rizvi [email protected] \nComputer Science and Engineering Department\nComputer Science and Engineering Department\nUniversity of Bridgeport Bridgeport\nUniversity of Bridgeport\n06601, 06601BridgeportCT, CT\n"
]
| [
"Computer Science and Engineering Department\nComputer Science and Engineering Department\nUniversity of Bridgeport Bridgeport\nUniversity of Bridgeport\n06601, 06601BridgeportCT, CT",
"Computer Science and Engineering Department\nComputer Science and Engineering Department\nUniversity of Bridgeport Bridgeport\nUniversity of Bridgeport\n06601, 06601BridgeportCT, CT"
]
| [
"IJCSIS) International Journal of Computer Science and Information Security"
]
| Rijndael algorithm was unanimously chosen as the Advanced Encryption Standard (AES) by the panel of researchers at National Institute of Standards and Technology (NIST) in October 2000. Since then, Rijndael was destined to be used massively in various software as well as hardware entities for encrypting data. However, a few years back, Daniel Bernstein [2] devised a cachetiming attack that was capable enough to break Rijndael's seal that encapsulates the encryption key. In this paper, we propose a new Dynamic Cache Flushing (DCF) algorithm which shows a set of pragmatic software measures that would make Rijndael impregnable to cache timing attack. The simulation results demonstrate that the proposed DCF algorithm provides better security by encrypting key at a constant time. | null | [
"https://arxiv.org/pdf/0909.0573v1.pdf"
]
| 5,096,373 | 0909.0573 | 8e00228384229cbc7b886dbf767eb7860ecbef9c |
Minimizing Cache Timing Attack Using Dynamic Cache Flushing (DCF) Algorithm
August, 2009
Jalpa Bani [email protected]
Computer Science and Engineering Department
Computer Science and Engineering Department
University of Bridgeport Bridgeport
University of Bridgeport
06601, 06601BridgeportCT, CT
Syed S Rizvi [email protected]
Computer Science and Engineering Department
Computer Science and Engineering Department
University of Bridgeport Bridgeport
University of Bridgeport
06601, 06601BridgeportCT, CT
Minimizing Cache Timing Attack Using Dynamic Cache Flushing (DCF) Algorithm
(IJCSIS) International Journal of Computer Science and Information Security
41, August 2009. Keywords: dynamic cache flushing, Rijndael algorithm, timing attack
Rijndael algorithm was unanimously chosen as the Advanced Encryption Standard (AES) by the panel of researchers at National Institute of Standards and Technology (NIST) in October 2000. Since then, Rijndael was destined to be used massively in various software as well as hardware entities for encrypting data. However, a few years back, Daniel Bernstein [2] devised a cachetiming attack that was capable enough to break Rijndael's seal that encapsulates the encryption key. In this paper, we propose a new Dynamic Cache Flushing (DCF) algorithm which shows a set of pragmatic software measures that would make Rijndael impregnable to cache timing attack. The simulation results demonstrate that the proposed DCF algorithm provides better security by encrypting key at a constant time.
I. INTRODUCTION
Rijndael is a block cipher adopted as an encryption standard by the U.S. government. It has been analyzed extensively and is now used widely worldwide as was the case with its predecessor, the Data Encryption Standard (DES). Rijndael, the AES standard is currently used in various fields. Due to its impressive efficiency [8], it's being used in high-speed optical networks, it's used in military applications that encrypt top secret data, and it's used in banking and financial applications wherein secured and real-time transfer of data is a toppriority.
Microsoft has embraced Rijndael and implemented Rijndael in its much talked about DotNet (.NET) Framework. DotNet 3.5 has Rijndael implementation in System.Security.Cryptography namespace. DotNet framework is used by millions of developers around the world to develop software applications in numerous fields. In other words, software implementation of Rijndael is touching almost all the fields that implements cryptography through the DotNet framework.
Wireless Network Security has no exception. Wired Equivalent Privacy (WEP) is the protocol used in wireless networks to ensure secure environment. When WEP is turned on in a wireless network, every packet of data that is transmitted from one station to another is first encrypted using Rijndael algorithm by taking the packets' data payload and a secret encryption key called WEP key. The encrypted data is then broadcasted to stations registered on that wireless network. At the receiving end, the "wireless network aware stations" utilize the WEP key to decrypt data using Rijndael algorithm. Rijndael supports a larger range of block and key sizes; AES has a fixed block size of 128 bits and a key size of 128, 192 or 256 bits, whereas Rijndael can be specified with key and block sizes in any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits [6].
This algorithm implements the input, output, and cipher key where each of the bit sequences may contain 128, 192 or 256 bits with the condition that the input and output sequences have the same length. However, this algorithm provides the basic framework to make the code scalable. Look up tables have been used to make Rijndael algorithm faster and operations are performed on a two dimensional array of bytes called states. State consists of 4 rows of bytes, each of which contains Nb bytes, where Nb is the input sequence length divided by 32. During the start or end phase of an encryption or decryption operation, the bytes of the cipher input or output are copied from or to this state array.
The several operations that are implemented in this algorithm are listed below [9]:
• Key Schedule: It is an array of 32-bit words that is initialized from the cipher key. The cipher iterates through a number of the cycles or rounds, each of which uses Nk words from the key schedule. This is considered as an array of round keys, each containing Nk words.
• Finite Field Operations: In this algorithm finite field operations are carried out, which refers to operations performed in the finite field resulting in an element within that field. Finite field operations such as addition and multiplication, inverse multiplication, multiplications using tables and repeated shifts are performed.
• Rounds: At the start of the cipher the input is copied into the internal state. An initial round key is then added and the state is then transformed by iterating a round function in a number of cycles. On completion the final state is copied into the cipher output [1].
The round function is parameterized using a key schedule that consists of a one dimensional array of 32bit words for which the lowest 4, 6 or 8 words are initialized with the cipher. There are several steps carried out during this operation:
SubBytes: As shown in Fig. 1, it is a non-linear substitution step where each of the byte replaces with another according to a lookup table.
ShiftRows: This is a transposition step where each row of the state is shifted cyclically a certain number of steps, as shown in Fig. 2.
MixColumns: This is a mixing operation which operates on the columns of the state, combining the four bytes in each column, as shown in Fig. 3.
AddRoundKey: Here each byte of the state is combined with the round key; each round key is derived from the cipher key using a key schedule [1], as shown in Fig. 4.
• Final Round: The final round consists of the same operations as in the Round function except the MixColumns operation.
II. RELATED WORK
Parallelism or Parallel Computing has become a key aspect of high performance computing today and its fundamental advantages have deeply influenced modern processor designers. It has become a dominant paradigm in processor architecture in form of multicore processors available in personal computers today. Sharing processor resources like cache memory, sharing memory maps in random access memory (RAM) and sharing computational power of the math coprocessors during execution of multiple processes in the operating systems, has become an inevitable phenomenon. Few years back, Intel introduced hyper-threading technology in its Pentium 4 processors, wherein the sharing of processor resources between process threads is extended further by sharing memory caches. Shared access to memory cache is a feature that's available in all the latest processors from Intel and AMD Athlon.
With all the hunky-dory talk about how parallel computing has made Central Processing Unit's (CPUs) very powerful today, the fundamentals of sharing memory cache across the thread boundary has come along opening doors for security vulnerabilities. The shared memory cache can permit malicious threads of a spy process to monitor execution of another thread that implements Rijndael, allowing attackers to brute force the encryption key [6,7].
III. PROBLEM IN RIJNDAEL: CACHE TIMING ATTACK
Cache timing attack -the name speaks for itself. This belongs to a pattern of attacks that concentrates on monitoring the target cryptosystem, and analyzing the time taken to execute various steps in the cryptographic algorithm. In other words, the attack exploits the facts that every step in the algorithm takes a certain time to Although, the cache-timing attack is well-known theoretically, but it was only until April 2005 that a stout researcher named Daniel Bernstein [2,4] published that the weakness of Rijndael can reveal timing information that eventually can be utilized to crack the encryption key. In his paper, Daniel announced a successful cache timing attack by exploiting the timing characteristics of the table lookups.
Here is the simplest conceivable timing attack on Rijndael. AES software implementations like Rijndael that use look-up tables to perform internal operations of the cipher, such as S-boxes, are the ones most vulnerable to this attack [2].
Since in Rijndael algorithm all look up tables are stored in the cache, by putting another thread or some different way, attacker can easily get the encrypted data from the cache. Fig.1 shows that AES implementation in OpenSSL which does not take constant time. This was taken on a Pentium M processor. It is a 128 x 128 array of blocks where X axis shows one key for each row of blocks and Y axis shows one input for each column of blocks. Any combination of (key, Input) pair shows the encryption process for that particular pair by indicating the fix pattern of colors at that place. We can see the tremendous variability among blocks in Fig. 5. Due to this variability, attacker can easily determine the weak point, where the encryption took place by just analyzing the color pattern.
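The statistical side of such an attack can be sketched as follows (hypothetical C++; aes_encrypt, the number of trials and the choice of byte 13 are placeholders mirroring the worked example reproduced later in the paper): the attacker times many encryptions of random inputs under the unknown key, averages the total time as a function of one plaintext byte, and compares the location of the maximum with a reference profile measured with a known key on identical hardware.

```cpp
#include <cstdint>
#include <cstdlib>
#include <x86intrin.h>

// Assumed to be provided elsewhere: one AES encryption under the victim's key.
void aes_encrypt(const uint8_t in[16], uint8_t out[16]);

// Build the average encryption time as a function of plaintext byte n[13].
void buildProfile(double avg[256]) {
    static uint64_t total[256], count[256];
    uint8_t n[16] = {0}, out[16];
    for (long trial = 0; trial < 10000000L; ++trial) {
        for (int i = 0; i < 16; ++i) n[i] = static_cast<uint8_t>(std::rand());
        uint64_t t0 = __rdtsc();
        aes_encrypt(n, out);
        total[n[13]] += __rdtsc() - t0;
        ++count[n[13]];
    }
    for (int v = 0; v < 256; ++v)
        avg[v] = count[v] ? double(total[v]) / double(count[v]) : 0.0;
}
// If the victim profile peaks at n[13] = 147 while the reference profile peaks
// at k[13] XOR n[13] = 8, the attacker guesses k[13] = 147 XOR 8 = 155.
```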
The cache timing attack problem has been tackled through various approaches [3]. Each solution has its own pros and cons. For instance, Intel released a set of compilers targeting their latest 64-bit processors. These compilers would take the C++ code as input and output a set of machine instructions that would not use CPU cache at all. In other words, the resultant code has a machine instruction that does not use CPU cache for temporary storage of data, in other words the cache is disabled automatically.
The other suggestion was to place all the lookup tables in CPU registers rather than CPU cache, but this would affect performance significantly. Hardware approaches are also being considered. It has been suggested to have a parallel Field-Programmable Gate Array (FPGA) implementation or Application-Specific Integrated Circuits (ASIC) implementation with a separate coprocessor functioning with the existing CPU. This special coprocessor would contain special logical circuitry that would implement Rijndael. Timing attack can thus be avoided by barring other processes from accessing the special coprocessor [5].
IV. PROPOSED DYNAMIC CACHE FLUSHING (DCF) ALGORITHM
Numerous attempts have been made to address the timing attack loophole in AES. After a deep analysis of the logical steps involved in the Rijndael algorithm, we propose a novel technique to improvise the existing Rijndael algorithm. Our proposed algorithm follows variable-time AES algorithm by replacing it with a constant-time (but not high-speed) AES algorithm known as DCF (Dynamic Cache Flushing). Here, constant means totally independent of the AES key and input. The resulting DCF algorithm would be capable enough to stand strong against the timing attacks.
In order to determine the constant-time, first we need to collect timings and then look for input-dependent patterns. For example, we can repeatedly measure the time taken by AES for once (key; input) pair, convert the distribution of timings into a small block of colors, and then repeat the same color pattern for many keys and inputs.
A constant-time AES algorithm would have the same block of colors for every key and input pair, as shown in Fig 2. Fig 2 is a 128 x 128 array of blocks. Here, X axis indicates the key for each row of blocks and Y axis shows the input for each column of blocks. The pattern of colors in a block reflects the distribution of timings for that (Key; Input) pair. Here, for all (Key, Input) pairs, the color patterns remains the same, due to the constant time. Hence, attacker cannot easily figure out at which point of time the encryption of key and data took place. DCF algorithm generates keys at a constant rate on today's popular dual-core CPUs.
A. Description of the Proposed DCF Algorithm
The DCF algorithm is the improved version of Rijndael.
In other words, the basic encryption/decryption process would remain unchanged. However, there are few additional steps injected into the Rijndael algorithm that would make it resilient to cachetiming attack.
DCF algorithm -as the name rightly suggests, flushes cache while the encryption of data is in progress. In other words, the data that is being copied by the program into the CPU cache during the encryption/decryption process is removed at periodic intervals. The major advantage of doing this is that, during a cache-timing attack, the spy process tries to tap the data stored in look up tables in the CPU cache. Since each instruction takes time to encrypt or decrypt the data, attacker can break the data by just taking difference of collected large body of timing data from the target machine for the plaintext byte and collected large body of reference timing data for each instruction. Fig. 5 shows that encryption/decryption takes place at random time and it can be easily determined by the spy process. If data in the CPU cache is flushed dynamically during the encryption or decryption process, it would make life more difficult for the spy process, when it tries to collect the data for sampling purposes. In addition, no data in the cache implies that there is no specific place or point that refers to the encryption process as shown in Fig. 6.
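A minimal sketch of the flushing step, assuming an x86 target, is shown below; the paper's prototype invokes a cacheflush() routine, and the _mm_clflush intrinsic is used here only as a concrete stand-in, with table and function names chosen for illustration.

```cpp
#include <cstdint>
#include <emmintrin.h>   // _mm_clflush, _mm_mfence (SSE2)

static uint32_t T0[256], T1[256], T2[256], T3[256];  // AES round tables

// Evict every cache line occupied by the four round tables, so that a spy
// probing the cache between blocks sees no stable footprint of the lookups.
void flushTables() {
    const uint32_t *tables[4] = {T0, T1, T2, T3};
    for (const uint32_t *t : tables)
        for (int off = 0; off < 256 * (int)sizeof(uint32_t); off += 64)
            _mm_clflush(reinterpret_cast<const char *>(t) + off);
    _mm_mfence();  // ensure the flushes complete before encryption resumes
}
```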
It should be noted in Fig. 6 that the graph maintains a uniform pattern during the entire encryption/decryption process. Due to this uniformity, an attacker would face difficulty in tracking the exact time frame when encryption/decryption took place. This is possible by flushing the CPU cache at irregular intervals. Flushing the cache ensures that an attacker will not get enough insight into the data pattern during the encryption process by tapping the cache data. In order to increase the efficiency of this approach, one can increase the frequency of cache flushing. This would be a customizable parameter in the proposed DCF implementation. By further analyzing the DCF algorithm, it would lead to more "cache-misses" than "cache-hits". The "cache-misses" would eventually be recovered by looking up into the RAM for data. The "cache-misses" is the performance penalty we pay with this approach. But with the computing capability we have today with the high-end dual core CPUs, this refetching of data elements from the RAM, can be dealt with.
It should be noted that complete cache disabling is also an option [3], but in such scenarios the spy process might as well start tapping the RAM for encrypted data. Flushing the cache would rather confuse the spy process and make life difficult for attackers to derive a fixed pattern of the timing information and encrypted data samples.
Figure 6. AES timings, using the constant-time AES algorithm, for 128 keys and 128 inputs.
Another feature intended in the DCF algorithm is to implement random delays within the execution cycles during the encryption/decryption process. If a group of instructions from the encryption program repeats more than once, the execution time for those instructions remains constant every time. By continuously monitoring the CPU instruction cycles, an attacker can determine the time taken to execute a step of the encryption algorithm and might be able to capture the entire process timeline and the data patterns being encrypted or decrypted. In DCF, additional delays are introduced while the algorithm steps are in progress. This changes the encryption/decryption timeline and makes the algorithm more unpredictable, so the attacker cannot guess the timing pattern created by the encryption/decryption steps. Because the proposed DCF algorithm generates a unique timing pattern every time it encrypts a set of data, it becomes harder for an attacker who uses the time taken to encrypt a set of data as a key parameter in a brute-force approach to cracking the key. The delays in DCF can be made more unpredictable by randomizing the numeric values that define the amount of delay; a good randomizer achieves a fairly unpredictable pattern of delays (compare Fig. 5, OpenSSL AES timings for 128 keys and 128 inputs on a Pentium M processor, with Fig. 6, AES timings using the constant-time AES algorithm for 128 keys and 128 inputs).
The cache timing attack exploits the effect of memory access on the cache, and would thus be completely lessened by an implementation that does not perform any table lookups. Instead of avoiding table lookup, one could employ them by ensuring that the pattern of accesses to the memory is completely independent of the data passing through the algorithm. In its easiest form, implementing a memory access for a relevant set of data, one can read all the data from the look-up table. In addition, one could use an alternative description of the cipher which replaces the table lookups by an equivalent series of the logical operations. For AES, this is particularly ideal since the lookup tables have concise algebraic descriptions, but performance is degraded by over an order of magnitude [3].
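The "read everything" variant mentioned above can be illustrated with the following generic C++ sketch (not the paper's implementation): instead of indexing the table directly with a secret byte, the code scans all 256 entries and keeps the wanted one with a branch-free mask, so the sequence of memory addresses touched no longer depends on the secret index.

```cpp
#include <cstdint>

// Constant-memory-access lookup of table[index]: every entry is read,
// and the wanted one is selected with a mask instead of a secret address.
uint32_t ctLookup(const uint32_t table[256], uint8_t index) {
    uint32_t result = 0;
    for (unsigned i = 0; i < 256; ++i) {
        // mask is all ones when i == index and all zeros otherwise.
        uint32_t mask = static_cast<uint32_t>(-static_cast<int32_t>(i == index));
        result |= table[i] & mask;
    }
    return result;
}
```

The obvious cost is that 256 reads replace the single read that a direct lookup would perform.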
Flushing cache, random delays, and making data access independent of underlying data being processed, would make sense only if the DCF program is forced to run on a single thread. Single thread would also ensure that less data is being exposed to the spy process at any given point of time.
B. Mathematical Model
As discussed above, Rijndael is vulnerable to timing attacks due to its use of table lookups. In the current analysis, we develop a mathematical model for the attacks when table lookups are being performed during the execution of a Rijndael algorithm. We use our inventive method of flushing the cache during the execution of the table lookups and prove that when the table lookups are performed in constant-time, the attacker is unable to apply his/her spy process to recognize the encrypted data. Fig. 7 and 8 are plotted for constant-time DCF algorithm using a tool called "CacheIn" -a toolset for comprehensive Cache Inspection from Springer. Counter measures like flushing cache are implemented in the DCF algorithm using C++. axis shows the time taken to execute that particular instruction. Fig. 8 shows the average time taken to fetch the input data Pi from the cache for that particular instruction xi0. Here, X axis shows the data in cache memory and Y axis shows the time taken to fetch that data. Due to the constant time approach with the cache flushing, Fig. 7 and Fig. 8 demonstrate that an average time reaches to a constant value. Fig. 9 is the combination of the timing graphs shown in Fig. 7 and 8 for fetching the data and the time taken to execute the instruction to fetch that data. If we take the difference of maximum values of an average time for fetching the data and the time to execute an instruction to fetch that data, we will get very negligible time difference, say ki. For any time difference between the timing data and the reference data, ki remains constant and too small due to cache flushing. This implies that, with the constant time information, it is not possible to determine the exact time taken to encrypt/decrypt the data. The performance of the DCF algorithm is found to be little bit slower than the Rijndael algorithm. The performance penalty is due to cache flushing that provokes the processor to search the missing data in the RAM or in a secondary disk. On the other hand, the security provided against attackers by the proposed DCF algorithm is pretty impressive.
V. SIMULATION RESULTS
Here is a brief description of DCF during execution of Rijndael algorithm. Assume that there is a huge data file that's being encrypted using the DCF algorithm. The flowchart in Fig. 10 would portray a logical flow of events. A huge file is read into a user-defined variable, "buffer". The password provided by the user is typically stored as the encryption key. Rijndael initializes itself by building the set of round tables and table lookups into its data structure which helps in processing the data in buffer. A timer is initialized just before Rijndael starts encrypting the data in the buffer. The time should be initialized in nanoseconds. During encryption, Rijndael puts the key and data together in the round operation. During various steps in the encryption process, the random delays are introduced using Sleep(X) function to ensure that the repeated set of instructions does not portray the same execution timeline. Here, the amount of time, the process needs to be suspended 'X', is directly proportional to the total amount of time 'T' taken to process the chunk of data of size 'S'. If the timer becomes zero, flush or remove the data from the cache by using the cacheflush() function. The timer would be initialized with a random time that would make the encryption process time more unpredictable for the hacker. Reinitialize the timer with a random time and perform the encryption with random delay until all the data is processed (encrypted).
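The flow just described can be summarised by the following hypothetical driver loop; encryptBlock, flushTables, the timer range and the delay factor are assumptions made for illustration rather than the authors' code.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <random>
#include <thread>
#include <vector>

// Assumed to exist elsewhere: one Rijndael block encryption and a cache flush.
void encryptBlock(uint8_t block[16], const uint8_t key[16]);
void flushTables();

void dcfEncrypt(std::vector<uint8_t> &buffer, const uint8_t key[16]) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> delayPct(0, 50);           // delay: 0-50% of work time
    std::uniform_int_distribution<long> timerNs(100000, 5000000); // random flush-timer range

    long long timer = timerNs(rng);  // randomly initialised flush timer (nanoseconds)
    for (std::size_t off = 0; off + 16 <= buffer.size(); off += 16) {
        auto t0 = std::chrono::steady_clock::now();
        encryptBlock(&buffer[off], key);
        auto dt = std::chrono::steady_clock::now() - t0;

        // Random delay proportional to the time just spent on this chunk.
        std::this_thread::sleep_for(dt * delayPct(rng) / 100);

        timer -= std::chrono::duration_cast<std::chrono::nanoseconds>(dt).count();
        if (timer <= 0) {            // timer expired: flush the cache, re-arm randomly
            flushTables();
            timer = timerNs(rng);
        }
    }
}
```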
VI. CONCLUSION
We have seen that Rijndael is vulnerable to cache timing attack. Beyond AES, such attacks are potentially applicable to any implementation of a cryptographic algorithm that performs data-dependent memory accesses. The main weakness detected in the Rijndael algorithm is the heavy use of table lookups which dominate the running time and the table lookup indices. The countermeasures described in this paper represent a significant step towards developing a stable, attackproof AES algorithm. The DCF algorithm simulates a scenario wherein the table lookups are accessed in constant-time rather than in variable-time. This would disable any attacker from writing a spy program to brute force the key and data out of the cache data stored during the execution of the DCF algorithm. In the implementation of the DCF algorithm, cache is flushed periodically during encryption or decryption process. This would disable the attacker from tapping the cache for data. On the downside, there is a performance hit on the encryption time, but on a brighter note, the DCF algorithm stands strong against the cache timing attack.
Figure 1. SubBytes
Figure 4. AddRoundKey
Figure 5. OpenSSL AES timings for 128 keys and 128 inputs on a Pentium M processor
Figure 7. Graph showing the time taken to execute the instruction (average time to execute the i-th instruction; the X axis shows the instruction cycle and the Y axis the execution time)
Figure 8. Graph showing the time taken to collect data from the cache during each CPU instruction
Figure 9. Graph showing the difference between the timing data and the reference data
Figure 10. Dynamic Cache Flushing Algorithm Flowchart
For example, consider the variable-index array lookup T0[k[0] ⊕ n[0]] near the beginning of the AES computation. A typical hacker might think that the time for this array lookup depends on the array index and that the time for the whole AES computation is well correlated with the time for this array lookup. As a result, the AES timings leak information about k[0] ⊕ n[0], and the attacker can calculate the exact value of k[0] from the distribution of AES timings as a function of n[0]. Assume that the hacker watches the time taken by the victim to handle many n's and totals the AES times for each possible n[13], and observes that the overall AES time is maximum when n[13] is, say, 147. Suppose that the hacker also observes, by carrying out experiments with known keys k on a computer with the same AES software and the same CPU, that the overall AES time is maximum when k[13] ⊕ n[13] is, say, 8. The hacker concludes that the victim's key byte k[13] is 147 ⊕ 8 = 155. This implies that a hacker can easily attack a variable-time AES algorithm and can crack the encrypted data and eventually the key. Similar comments apply to k[1] ⊕ n[1], k[2] ⊕ n[2], etc.
Authors Biography
In the past, he has done research on bioinformatics projects where he investigated the use of Linux-based cluster search engines for finding the desired proteins in input and output sequences from multiple databases. For the last three years, his research has focused primarily on the modeling and simulation of wide-range parallel/distributed systems and on web-based training applications. Syed Rizvi is the author of 68 scholarly publications in various areas. His current research focuses on the design, implementation and comparison of algorithms in the areas of multiuser communications, multipath signal detection, multi-access interference estimation, computational complexity and combinatorial optimization of multiuser receivers, peer-to-peer networking, network security, and reconfigurable coprocessor and FPGA based architectures.
AES Proposal: Rijndael, AES Algorithm" Submission. J Daemen, V Rijmen, J. Daemen and V. Rijmen, "AES Proposal: Rijndael, AES Algorithm" Submission, September 3, 1999.
Cache-timing attacks on AES. Daniel J Bernstein, The University of Illinois at. Chicago, ILDaniel J. Bernstein, "Cache-timing attacks on AES", The University of Illinois at Chicago, IL 60607-7045, 2005.
Cache attacks and Countermeasures: the Case of AES. D A Osvik, A Shamir, E Tromer, Cryptology ePrint Archive. ReportD.A. Osvik, A. Shamir and E. Tromer. "Cache attacks and Countermeasures: the Case of AES". In Cryptology ePrint Archive, Report 2005/271, 2005.
Cache-Collision Timing Attacks Against AES. Joseph Bonneau, Ilya Mironov, Extended VersionJoseph Bonneau and Ilya Mironov, "Cache-Collision Timing Attacks Against AES" , (Extended Version) revised 2005-11- 20.
Introduction to the special issue on the IEEE 2002 custom integrated circuits conference. F Svelto, E Charbon, S J Wilton, University of PaviaSvelto, F.; Charbon, E.; Wilton, S.J.E, "Introduction to the special issue on the IEEE 2002 custom integrated circuits conference", University of Pavia.
James Nechvatal, Elaine Barker, Lawrence Bassham, William Burr, Morris Dworkin, James Foti, Edward Roback, Report on the Development of the Advanced Encryption Standard (AES). James Nechvatal, Elaine Barker, Lawrence Bassham, William Burr, Morris Dworkin, James Foti, Edward Roback, "Report on the Development of the Advanced Encryption Standard (AES)", October 2, 2000.
Cache Missing for Fun and Profit. Colin Percival, Colin Percival, "Cache Missing for Fun and Profit", May 13, 2005.
A Performance Comparison of the Five AES Finalists. Bruce Schneier, Doug Whiting, PDF/PostScriptRetrieved on 2006-08-13Bruce Schneier, Doug Whiting (2000-04-07). "A Performance Comparison of the Five AES Finalists" (PDF/PostScript). Retrieved on 2006-08-13.
A simple algebraic representation of Rijndael. Niels Ferguson, Richard Schroeppel, Doug Whiting, Proceedings of Selected Areas in Cryptography. Selected Areas in CryptographySpringer-VerlagRetrieved on 2006-10-06Niels Ferguson, Richard Schroeppel, Doug Whiting (2001). "A simple algebraic representation of Rijndael" (PDF/PostScript). Proceedings of Selected Areas in Cryptography, 2001, Lecture Notes in Computer Science: pp. 103-111, Springer-Verlag. Retrieved on 2006-10-06.
| []
|
[
"Coherent and incoherent processes responsible for various characteristics of nonlinear magneto-optical signals in rubidium atoms Coherent and incoherent processes in magneto-optical signals 2",
"Coherent and incoherent processes responsible for various characteristics of nonlinear magneto-optical signals in rubidium atoms Coherent and incoherent processes in magneto-optical signals 2"
]
| [
"Marcis Auzinsh [email protected] \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n",
"Andris Berzins \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n",
"Ruvin Ferber \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n",
"Florian Gahbauer \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n",
"Linards Kalvans \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n",
"Arturs Mozers \nThe University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia\n"
]
| [
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia",
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia",
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia",
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia",
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia",
"The University of Latvia\nLaser Centre, Rainis Boulevard 19LV-1586RigaLatvia"
]
| []
| We present the results of an investigation of the different physical processes that influence the shape of the nonlinear magneto-optical signals both at small magnetic field values (∼ 100 mG) and at large magnetic field values (several tens of Gauss). We used a theoretical model that provided an accurate description of experimental signals for a wide range of experimental parameters. By turning various effects "on" or "off" inside this model, we investigated the origin of different features of the measured signals. We confirmed that the narrowest structures, with widths on the order of 100 mG, are related mostly to coherences among ground-state magnetic sublevels. The shape of the curves at other scales could be explained by taking into account the different velocity groups of atoms that come into and out of resonance with the exciting laser field. Coherent effects in the excited state can also play a role, although they mostly affect the polarization components of the fluorescence. The results of theoretical calculations are compared with experimental measurements of laser induced fluorescence from the D 2 line of atomic rubidium as a function of magnetic field. | 10.1088/0953-4075/46/18/185003 | [
"https://arxiv.org/pdf/1304.3695v3.pdf"
]
| 119,174,111 | 1304.3695 | 84f69ca0f1b1e20543112ae34add362a1888809e |
Coherent and incoherent processes responsible for various characteristics of nonlinear magneto-optical signals in rubidium atoms Coherent and incoherent processes in magneto-optical signals 2
26 Jul 2013
Marcis Auzinsh [email protected]
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Andris Berzins
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Ruvin Ferber
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Florian Gahbauer
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Linards Kalvans
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Arturs Mozers
The University of Latvia
Laser Centre, Rainis Boulevard 19LV-1586RigaLatvia
Coherent and incoherent processes responsible for various characteristics of nonlinear magneto-optical signals in rubidium atoms Coherent and incoherent processes in magneto-optical signals 2
26 Jul 2013arXiv:1304.3695v3 [physics.atom-ph]numbers: 3260+i3280Xx4250Gy
We present the results of an investigation of the different physical processes that influence the shape of the nonlinear magneto-optical signals both at small magnetic field values (∼ 100 mG) and at large magnetic field values (several tens of Gauss). We used a theoretical model that provided an accurate description of experimental signals for a wide range of experimental parameters. By turning various effects "on" or "off" inside this model, we investigated the origin of different features of the measured signals. We confirmed that the narrowest structures, with widths on the order of 100 mG, are related mostly to coherences among ground-state magnetic sublevels. The shape of the curves at other scales could be explained by taking into account the different velocity groups of atoms that come into and out of resonance with the exciting laser field. Coherent effects in the excited state can also play a role, although they mostly affect the polarization components of the fluorescence. The results of theoretical calculations are compared with experimental measurements of laser induced fluorescence from the D 2 line of atomic rubidium as a function of magnetic field.
Introduction
When coherent radiation excites an atomic system with ground-state angular momentum F g and excited-state angular momentum F e , coherences can be created among the magnetic sublevels [1,2]. At low laser intensity, coherences appear in the excited state of the atom. As the laser intensity increases, the absorption processes become nonlinear, and coherences are created among the magnetic sublevels of the ground state as well. When the degeneracy among the magnetic sublevels is lifted by applying an external field (in our case magnetic), the coherences are destroyed. As a result, nonlinear magneto-optical resonances (NMOR) can be observed in the laserinduced fluorescence (LIF) plotted as a function of magnetic field. For linearly polarized radiation exciting a transition F g −→ F e = F g + 1, these resonances will be bright, that is, the atoms will be more absorbing at zero magnetic field [3,4,5,6]. When F e ≤ F g , the resonances will be dark, or less absorbing at zero magnetic field [7,8]. The NMOR features can be as narrow as 10 −6 − 10 −5 G when buffer gas or antirelaxation coating of the cell is used because of the slow relaxation rate of the ground state [9]. This characteristic makes them suitable for many applications, such as, for example, magnetometry [10], lasing without inversion [11], electrically induced transparency [12], slow light and optical information storage [13,14], atomic clocks [15], and narrow-band optical filters [16]. However, these narrow resonances are usually found within broader structures with features on the order of several Gauss or several tens of Gauss in a plot of LIF versus magnetic field. Our study focuses on these broader structures, which are interesting in themselves and also for some practical applications at higher magnetic field values, like optical isolators [17]. Using a theoretical model that has been developed over time and mostly was used to describe the narrow magneto-optical resonances but can reproduce the magneto-optical signals with high accuracy over a large range of magnetic field values [18], we investigated the peculiar shape and sign (bright or dark) of these structures, as well as the physical processes that give rise to them.
In order to describe magneto-optical signals over a magnetic field range of several tens of Gauss or more, it is necessary to include in the model excited-state coherences, energy shifts of the magnetic sublevels in external fields, which bring levels out of resonance with the narrow-linewidth laser radiation, and the magnetic-field-induced mixing of the atomic wavefunctions, which changes the transition probabilities of the different transitions between ground- and excited-state sublevels [19,20]. Moreover, it is necessary to treat various relaxation processes, the coherence properties of the laser radiation, and the Doppler effect. Since at least the 1970s, magneto-optical signals in alkali atoms have been modelled by solving the optical Bloch equations for the density matrix [21]. Simple models were able to describe the narrow resonances fairly well [22], but failed to describe the signals at fields of several Gauss or more. With time, these models became more sophisticated as the aforementioned effects were incorporated [23,24,25], and now the agreement is often excellent, at least up to magnetic fields of over one hundred Gauss. Thus, numerical models have become useful tools for understanding the physical processes that give rise to various features in the signals, because different physical processes can be included in the models or excluded one by one. Analytical studies, on the other hand, can demonstrate more explicitly a link between a particular physical process and the observable outcome. Thus, in [22] analytical formulae were developed that allow one to calculate the contrast of bright resonances. In another study, a theoretical model of electromagnetically induced absorption (EIA) was constructed for a hypothetical F_g = 1 −→ F_e = 2 transition [26]. It was possible to show from a purely theoretical point of view that the sub-natural linewidth resonance in EIA was related to the transfer of coherence from the excited state to the ground state. More recently, sophisticated analytical models were developed that are valid in the low-power region, and were applied to experimental measurements on the caesium D1 line [27,28]. Comparison with experiments confirmed that the narrow resonances arise when polarization is transferred from the excited state to the ground state. In [29] an analytical model was used to analyze the influence of partially resolved hyperfine structure in the ground or excited state on nonlinear magneto-optical rotation signals. Numerical studies such as ours can complement these analytical investigations, because the numerical models can be made to apply over a wider range of laser power densities and consider realistic, Doppler-broadened atomic transitions in the manifold of the hyperfine levels, that is to say, take into account multiple adjacent transitions.
Our study focused on the D2 line of 87Rb as a model system. Since the origin of the narrow structure had already been shown to be connected to coherences in the ground state [26,28], our study primarily aimed at understanding the wider features of the magneto-optical signals up to magnetic field values of several tens of Gauss, as such understanding is important in itself and will help to improve the models of the narrow resonances used for applications. Nevertheless, since a numerical model such as ours gives complete flexibility to turn different effects "on" and "off", we were also able to confirm the origin of the narrow structure using a different technique, i.e., one that is not analytical.
The level structure of the transition studied here is shown in Fig. 1 [30]. The transition was excited by linearly polarized laser radiation. Figure 2 shows the relative transition probabilities from the ground-state sublevels of the F_g = 2 level to the excited-state sublevels of the F_e = 3 level when the linearly polarized exciting radiation is decomposed into coherent circularly polarized components. It is assumed that the light is polarized perpendicularly to the direction of the external magnetic field (see Fig. 3). This scheme implies that Δm = 2 coherences are created between different Zeeman sublevels in the excited state as well as in the ground state. Two distinct processes contribute to ground-state coherence. The first process creates coherence in the ground state through direct interaction with the radiation field via Λ-type absorption. In the second process the V-type absorption creates coherences in the excited state, which then can be transferred back to the ground state via spontaneous emission, see Eq. (13.13) in [2]. Fig. 2 shows that both V-type and Λ-type transitions are present in our physical system.

Figure 2. Relative transition strengths from the ground-state magnetic sublevels to the excited-state magnetic sublevels when the linearly polarized exciting radiation is decomposed into σ± circularly polarized components for the F_g = 2 −→ F_e = 3 transition of the D2 line. The Landé factor g_F is given at the left of each particular hyperfine level.
The paper is organized as follows: Sec. II outlines the theoretical model. In Sec. III we describe the experimental conditions, and in Sec. IV we discuss the results and attempt to decompose the modelled signal into components that are related to different physical processes.
Theoretical Model
The theoretical model is based on the density matrix approach. The density matrices are written in the |ξ, F_i, m_F⟩ basis, where F_i denotes the quantum number of the total atomic angular momentum, m_F the respective magnetic quantum number, and ξ all other quantum numbers. The time evolution of the density matrix is described by the optical Bloch equations [31]
\[
i\hbar\,\frac{\partial \rho}{\partial t} = \left[\hat{H},\rho\right] + i\hbar\,\hat{R}\rho,
\tag{1}
\]
which include the full atomic HamiltonianĤ =Ĥ 0 +Ĥ B +V constructed from the unperturbed atom's HamiltonianĤ 0 , which depends on the internal dynamics of the atom, the HamiltonianĤ B , which describes the atom's interaction with the external magnetic field, and the dipole operatorV , which represents the atom's interaction with the electromagnetic radiation. The interaction with the magnetic field gradually decouples the total electronic angular momentum J and nuclear spin I, which means that F no longer is a good quantum number, while m still remains a good quantum number. To deal with this effect, mixing coefficients between different hyperfine states in the magnetic field are introduced in the model. The relaxation operatorR in (1) accounts for the spontaneous decay that transfers atoms from the excited state to the ground state, the collisional relaxation, and the transit relaxation. The latter occurs when atoms leave and enter the interaction region as a result of their thermal motion. The optical Bloch equations can be written explicitly for each element of the density matrix. Applying the rotating wave approximation and assuming the density matrices do not follow promptly the random phase fluctuations of the electromagnetic radiation, we may decorrelate the time-dependent differential equations from the fluctuating phase and average over it. Thus we may adiabatically eliminate the equations that describe the optical coherences and obtain rate equations for the Zeeman coherences [24]:
\[
\begin{aligned}
\frac{\partial \rho_{g_i g_j}}{\partial t} ={}& \left(\Xi_{g_i e_m} + \Xi^{*}_{g_j e_k}\right)\sum_{e_k,e_m} d^{*}_{g_i e_k}\, d_{e_m g_j}\, \rho_{e_k e_m} \\
&- \sum_{e_k,g_m}\left(\Xi^{*}_{g_j e_k}\, d^{*}_{g_i e_k}\, d_{e_k g_m}\, \rho_{g_m g_j} + \Xi_{g_i e_k}\, d^{*}_{g_m e_k}\, d_{e_k g_j}\, \rho_{g_i g_m}\right) \\
&- i\omega_{g_i g_j}\rho_{g_i g_j} - \gamma\rho_{g_i g_j} + \sum_{e_k e_l}\Gamma^{e_k e_l}_{g_i g_j}\rho_{e_k e_l} + \lambda\,\delta(g_i, g_j)
\end{aligned}
\tag{2a}
\]
\[
\begin{aligned}
\frac{\partial \rho_{e_i e_j}}{\partial t} ={}& \left(\Xi^{*}_{g_m e_i} + \Xi_{g_k e_j}\right)\sum_{g_k,g_m} d_{e_i g_k}\, d^{*}_{g_m e_j}\, \rho_{g_k g_m} \\
&- \sum_{g_k,e_m}\left(\Xi_{g_k e_j}\, d_{e_i g_k}\, d^{*}_{g_k e_m}\, \rho_{e_m e_j} + \Xi^{*}_{g_k e_i}\, d_{e_m g_k}\, d^{*}_{g_k e_j}\, \rho_{e_i e_m}\right) \\
&- i\omega_{e_i e_j}\rho_{e_i e_j} - (\Gamma + \gamma)\rho_{e_i e_j}.
\end{aligned}
\tag{2b}
\]
In both equations of (2) the first term describes the optically induced transitions to the level described by a particular equation, and the second term the transitions away from it, with d_{ij} being the element of the dipole transition matrix that can be calculated according to the Wigner-Eckart theorem [2]. The terms Ξ_{g_i e_j} and the complex conjugate Ξ*_{e_j g_i} are described below. The third term describes the coherence destruction by the magnetic field, with ω_{ij} = (E_i − E_j)/ℏ denoting the splitting between levels |i⟩ and |j⟩ caused by both the hyperfine splitting and the nonlinear Zeeman effect. The fourth term describes relaxation due to transit relaxation, collisions, and spontaneous decay (only for the excited state). Two additional terms in (2a) stand for population transfer to the ground state via spontaneous decay from the excited state (fifth term) and unpolarized atoms entering the interaction region as a result of their thermal motion (sixth term). The symbol Ξ_{g_i e_j} in equation (2) describes the strength of the interaction between the laser radiation and the atoms and is expressed as follows:
\[
\Xi_{g_i e_j} = \frac{\Omega_R^{2}}{\dfrac{\Gamma + \gamma + \Delta\omega}{2} + i\left(\bar{\omega} - \mathbf{k}_{\bar{\omega}}\!\cdot\!\mathbf{v} + \omega_{g_i e_j}\right)},
\tag{3}
\]
where Ω_R is the Rabi frequency, further discussed in Sec. 4, Γ and γ are the rates of spontaneous decay and transit relaxation, Δω is the finite spectral width of the exciting radiation, ω̄ is the central frequency of the exciting radiation, k_ω̄ the respective wave vector, and k_ω̄·v is the Doppler shift experienced by an atom moving with a velocity v. The dependence of the absolute value of Ξ_{g_i e_j} at fixed i and j on the magnetic field is responsible for the effects of magnetic scanning discussed in Sec. 4, while the imaginary part of Ξ_{g_i e_j} represents the dynamic Stark effect.
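To make the role of Eq. (3) in the magnetic-scanning picture concrete, the following minimal Python sketch evaluates |Ξ| for a single ground–excited sublevel pair as the Zeeman shift moves the transition in and out of resonance for different velocity groups. The numerical values (rates, linear Zeeman slope, velocities) are illustrative assumptions, not the fitted parameters of this work.

```python
import numpy as np

# Illustrative parameters, all expressed as angular frequencies in rad/us (i.e. 2*pi*MHz)
GAMMA = 2 * np.pi * 6.07      # natural linewidth
gamma = 2 * np.pi * 0.095     # transit relaxation rate
dw = 2 * np.pi * 2.0          # laser spectral width
OMEGA_R = 2 * np.pi * 0.75    # Rabi frequency
k = 2 * np.pi / 780e-9        # wave number of 780 nm light, rad/m
zeeman_slope = 2 * np.pi * 1.4   # assumed Zeeman shift of the sublevel pair, rad/us per Gauss

def xi(B_gauss, v):
    """Interaction strength Xi of Eq. (3) for one sublevel pair.

    B_gauss: magnetic field in Gauss; v: atomic velocity along the beam in m/s.
    The laser is assumed to be resonant with the v = 0 group at B = 0."""
    detuning = -k * v * 1e-6 + zeeman_slope * B_gauss   # rad/us
    return OMEGA_R**2 / ((GAMMA + gamma + dw) / 2 + 1j * detuning)

B = np.linspace(-40, 40, 801)
for v in (0.0, 5.0, -5.0):                       # velocity groups in m/s
    peak_B = B[np.argmax(np.abs(xi(B, v)))]
    print(f"v = {v:+.0f} m/s -> |Xi| peaks near B = {peak_B:+.1f} G")
```

The printed peak positions simply show that each velocity group is brought into resonance (and hence interacts most strongly) at a different value of the magnetic field, which is the mechanism referred to as magnetic scanning in the text.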
The steady state solution of the rate equations (2) yields the density matrices that describe population of magnetic sublevels and Zeeman coherences of both the ground and excited states. The density matrix of the excited state is used to calculate the fluorescence signal for an arbitrary polarization component e:
\[
I_{fl}(\mathbf{e}) = \tilde{I}_0 \sum_{g_i, e_j, e_k} d^{*(ob)}_{g_i e_j}\, d^{(ob)}_{e_k g_i}\, \rho_{e_j e_k},
\tag{4}
\]
where Ĩ_0 is a proportionality coefficient and d^(ob)_{e_j g_i} are elements of the dipole transition matrix for the chosen observation component e. The unpolarized fluorescence signal in a particular direction was calculated by summing over two orthogonal polarization components. To take into account the Doppler effect, this quantity was averaged over the one-dimensional Maxwellian distribution of atomic velocities along the direction of the laser beam propagation axis. In addition, the density matrices for some particular velocity groups are used to obtain angular momentum probability surfaces [2,32,33].
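As a hedged illustration of the Doppler averaging step described above, the snippet below averages a per-velocity-group fluorescence curve over a one-dimensional Maxwellian distribution. The per-group signal used here is a stand-in placeholder (a Lorentzian whose centre follows the magnetic scanning), not the output of the full density-matrix model, and all numbers are assumed for illustration only.

```python
import numpy as np

u = 240.0                        # assumed most probable speed of Rb at room temperature, m/s
k = 2 * np.pi / 780e-9           # wave number of the 780 nm light
zeeman_slope_MHz_per_G = 1.4     # assumed shift of the probed transition

def signal_one_group(B_gauss, v):
    """Placeholder per-velocity-group LIF: a Lorentzian centred where the
    Zeeman shift compensates the Doppler shift (magnetic scanning)."""
    doppler_MHz = k * v / (2 * np.pi) * 1e-6
    detuning_MHz = zeeman_slope_MHz_per_G * B_gauss - doppler_MHz
    width_MHz = 6.07
    return 1.0 / (1.0 + (2 * detuning_MHz / width_MHz) ** 2)

def doppler_averaged(B_gauss, n_groups=401):
    """Average the per-group signal over the 1D Maxwellian velocity distribution."""
    v = np.linspace(-3 * u, 3 * u, n_groups)
    weights = np.exp(-(v / u) ** 2)
    weights /= weights.sum()
    return sum(w * signal_one_group(B_gauss, vi) for w, vi in zip(weights, v))

B = np.linspace(-40, 40, 201)
lif = np.array([doppler_averaged(b) for b in B])
print("LIF at B = 0:", lif[100], "  LIF at B = 40 G:", lif[-1])
```

The point of the sketch is only the structure of the computation: the observable signal at a given field is a weighted superposition of contributions from all velocity groups, which is exactly how the wide structure is decomposed later in Fig. 6.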
Experiment
The experiments were carried out at room temperature on a natural mixture of rubidium isotopes in a cylindrical Pyrex vapour cell with optical quality windows, 25 mm long and 25 mm in diameter, produced by Toptica, A.G. of Graefelfing, Germany. The geometry of the excitation and observation is shown in Fig. 3. The 780 nm exciting laser radiation propagates along the x axis with linear polarization vector E pointing along the y axis. The total LIF (without polarization or frequency discrimination) was observed along the z axis, which was parallel to the magnetic field vector B. The laser was a homemade extended-cavity diode laser. The magnetic field was supplied by a Helmholtz coil and its value was scanned by controlling the current in a Kepco BOP-50-8-M bipolar power supply. Signals were recorded by a photodiode (Thorlabs FDS-100). The laser frequency was determined by means of a saturation spectroscopy setup in conjunction with a wavemeter (WS-7 made by HighFinesse). The beam profile was measured by means of a beam profiler (Thorlabs BP104-VIS). The full width at half maximum was assumed to be the beam diameter used in the calculations [see Eq. (6), Sec. 4]. The ambient magnetic field along the x and y directions was compensated by a pair of Helmholtz coils. The entire experimental setup was located on a nonmagnetic optical table. Possible inhomogeneity of the magnetic field along the laser propagation axis might be caused by imperfect Helmholtz coils and does not exceed 13 µG according to an estimate based on the coils' dimensions.
Results and Discussion
As the main tool for our present investigation was a numerical model, the first step was to show that it accurately described the measured signals over a large range of magnetic field values. Previous studies had already shown the model to be accurate in many experimental situations [34] in which narrow magneto-optical resonances form in weak magnetic fields (B ≲ 0.3 G) as a result of coherences created among the magnetic sublevels of the ground state. Figure 4 shows plots of LIF versus magnetic field over the range −40 G to +40 G when the laser was tuned to the F_g = 2 −→ F_e = 3 transition of the D2 line of 87Rb at different laser power densities, as well as a plot of contrast versus laser power density. It must be noted that, due to the proximity of other hyperfine levels in the excited state, the Doppler effect, and magnetic scanning, other hyperfine levels were also excited at least partially. These transitions are included in our theoretical model as well. We defined the signal contrast as
\[
C = \frac{I_{\min} - I_{\max}}{I_{\max}},
\tag{5}
\]
where I min is the minimum LIF value (zero first derivative and positive second derivative) around B = 0, and I max is the LIF value at the first point with vanishing first derivative and |B| > 1 G. Filled circles represent experimentally measured values, whereas the line shows the result of a theoretical calculation. In order to obtain an appropriate fit to the data, it was necessary to adjust two parameters. The first parameter was the constant k γ that relates the ratio of the mean thermal velocity v th of the atoms and the characteristic diameter of the laser beam d to the transit relaxation rate γ as
\[
\gamma = k_\gamma \frac{v_{th}}{d} + \gamma_{col} + \gamma_{hom} \approx k_\gamma \frac{v_{th}}{d},
\tag{6}
\]
where γ_col is the rate of inelastic atom-atom collisions and γ_hom is the relaxation caused by inhomogeneities of the magnetic field. An estimated value of γ_col at room temperature, assuming the spin-exchange cross section for Rb-Rb collisions σ ≈ 2 × 10⁻¹⁴ cm² [35], is several orders of magnitude less than the first term in (6). The upper limit of γ_hom estimated as shown in [36] is also several orders of magnitude less than the first term. So both γ_col and γ_hom were omitted in the actual calculations. The second parameter, k_R, related the Rabi frequency Ω_R to the square root of the experimental laser power density I according to
\[
\Omega_R = k_R\, \|d\| \sqrt{\frac{2I}{c}},
\tag{7}
\]
where ||d|| is the reduced dipole matrix element, which remains unchanged for all transitions within the D2 line and has a well documented value [2], and c is the speed of light. Both fitting parameters (k_γ and k_R) would be equal to unity for a rectangular beam profile of the exciting laser and atoms moving with the mean thermal velocity across the middle of the beam profile. In our experiment the beam profile is roughly Gaussian, and so the laser beam diameter cannot be defined unambiguously. Furthermore, atoms are moving along random trajectories with velocities distributed according to the Maxwellian velocity distribution. Thus we allow the values of these constants to deviate from unity in order to obtain an optimal fit between the modelled and experimentally recorded results. A full numerical integration over both (Gaussian and Maxwellian) distributions would be too time consuming, while our approach has proven to describe experimental results with high accuracy in previous studies, e.g. [34,18].
The actual values of the fitting parameters were k_γ = 0.5 and k_R = 0.11. These values indicate that the interaction of atoms and laser radiation in the wings of the (roughly Gaussian) beam profile cannot be neglected; see [37] for a more detailed discussion. Thus for a beam with d = 1.6 mm (estimated in the experiment as defined in Sec. 3) and laser power P = 20 µW, we obtained the following values that were used in the modelling: γ = 95 kHz and Ω_R = 0.75 MHz. Another important parameter for modelling and interpreting the results is the natural linewidth, which is Γ = 6.067 MHz [30]. After the optimum values for these parameters had been obtained by trial and error, they were used to fit simultaneously all experimental data obtained for different transitions and different values of the laser power density. (The top left plot in Fig. 4 was measured in a different experiment dedicated to the narrow structure [34], and so the experimental conditions and fitting parameters were slightly different in this case, and the range of the measured magnetic field was smaller.) Agreement between experiment and theory was rather satisfactory, which shows that the model serves as a good basis for understanding the dependence of LIF on the magnetic field over a broad range of magnetic field values. The narrow resonance at zero magnetic field is related to the destruction of coherences in the ground state by the magnetic field, as we will show in the next paragraphs. Under our experimental conditions, it had a width of about one hundred milligauss and was clearly visible right at zero magnetic field. A detailed study of this resonance was performed in [34], showing that this structure points up or down (changes the sign of the second derivative) depending on the laser power density. Under the present experimental conditions it appeared as a narrow structure with a negative second derivative (pointing upwards). This narrow resonance was located in the center of another structure with a positive second derivative and a width of several Gauss.
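A quick back-of-the-envelope check of the transit relaxation rate obtained from Eq. (6) can be written as below. The thermal speed used here is the rms speed of 87Rb at 295 K, which is an assumption on our part (the text does not state which speed convention enters v_th), so only order-of-magnitude agreement with the quoted 95 kHz should be expected.

```python
import numpy as np

kB = 1.380649e-23                 # Boltzmann constant, J/K
m_rb87 = 86.909 * 1.6605e-27      # mass of 87Rb, kg
T = 295.0                         # room temperature, K (assumed)

v_rms = np.sqrt(3 * kB * T / m_rb87)   # assumed convention for v_th
k_gamma = 0.5
d = 1.6e-3                        # beam diameter, m

gamma = k_gamma * v_rms / d
print(f"v_rms ~ {v_rms:.0f} m/s, gamma ~ {gamma / 1e3:.0f} kHz")
# -> roughly 90 kHz, i.e. the same order as the 95 kHz used in the modelling
```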
In order to study how different physical effects influence those features of the signal that appear at different scales of the magnetic field, we used the same theoretical model, while turning different physical processes "on" and "off". Three processes were considered: destruction of ground-state coherences by the magnetic field, destruction of excited-state coherences by the magnetic field, and the "Zeeman magnetic scanning effect", which involved optical transitions between different Zeeman sublevels that come into resonance with the laser radiation as a function of the magnetic field strength and the atomic velocity. The results are shown in Fig. 5. When all effects were included, we obtained structures on the scale of 100 mG, several Gauss, and several tens of Gauss [see Fig. 5 (a)]. The latter two are, as we will show later, caused by detuning effects as the hyperfine levels are split in the external magnetic field, and we will refer to these features as the "wide structure".
When the effect of the changing magnetic field on the coherences was neglected, which was done by setting the third term in (2) to zero for both ground and excited states [Fig. 5 (b)], the small, narrow peaks disappeared completely, whereas the other structures remained largely, but not completely, unchanged. In order to consider only the ground-state coherence effects, the excited-state coherences were decoupled from the magnetic field by setting the third term of (2b) to zero, and the detuning effects were "turned off" by taking the term ω_{g_i e_j} in the denominator of (3) to be independent of the magnetic field and keeping it at the value it has at B = 0. Only the narrow structure was reproduced when only the magnetic field's destruction of the ground-state coherences was taken into account in Fig. 5 (c). The results shown in Figures 5 (b) and 5 (c) clearly attribute the narrow structure to the ground-state coherences and their destruction by the magnetic field. The flip-over of the narrow structure that can be seen in Figs. 4 (a)-(b) and 5 (a) and (c) with increasing laser power density has been explained earlier [34]. At the same time, the resonance with a width of several Gauss in Fig. 5 (b) is seen to be related to detuning effects, which were the only ones considered in that calculation.
When only the excited-state coherence effects were taken into account in a similar way, a structure with negative second derivative and a width of several Gauss appeared; the contrast was only one or two percent [Fig. 5 (d)]. The structure had the same characteristic width (Γ ≈ ω_{Δm=2}) as the linear Hanle effect of the excited state [38]. The linear Hanle effect cannot be observed in our experiment as it requires discrimination of the polarization components of the LIF, and so we attribute this structure to the nonlinear Hanle effect of the excited state. Calculations at several Rabi frequencies showed that the peak associated with this effect became smaller as the Rabi frequency changed from 1.0 MHz to 2.0 MHz. Moreover, at 2.0 MHz another small dip with positive second derivative appeared inside the peak at zero magnetic field; a further increase in Rabi frequency indicates a similar behaviour, though on a different scale, to that in Fig. 5 (c) (the effects produced by the destruction of ground-state coherences). In any case, the calculations show that excited-state coherences play no role in the narrow structure. The main origin of the wide structure can be understood by considering Fig. 6. The left panel shows how a magneto-optical signal can be decomposed into contributions from different velocity groups. The solid black line represents the signal of a vapour at room temperature and is formed from an average over all the velocity groups in the Doppler profile. The dashed and dotted lines represent contributions from different velocity groups. One can see that the superposition of the contributions from the dashed and dotted lines would yield a shape similar to the black line. The right panel explains why each velocity group has its own shape. The laser is assumed to be on resonance at zero magnetic field with a group of atoms that is stationary with respect to the propagation direction of the laser radiation (v_x = 0) for the F_g = 2 → F_e = 3 transition. All other velocity groups therefore interact negligibly with a laser field that is detuned by the Doppler shift. As the magnetic field is applied, all magnetic sublevels shown in Fig. 2, except those with m = 0, are shifted as a result of the Zeeman effect. We may say that a magnetic scanning is performed by bringing into resonance a group of atoms with some velocity v_x = v(B). The function v(B) in general is nonlinear and is explicitly determined by the nature of the (nonlinear) Zeeman effect. As a result of the magnetic scanning, the shapes of the angular momentum distributions induced by the laser radiation differ as a function of magnetic field for each velocity group, which can be explicitly shown by the angular momentum probability surfaces [32,33] for the excited state. When the angular momentum probability surfaces are drawn, only the F_e = 3 hyperfine level is taken into account, as other hyperfine levels are far away from resonance for the magnetic field values and velocity groups shown in Fig. 6, and their populations are negligible. We may anticipate from Fig. 6 and the preceding discussion that, at a particular magnetic field value, some group of atoms with corresponding velocities becomes effectively oriented in either the positive or the negative direction of the axis along which the magnetic field is applied. Further, the whole ensemble of atoms becomes aligned along the same axis at magnetic field values that produce the LIF maxima around ±10 Gauss.
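The magnetic-scanning argument above can be sketched numerically. Assuming, purely for illustration, a linear Zeeman shift for the strongest component of the probed transition (an approximation that ignores the nonlinear Zeeman effect and the level mixing discussed earlier), the velocity group brought into resonance at a given field is simply the one whose Doppler shift cancels the Zeeman shift; the 1.4 MHz/G slope used here is an assumed value.

```python
# Minimal illustration of which velocity group is brought into resonance
# by the magnetic field ("magnetic scanning"), in the linear-Zeeman approximation.
wavelength = 780e-9                  # m
zeeman_slope_MHz_per_G = 1.4         # assumed shift of the probed sublevel pair

def resonant_velocity(B_gauss):
    """Velocity (m/s along the beam) whose Doppler shift equals the Zeeman shift."""
    shift_Hz = zeeman_slope_MHz_per_G * 1e6 * B_gauss
    return shift_Hz * wavelength     # v = delta_nu * lambda

for B in (0.0, 5.0, 10.0, 20.0, 40.0):
    print(f"B = {B:5.1f} G  ->  v ~ {resonant_velocity(B):6.1f} m/s "
          f"(Doppler detuning ~ {zeeman_slope_MHz_per_G * B:5.1f} MHz)")
```

Within this crude approximation, fields of a few tens of Gauss select velocity groups moving at roughly ten to forty metres per second along the beam, i.e. well within the room-temperature Maxwellian distribution, which is why the wide structure appears on exactly this field scale.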
Conclusion
Nonlinear magneto-optical resonances from the D 2 line of 87 Rb have been studied experimentally and theoretically up to magnetic field values of 40 G. The theoretical model was based on the optical Bloch equations and included the coherence properties of the laser radiation, all adjacent hyperfine transitions, the mixing of magnetic sublevels in the external magnetic field, and the Doppler effect. The model described the experimentally measured signals very well. By removing individual physical processes from the model, it was possible to deduce the physical origin of the different features observed in the signals. As expected, the narrow structure was related to coherences among ground-state Zeeman sublevels induced by the exciting laser radiation. Coherences among excited-state sublevels were found to have a small effect on signals at magnetic field scales of several Gauss. The origin of the wide structure was explained in terms of contributions from different velocity groups. With these results, it is possible to understand the origin of the variation in LIF as a function of magnetic fields in the range up to at least several tens of Gauss.
We may conclude that the results of this study emphasize the necessity to incorporate a number of processes in a theoretical model that aims to provide a quantitative description of magneto-optical effects. The most important of these effects are 1) the Doppler effect, 2) the magnetic scanning, and 3) the change in the transition probabilities due to the magnetic mixing of the hyperfine levels, which can reach 30% for 87 Rb D 2 excitation at B = 40 G. Although each of the processes can be treated separately to obtain an analytical description, in order to have an accurate description that is valid over a wider range of laser power densities and magnetic field values, one has to treat all the processes simultaneously. On the other hand, a numerical model that incorporates a number of processes can be used to estimate limiting conditions for various approximations used in analytical models in the way described above.
Figure 1. Scheme of the hyperfine levels and allowed transitions of the D2 line of 87Rb.

Figure 3. Geometry of the excitation and observation directions.

Figure 4. (Colour online) LIF versus magnetic field value for the F_g = 2 −→ F_e = 3 transition of 87Rb for different values of the laser power density I: (a) 0.14 mW/cm², (b) 1 mW/cm², (c) 10 mW/cm². The bottom right panel shows the contrast of the central minimum as a function of laser power density. Filled circles correspond to experimentally measured values, whereas the solid line shows the result of a calculation. Note the different scales in (a) and (b-c).

Figure 5. (Colour online) Theoretical calculations of LIF versus magnetic field B for the F_g = 2 −→ F_e = 3 transition of 87Rb with different physical effects taken into account: (a) all effects taken into account, (b) detuning effects only, (c) ground-state coherence effects only, (d) excited-state coherence effects only. Note the different scales! The parameters used in the simulation were as follows: γ = 0.019 MHz, Δω_Laser = 2 MHz, σ_Doppler = 216 MHz, D_Step ≈ 1.73 MHz.

Figure 6. Decomposition of a magneto-optical signal into a superposition of signals from different velocity groups and at different magnetic fields. Left panel: the solid black line shows the magneto-optical signal as it would be observed in a vapour cell at room temperature; the dashed and dotted lines show the signals for the different velocity groups that make up the room-temperature velocity distribution. Right panel: distribution of the atomic angular momentum at different values of the magnetic field B for the velocity groups in resonance at a (Doppler) detuning of 0 MHz, 5 MHz, and −5 MHz.
Acknowledgments

The contribution of Artis Kruzins to the experiments is highly appreciated. We are grateful to the Latvian State Research Programme No. 2010/10-4/VPP-2/1 and the NATO Science for Peace project CBP.MD.SFPP.983932, "Novel Magnetic Sensors and Techniques for Security Applications", for financial support.
References

[1] E. B. Aleksandrov, M. P. Chaika, and G. I. Khvostenko. Interference of Atomic States. Springer Verlag, Berlin, 1993.
[2] M. Auzinsh, D. Budker, and S. Rochester. Optically Polarized Atoms. Physics of Atoms and Molecules. Oxford University Press, New York, 2010.
[3] Y. Dancheva, G. Alzetta, S. Cartalava, M. Taslakov, and Ch. Andreeva. Coherent effects on the Zeeman sublevels of hyperfine states in optical pumping of Rb by monomode diode laser. Optics Communications, 178:103-110, 2000.
[4] A. P. Kazantsev, V. S. Smirnov, A. M. Tumaikin, and I. A. Yagofarov. Effect of atomic ground state self-polarization in the optical pumping cycle increase to linear light absorption for j → j+1 transitions. Opt. Spectrosk. (USSR), 57(2):116-117, 1984.
[5] F. Renzoni, C. Zimmermann, P. Verkerk, and E. Arimondo. Enhanced absorption Hanle effect on the F_g = F → F_e = F + 1 closed transitions. Journal of Optics B: Quantum and Semiclassical Optics, 3(1):S7-S14, 2001.
[6] J. Alnis and M. Auzinsh. Reverse dark resonance in Rb excited by a diode laser. J. Phys. B, 34:3889-3898, 2001.
[7] R. W. Schmieder, A. Lurio, W. Happer, and A. Khadjavi. Level-crossing measurement of lifetime and hfs constants of the 2P3/2 states of the stable alkali atoms. Physical Review A, 2:1216-1228, 1970.
[8] G. Alzetta, A. Gozzini, L. Moi, and G. Orriols. An experimental method for the observation of r.f. transitions and laser beat resonances in oriented Na vapour. Il Nuovo Cimento B, 36(1):5-20, 1976.
[9] D. Budker, V. Yashchuk, and M. Zolotorev. Nonlinear magneto-optic effects with ultranarrow widths. Phys. Rev. Lett., 81:5788-5791, 1998.
[10] M. O. Scully and M. Fleischhauer. High-sensitivity magnetometer based on index-enhanced media. Phys. Rev. Lett., 69(9):1360-1363, 1992.
[11] M. O. Scully, S.-Y. Zhu, and A. Gavrielides. Degenerate quantum-beat laser: Lasing without inversion and inversion without lasing. Phys. Rev. Lett., 62(24):2813-2816, 1989.
[12] S. E. Harris. Electromagnetically induced transparency. Physics Today, 50:36-42, 1997.
[13] D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth, and M. D. Lukin. Storage of light in atomic vapor. Phys. Rev. Lett., 86(5):783-786, 2001.
[14] C. Liu, Z. Dutton, C. H. Behroozi, and L. V. Hau. Observation of coherent optical information storage in an atomic medium using halted light pulses. Nature, 409(6819):490-493, 2001.
[15] S. Knappe, P. D. D. Schwindt, V. Shah, L. Hollberg, J. Kitching, L. Liew, and J. Moreland. A chip-scale atomic clock based on 87Rb with improved frequency stability. Optics Express, 13(4):1249-1253, 2005.
[16] A. Cerè, V. Parigi, M. Abad, F. Wolfgramm, A. Predojević, and M. W. Mitchell. Narrowband tunable filter based on velocity-selective optical pumping in an atomic vapor. Opt. Lett., 34(7):1012-1014, 2009.
[17] L. Weller, K. S. Kleinbach, M. A. Zentile, S. Knappe, I. G. Hughes, and C. S. Adams. Optical isolator using an atomic vapor in the hyperfine Paschen-Back regime. Opt. Lett., 37(16):3405-3407, 2012.
[18] M. Auzinsh, A. Berzins, R. Ferber, F. Gahbauer, L. Kalvans, A. Mozers, and A. Spiss. Dependence of the shapes of nonzero-field level-crossing signals in rubidium atoms on the laser frequency and power density. Phys. Rev. A, 87:033412, 2013.
[19] A. Sargsyan, G. Hakhumyan, A. Papoyan, D. Sarkisyan, A. Atvars, and M. Auzinsh. A novel approach to quantitative spectroscopy of atoms in a magnetic field and applications based on an atomic vapor cell with L = λ. Applied Physics Letters, 93(2):021119, 2008.
[20] A. Sargsyan, G. Hakhumyan, C. Leroy, Y. Pashayan-Leroy, A. Papoyan, and D. Sarkisyan. Hyperfine Paschen-Back regime realized in Rb nanocell. Opt. Lett., 37(8):1379-1381, 2012.
[21] J. L. Picqué. Hanle effect in an atomic beam excited by a narrow-band laser. J. Phys. B, 11(3):L59-L63, 1978.
[22] A. V. Papoyan, M. Auzinsh, and K. Bergmann. Nonlinear Hanle effect in Cs vapor under strong laser excitation. European Physical Journal D, 21:63-71, 2002.
[23] C. Andreeva, S. Cartaleva, Y. Dancheva, V. Biancalana, A. Burchianti, C. Marinelli, E. Mariotti, L. Moi, and K. Nasyrov. Coherent spectroscopy of degenerate two-level systems in Cs. Physical Review A, 66(1):012502, 2002.
[24] K. Blushs and M. Auzinsh. Validity of rate equations for Zeeman coherences for analysis of nonlinear interaction of atoms with broadband laser radiation. Physical Review A, 69:063806, 2004.
[25] M. Auzinsh, R. Ferber, F. Gahbauer, A. Jarmola, and L. Kalvans. F-resolved magneto-optical resonances in the D1 excitation of cesium: Experiment and theory. Physical Review A, 78(1):013417, 2008.
[26] D. V. Brazhnikov, A. V. Taichenachev, A. M. Tumaikin, and V. I. Yudin. Electromagnetically induced absorption and transparency in magneto-optical resonances in elliptically polarized field. J. Opt. Soc. Am. B, 22(1):57-64, 2005.
[27] N. Castagna and A. Weis. Measurement of longitudinal and transverse spin relaxation rates using the ground-state Hanle effect. Physical Review A, 84(5):053421, 2011.
[28] E. Breschi and A. Weis. Ground-state Hanle effect based on atomic alignment. Physical Review A, 86(5):053427, 2012.
[29] M. Auzinsh, D. Budker, and S. M. Rochester. Light-induced polarization effects in atoms with partially resolved hyperfine structure and applications to absorption, fluorescence, and nonlinear magneto-optical rotation. Phys. Rev. A, 80:053406, 2009.
[30] D. A. Steck. Rubidium 87 D line data. Revision 2.1.4, 23 December 2009.
[31] S. Stenholm. Foundations of Laser Spectroscopy. Dover Publications, Inc., Mineola, New York, 2005.
[32] M. Auzinsh. Angular momenta dynamics in magnetic and electric field: Classical and quantum approach. Canadian Journal of Physics, 75:853-872, 1997.
[33] S. M. Rochester and D. Budker. Atomic polarization visualized. American Journal of Physics, 69:450-454, 2001.
[34] M. Auzinsh, A. Berzinsh, R. Ferber, F. Gahbauer, L. Kalvans, A. Mozers, and D. Opalevs. Conversion of bright magneto-optical resonances into dark resonances at fixed laser frequency for D2 excitation of atomic rubidium. Physical Review A, 85(3):033418, 2012.
[35] W. Happer. Optical pumping. Reviews of Modern Physics, 44(2):169-249, 1972.
[36] S. Pustelny, D. F. Jackson Kimball, S. M. Rochester, V. V. Yashchuk, and D. Budker. Influence of magnetic-field inhomogeneity on nonlinear magneto-optical resonances. Phys. Rev. A, 74:063406, 2006.
[37] M. Auzinsh, R. Ferber, I. Fescenko, L. Kalvans, and M. Tamanis. Nonlinear magneto-optical resonances for systems with J ∼ 100 observed in K2 molecules. Phys. Rev. A, 85:013421, 2012.
[38] G. Moruzzi and F. Strumia. The Hanle Effect and Level-Crossing Spectroscopy. Plenum Press, Oxford, 1991.
| []
|
[
"Simple Multi-Resolution Representation Learning for Human Pose Estimation",
"Simple Multi-Resolution Representation Learning for Human Pose Estimation"
]
| [
"Trung Q Tran \nSchool of Computing KAIST Daejeon\nSouth Korea\n",
"Giang V Nguyen \nSchool of Computing KAIST Daejeon\nSouth Korea\n",
"Daeyoung Kim [email protected] \nSchool of Computing KAIST Daejeon\nSouth Korea\n"
]
| [
"School of Computing KAIST Daejeon\nSouth Korea",
"School of Computing KAIST Daejeon\nSouth Korea",
"School of Computing KAIST Daejeon\nSouth Korea"
]
| []
| Human pose estimation -the process of recognizing human keypoints in a given image -is one of the most important tasks in computer vision and has a wide range of applications including movement diagnostics, surveillance, or self-driving vehicle. The accuracy of human keypoint prediction is increasingly improved thanks to the burgeoning development of deep learning. Most existing methods solved human pose estimation by generating heatmaps in which the ith heatmap indicates the location confidence of the ith keypoint. In this paper, we introduce novel network structures referred to as multiresolution representation learning for human keypoint prediction. At different resolutions in the learning process, our networks branch off and use extra layers to learn heatmap generation. We firstly consider the architectures for generating the multiresolution heatmaps after obtaining the lowest-resolution feature maps. Our second approach allows learning during the process of feature extraction in which the heatmaps are generated at each resolution of the feature extractor. The first and second approaches are referred to as multi-resolution heatmap learning and multi-resolution feature map learning respectively. Our architectures are simple yet effective, achieving good performance. We conducted experiments on two common benchmarks for human pose estimation: MS-COCO and MPII dataset. | 10.1109/icpr48806.2021.9412729 | [
"https://arxiv.org/pdf/2004.06366v1.pdf"
]
| 215,754,411 | 2004.06366 | 2dd77a6e0616db7a772c190e662ca2e1bb471d9b |
Simple Multi-Resolution Representation Learning for Human Pose Estimation
Trung Q Tran
School of Computing KAIST Daejeon
South Korea
Giang V Nguyen
School of Computing KAIST Daejeon
South Korea
Daeyoung Kim [email protected]
School of Computing KAIST Daejeon
South Korea
Simple Multi-Resolution Representation Learning for Human Pose Estimation
Human pose estimation -the process of recognizing human keypoints in a given image -is one of the most important tasks in computer vision and has a wide range of applications including movement diagnostics, surveillance, or self-driving vehicle. The accuracy of human keypoint prediction is increasingly improved thanks to the burgeoning development of deep learning. Most existing methods solved human pose estimation by generating heatmaps in which the ith heatmap indicates the location confidence of the ith keypoint. In this paper, we introduce novel network structures referred to as multiresolution representation learning for human keypoint prediction. At different resolutions in the learning process, our networks branch off and use extra layers to learn heatmap generation. We firstly consider the architectures for generating the multiresolution heatmaps after obtaining the lowest-resolution feature maps. Our second approach allows learning during the process of feature extraction in which the heatmaps are generated at each resolution of the feature extractor. The first and second approaches are referred to as multi-resolution heatmap learning and multi-resolution feature map learning respectively. Our architectures are simple yet effective, achieving good performance. We conducted experiments on two common benchmarks for human pose estimation: MS-COCO and MPII dataset.
I. INTRODUCTION
Human pose estimation is one of the vital tasks in computer vision and has received a great deal of attention from researchers for the past few decades. From the spatial aspect, this problem is divided into 2D and 3D human pose estimation. Geometrically, the 3D human pose might be predicted through the respective 2D human pose combining with a 3D exemplar matching [1]. This paper focuses on the deep learning approach for 2D human pose estimation which aims to localize human anatomical keypoints on the torso, face, arms, and legs.
The pioneering deep learning method formulated human pose estimation as a CNN-based regression towards body joints [2]. The model uses an AlexNet [3] backend (consisting of 7 layers) and an extra final layer that directly outputs joint coordinates. The later state-of-the-art methods reshaped this problem by estimating k heatmaps for all k human keypoints, where the ith heatmap represents the location confidence of the ith keypoint [4], [5], [6], [7], [8]. Heatmap-based approaches consist of two major parts as shown in Fig. 1: the first part (encoder) works as a feature extractor which is responsible for understanding the image, while the second one (decoder) generates the heatmaps corresponding to the human keypoints. Convolutional pose machines (CPM) [5] used a multi-stage training scheme where the image features and the heatmaps produced by the previous stage are fed as the input; thus, the prediction is refined throughout the stages. Commonly, the output of the feature extractor is the low-resolution feature maps. Stacked Hourglass [6] and Cascaded pyramid network (CPN) [7] adopted a multi-resolution learning strategy to generate the heatmaps from the feature maps at a variety of resolutions. Instead of independently processing at multiple resolutions as CPN does, Hourglass uses skip layers to preserve spatial information at each resolution. However, these two methods were defeated when Xiao et al. [8] proposed a simple yet effective baseline which utilizes ResNet [9] as the backbone of its feature extractor, followed by a few deconvolutional layers as the heatmap generator (Fig. 2). SimpleBaseline [8] for human pose estimation is the most effortless way to generate the heatmaps from the low-resolution feature maps, obtaining good performance on the MS-COCO 2017 benchmark [10] (improving AP by 3.5 and 1.0 points compared to Hourglass [6] and CPN [7] respectively, with a similar backbone and input size). In the feature extractor, the deeper the layer is, the more specific the learned features are. For example, the first layer may learn overall features by abstracting the pixels and encoding the edges; the second layer may learn how to arrange the edges; the third layer encodes the face; the fourth layer encodes the eyes. It is easy to see that the model needs to learn specialized features such as the eyes and nose because they correspond to human keypoints. In particular, there are many cases of occluded keypoints. For example, the wrist is behind the back, so the wrist may not be detected. However, we can actually infer the wrist thanks to other keypoints such as the elbow and shoulder, or even the whole human skeleton. This means the model needs not only specific features but also overall patterns. This paper is inspired by the idea that the simple architecture could be improved if it can learn the features from multiple resolutions, for the high resolution allows capturing overall information and the low resolution aims to extract specific characteristics. We propose novel network architectures utilizing the simple baseline [8], combined with the multi-resolution learning strategy. Our first approach achieves the multi-resolution heatmaps after the lowest-resolution feature maps are obtained. To do so, we branch off at each resolution of the heatmap generator and add extra layers for heatmap generation. In our second approach, the networks directly learn the heatmap generation at each resolution of the feature extractor.
Our experiments were conducted on two common benchmarks for human pose estimation: MS-COCO [10] and MPII [11]. On the COCO val2017 dataset, our best model gains AP by 0.6 points compared to SimpleBaseline [8], which has a similar backbone and input size. On the MPII dataset, our best model achieves PCKh@0.5

This section presents the simple baseline [8], whose heatmap generator is composed of deconvolutional layers. The network structure is illustrated in Fig. 2. From the input image, the model uses residual blocks to learn the features of the image. After each residual block, the resolution is decreased by half while the number of output channels is doubled. In Fig. 2, four residual blocks work together as a feature extractor, and their numbers of output channels are C, 2C, 4C, and 8C respectively. We also use these notations for later architectures.
After reaching the 8C lowest-resolution feature maps, the network begins the top-down sequence of upsampling to obtain the high-resolution feature maps. Instead of using upsampling algorithms, SimpleBaseline [8] leverages deconvolutional layers, where each of them is built out of a transposed convolutional layer [12], a batch normalization, and a ReLU activation. At last, a convolutional layer is added to generate k high-resolution heatmaps representing the location confidence for all k human keypoints. Mean Squared Error (MSE) is used as the loss function between the predicted and ground-truth heatmaps:
\[
\mathrm{JointsLoss} = \frac{1}{k}\sum_{i=1}^{k}\left(\frac{1}{w \times h}\sum_{p=1}^{w}\sum_{q=1}^{h}\left(H_{i,p,q} - \hat{H}_{i,p,q}\right)^{2}\right),
\tag{1}
\]
where H_i and Ĥ_i are the ground-truth and predicted heatmap of the ith keypoint respectively, and (w, h) is the size of the heatmap.
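A minimal NumPy sketch of this loss, assuming heatmap tensors of shape (k, h, w), is given below; it is only an illustration of Eq. (1), not the training code of the paper.

```python
import numpy as np

def joints_mse_loss(pred, target):
    """Mean squared error between predicted and ground-truth heatmaps, Eq. (1).

    pred, target: arrays of shape (k, h, w), one heatmap per keypoint."""
    k = pred.shape[0]
    per_joint = ((pred - target) ** 2).reshape(k, -1).mean(axis=1)  # 1/(w*h) * sum over pixels
    return per_joint.mean()                                         # average over the k joints

# toy usage
k, h, w = 17, 64, 48
target = np.random.rand(k, h, w)
pred = target + 0.01 * np.random.randn(k, h, w)
print("JointsLoss:", joints_mse_loss(pred, target))
```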
III. OUR METHOD
To investigate the impact of multi-resolution representation, in this section, we propose learning the multi-resolution representation for both the heatmap generator and the feature extractor. These two approaches are referred to as multiresolution heatmap learning and multi-resolution feature map learning, respectively. We use ResNet [9] as our feature extractor because it is the most common backbone network for image feature extraction.
A. Multi-resolution heatmap learning
We started thinking about this kind of architecture by assuming that the ResNet backbone [9] works very well on the image feature extraction. The architectures of the multiresolution heatmap learning are illustrated in Fig. 3. The lowest-resolution feature maps are fed into the sequence of deconvolutional layers to obtain the higher resolutions. The number of output channels of these deconvolutional layers is kept unchanged and is set to be equal to the number of output channels (denoted by C) of the first residual block.
In the baseline method, k heatmaps are generated after obtaining the highest resolution. In our method, we branch off at each deconvolutional layer (excluding the highest-resolution deconvolutional layer) and add some convolutional layers to generate the low-resolution heatmaps. The higher-resolution heatmaps could be obtained from the low-resolution heatmaps by using extra deconvolutional layers. The reason we do so is that the high-resolution feature maps help generate the heatmaps with overall information while the low-resolution feature maps focus on specific characteristics. We propose two architectures with a slight difference, as shown in Fig. 3:
• In Fig. 3a, the lowest-resolution heatmaps are upsampled to the higher resolution (called medium resolution) and then combined with the heatmaps generated at this medium resolution. The result of this combination is fed into a deconvolutional layer to obtain the highest-resolution heatmaps.
• With a small change, in Fig. 3b, the heatmaps at each resolution are upsampled to the highest-resolution heatmaps independently and then combined at the end.

Fig. 3. The two multi-resolution heatmap learning architectures: (a) MRHeatNet1, (b) MRHeatNet2. Each deconvolutional stage outputs C channels and each heatmap branch outputs k channels; the branches are combined by element-wise summation and trained with the L2 loss.
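The branching scheme can be sketched in PyTorch roughly as follows. This is a simplified, hedged reconstruction from the description above (module names, the exact number of deconvolution stages, the use of a 1×1 convolution per branch, and nearest-neighbour upsampling of the low-resolution heatmaps are our assumptions), not the authors' released code.

```python
import torch
import torch.nn as nn

def deconv(in_ch, out_ch):
    # 4x4 transposed convolution that doubles the spatial resolution
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MRHeatHead(nn.Module):
    """Sketch of a multi-resolution heatmap generator (MRHeatNet2-style):
    heatmaps predicted at every deconv stage are upsampled to the highest
    resolution and summed."""
    def __init__(self, c_in=2048, c=256, k=17):
        super().__init__()
        self.stages = nn.ModuleList([deconv(c_in, c), deconv(c, c), deconv(c, c)])
        self.heads = nn.ModuleList([nn.Conv2d(c, k, kernel_size=1) for _ in range(3)])
        # factors needed to bring each stage's heatmaps to the top resolution
        self.ups = nn.ModuleList([nn.Upsample(scale_factor=4),
                                  nn.Upsample(scale_factor=2),
                                  nn.Identity()])

    def forward(self, feat):            # feat: lowest-resolution backbone features
        out, x = 0, feat
        for stage, head, up in zip(self.stages, self.heads, self.ups):
            x = stage(x)
            out = out + up(head(x))     # element-wise sum of upsampled heatmaps
        return out

heat = MRHeatHead()(torch.randn(1, 2048, 8, 6))
print(heat.shape)   # torch.Size([1, 17, 64, 48])
```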
B. Multi-resolution feature map learning
Instead of learning at each resolution of the heatmap generator as in the multi-resolution heatmap learning strategy, the multi-resolution feature map learning aims to directly learn how to generate the heatmaps at each resolution of the feature extractor (Fig. 4). At each residual block corresponding to each resolution of the feature extractor (excluding the lowest resolution), the network branches off and goes through respective deconvolutional layers to obtain the highest resolution. In particular, the branch from the highest-resolution residual block does not go through any deconvolutional layers but goes directly to the element-wise sum component. At last, a 1 × 1 convolutional layer is added to generate k predicted heatmaps for all k keypoints.
Following this stream, we propose two architectures as illustrated in Fig. 4a and Fig. 4b. The main difference between these two architectures is the number of output channels of the deconvolutional layers. In the network shown in Fig. 4a, the number of output channels of all deconvolutional layers is set to be equal to the number of output channels (denoted by C) of the highest-resolution residual block, which may lead to information loss.
The feature extractor consists of four residual blocks: the first residual block outputs C feature maps with the size of W × H, the second residual block aims to learn more features and outputs 2C feature maps with the size of W/2 × H/2, the third residual block outputs 4C feature maps with the size of W/4 × H/4, and the fourth residual block finally outputs 8C lowest-resolution feature maps with the size of W/8 × H/8. It is easy to see the principle of image feature extraction here: the number of feature maps is increased by a factor of 2 (more features are learned) while the resolution is halved. Therefore, in the top-down sequence of upsampling, whenever the resolution is doubled, the number of feature maps should be halved as well. For the network shown in Fig. 4a, after the first deconvolutional layer in the main branch, the resolution of the feature maps is increased two times, but the number of feature maps is decreased eight times (from 8C to C). Therefore, some previously learned information may be lost. To overcome this point, the architecture in Fig. 4b uses deconvolutional layers whose number of output channels depends on the number of feature maps extracted by the previously adjacent layer. For instance, after the fourth residual block, 8C lowest-resolution feature maps are outputted; as a result, the numbers of output channels of the following deconvolutional layers are 4C, 2C, and C, respectively. The effectiveness of learning the heatmap generation from multiple resolutions of the feature extractor will be clarified in Section IV.
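The difference between the two feature-map-learning variants is essentially the channel schedule of the deconvolutional branches. A small sketch of that schedule (with C = 256 and the ResNet-style channel progression assumed throughout this paper) is shown below.

```python
C = 256
resolutions = ["W/8 x H/8", "W/4 x H/4", "W/2 x H/2", "W x H"]
backbone_channels = [8 * C, 4 * C, 2 * C, C]     # residual blocks 4, 3, 2, 1

# MRFeaNet1: every deconvolutional layer outputs C channels
mrfeanet1 = {res: C for res in resolutions[1:]}

# MRFeaNet2: the number of output channels is halved each time the
# resolution is doubled (8C -> 4C -> 2C -> C along the main branch)
mrfeanet2 = {res: ch for res, ch in zip(resolutions[1:], backbone_channels[1:])}

print("MRFeaNet1 deconv output channels:", mrfeanet1)
print("MRFeaNet2 deconv output channels:", mrfeanet2)
```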
IV. EXPERIMENT

a) Dataset: We evaluate our architectures on two common benchmarks for human pose estimation: MS-COCO [10] and MPII [11].
• The COCO dataset contains more than 200k images and 250k person instances labeled with keypoints. Each person is annotated with 17 keypoints. We train our models on COCO train2017 dataset with 57k images and 150k person instances. Our models are evaluated on COCO val2017 and test-dev2017 dataset, with 5k and 20k images, respectively. • The MPII dataset contains around 25k images with over 40k person samples. Each person is annotated with 16 joints. MPII covers 410 human activities collected from YouTube videos where the contents are everyday human activities. Since the annotations of MPII test set are not available, we train our models on a subset of 22k training samples and evaluate our models on a validation set of 3k samples [4]. b) Evaluation metric: We use different metrics for our evaluation on the MS-COCO and MPII dataset:
• In the COCO dataset, each person object has the ground-truth keypoints in the form [x_1, y_1, v_1, ..., x_k, y_k, v_k], where x, y are the keypoint locations and v is a visibility flag (v = 0: not labeled, v = 1: labeled but not visible, and v = 2: labeled and visible). The standard evaluation metric is based on Object Keypoint Similarity (OKS) [13]:
\[
\mathrm{OKS} = \frac{\sum_i \exp\!\left(-d_i^{2} / 2 s^{2} k_i^{2}\right)\,\delta(v_i > 0)}{\sum_i \delta(v_i > 0)}
\tag{2}
\]
in which d_i is the Euclidean distance between the detected keypoint and the corresponding ground-truth keypoint, v_i is the visibility flag of the ground-truth keypoint, s is the object scale, and k_i is a per-keypoint constant that controls falloff. Predicted keypoints that are not labeled (v_i = 0) do not affect the OKS. The OKS plays the same role as the IoU in object detection, so the average precision (AP) and average recall (AR) scores can be computed once the OKS is given (a small computational sketch of both metrics is given after this list).
• For the MPII dataset, we use the Percentage of Correct Keypoints with respect to head (PCKh) metric [11]. Firstly, we recall the Percentage of Correct Keypoints (PCK) metric [14]. PCK is the percentage of correct detections that fall within a tolerance range, which is a fraction of the torso diameter. The criterion can be expressed as:
\[
\frac{\left\lVert y_i - \hat{y}_i \right\rVert_2}{\left\lVert y_{rhip} - y_{lsho} \right\rVert_2} \le r,
\tag{3}
\]
where y_i and ŷ_i are the ground-truth and predicted location of the ith keypoint respectively, y_rhip and y_lsho are the ground-truth locations of the right hip and left shoulder respectively, and r is a threshold bounded between 0 and 1. ||y_rhip − y_lsho||_2 represents the torso diameter. For example, [email protected] (r = 0.2) means that the distance between the predicted and ground-truth keypoint is at most 0.2 × torso diameter. PCKh is almost the same as PCK except that the tolerance range is a fraction of the head size.

c) Network parameter: For all our experiments, we use ResNet [9] as our backbone for image feature extraction, consisting of 4 residual blocks as shown in Fig. 3 and Fig. 4. Each deconvolutional layer uses 4 × 4 kernel filters. Each convolutional layer uses 1 × 1 kernel filters. The numbers of output channels of the residual blocks, deconvolutional layers, and convolutional layers are denoted by C and k as shown in Fig. 3 and Fig. 4. C is set to 256. k is set to 17 or 16 for the COCO or MPII dataset respectively.
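Returning to the two evaluation metrics above, they can be written compactly as in the following hedged sketch. It ignores the area-based details and per-dataset constants of the official COCO evaluator and is meant only to make Eqs. (2) and (3) concrete, not to replace the standard evaluation code.

```python
import numpy as np

def oks(pred, gt, vis, s, k_const):
    """Object Keypoint Similarity, Eq. (2).
    pred, gt: (n, 2) keypoint coordinates; vis: (n,) visibility flags;
    s: object scale; k_const: (n,) per-keypoint constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)
    labeled = vis > 0
    e = np.exp(-d2 / (2 * s ** 2 * k_const ** 2))
    return e[labeled].sum() / max(labeled.sum(), 1)

def pck(pred, gt, ref_length, r=0.5):
    """Fraction of keypoints whose error is within r * ref_length, Eq. (3).
    For PCKh, ref_length is the head size; for PCK, the torso diameter."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dist <= r * ref_length))

# toy usage
n = 17
gt = np.random.rand(n, 2) * 100
pred = gt + np.random.randn(n, 2)
print("OKS:", oks(pred, gt, np.full(n, 2), s=50.0, k_const=np.full(n, 0.05)))
print("[email protected]:", pck(pred, gt, ref_length=30.0))
```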
A. Experimental results on COCO dataset
Training. The data pre-processing and augmentation follow the setting in [8]. The ground-truth human bounding box is extended in height or width to a fixed aspect ratio (height : width = 4 : 3). The human box cropped from the image is resized to a fixed size of 256 × 192 for a fair comparison with [6], [7], [8]. The data augmentation includes random rotation (±30°), random scale (±40%), and flip. We use the Adam optimizer [22]. The batch size is 64. The learning schedule is set up as follows: the base learning rate is set to 1e-3, and is dropped to 1e-4 and 1e-5 at the 120th and 150th epoch, respectively. The training process is terminated within 170 epochs.
Testing. We use the two-stage top-down paradigm, similar to [7], [8]. Keypoint locations are obtained by taking the location of the highest heat value in the predicted heatmaps and applying a quarter offset in the direction from the highest response to the second-highest response.
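The quarter-offset decoding step can be sketched as follows; this mirrors how SimpleBaseline-style code bases commonly implement it, but the exact border handling used here is an assumption.

```python
import numpy as np

def decode_heatmap(hm):
    """Take the location of the highest heat value and shift it by a quarter
    pixel towards the second-highest neighbouring response (sketch)."""
    h, w = hm.shape
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    loc = np.array([x, y], dtype=np.float32)
    if 1 < x < w - 1 and 1 < y < h - 1:
        dx = np.sign(hm[y, x + 1] - hm[y, x - 1])   # direction of larger response
        dy = np.sign(hm[y + 1, x] - hm[y - 1, x])
        loc += 0.25 * np.array([dx, dy], dtype=np.float32)
    return loc  # coordinates in heatmap space; scale back to the input box
```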
Comparisons on COCO val2017 dataset. TABLE I reports our evaluation results compared to Hourglass [6], CPN [7], and SimpleBaseline [8]. Note that the results of Hourglass [6] are cited from [7]. For a fair comparison, we use the Faster-RCNN detector [23] with a detection AP of 56.4 (the same as that of SimpleBaseline [8]), while the person detection AP of Hourglass [6] and CPN [7] is 55.3.
As shown in TABLE I, both our architectures outperform Hourglass [6] and CPN [7]. With the same ResNet-50 backbone, our MRFeaNet2 achieves an AP score of 70.9, improving the AP by 4.0 and 2.3 points compared to Hourglass and CPN respectively. Online Hard Keypoints Mining (OHKM) proves effective, helping CPN gain 0.8 AP points (from 68.6 to 69.4), which is still 1.5 points lower than the AP of MRFeaNet2.
Compared to SimpleBaseline [8], our multi-resolution heatmap learning architectures have slightly worse performance. In the case of using the ResNet-50 backbone, SimpleBaseline has an AP score of 70.4 while the AP scores of MRHeatNet1 and MRHeatNet2 are 70.2 and 70.3 respectively. A possible explanation is that the deconvolutional layers cannot completely recover all the information that the feature extractor has already learned, so learning only from the outputs of the deconvolutional layers is not sufficient to generate the heatmaps.
On the other hand, our multi-resolution feature map learning architectures have better performance compared to Simple-Baseline [8]. With the ResNet-50 backbone, MRFeaNet1 gains AP by 0.2 points while the AP of MRFeaNet2 increases by 0.5 points. MRFeaNet2 still obtains the AP improvement of 0.4 and 0.6 points compared to SimpleBaseline in the case of using the ResNet-101 and ResNet-152 backbone, respectively. This proves that learning heatmap generation from multiple resolutions of the feature extractor can help improve the performance of keypoint prediction.
Comparisons on COCO test-dev dataset. TABLE II shows the performance of our models and previous methods on the COCO test-dev dataset. Note that the results of SimpleBaseline [8] are reproduced by us using the provided models. We use a human detector with a person detection AP of 60.9 on COCO test-dev for SimpleBaseline and our models. Our networks outperform bottom-up approaches. Our MRFeaNet2 achieves an AP improvement of 2.2 points compared to MultiPoseNet [18]. In comparison with top-down approaches, our models are better even with a smaller backbone and image size. Our MRFeaNet2, which uses the ResNet-50 backbone, obtains an AP of 70.4 while the AP score of G-RMI [20] is 68.5, even though the latter uses a larger backbone network, a larger image size, and extra training data. Compared to SimpleBaseline [8], our MRFeaNet2 still improves the AP by 0.4, 0.3, and 0.2 points in the case of using the ResNet-50, ResNet-101, and ResNet-152 backbone, respectively.
B. Experimental results on MPII dataset
Training. The data pre-processing and augmentation are similar to the setting in the experiment on the COCO dataset. The input size of human bounding box is set to 256 × 256 for a fair comparison with other methods. The data augmentation includes random rotation (±30 • ), random scale (±25%), and flip. Adam optimizer [22] is also used. The batch size is 64. The learning rate starts from 1e−3, drops to 1e−4 and 1e−5 at the 90th and 120th epoch, respectively. The training process is terminated within 140 epochs. On the other hand, the results also show that the performance could be improved if using the larger backbone network. To make this statement clear, the [email protected] scores of SimpleBaseline [8] and our models are presented on a chart as shown in Fig. 5. MRFeaNet1 152 , which is the best model on the MPII dataset, obtains the score improvement of 0.4 and 0.7 points compared to MRFeaNet1 101 and MRFeaNet1 50 respectively. MRHeatNet1 achieves the highest improvement which is 1.1 points when the backbone network is transformed from ResNet-50 to ResNet-152.
C. Qualitative results
Qualitative results on COCO test2017 dataset. We use our models trained on the COCO train2017 dataset with the ResNet-50 backbone to visualize human keypoint prediction. Our qualitative results on the unseen images of the COCO test2017 dataset are shown as in Fig. 6. Both our models work well on the simple cases (the 1 st and 2 nd row).
• The figures in the 3rd and 4th row are harder, with some occluded keypoints, but the multi-resolution feature map learning models still predict the human keypoints relatively precisely. The multi-resolution heatmap learning models do not work as well: MRHeatNet1 omits the right elbow in the 3rd row, and the eye detection of MRHeatNet2 is not reasonable in either of these two cases. • In the 5th row, both legs of the woman are hidden under the table, but both of our models still produce an estimate. The prediction results differ among the models; looking carefully at the hip predictions, the locations proposed by MRFeaNet2 are the most plausible. Qualitative results on MPII dataset. We use our MRFeaNet1 model trained on a subset of the MPII training set with the ResNet-152 backbone to visualize human keypoint prediction. Fig. 7 shows the keypoint predictions and corresponding heatmaps on the unseen images of the MPII test set. Each heatmap represents the location confidence of the respective keypoint. For the simple cases in the 1st and 2nd row, all keypoints are predicted with high confidence.
• The man in the 3rd row has his right leg and left ankle occluded, so the prediction of these keypoints has low confidence. However, all prediction results in this case are reasonable and acceptable. • In the 4th row, both ankles of the man are not visible, so the ankle prediction is unreliable; the heatmaps corresponding to these two ankles are nevertheless meaningful, as no location is predicted with high confidence.
V. CONCLUSION
In this paper, we introduce two novel approaches for multi-resolution representation learning in human pose estimation. The first approach combines a multi-resolution representation learning strategy with the heatmap generator, where the heatmaps are generated at each resolution of the deconvolutional layers. The second approach generates the heatmaps from each resolution of the feature extractor. Our multi-resolution feature map learning models outperform the baseline and many previous methods, while the proposed architectures remain relatively straightforward and easy to integrate. Future work includes applications to other tasks that have an encoder-decoder architecture (feature extraction followed by a task-specific head), such as image captioning and image segmentation.
Fig. 1: Simple pipeline for human pose estimation using heatmaps.
Fig. 2: Human pose estimation using deconvolutional layers as the heatmap generator.
Fig. 3: Multi-resolution heatmap learning. We propose two architectures for generating the heatmaps at each resolution of the deconvolutional layers. (a) The lowest-resolution heatmaps are upsampled and then combined with the higher-resolution heatmaps. (b) The heatmaps at each resolution are individually learned and then combined at the end. The residual block halves the resolution of the input. The deconvolutional layer doubles the resolution of the input.
Fig. 4: Multi-resolution feature map learning. We propose two architectures for learning the features at each resolution of the residual blocks. (a) The number of output channels of the deconvolutional layers is kept unchanged. (b) The number of output channels differs among the deconvolutional layers. The highest-resolution heatmaps are obtained from the feature maps at each resolution of the feature extractor. Notations in Fig. 3 are also used here. The residual block halves the resolution of the input. The deconvolutional layer doubles the resolution of the input.
Contributions: Our main contributions are:
• We introduce two novel approaches to achieve multi-resolution representation for both heatmap generation and feature map extraction.
• Our architectures are simple yet effective, and experiments show the superiority of our approaches over numerous methods.
• Our approaches could be applied to other tasks that have the architecture of encoder (feature extractor) - decoder (specific tasks), such as image captioning and image segmentation.
TABLE I: Comparisons on COCO val2017 dataset. OHKM means Online Hard Keypoints Mining. Pretrain means the backbone is pre-trained on the ImageNet classification task.
TABLE II: Comparisons on COCO test-dev dataset.

Method | Backbone | Input size | AP | AP50 | AP75 | APM | APL | AR | AR50 | AR75 | ARM | ARL
Bottom-up approach: keypoint detection and grouping
OpenPose [15] | - | - | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 | - | - | - | - | -
Associative Embedding [16] | - | - | 65.5 | 86.8 | 72.3 | 60.6 | 72.6 | 70.2 | 89.5 | 76.0 | 64.6 | 78.1
PersonLab [17] | ResNet-152 | - | 68.7 | 89.0 | 75.4 | 64.1 | 75.5 | 75.4 | 92.7 | 81.2 | 69.7 | 83.0
MultiPoseNet [18] | - | - | 69.6 | 86.3 | 76.6 | 65.0 | 76.3 | 73.5 | 88.1 | 79.5 | 68.6 | 80.3
Top-down approach: person detection and single-person keypoint detection
Mask-RCNN [19] | ResNet-50-FPN | - | 63.1 | 87.3 | 68.7 | 57.8 | 71.4 | - | - | - | - | -
G-RMI [20] | ResNet-101 | 353 × 257 | 64.9 | 85.5 | 71.3 | 62.3 | 70.0 | 69.7 | 88.7 | 75.5 | 64.4 | 77.1
Integral Pose Regression [21] | ResNet-101 | 256 × 256 | 67.8 | 88.2 | 74.8 | 63.9 | 74.0 | - | - | - | - | -
G-RMI + extra data [20] | ResNet-101 | 353 × 257 | 68.5 | 87.1 | 75.5 | 65.8 | 73.3 | 73.3 | 90.1 | 79.5 | 68.1 | 80.4
SimpleBaseline [8] | ResNet-50 | 256 × 192 | 70.0 | 90.9 | 77.9 | 66.8 | 75.8 | 75.6 | 94.5 | 83.0 | 71.5 | 81.3
SimpleBaseline [8] | ResNet-101 | 256 × 192 | 70.9 | 91.1 | 79.3 | 67.9 | 76.7 | 76.7 | 94.9 | 84.2 | 72.7 | 82.2
SimpleBaseline [8] | ResNet-152 | 256 × 192 | 71.6 | 91.2 | 80.1 | 68.7 | 77.2 | 77.2 | 94.9 | 85.0 | 73.4 | 82.6
Our multi-resolution representation learning models
MRHeatNet1 | ResNet-50 | 256 × 192 | 69.7 | 90.8 | 77.8 | 66.6 | 75.4 | 75.4 | 94.4 | 82.9 | 71.3 | 81.1
MRHeatNet2 | ResNet-50 | 256 × 192 | 69.9 | 90.8 | 78.3 | 66.9 | 75.6 | 75.6 | 94.5 | 83.3 | 71.6 | 81.2
MRFeaNet1 | ResNet-50 | 256 × 192 | 70.1 | 90.7 | 78.4 | 67.0 | 75.9 | 75.8 | 94.3 | 83.3 | 71.7 | 81.3
MRFeaNet2 | ResNet-50 | 256 × 192 | 70.4 | 90.9 | 78.7 | 67.3 | 76.3 | 76.2 | 94.6 | 83.7 | 72.0 | 81.9
MRFeaNet2 | ResNet-101 | 256 × 192 | 71.2 | 91.0 | 79.6 | 68.2 | 76.9 | 77.0 | 94.7 | 84.5 | 72.9 | 82.5
MRFeaNet2 | ResNet-152 | 256 × 192 | 71.8 | 91.2 | 80.1 | 68.9 | 77.5 | 77.4 | 94.8 | 84.9 | 73.5 | 82.8
Testing. We use the human bounding boxes provided with the images. TABLE III shows the PCKh scores of our architectures and previous methods at r = 0.5. The results of SimpleBaseline [8] are reproduced by us using the provided models. Similar to the experiments on the COCO dataset, our multi-resolution representation learning architectures outperform numerous previous methods. In comparison with SimpleBaseline [8], the multi-resolution feature map learning method achieves better performance. Our MRFeaNet1 gains the [email protected] score by 0.6, 0.3 and 0.2 points compared to SimpleBaseline in the case of using the ResNet-50, ResNet-101, and ResNet-152 backbone, respectively.

Fig. 6: Qualitative results of our proposed architectures on COCO test2017 dataset.

Fig. 7: Qualitative results of our MRFeaNet1 (152) on MPII test set. Each prediction has 16 heatmaps corresponding to 16 human keypoints. From left to right, top to bottom, these 16 keypoints are right ankle, right knee, right hip, left hip, left knee, left ankle, pelvis, thorax, upper neck, head top, right wrist, right elbow, right shoulder, left shoulder, left elbow, and left wrist.

TABLE III: Comparisons on MPII dataset ([email protected]). (50), (101), or (152) means the ResNet-50, ResNet-101, or ResNet-152 backbone is used, respectively.

Method | Hea | Sho | Elb | Wri | Hip | Kne | Ank | Total
Pishchulin et al. [24] | 74.3 | 49.0 | 40.8 | 34.1 | 36.5 | 34.4 | 35.2 | 44.1
Tompson et al. [25] | 95.8 | 90.3 | 80.5 | 74.3 | 77.6 | 69.7 | 62.8 | 79.6
Carreira et al. [26] | 95.7 | 91.7 | 81.7 | 72.4 | 82.8 | 73.2 | 66.4 | 81.3
Tompson et al. [4] | 96.1 | 91.9 | 83.9 | 77.8 | 80.9 | 72.3 | 64.8 | 82.0
Hu et al. [27] | 95.0 | 91.6 | 83.0 | 76.6 | 81.9 | 74.5 | 69.5 | 82.4
Pishchulin et al. [28] | 94.1 | 90.2 | 83.4 | 77.3 | 82.6 | 75.7 | 68.6 | 82.4
Lifshitz et al. [29] | 97.8 | 93.3 | 85.7 | 80.4 | 85.3 | 76.6 | 70.2 | 85.0
Gkioxary et al. [30] | 96.2 | 93.1 | 86.7 | 82.1 | 85.2 | 81.4 | 74.1 | 86.1
Rafi et al. [31] | 97.2 | 93.9 | 86.4 | 81.3 | 86.8 | 80.6 | 73.4 | 86.3
Belagiannis et al. [32] | 97.7 | 95.0 | 88.2 | 83.0 | 87.9 | 82.6 | 78.4 | 88.1
Insafutdinov et al. [33] | 96.8 | 95.2 | 89.3 | 84.4 | 88.4 | 83.4 | 78.0 | 88.5
Wei et al. [5] | 97.8 | 95.0 | 88.7 | 84.0 | 88.4 | 82.8 | 79.4 | 88.5
SimpleBaseline (50) [8] | 96.4 | 95.3 | 89.0 | 83.2 | 88.4 | 84.0 | 79.6 | 88.5
MRHeatNet1 (50) | 96.7 | 95.2 | 88.9 | 83.8 | 88.1 | 83.6 | 78.6 | 88.4
MRHeatNet2 (50) | 96.8 | 95.5 | 88.6 | 83.8 | 88.5 | 83.6 | 78.7 | 88.5
MRFeaNet1 (50) | 96.5 | 95.5 | 89.6 | 84.3 | 88.6 | 84.6 | 80.6 | 89.1
MRFeaNet2 (50) | 96.6 | 95.4 | 88.9 | 83.9 | 88.5 | 84.6 | 80.9 | 88.9
SimpleBaseline (101) [8] | 96.9 | 95.9 | 89.5 | 84.4 | 88.4 | 84.5 | 80.7 | 89.1
MRHeatNet1 (101) | 96.7 | 95.7 | 89.7 | 84.4 | 89.1 | 84.7 | 81.4 | 89.3
MRHeatNet2 (101) | 97.4 | 95.6 | 89.3 | 84.2 | 89.0 | 84.9 | 81.2 | 89.3
MRFeaNet1 (101) | 96.8 | 95.6 | 89.4 | 84.6 | 89.2 | 85.2 | 81.2 | 89.4
MRFeaNet2 (101) | 96.6 | 95.2 | 89.3 | 84.2 | 89.2 | 85.9 | 81.6 | 89.3
SimpleBaseline (152) [8] | 97.0 | 95.9 | 90.0 | 85.0 | 89.2 | 85.3 | 81.3 | 89.6
MRHeatNet1 (152) | 96.8 | 96.0 | 90.1 | 84.4 | 88.9 | 85.3 | 81.4 | 89.5
MRHeatNet2 (152) | 96.9 | 95.6 | 89.9 | 84.6 | 88.9 | 86.0 | 81.2 | 89.5
MRFeaNet1 (152) | 97.2 | 95.9 | 90.2 | 85.3 | 89.3 | 85.4 | 82.0 | 89.8
MRFeaNet2 (152) | 96.7 | 95.4 | 89.9 | 85.1 | 88.8 | 85.7 | 81.8 | 89.5
Fig. 5: [email protected] score of SimpleBaseline and our models on MPII dataset.
[1] C.-H. Chen and D. Ramanan, "3d human pose estimation = 2d pose estimation + matching," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7035-7043.
[2] A. Toshev and C. Szegedy, "Deeppose: Human pose estimation via deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1653-1660.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[4] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler, "Efficient object localization using convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 648-656.
[5] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, "Convolutional pose machines," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4724-4732.
[6] A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," in European Conference on Computer Vision. Springer, 2016, pp. 483-499.
[7] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun, "Cascaded pyramid network for multi-person pose estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7103-7112.
[8] B. Xiao, H. Wu, and Y. Wei, "Simple baselines for human pose estimation and tracking," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 466-481.
[9] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[10] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision. Springer, 2014, pp. 740-755.
[11] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, "2d human pose estimation: New benchmark and state of the art analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3686-3693.
[12] V. Dumoulin and F. Visin, "A guide to convolution arithmetic for deep learning," arXiv preprint arXiv:1603.07285, 2016.
[13] COCO, "COCO - Common Objects in Context," http://cocodataset.org/#keypoints-eval.
[14] Y. Yang and D. Ramanan, "Articulated pose estimation with flexible mixtures-of-parts," in CVPR 2011. IEEE, 2011, pp. 1385-1392.
[15] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7291-7299.
[16] A. Newell, Z. Huang, and J. Deng, "Associative embedding: End-to-end learning for joint detection and grouping," in Advances in Neural Information Processing Systems, 2017, pp. 2277-2287.
[17] G. Papandreou, T. Zhu, L.-C. Chen, S. Gidaris, J. Tompson, and K. Murphy, "Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 269-286.
[18] M. Kocabas, S. Karagoz, and E. Akbas, "Multiposenet: Fast multi-person pose estimation using pose residual network," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 417-433.
[19] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961-2969.
[20] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy, "Towards accurate multi-person pose estimation in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4903-4911.
[21] X. Sun, B. Xiao, F. Wei, S. Liang, and Y. Wei, "Integral human pose regression," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 529-545.
[22] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[23] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91-99.
[24] L. Pishchulin, M. Andriluka, P. Gehler, and B. Schiele, "Strong appearance and expressive spatial models for human pose estimation," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 3487-3494.
[25] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler, "Joint training of a convolutional network and a graphical model for human pose estimation," in Advances in Neural Information Processing Systems, 2014, pp. 1799-1807.
[26] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik, "Human pose estimation with iterative error feedback," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4733-4742.
[27] P. Hu and D. Ramanan, "Bottom-up and top-down reasoning with hierarchical rectified gaussians," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5600-5609.
[28] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and B. Schiele, "Deepcut: Joint subset partition and labeling for multi person pose estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4929-4937.
[29] I. Lifshitz, E. Fetaya, and S. Ullman, "Human pose estimation using deep consensus voting," in European Conference on Computer Vision. Springer, 2016, pp. 246-260.
[30] G. Gkioxari, A. Toshev, and N. Jaitly, "Chained predictions using convolutional neural networks," in European Conference on Computer Vision. Springer, 2016, pp. 728-743.
[31] U. Rafi, B. Leibe, J. Gall, and I. Kostrikov, "An efficient convolutional network for human pose estimation," in BMVC, vol. 1, 2016, p. 2.
[32] V. Belagiannis and A. Zisserman, "Recurrent human pose estimation," in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). IEEE, 2017, pp. 468-475.
[33] E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, and B. Schiele, "Deepercut: A deeper, stronger, and faster multi-person pose estimation model," in European Conference on Computer Vision. Springer, 2016, pp. 34-50.
| []
|
[
"Warning Signs for Wave Speed Transitions of Noisy Fisher-KPP Invasion Fronts",
"Warning Signs for Wave Speed Transitions of Noisy Fisher-KPP Invasion Fronts"
]
| [
"Christian Kuehn "
]
| []
| []
| Invasion waves are a fundamental building block of theoretical ecology. In this study we aim to take the first steps to link propagation failure and fast acceleration of traveling waves to critical transitions (or tipping points). The approach is based upon a detailed numerical study of various versions of the Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) equation. The main motivation of this work is to contribute to the following question: how much information do statistics, collected by a stationary observer, contain about the speed and bifurcations of traveling waves? We suggest warning signs based upon closeness to carrying capacity, second-order moments and transients of localized initial invasions. | 10.1007/s12080-013-0189-1 | [
"https://arxiv.org/pdf/1212.0356v1.pdf"
]
| 14,058,487 | 1212.0356 | 90a265d52288957b6a298f2eb01dfe9d38369c41 |
Warning Signs for Wave Speed Transitions of Noisy Fisher-KPP Invasion Fronts
3 Dec 2012
Christian Kuehn
Warning Signs for Wave Speed Transitions of Noisy Fisher-KPP Invasion Fronts
3 Dec 2012
Keywords: Critical transitions · invasion waves · propagation failure · Fisher-KPP · FKPP · SPDE
Invasion waves are a fundamental building block of theoretical ecology. In this study we aim to take the first steps to link propagation failure and fast acceleration of traveling waves to critical transitions (or tipping points). The approach is based upon a detailed numerical study of various versions of the Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) equation. The main motivation of this work is to contribute to the following question: how much information do statistics, collected by a stationary observer, contain about the speed and bifurcations of traveling waves? We suggest warning signs based upon closeness to carrying capacity, second-order moments and transients of localized initial invasions.
Introduction
The propagation of waves has been a central topic in spatial ecology for a long time. A primary motivation arises from fronts where a new species is introduced into an environment or an existing species considerably extends its habitat. A classical example is the spread of muskrats in central Europe [77]. Other documented examples are butterflies and bush crickets in the UK [79] and the cane toad invasion in Australia [63]. Also bacterial growth [54,26] shows very similar spreading and wave phenomena. The references in [31,76] contain even more examples.
From a theoretical perspective a first groundbreaking result is the modelling of invasion waves via reaction-diffusion equations by Fisher [22] and by Kolmogorov, Petrovskii, Piscounov [43] (FKPP), who studied the partial differential equation (PDE)
∂u/∂t = ∂²u/∂x² + u(1 − u).   (1)
There are many different aspects that could be included in a reaction-diffusion model which are very interesting to match theory and experiment; see [31,33,20,53]. Nevertheless, the basic guiding principles obtained from simple models are still highly relevant. Here we shall restrict ourselves to the study of the following stochastic partial differential equation (SPDE)
∂u/∂t = ∂²u/∂x² + f(u) + 'noise'.   (2)
For now, the reader may just think of the classical FKPP nonlinearity f (u) = u(1 − u) and some noise process that vanishes at zero-population level; for more technical details see Sections 3-4. The detailed choices are discussed later.
The main theme of this paper is the interplay between invasion waves and so-called critical transitions [71,45]. Basically, critical transitions (or tipping points) are drastic sudden changes in dynamical systems; for some background and details see Section 2. The first major question is whether (1)- (2) can undergo a 'critical transition'. We discuss this question from a more technical perspective in Section 2. On a heuristic level, one may just consider a parameter in (2) that is slowly varying. Suppose that there exists a wave with positive speed for some parameter range while the wave is stationary (or reverses direction) for another parameter range. Whether an invasion reaches a new habitat or not can have drastically different consequences so one probably would like to refer to this situation as a critical transition.
Another case we shall consider in this paper is the situation where the wave speed becomes infinite at a special parameter value. Hence, a small parameter variation can cause a dramatically accelerating invasion wave. The next step is to check whether early-warning signs for a critical transition exist. In this context, changes in vegetation patterns have been the main motivation recently [41,34]. There are only a few studies on early-warning signs for spatial systems [14,13]. In fact, early-warning signs for noisy waves generated by SPDEs have not been considered yet. This paper makes a first step in this direction. We focus on transitions for the wave speed (e.g. propagation failure) as it controls when and where an invasion front appears. Although some detailed measurements of waves are available [35,51] it is very difficult to obtain precise global empirical information [31, p.92] about a wave. Here we restrict ourselves to a single spatial observation location i.e. records by a single stationary 'ecological observer' over a fixed time interval. The general idea that one may obtain spatial conclusions from local observations is not new [21]. However, our detailed comparative numerical study of several different variants of (2) with a focus on local early-warning signs seems to be a completely new direction; for more details on the numerical methods see Section 5. The main themes and results from the numerical studies are the following:
(a) A description of the statistics for early-warning signs in SPDEs with wave propagation failure based upon closeness to carrying capacity and second-order moments. (b) A comparative study of (a) for different noise types (white, space-time white) and different multiplicative noise nonlinearities (parametric, finitesystem size, etc.). (c) Investigation of statistics near continuous wave speed transitions (and their unpredictability) for Allee effect nonlinearities in the deterministic part of the FKPP SPDE.
(d) Suggestion of transient minima to analyze wave propagation failure and wave speed blow-up.
Beyond the technical contributions we also try to link different methodologies. We combine approaches from biological invasions, critical transitions, Fisher-KPP (and Nagumo) waves, SPDEs and numerical methods. This approach should also be helpful to link several, mostly distinct, communities such as theoretical ecology, waves in theoretical physics and mathematical methods for SPDEs.
The paper is organized as follows. In Sections 2-5 we give brief reviews of the essential facts required for the remaining part of the paper. Due to the interdisciplinary aspects, the brief reviews seem necessary. Readers familiar with all the background may forward to Section 6 where the multiplicative noise case for the FKPP SPDE and statistical warning signs are studied. The nonlinear noise case is considered in Section 7 and the Allee effect in Section 8. Section 9 on transient phenomena and the influence of initial conditions concludes the main part of the paper. In Section 10 a number of generalizations and open problems are listed.
Background -Critical Transitions
A primary motivation to study critical transitions (or tipping points) arose from ecology, e.g. due to the theoretical work of Scheffer and co-workers [73,72,74]. Then it became clear from many distinct applied problems [71] as well as from abstract mathematical considerations [45,46] that many features for early-warning signs are generic across many dynamical systems. Recent studies of laboratory [16,82] and full ecosystem [9] experiments re-inforced this viewpoint.
Here we recall a few aspects of critical transitions for finite-dimensional systems relevant for this paper. Consider the pitchfork bifurcation normal form [47, p.282]
dw/dt = w′ = µw + w³,   for w ∈ R, µ ∈ R.   (3)
The homogeneous trivial branch {w = 0} consists of stable equilibria for µ < 0 and unstable equilibria for µ > 0 since the linearized system around w = 0 is W ′ = µW with solution W (t) = W (0)e µt . The bifurcation at µ = 0 is sub-critical with two unstable branches {w = ± √ −µ} for µ < 0. Consider a slow parameter variation µ ′ = ǫ with 0 < ǫ ≪ 1 and µ(0) < 0. Orbits near the homogeneous branch will reach a neighborhood of (w, µ) = (0, 0) and then jump away quickly indicating a critical transition [45,Fig.3(c)]. Before the jump the system is slow to recover from perturbations ('slowing down') for µ < 0 since W (0)e µt → W (0) as µ → 0 for fixed t. For a deterministic system, it is impossible to measure the slowing-down effect once it starts tracking the homogeneous branch {w = 0} i.e. it is exponentially close to w = 0. However, for a stochastic version of (3) given by
w′ = µw + w³ + 'noise'   (4)
the random perturbations can constantly kick the system away from the trivial branch. Extracting statistics from these perturbations can make the slowing down effect measurable [71,45]. This is one motivation to study stochastic traveling waves (2).
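As a minimal illustration of this idea, the following sketch integrates the stochastic normal form (4) with a slowly drifting parameter and tracks a sliding-window variance, which grows as µ approaches 0. All numerical values (noise level, drift speed, window length) are illustrative choices and not taken from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_warning_sign(mu_start=-1.0, mu_end=-0.05, eps=0.01,
                          sigma=0.02, dt=1e-3):
    """Euler-Maruyama simulation of w' = mu*w + w**3 + noise with a slow
    parameter drift mu' = eps; returns sliding-window variances of w,
    which increase towards the bifurcation at mu = 0 (slowing down)."""
    mu, w = mu_start, 0.0
    window, variances, mus = [], [], []
    while mu < mu_end:
        w += (mu * w + w ** 3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        mu += eps * dt
        window.append(w)
        if len(window) == 2000:          # variance over a sliding window
            variances.append(np.var(window))
            mus.append(mu)
            window = window[1000:]
    return np.array(mus), np.array(variances)

mus, variances = variance_warning_sign()
print(variances[:3], variances[-3:])     # variance grows as mu approaches 0
```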
An important question is which bifurcation points or quantitative transitions we would like to classify as critical transitions. In multiple time scale systems, such as (4) augmented with µ ′ = ǫ, the classification of local bifurcation points is relatively straightforward [46,. The mathematical classification and early-warning signs from [45,46] can be applied to many pattern-forming bifurcations in spatially extended systems on bounded domains. One first derives the amplitude equations on the domain locally [12]. Only a discrete set of eigenvalues occurs [36, p.210] and the usual local bifurcations for a finite number of eigenvalues passing through the imaginary axis can often be applied.
For patterns on unbounded domains the situation is less clear. We do not offer any solution to this problem and consider an example to illustrate the difficulties. Consider a traveling wave solution u(x, t) = u(x − ct), e.g. for (1), with (x, t) ∈ R × R + which satisfies u(x, 0) = 1 for x ≤ 0 and u(x, 0) = 0 for x > 0. Imagine a habitat [x 1 , x 2 ] ⊂ R with x 1,2 > 0 and define the mapping
I(u, T) = (1 / |x₂ − x₁|) ∫_{x₁}^{x₂} |u(x, T)| dx.
If the invasion wave spreads towards x = ∞ (s > 0) and saturates at the carrying capacity u ≡ 1 then there exists a finite time T i such that for all T ≥ T i we have I(u, T ) = 1. If a slow parameter variation causes the wave to become stationary (s = 0) then I(u, T ) = 0 for all T ≥ 0. Although this indicates how one may define one possible critical transition scenario for waves, the situation is actually unclear since for fixed T > 0 one may have I(u, T ) = 0 for s > 0 and s = 0. This illustrates again that global definitions are intricate [46,Sec.8].
For this paper we simply rely on the intuitive notion that the transition to a standing wave and also the transition to wave speed blow-up are important in the context of critical transitions and earlywarning signs.
Background -FKPP Equation(s)
A more general version of the PDE (1) studied by Fisher [22] as well as by Kolmogorov, Petrovskii and Piscounov [43] is given by
∂u/∂t = D ∂²u/∂x² + f(u; µ)   (5)
for u = u(x, t), (x, t) ∈ R × [0, ∞).
The parameter D > 0 controls the diffusion and if f (u; µ) = µf (u) then µ > 0 can be interpreted as a growth rate. Of course, an initial condition has to be specified. Often one considers u(x, t = 0) either with compact support localized near x = 0 or an initial condition with Gaussian decay. The localized initial condition for a population u to appear in a new environment is not only a mathematical simplification but does occur under realistic conditions e.g. due to global long-range transportation networks [44]. The nonlinearity f : R 2 → R represents growth and saturation effects and is required to satisfy the conditions
f (0; µ) = 0, f ′ (0; µ) > 0, f (1; µ) = 0, f ′ (1; µ) < 0.
The classical example is logistic growth f(u; µ) = µu(1 − u). In this case one may rescale t → t/µ, x → x√(D/µ) to obtain from (5) the FKPP equation
∂u/∂t = ∂²u/∂x² + u(1 − u).   (6)
Initially, the FKPP equation (6) modeled the spread of genes in a population but it has since become a paradigmatic model for populations dispersing under the influence of diffusion [59, p.439-444]. Using a traveling wave ansatz u(x, t) = u(x − ct) =: u(ξ) for (6) yields the ODE
d²u/dξ² + c du/dξ + u(1 − u) = 0.   (7)
Analyzing (7) in the phase space variables (u, u ′ ) =:
(u, v) shows that the point (u, v) = (1, 0) is a saddle and (u, v) = (0, 0) is a stable node or spiral. It is straightforward [59, p.441-442] to check that heteroclinic orbits from (1, 0) to (0, 0) with u ≥ 0, which correspond to non-negative traveling waves, can only exist in the stable node case for wave speeds c ≥ 2.
Since the orbit is directed from u = 1 to u = 0 one also refers to this situation as the stable state u ≡ 1 invading the unstable state u ≡ 0; see Figure 1(a).
Remark: Wave speeds c > 0 correspond to waves traveling to the right. However, the FKPP equation (6) is invariant under the symmetry x → −x so that a localized initial condition near x = 0 triggers a pair of fronts, one traveling to the left one to the right.
It is known that the traveling wave solutions to (6) form the important solution set [30, Thm 1.4-1.5]. The minimal wave speed c * F KP P = 2 is the asymptotic speed of propagation for the FKPP nonlinearity [4]. Since the wave speed is determined by the linearized problem
∂ũ/∂t = ∂²ũ/∂x² + ũ   (8)
at the leading edge near (u, v) = (0, 0) as detailed in [81, p.38-42] one also refers to the traveling wave with c * F KP P = 2 as a pulled front. For a more general nonlinearity f (u; µ) pushed fronts can exist where the asymptotic wave speed c is larger than the linear spreading speed c * [81, p.56]. The wave speed for both types is asymptotic and only achieved after a transient period. For pulled fronts the asymptotic expansion yields [81, p.78]
c(t) = c* − k₁/t + k₂/t^{3/2} + O(1/t²),   as t → ∞,
with explicitly computable positive constants k 1,2 > 0. Hence, the wave speed is approached from below by a power law for pulled fronts. For pushed fronts the convergence to the asymptotic speed is exponentially fast [81, p.74]. Another correction occurs when a cutoff for the reaction term is introduced [7] which leads to a logarithmic correction term. Furthermore, if an initial condition does not decay fast enough as |x| → ∞ then faster speeds than c * occur [81, p.46]. In particular, for an initial condition decaying like O(e −α|x| ) for α > 0 the speed increases as
c(α) = O(1/α) as α → 0 [70].
The results for invasion fronts of the FKPP equation already indicate that the variety of scaling behaviors could be ideal to determine early-warnings. In fact, wave spreading in the stochastic case is even more intricate [48,49].
Background -Stochastic PDEs
As a stochastic generalization of (5) the intuitive idea is to consider the equation
∂u/∂t = D ∂²u/∂x² + µf(u) + g(u) η(x, t)   (9)
where η(x, t) formally represents the 'noise'. Here we consider two choices for the term η(x, t). The simplest is to consider a real-valued (1D) Brownian mo-
tion B(t) [17, Chapter 8] with mean E[B(t)] = 0 and covariance E[B(t)B(s)] = min(t, s) for 0 ≤ s ≤ t.
Then white noise can be defined via η(x, t) = η(t) = Ḃ(t), where the derivative is with respect to time and interpreted in the generalized sense [3, p.52-53]. The covariance is E[η(t)η(s)] = δ(t − s) and one may then write (9) in two equivalent forms
∂u/∂t = D ∂²u/∂x² + µf(u) + g(u)Ḃ(t),   or equivalently   du = [D ∂²u/∂x² + µf(u)] dt + g(u) dB.   (10)
The existence and regularity theory of (10) is well understood [23]. If the noise should depend on space and time the theory is substantially more involved. One possibility is to consider a Hilbert space U (e.g. L 2 (R)) and a symmetric non-negative linear operator Q acting on U and define a U -valued Q-Wiener process W (t). If T r(Q) < +∞ there exists a complete orthonormal system {f k } ∞ k=1 such that
Qf k = λ k f k , for k ∈ N where {λ k } ∞ k=1
is a nonnegative bounded sequence. Then one may use the convergent sequence One is tempted to take Q = Id to mirror the finite-dimensional case to obtain a 'white noise' process. However, Q = Id is not of trace-class (since T r(Q) = +∞) and the series (11) does not converge. However, one may construct a cylindrical Wiener process for Q = Id [66, p.96-99] and characterize space-time white noise asẆ = η(x, t) which has covariance E[η(x, t)η(y, s)] = δ(x − y)δ(t − s). The existence and regularity theory for space-time white noise is slightly more involved and already leads to problems if x ∈ R 2 [10, p.54]. Since we exclusively restrict to x ∈ R these problems do not arise here and the existence theory works [29, Section 6.1].
W (t) = ∞ k=1 λ k B k (t)f k(11)u = u(·, t) in the form ∂u ∂t = D ∂ 2 u ∂x 2 + µf (u) + g(u)Ẇ du = D ∂ 2 u ∂x 2 + µf (u) dt + g(u)dW.(12)
Remark: Instead of viewing the equation on function spaces one may also consider an approach [84] where the solution u = u(x, t) is a real-valued random field which is a basically equivalent [39] approach. In this case, one has
E[W (x, t)W (y, s)] = min(t, s) min(x, y) and that space-time white noise is ∂ 2 ∂x∂t W (x, t) = η(x, t).
Background -Numerical SPDEs
First, we briefly review basic methods to solve the SPDE (12) numerically for space-time white noise. The case (10) will follow as a special case. A natural first step is to start with a spatial discretization [37]. Consider a finite interval [x 1 , x N ] ⊂ R for some N > 1 and augment (12) with zero, reflective or periodic boundary conditions. Define (∆x) := (x N − x 1 )/N and consider the numerical solution U j (t) ≈ u(x 1 + (j − 1)(∆x), t). Then the space-discrete version of (12) is a system of stochastic ordinary differential equations (SODEs)
dU_j = [ D/(∆x)² Σ_{l=1}^{N} L_{jl} U_l + µf(U_j) ] dt + [ g(U_j)/√(∆x) ] dB_j,   (13)
with drift F_j(U) := D/(∆x)² Σ_{l=1}^{N} L_{jl} U_l + µf(U_j) and diffusion G_jj(U) := g(U_j)/√(∆x),
for j = 1, 2, . . . , N, where {B_j}_{j=1}^{N} are independent one-dimensional Brownian motions and the N × N matrix L depends on the boundary conditions. For reflection conditions [24] it follows that L_jj = −2 if j ∈ {2, 3, . . . , N − 1}, L_11 = −1 = L_NN, L_ij = 1 if |i − j| = 1 and L_ij = 0 otherwise. For periodic conditions [24] one uses L_jj = −2 for all j, L_1N = 1 = L_N1, L_ij = 1 if |i − j| = 1 and L_ij = 0 otherwise. For zero boundary conditions [28] the values U_1 ≡ 0 ≡ U_N are fixed, the first and last equation in (13) are discarded and the (N − 2) × (N − 2) matrix L obeys L_jj = −2 for j ∈ {2, 3, . . . , N − 1}, L_ij = 1 if |i − j| = 1 and L_ij = 0 otherwise. For the simpler case (10) one has a single Brownian motion so that B_j = B for all j and the factor 1/√(∆x) in (13) is removed [24].
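A small sketch of how the matrix L can be assembled for the three boundary conditions; the function name and the treatment of the zero-Dirichlet case as an interior block are illustrative choices.

```python
import numpy as np

def laplacian_matrix(N, bc="reflective"):
    """Finite-difference matrix L for the boundary conditions described above."""
    L = -2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    if bc == "reflective":
        L[0, 0] = L[-1, -1] = -1.0
    elif bc == "periodic":
        L[0, -1] = L[-1, 0] = 1.0
    elif bc == "zero":
        # U_1 = U_N = 0 are fixed; only the interior (N-2)x(N-2) block evolves
        L = L[1:-1, 1:-1]
    return L

print(laplacian_matrix(5, "periodic"))
```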
It remains to solve the SODE (13) which can be more compactly written as
dU = F(U) dt + G(U) dB,   (14)
where we view F(U) = (F_1(U), . . . , F_N(U))^T and dB = (dB_1, . . . , dB_N)^T as (column) vectors and G(U) is a diagonal matrix with diagonal entries G_jj(U). As a numerical scheme we shall always use either the Euler-Maruyama method [32] or a Milstein-type method [38]; the latter achieves a higher strong order of convergence while, for diagonal noise, remaining almost as simple to implement. Hence, the Milstein method provides a quite remarkable compromise between theoretical error estimates and practical implementation issues; see also [32] for a computational introduction with test codes for scalar problems. The Euler-Maruyama method is faster but not as robust, so it complements Milstein methods nicely if many sample paths have to be calculated for a well-understood parameter regime.
To state both schemes consider t ∈ [0, T ] and define (∆t) := T /K for some fixed K ∈ N. Denote the numerical solution by U k ≈ U (k(∆t)) where k ∈ {0, 1, 2, . . . , K} and let (∆B k ) = B((k + 1)(∆t)) − B(k(∆t)) denote a vector of N (0, ∆t) normally distributed independent increments used at the k-th time step. The explicit Euler-Maruyama method is given by
U_j^{k+1} = U_j^k + ∆t F_j(U^k) + G_jj(U^k) (∆B^k)_j,   (15)
while the explicit Milstein scheme adds a correction involving the derivative of the diffusion coefficient,
U_j^{k+1} = U_j^k + ∆t F_j(U^k) + G_jj(U^k) (∆B^k)_j + (1/2) G_jj(U^k) (∂G_jj/∂U_j)(U^k) [((∆B^k)_j)² − ∆t].   (16)
For non-diagonal noise satisfying a suitable commutativity condition the scheme is still quite simple [42, p.348,(3.16)] while for more general cases one has to be careful [38]. The implicit version of the Milstein scheme for our problem is [42, p.400]
U_j^{k+1} = U_j^k + ∆t F_j(U^{k+1}) + G_jj(⋆) (∆B^k)_j + (1/2) G_jj(⋆) (∂G_jj/∂U_j)(⋆) [((∆B^k)_j)² − ∆t],   (17)
where we have the choice to make the scheme fully implicit with ⋆ = U^{k+1} or semi-implicit with ⋆ = U^k. Since the deterministic drift term F causes the stability problems if D(∆t) > (∆x)² [24, p.64,67] it makes sense to choose the semi-implicit version. The algebraic problem for U^{k+1} in (17) can be solved using standard techniques such as Newton's method.
It should be noted that the convergence and error estimate of the numerical scheme do not immediately yield error estimates for quantitative properties or scaling laws of traveling waves for the FKPP equation. For example, it has been demonstrated [18, p.71] that for pulled fronts of a discretized deterministic FKPP equation (D = 1 = µ) the speed is given to leading order by
c* = 2 − 2(∆t) + (1/12)(∆x)² + · · · .   (18)
A similar effect is expected for the stochastic FKPP equation and properties such as the diffusion properties of the wave speed. Therefore, we have to view numerical scaling laws as approximations which carry some information about the discretization step sizes ∆x and ∆t. To minimize this effect, the formula (18), the stability requirement ∆t < (∆x)² and the goal to minimize computation time indicate that we should choose ∆t only slightly smaller than (∆x)². We are not interested in computing the exact wave speed; only its trend under parameter variation will be relevant here. A simple method to compute an upper bound ĉ on the wave speed for the initial condition u(0, 0) = 1 and u(x, 0) = 0 for x ≠ 0 of a stochastic wave is to collect the set of points (j∆x, k∆t) such that u(j∆x, k∆t) lies below and u((j − 1)∆x, k∆t) lies above a threshold (usually we pick the threshold as 0.05). For each such point one computes the estimate c ≈ j(∆x)/(k(∆t)) and obtains ĉ as the maximum.
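Putting the pieces together, the following sketch combines the spatial discretization (13), the explicit Euler-Maruyama step (15) and the threshold-based speed estimate for the linear multiplicative noise case g(u) = u treated in the next section. All parameter values, the reflective boundaries and the clipping of negative population values are choices made for illustration only.

```python
import numpy as np

def simulate_fkpp(N=400, dx=0.5, T=40.0, mu=1.0, eps=0.1, seed=1):
    """Explicit Euler-Maruyama for du = (u_xx + mu*u*(1-u)) dt + eps*u dW with
    space-time white noise and a point-like initial condition; returns an
    upper estimate of the front speed."""
    rng = np.random.default_rng(seed)
    dt = 0.2 * dx ** 2                       # keep D*dt < dx^2 for stability
    K = int(T / dt)
    u = np.zeros(N)
    u[0] = 1.0                               # species introduced at x = 0
    front, c_hat = 0, 0.0
    for k in range(1, K + 1):
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]                 # reflective boundaries
        lap[-1] = u[-2] - u[-1]
        dW = rng.standard_normal(N) * np.sqrt(dt / dx)   # 1/sqrt(dx) scaling as in (13)
        u = u + dt * (lap / dx ** 2 + mu * u * (1 - u)) + eps * u * dW
        u = np.clip(u, 0.0, None)            # keep the population non-negative
        j = np.flatnonzero(u > 0.05)
        if j.size and j.max() > front:       # front has advanced by one cell
            front = j.max()
            c_hat = max(c_hat, front * dx / (k * dt))
    return c_hat

print("estimated front speed:", simulate_fkpp())
```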
Linear Multiplicative Noise
The first stochastic version of the FKPP equation we consider was studied by Elworthy, Zhao and Gaines [19,24] and is given by
∂u/∂t = (µ²/2) ∂²u/∂x² + (1/µ²) u(1 − u) + ǫ̂ u Ḃ   (19)
with one-dimensional time-dependent white noise Ḃ, a small parameter 0 < µ ≪ 1 and noise strength ǫ̂ > 0. The multiplicative noise can be motivated e.g. by the interaction of a population u with the environment [78] or by parameter noise in the deterministic part [68, p.8] such as a fluctuating growth rate [83]. A multiplicative noise term of the form g(u) = u has also been used in a model for plankton spreading [52, eq.(4b)]. For further mathematical background on SPDEs of the form (19) we refer to Section 4. It is proven in [19] that there are three major regimes for (19) depending upon the noise strength parameter ǫ̂. For some κ = O(1) as µ → 0, the cases ǫ̂ ∼ κ/µ², ǫ̂ ∼ κ/µ and ǫ̂ ∼ κ are identified as the strong, mild and weak noise regimes respectively. Elworthy, Zhao and Gaines prove and numerically demonstrate that for weak noise the wave propagation of the pulled front is basically unaffected while the wave fails to propagate in the strong noise regime [19]. In the mild noise regime the wave speed is decreased as
√ 2 − κ [24, p.65].
It is important to note that the spontaneous collapse of newlyintroduced alien populations, which can occur in a strong noise regime, has been considered from an applied perspective in [76].
Using the scaling law of Brownian motion and the transformation
x → xµ²/√2,   t → tµ²,   ǫ := µǫ̂,
as discussed in Section 3, in the SPDE (19) yields the more familiar form of the FKPP equation
∂u/∂t = ∂²u/∂x² + u(1 − u) + ǫ u Ḃ.   (20)
This gives the quite natural view that ǫ ≫ 1, ǫ ∼ 1 and ǫ ≪ 1 are the strong, mild and weak noise regimes. Figure 1 shows typical solutions for the three regimes; for details on the numerical methods see Section 5. The initial condition is taken as the introduction of a species at a particular fixed location so that
u(x, t = 0) = 1 if x = 0,   and   u(x, t = 0) = 0 otherwise.   (21)
It is understood that the numerical initial condition is obtained by choosing a mesh having a mesh point x = 0 with u(0, 0) = 1. Now consider the situation of the 'ecological observer' who can only measure the invasion wave at one particular point in space. Based on the results by Elworthy, Zhao and Gaines [19,24] on propagation failure of the wave with increasing noise strength it is intuitive that the local statistics recorded at a fixed point carry information about the traveling front. For convenience we pick the point as x = 0. Consider a single sample path u(0, t). Let T denote the final time and consider the two basic statistics Figure 2 shows an average ofū over 200 sample paths which we denote byŪ . From the results it is becoming clear, once one compares theŪ plot with thê c plot, that a decreasing mean population size does provide the expected early-warning sign for a decreased invasion front speed. We also simulated the same case shown in Figure 2 for space-time white noiseẆ in (20). The results are qualitatively similar with a slight quantitative shift towards faster waves at comparable noise strength.
ū = (1/(T − t₀)) ∫_{t₀}^{T} u(0, t) dt,    Σ = [ (1/(T − t₀)) ∫_{t₀}^{T} (u(0, t) − ū)² dt ]^{1/2}.
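For a discretely sampled observation these time averages reduce to the following sketch; the transient cutoff t₀ and the sampling step are assumptions of the illustration.

```python
import numpy as np

def local_statistics(u_obs, dt, t0=0.0):
    """Time-averaged mean and standard deviation of a single observed sample
    path u(0, t), discarding an initial transient of length t0."""
    u = np.asarray(u_obs)[int(t0 / dt):]
    u_bar = u.mean()
    sigma = np.sqrt(np.mean((u - u_bar) ** 2))
    return u_bar, sigma
```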
In both cases a relevant new result is that also the local fluctuations captured by the variance show a quite interesting behavior. Consider the scenario where the actual carrying capacity for the population is unknown. In this case, the population level u is insufficient to determine how far we are from propagation failure of the wave. Naively, one may interpret small population fluctuations as an indicator for a fast propagating wave but Figure 2 shows that it could equally well be a very slow propagating wave for a low population level. Hence one has to increase or decrease the noise strength to probe to which part of Figure 2 the observations match.
Nonlinear Multiplicative Noise
As pointed out at the beginning of Section 6, the noise terms ǫuḂ and ǫuẆ could be interpreted as parametric or environmental noise. Another possible source of noise is 'individual-based' or 'finite-system-size' noise, which we shall focus on in this section. Müller and Tribe showed in [57] that the SPDE
∂u/∂t = (1/6) ∂²u/∂x² + µu(1 − u) + √(2u) Ẇ   (22)
arises as a limit of a contact process on a lattice originally studied in [5] as a model for long-range offspring displacement; traveling wave solutions to (22) exist for suitable parameter values [80]. However, Müller and Tribe also studied the behavior of (22) with a scaled noise term √ uẆ varying the parameter µ and proved [56, Thm 1] that there exists a critical value µ c , independent of u(x, 0), such that
P(u(x, t) survives) = 0 if µ < µ c , P(u(x, t) survives) = 1 if µ > µ c .(23)
Hence propagation failure of waves can occur like in the situation with noise term uẆ. Using the mapping t → t/µ, x → x/√(6µ), one is led to study the SPDE
∂u/∂t = ∂²u/∂x² + µu(1 − u) + ǫ √u Ẇ.   (24)
Figure 3 shows the dependence of the population level, its fluctuations and the wave speed on the parameter ǫ. The results are very similar to Figure 2 with propagation failure for higher noise level as expected from (23). In particular, the conclusions from Section 6 about inferring wave propagation properties from local data still apply. One may conjecture that the conclusions might apply to even more general versions of the FKPP equation with a noise term ǫg(u)Ẇ (or ǫg(u)Ḃ) as long as g(0) = 0.
However, the SPDEs (20) and (24) have noise terms that increase monotonically with the population level. This may not be realistic in all situations as one expects the noise to change as u approaches the carrying capacity. This is one motivation to study the SPDE
∂u/∂t = ∂²u/∂x² + µu(1 − u) + ǫ √(u(1 − u)) Ẇ.   (25)
This model was studied by several groups. In [55] it was proved for sufficiently small noise that compactly supported initial data remain within a timedependent interval and that a well-defined front as well as an asymptotic wave speed exist; interestingly, the shape and asymptotic form of waves has been of interest by an independent group for a discrete stochastic model [50].
Detailed numerical studies of the wave speed for (25) have been carried out [62,8] focusing on the small noise regime and the fluctuation properties of the front. Due to the special structure of the FKPP equation one may also exploit a duality argument of (25) to a particle system [15,75]. Doering, Müller and Smereka conjecture [15, eq (55)] from the duality relation that the asymptotic wave speed in the strong noise regime is given by c ∼ 2/ǫ 2 as ǫ → ∞. Although it is unclear whether this conjecture is correct it is evident from numerical simulations [15, Fig 2] that the wave speed decreases upon increasing the noise. Figure 4 shows the mean, standard deviation and wave speed calculated for (25). There are some minor differences between this case and g(u) = u and g(u) = √ u shown in Figures 2 and 3. There is a larger plateau for small noise and it takes larger noise strengths to reach the vicinity of propagation failure. However, the main warning-signs from local data still remain as it is still possible to conclude from large population levels and low fluctuations a Another different form of the noise term given by g(u) = u(1 − u) was considered in [68, eq (39)-(41)] but we shall not consider it here as the results are similar.
In summary, one should always measure the closeness to carrying capacity and the size of the fluctuations (standard deviation, variance). If system parameters change slowly one may determine from Figures 2-4 whether the distance to propagation failure has increased or decreased.
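In practice these two local quantities are simply the sample mean and sample standard deviation of the observer's time series once the wave has formed; a minimal helper (with dt, T_0 and T as in the figure captions) could look as follows.

```python
import numpy as np

def local_statistics(u_center, dt, T0, T):
    """Time average U-bar and fluctuation size Sigma of u(0, t) for t in [T0, T]."""
    t = np.arange(u_center.size) * dt
    window = u_center[(t >= T0) & (t <= T)]
    return window.mean(), window.std(ddof=1)
```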
Transitions for the Allee Effect
Although propagation failure is extremely interesting from the viewpoint of critical transitions, it is certainly not the only invasion wave phenomenon where local early-warning signs are desirable. As already discussed in Section 3, there can also be pushed fronts if the nonlinearity f(u) is chosen differently. A reasonable prototypical model to study is the following SPDE

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1-u)(u-\mu) + \epsilon\, g(u)\,\dot{W}\,, \qquad (26)$$
which has been considered in [68]. The nonlinearity f(u) = u(1 − u)(u − µ) may obviously arise due to an Allee effect in the context of ecology, but it is also commonly used in other areas of mathematical biology; e.g., in neuroscience (26) would be referred to as Nagumo's equation [60]. We briefly recall some results about the deterministic PDE (ǫ = 0) described in [65]. For µ ∈ [−1/2, 1/2] there exists a closed-form wave

$$u(x,t) = u(x-ct) = u(\xi) = \frac{1}{1 + \exp\!\big(\tfrac{1}{\sqrt{2}}(\xi - \xi_-)\big)}$$

for an arbitrary phase ξ_− > 0 and propagation speed

$$c = \frac{1}{\sqrt{2}} - \sqrt{2}\,\mu\,. \qquad (27)$$
There are three interesting special points: the pushed-to-pulled transition at µ = −1/2, the change from the pushed to the bistable regime at µ = 0, and the front reversal at µ = 1/2 [38-42]. Therefore, it is interesting to try to find early-warning signs for approaching these three special points. The front reversal case µ = 1/2 is clearly important, as a direction change for an invasion front could be regarded as a critical transition, but the other two cases could be of interest as well.
For the SPDE (26) we shall choose the simple multiplicative noise g(u) = u. We consider the pushed-to-pulled transition first and try to apply our approach from Sections 6-7. Figure 5 shows the analog of the top parts of Figures 2-4. We observe that it is impossible to detect a trend or infer the speed of the wave. Hence, for the pushed-to-pulled transition with space-time white noise the classical variance-based early-warning signs cannot be applied to local data perturbed by a fixed noise level and observed at the center of the wave. Therefore, one should also think of new early-warning sign techniques in the context of wave propagation.
Note carefully that in our computations for Figures 2-5 we always used the regime of u(0, t) in which the wave is already fully formed, i.e. t ∈ [T_0, T] for some T_0 ≫ 1. However, the transient regime starting from the localized initial condition may also contain important information. Figure 6 shows three numerical simulations for µ = −0.3, 0.2, 0.4. The computation suggests that the initial transient spreading of the wave u(0, t) for t ∈ [0, T_0] is interesting. A simple measure to consider is

$$u_m := \min_{t\in[0,T_0]} \{\, u(0,t) \,\} \quad \text{for a given } u(x,0)\,. \qquad (28)$$

Clearly, the result depends upon the choice of T_0 and the initial condition u(x, 0). However, if both are fixed then we may compare the results. Figure 7(a) shows the results for a parametric study of µ ∈ [−0.75, 0.5]. The insets (b)-(c) show a finer mesh resolution near the pushed-to-pulled transition at µ = −1/2 and near propagation failure, which is slightly shifted from the theoretical value at µ = 1/2, as the small finite-width initial condition and the noise both seem to contribute to reaching the absorbing state u ≡ 0 for parameter values µ smaller than 1/2. In fact, due to these effects, the transition is more drastic than formula (27) predicts. From Figure 7(b) it is apparent that the pushed-to-pulled transition is probably unpredictable from local data collected at x = 0. Since the wave speed transition is continuous one should probably not classify the pushed-to-pulled transition as a 'critical transition'. Therefore, it is not crucial to predict it, but the result shows the limitation of the ecological observer at x = 0. The same conclusion applies to the change from the pushed to the bistable regime at µ = 0. For the propagation failure scenario, Figure 7(c) shows a scaling law for the decrease of u_m, and a slightly increasing variance may help us to anticipate the upcoming critical transition. This should not be surprising since we already considered similar propagation failure cases in Sections 6-7. The difference is that we used a completely different indicator in Figure 7(c). As for the classical FKPP equation, one should remark for the Allee effect situation that different noise terms certainly do make sense, e.g. g(u) = u(1 − u) considered in [2, eq. (2)]. Based on the observations for varying noise terms for the classical FKPP equation, and obtaining similar results for several choices, we shall not consider these generalizations for (26).
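The indicator u_m of (28) is straightforward to extract from the observer's time series; the helper below is a sketch of that computation (the choice of T_0 is free, as stressed above).

```python
import numpy as np

def transient_minimum(u_center, dt, T0):
    """u_m = min of u(0, t) over the initial transient t in [0, T0], cf. (28)."""
    t = np.arange(u_center.size) * dt
    return u_center[t <= T0].min()
```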
Noncompact Initial Invasions
Based on the results in Section 8 we have observed that the initial transient regime, starting from a localized invasion wave, can be useful. It remains to consider the case when the initial condition is not localized. In particular, we consider the FKPP equation

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u(1-u) + \epsilon\sqrt{u}\,\dot{W} \qquad (29)$$

with the noncompact initial condition

$$u(x,0) = e^{-\alpha |x|}, \quad \text{for } \alpha > 0\,. \qquad (30)$$
Recall from Section 3 that the wave speed scales as c(α) = O(1/α) for α → 0. Figure 8(a) shows the dependence of the initial transient observed at x = 0 on the complete initial data. Small minimum values u_m therefore indicate comparatively slower waves, while no response to the initial condition (u_m ≈ 1) signals a very fast wave. There are two main conclusions from the results for (29)-(30) and from Section 8. Firstly, one should definitely try to measure an invasion wave immediately once the first occurrence of a new population in a new environment has been observed. Secondly, knowing the basic structure of the initial condition can be crucial for prediction: in Figure 7, 0 ≪ u_m < 1 still indicates a well-defined asymptotic wave speed in the pushed regime, while for Figure 8 the condition 0 ≪ u_m < 1 indicates closeness to a wave speed blow-up point. Hence, it is crucial to know, on a qualitative level, whether the initial invasion is really localized or whether it consists of a full front.
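A small parameter sweep illustrating Figure 8(a) can be assembled from the pieces above: the noncompact initial profile (30) is passed to the Euler-Maruyama sketch given after (24) (an assumption made for illustration; any comparable time stepper accepting an initial profile would do), and u_m is recorded for several decay rates α. The parameter values below are illustrative, not those of the figure.

```python
import numpy as np

def exponential_ic(alpha):
    """Noncompact initial condition (30): u(x, 0) = exp(-alpha * |x|)."""
    return lambda x: np.exp(-alpha * np.abs(x))

T, N, T0 = 10.0, 500, 5.0
u_m = {}
for alpha in [0.25, 0.5, 1.0, 2.0]:
    _, _, u_center = fkpp_em(mu=1.0, eps=0.05, T=T, N=N,
                             u0=exponential_ic(alpha), seed=1)
    t = np.arange(u_center.size) * (T / N)
    u_m[alpha] = u_center[t <= T0].min()   # small u_m: slower wave; u_m near 1: very fast wave
```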
Outlook
Since early-warning signs and stochastic scaling laws for noisy traveling waves are still a relatively new direction, we have only been able to cover a few aspects here. Many open problems arose which we summarize here.
The restrictions to one spatial dimension x ∈ R and one population component u have to be removed in the future. There are many interesting cases, e.g. multicomponent systems such as reaction-diffusion models with predation [61], FKPP-type plankton dynamics [6] or Nagumo (Allee effect)-type equations [27]. Multiple spatial dimensions can lead to more complicated bifurcation structures [40]. One may also remove all restrictions, which can generate interesting life-death transitions for multi-component, 2D and 3D systems [58]. Another highly relevant generalization is given by heterogeneous [69] and random [85] environments. Furthermore, the structure of the FKPP equation may be too restrictive, which suggests adding transport/advection terms and active boundaries, in which case discontinuous wave speed transitions have been reported [11]. Also the assumption of time-white or space-time-white noise is too restrictive and one should extend the view to spatially colored noise [25] and trace-class covariance operators. Another issue that looks interesting is the relevance of fluctuations ('front diffusion') [1,67] for early-warning signs.
In all cases, our main driving question in this paper seems to be open: how much information do local statistics of an SPDE, collected at one (or multiple) locations, carry about the speed and bifurcations of traveling waves? It seems plausible to obtain basic answers to these questions using numerical simulations. To develop a mathematical theory for quantitative scaling laws of SPDEs and their application to critical transitions is expected to be a challenging problem for a long time.
Fig. 1. Simulation of (20) using the implicit Milstein scheme (17) with parameters K = 100, T = 20, N = 10³ on the interval [−50, 50] with Neumann boundary conditions and initial condition u(x, 0) = 1 if x = 0 and u(x, 0) = 0 otherwise. (a) ǫ = 0.02, (b) ǫ = 0.3 and (c) ǫ = 1.2.
Fig. 2. Dependence of the time average Ū and the wave speed c on the noise strength ǫ, averaged over 200 sample paths. The SPDE (20) has been numerically solved (using Euler-Maruyama (15)) with K = 100, T = 20, N = 10³ on the interval [−50, 50] with Neumann boundary conditions and initial condition u(x, 0) = 1 if x = 0 and u(x, 0) = 0 otherwise. The top part shows Ū (circles), which has been calculated as the mean of the time series u(0, t) recorded by an ecological observer at the origin for t ∈ [10, 20]. The dots indicate ±1 standard deviation Σ for the time series; the curves are associated interpolations forming a confidence neighborhood. The bottom part of the figure shows an (upper bound) estimate for the wave speed.
Fig. 3. Dependence of the time average Ū and the wave speed on the noise strength ǫ for (24). Parameter values are as for Figure 2.
Fig. 4. Dependence of the time average Ū and the wave speed on the noise strength ǫ for (25). Parameter values are as for Figure 2 except for the slightly smaller time-step size N = 3 · 10³.
Fig. 5. Dependence of the time average Ū on the noise strength ǫ averaged over 200 sample paths. The SPDE (26) for g(u) = u has been numerically solved (using Euler-Maruyama (15)) with K = 100, T = 15, N = 10³ on the interval [−50, 50] with Neumann boundary conditions and initial condition u(x, 0) = 1 if x ∈ [−1, 1] and u(x, 0) = 0 otherwise. Ū (circles) has been calculated as the mean of the time series u(0, t) recorded by an ecological observer at the origin for t ∈ [7.5, 15]. The dots indicate ±1 standard deviation Σ for the time series; the curves are associated interpolations forming a confidence neighborhood.
Fig. 7. (a) Dependence of the minimum u_m defined in (28) on µ with T_0 = 10, averaged over 200 sample paths. The SPDE (26) for g(u) = u has been numerically solved (using Euler-Maruyama (15)) with K = 100, T = 20, ǫ = 0.05, N = 10³ on the interval [−50, 50] with Neumann boundary conditions and initial condition u(x, 0) = 1 if x ∈ [−1, 1] and u(x, 0) = 0 otherwise. The circles indicate u_m and the dots ±1 standard deviation Σ calculated from the sample paths. (b) Zoom near the theoretical pushed-to-pulled transition at µ = −1/2. (c) Zoom near the propagation failure transition.
Fig. 6. Simulation of (26) for g(u) = u using the implicit Milstein scheme (17) with parameters K = 100, T = 20, N = 2 · 10³ on the interval [−50, 50] with Neumann boundary conditions and initial condition u(x, 0) = 1 if x ∈ [−1, 1] and u(x, 0) = 0 otherwise. (a) µ = −0.3, (b) µ = 0.2 and (c) µ = 0.4.

Fig. 8. Dependence of the minimum u_m defined in (28) on α for (29)-(30); average over 200 sample paths. The parameters for the numerical simulation (using Euler-Maruyama (15)) on the interval [−50, 50] with Neumann boundary conditions are K = 150, T = 10, ǫ = 0.05 and N = 500. (a) The circles indicate u_m and the dots ±1 standard deviation calculated from the sample paths. (b) Wave speed c (circles) and associated ±1 standard deviation (dots).
Figure 8(b) shows an upper bound to the wave speed and raises the interesting question whether we should, or should not, view a blow-up point for the wave speed as a critical transition.
with independent one-dimensional Brownian motions B_k(t) as a definition [66, p. 86-89]. As expected one has E[W(t)] = 0 and E[W(t)W(s)] = min(t, s) Q, so that Q can be viewed as the covariance operator. In this case one may formally write (9) as looking for
Remark: The Milstein method is usually good as an exploratory tool due to its robustness. It has strong order-one convergence [42, Thm 10.3.5]. It is relatively straightforward to implement the Milstein method as no multiple stochastic integral evaluations occur since x ∈ R [42, Chapters 10-11][38, p. 2]. Furthermore, it has recently been shown that it nicely extends to multiplicative trace-class noise; one may use the Milstein method in its explicit [42, p. 345-351] or implicit [42, p. 399-404] form stated below.
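To make the remark concrete, here is a sketch of one linearly implicit Milstein-type step for the FKPP equation with multiplicative noise ǫuẆ: the Laplacian is treated implicitly (a dense solve, for simplicity), the reaction and noise terms explicitly, and the Milstein correction ½(ǫ²/Δx)u(ΔW² − Δt) accounts for the multiplicative noise. This is only one plausible reading of such a scheme; whether it coincides exactly with the scheme (17) used for Figures 1 and 6 is an assumption, since (17) is not reproduced here.

```python
import numpy as np

def neumann_laplacian(K, dx):
    """Dense finite-difference Laplacian with Neumann boundary conditions."""
    D = np.zeros((K, K))
    i = np.arange(K)
    D[i, i] = -2.0
    D[i[:-1], i[:-1] + 1] = 1.0
    D[i[1:], i[1:] - 1] = 1.0
    D[0, 1] = D[-1, -2] = 2.0
    return D / dx**2

def implicit_milstein_step(u, A, dx, dt, mu, eps, rng):
    """One step for du = (u_xx + mu*u*(1-u)) dt + eps*u dW, where
    A = I - dt * neumann_laplacian(K, dx) is solved for the new profile while
    reaction, noise and the diagonal Milstein correction stay explicit."""
    dW = rng.standard_normal(u.size) * np.sqrt(dt)
    rhs = (u + dt * mu * u * (1.0 - u)
           + eps * u * dW / np.sqrt(dx)
           + 0.5 * (eps ** 2 / dx) * u * (dW ** 2 - dt))
    return np.maximum(np.linalg.solve(A, rhs), 0.0)
```

Treating the stiff diffusion part implicitly removes the Δt ≲ Δx²/2 restriction of the fully explicit scheme, which is why an implicit variant is convenient for finer simulations.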
Acknowledgements: I would like to thank the European Commission (EC/REA) for support by a Marie-Curie International Re-integration Grant.
Ballistic and diffusive corrections to front propagation in the presence of multiplicative noise. J Armero, J Casademunt, L Ramirez-Piscina, J M Sancho, Phys. Rev. E. 585J. Armero, J. Casademunt, L. Ramirez-Piscina, and J.M. Sancho. Ballistic and diffusive corrections to front propagation in the presence of multiplicative noise. Phys. Rev. E, 58(5):5494-5500, 1998.
External fluctuations in front propagation. J Armero, J M Sancho, J Casademunt, A M Lacasta, L Ramirez-Piscina, F Sagués, Phys. Rev. Lett. 7617J. Armero, J.M. Sancho, J. Casademunt, A.M. La- casta, L. Ramirez-Piscina, and F. Sagués. External fluctuations in front propagation. Phys. Rev. Lett., 76(17):3045-3048, 1996.
L Arnold, Stochastic Differential Equations: Theory and Applications. WileyL. Arnold. Stochastic Differential Equations: Theory and Applications. Wiley, 1974.
Speed of fronts of the reaction-diffusion equation. R D Benguria, M C Depassier, Phys. Rev. Lett. 776R.D. Benguria and M.C. Depassier. Speed of fronts of the reaction-diffusion equation. Phys. Rev. Lett., 77(6):1171-1173, 1996.
Statistical mechanics of crabgrass. M Bramson, R Durrett, G Swindle, Ann. Probab. 17M. Bramson, R. Durrett, and G. Swindle. Statistical mechanics of crabgrass. Ann. Probab., 17:444-481, 1989.
Invasion waves in populations with excitable dynamics. J Brindley, V H Biktashev, M A Tsyganov, Biological Invasions. 7J. Brindley, V.H. Biktashev, and M.A. Tsyganov. In- vasion waves in populations with excitable dynamics. Biological Invasions, 7:807-816, 2005.
Shift in the velocity front due to a cutoff. E Brunet, B Derrida, Phys. Rev. E. 563E. Brunet and B. Derrida. Shift in the velocity front due to a cutoff. Phys. Rev. E, 56(3):2597-2604, 1997.
Phenemenological theory giving full statistics of the position of fluctuating fronts. E Brunet, B Derrida, A H Mueller, S Munier, Phys. Rev. E. 73056126E. Brunet, B. Derrida, A.H. Mueller, and S. Mu- nier. Phenemenological theory giving full statistics of the position of fluctuating fronts. Phys. Rev. E, 73:(056126), 2006.
. S R Carpenter, J J Cole, M L Pace, R Batt, W A Brock, T Cline, J Coloso, J R Hodgson, J , S.R. Carpenter, J.J. Cole, M.L. Pace, R. Batt, W.A. Brock, T. Cline, J. Coloso, J.R. Hodgson, J.F.
Early warning signs of regime shifts: a whole-ecosystem experiment. D A Kitchell, L Seekell, B Smith, Weidel, Science. 332Kitchell, D.A. Seekell, L. Smith, and B. Weidel. Early warning signs of regime shifts: a whole-ecosystem ex- periment. Science, 332:1079-1082, 2011.
P.-L Chow, Stochastic Partial Differential Equations. Chapman & Hall / CRCP.-L. Chow. Stochastic Partial Differential Equations. Chapman & Hall / CRC, 2007.
A. Costa, R.A. Blythe, and M.R. Evans. Discontinuous transition in a boundary driven contact process. J. Stat. Mech. Theor. Exp., 2010(9):P09008, 2010.
Pattern formation outside of equilibrium. M C Cross, P C Hohenberg, Rev. Mod. Phys. 653M.C. Cross and P.C. Hohenberg. Pattern formation outside of equilibrium. Rev. Mod. Phys., 65(3):851- 1112, 1993.
Slowing down in spatially patterned systems at the brink of collapse. V Dakos, M Kéfi, M Rietkerk, E H Van Nes, M Scheffer, Am. Nat. 1776V. Dakos, M. Kéfi, M. Rietkerk, E.H. van Nes, and M. Scheffer. Slowing down in spatially patterned sys- tems at the brink of collapse. Am. Nat., 177(6):153- 166, 2011.
Spatial correlation as leading indicator of catastropic shifts. V Dakos, E H Van Nes, R Donangelo, H Fort, M Scheffer, Theor. Ecol. 33V. Dakos, E.H. van Nes, R. Donangelo, H. Fort, and M. Scheffer. Spatial correlation as leading indicator of catastropic shifts. Theor. Ecol., 3(3):163-174, 2009.
Interacting particles,the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation, and duality. C R Doering, C Mueller, P Smereka, Physica A. 325C.R. Doering, C. Mueller, and P. Smereka. In- teracting particles,the stochastic Fisher-Kolmogorov- Petrovsky-Piscounov equation, and duality. Physica A, 325:243-259, 2003.
Early warning signals of extinction in deteriorating environments. J M Drake, B D Griffen, Nature. 467J.M. Drake and B.D. Griffen. Early warning signals of extinction in deteriorating environments. Nature, 467:456-459, 2010.
R Durrett, Probability: Theory and Examples. 4th edition. CUPR. Durrett. Probability: Theory and Examples -4th edition. CUP, 2010.
Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts. U Ebert, W Van Saarloos, Physica D. 146U. Ebert and W. van Saarloos. Front propagation into unstable states: universal algebraic convergence towards uniformly translating pulled fronts. Physica D, 146:1-99, 2000.
The propagation of travelling waves for stochastic generalized KPP equations. K D Elworthy, H Z Zhao, J G Gaines, Mathl. Comput. Modelling. 204K.D. Elworthy, H.Z. Zhao, and J.G. Gaines. The propagation of travelling waves for stochastic gener- alized KPP equations. Mathl. Comput. Modelling, 20(4):131-166, 1994.
Invasion theory and biological control. W F Fagan, M A Lewis, M G Neubert, P Van Den Driessche, Ecol. Lett. 5W.F. Fagan, M.A. Lewis, M.G. Neubert, and P. van den Driessche. Invasion theory and biological control. Ecol. Lett., 5:148-157, 2002.
Inferring the dynamics of a spatial epidemic from time-series data. J A N Filipe, W Otten, G J Gibson, C A Gilligan, Bull. Math. Biol. 66J.A.N. Filipe, W. Otten, G.J. Gibson, and C.A. Gilli- gan. Inferring the dynamics of a spatial epidemic from time-series data. Bull. Math. Biol., 66:379-391, 2004.
The wave of advance of advantageous genes. R A Fisher, Ann. Eugenics. 7R.A. Fisher. The wave of advance of advantageous genes. Ann. Eugenics, 7:353-369, 1937.
Stochastic flows for nonlinear secondorder parabolic SPDE. F Flandoli, Ann. Prob. 242F. Flandoli. Stochastic flows for nonlinear second- order parabolic SPDE. Ann. Prob., 24(2):547-558, 1996.
Numerical experiments with S(P)DEs. J G Gaines, Stochastic Partial Differential Equations. A. Etheridge216CUPJ.G. Gaines. Numerical experiments with S(P)DEs. In A. Etheridge, editor, Stochastic Partial Differential Equations, volume 216 of LMS Lecture Note Series, pages 55-71. CUP, 1995.
Noise in Spatially Extended Systems. J Garcia-Ojalvo, J Sancho, SpringerJ. Garcia-Ojalvo and J. Sancho. Noise in Spatially Extended Systems. Springer, 1999.
Studies of bacterial branching growth using reactiondiffusion models for colonial development. I Golding, Y Kozlovsky, I Cohen, E Ben-Jacob, Physica A. 260I. Golding, Y. Kozlovsky, I. Cohen, and E. Ben-Jacob. Studies of bacterial branching growth using reaction- diffusion models for colonial development. Physica A, 260:510-554, 1998.
Homoclinic orbits of the FitzHugh-Nagumo equation: Bifurcations in the full system. J Guckenheimer, C Kuehn, SIAM J. Appl. Dyn. Syst. 9J. Guckenheimer and C. Kuehn. Homoclinic orbits of the FitzHugh-Nagumo equation: Bifurcations in the full system. SIAM J. Appl. Dyn. Syst., 9:138-153, 2010.
Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise I. I Gyöngy, Potential Anal. 9I. Gyöngy. Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise I. Potential Anal., 9:1-25, 1998.
An Introduction to Stochastic Partial Differential Equations. M Hairer, Lecture Notes. M. Hairer. An Introduction to Stochastic Par- tial Differential Equations. Lecture Notes, 2009. http://www.hairer.org/notes/SPDEs.pdf.
Travelling fronts and entire solutions of the Fisher-KPP equation in R N. F Hamel, N Nadirashvili, Arch. Ration. Mech. Anal. 157F. Hamel and N. Nadirashvili. Travelling fronts and entire solutions of the Fisher-KPP equation in R N . Arch. Ration. Mech. Anal., 157:91-163, 2001.
The spatial spread of invasions: new developments in theory and evidence. A Hastings, K Cuddington, K F Davies, C J Dugaw, S Elmendorf, A Freestone, S Harrison, M Holland, J Lambrinos, U Malvadkar, B A Melbourne, K Moore, Ecol. Lett. 8A. Hastings, K. Cuddington, K.F. Davies, C.J. Dugaw, S. Elmendorf, A. Freestone, S. Harrison, M. Holland, J. Lambrinos, U. Malvadkar, B.A. Mel- bourne, and K. Moore. The spatial spread of inva- sions: new developments in theory and evidence. Ecol. Lett., 8:91-101, 2005.
An algorithmic introduction to numerical simulation of stochastic differential equations. D J Highham, SIAM Review. 433D.J. Highham. An algorithmic introduction to nu- merical simulation of stochastic differential equations. SIAM Review, 43(3):525-546, 2001.
Pathogens can slow down or reverse invasion fronts of their hosts. F M Hilker, M A Lewis, H Seno, M Langlais, H Malchow, Biological Invasions. 7F.M. Hilker, M.A. Lewis, H. Seno, M. Langlais, and H. Malchow. Pathogens can slow down or reverse inva- sion fronts of their hosts. Biological Invasions, 7:817- 832, 2005.
Global resilience of tropical forest and savanna to critical transitions. M Hirota, M Holmgren, E H Van Nes, M Scheffer, Science. 334M. Hirota, M. Holmgren, E.H. van Nes, and M. Schef- fer. Global resilience of tropical forest and savanna to critical transitions. Science, 334:232-235, 2011.
Factors governing rate invasion: a natural experiment using Argentine ants. D A Holway, Oecologia. 115D.A. Holway. Factors governing rate invasion: a nat- ural experiment using Argentine ants. Oecologia, 115:206-212, 1998.
Pattern Formation: An introduction to methods. R Hoyle, Cambridge University PressR. Hoyle. Pattern Formation: An introduction to methods. Cambridge University Press, 2006.
The numerical approximation of stochastic partial differential equations. Milan. A Jentzen, P E Kloeden, J. Math. 77A. Jentzen and P.E. Kloeden. The numerical approxi- mation of stochastic partial differential equations. Mi- lan J. Math., 77:205-244, 2009.
A Jentzen, M Röckner, arXiv:1001.2751v4A Milstein scheme for SPDEs. A. Jentzen and M. Röckner. A Milstein scheme for SPDEs. arXiv:1001.2751v4, pages 1-37, 2012.
On the equivalence of different approaches to stochastic partial differential equations. G Jetschke, Math. Nachr. 128G. Jetschke. On the equivalence of different ap- proaches to stochastic partial differential equations. Math. Nachr., 128:315-329, 1986.
Qualitative results for solutions of the steady Fisher-KPP equation. P M Jordan, A Puri, Appl. Math. Lett. 15P.M. Jordan and A. Puri. Qualitative results for solu- tions of the steady Fisher-KPP equation. Appl. Math. Lett., 15:239-250, 2002.
Spatial vegetation patterns and imminent desertification in mediterran arid ecosystems. S Kéfi, M Rietkerk, C L Alados, Y Peyo, V P Papanastasis, A Elaich, P C De Ruiter, Nature. 449S. Kéfi, M. Rietkerk, C.L. Alados, Y. Peyo, V.P. Pa- panastasis, A. ElAich, and P.C. de Ruiter. Spatial vegetation patterns and imminent desertification in mediterran arid ecosystems. Nature, 449:213-217, 2007.
Numerical Solution of Stochastic Differential Equations. P E Kloeden, E Platen, SpringerP.E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, 2010.
A study of the diffusion equation with increase in the amount of substance, and its application to a biological problem. A Kolmogorov, I Petrovskii, N Piscounov, Selected Works of A. N. Kolmogorov I. V.M. TikhomirovKluwer1A. Kolmogorov, I. Petrovskii, and N. Piscounov. A study of the diffusion equation with increase in the amount of substance, and its application to a biolog- ical problem. In V.M. Tikhomirov, editor, Selected Works of A. N. Kolmogorov I, pages 248-270. Kluwer, 1991. Translated by V. M. Volosov from Bull. Moscow Univ., Math. Mech. 1, 1-25, 1937.
Indications of marine bioinvasion from network theory. An analysis of the global cargo ship network. A Kölzsch, B Blasius, Euro. Phys. J. B. 84A. Kölzsch and B. Blasius. Indications of marine bioinvasion from network theory. An analysis of the global cargo ship network. Euro. Phys. J. B, 84:601- 612, 2011.
A mathematical framework for critical transitions: bifurcations, fast-slow systems and stochastic dynamics. C Kuehn, Physica D. 24012C. Kuehn. A mathematical framework for criti- cal transitions: bifurcations, fast-slow systems and stochastic dynamics. Physica D, 240(12):1020-1035, 2011.
A mathematical framework for critical transitions: normal forms, variance and applications. C Kuehn, J. Nonl. Sci. acceptedC. Kuehn. A mathematical framework for critical transitions: normal forms, variance and applications. J. Nonl. Sci., pages 1-56, 2012. accepted.
Elements of Applied Bifurcation Theory. Yu A Kuznetsov, SpringerNew York, NY3rd editionYu.A. Kuznetsov. Elements of Applied Bifurcation Theory. Springer, New York, NY, 3rd edition, 2004.
Langevin approach to a chemical wave front: selection of the propagation velocity in the presence of internal noise. A Lemarchand, A Lesne, M Mareschal, Phys. Rev. E. 515A. Lemarchand, A. Lesne, and M. Mareschal. Langevin approach to a chemical wave front: selection of the propagation velocity in the presence of internal noise. Phys. Rev. E, 51(5):4457-4465, 1995.
Spread rate for a nonlinear stochastic invasion. M Lewis, J. Math. Biol. 41M. Lewis. Spread rate for a nonlinear stochastic in- vasion. J. Math. Biol., 41:430-454, 2000.
Modeling and analysis of stochastic invasion processes. M Lewis, S Pacala, J. Math. Biol. 41M. Lewis and S. Pacala. Modeling and analysis of stochastic invasion processes. J. Math. Biol., 41:387- 429, 2000.
Rates of spread of an invading species: mimosa pigra in Northern Australia. W M Lonsdale, J. Ecol. 81W.M. Lonsdale. Rates of spread of an invading species: mimosa pigra in Northern Australia. J. Ecol., 81:513-521, 1993.
Oscillations and waves in a virally infected plankton system. Part I: The lysogenic stage. H Malchow, F M Hilker, S V Petrovskii, K Brauer, Ecol. Complexity. 13H. Malchow, F.M. Hilker, S.V. Petrovskii, and K. Brauer. Oscillations and waves in a virally infected plankton system. Part I: The lysogenic stage. Ecol. Complexity, 1(3):211-233, 2004.
The dynamics of invasion waves. J A J Metz, D Mollison, F Van Den, Bosch, The Geometry of Ecological Interactions: Simplifying Spatial Complexity. U. Dieckmann, R. Law, and J.A.J. MetzCUPJ.A.J. Metz, D. Mollison, and F. van den Bosch. The dynamics of invasion waves. In U. Dieckmann, R. Law, and J.A.J. Metz, editors, The Geometry of Ecological Interactions: Simplifying Spatial Complexity, pages 482-512. CUP, 2000.
Reaction-diffusion modelling of bacterial colony patterns. M Mimura, H Sakaguchi, M Matsushita, Physica A. 282M. Mimura, H. Sakaguchi, and M. Matsushita. Reaction-diffusion modelling of bacterial colony pat- terns. Physica A, 282:283-303, 2000.
Random travelling waves for the KPP equation with noise. C Mueller, R B Sowers, J. Funct. Anal. 1282C. Mueller and R.B. Sowers. Random travelling waves for the KPP equation with noise. J. Funct. Anal., 128(2):439-498, 1995.
A phase transition for a stochastic PDE related to the contact process. C Mueller, R Tribe, 100Probab. Theory Relat. FieldsC. Mueller and R. Tribe. A phase transition for a stochastic PDE related to the contact process. Probab. Theory Relat. Fields, 100:131-156, 1994.
Stochastic PDEs arising from the long range contact and long range voter processes. C Mueller, R Tribe, 102Probab. Theory Relat. FieldsC. Mueller and R. Tribe. Stochastic PDEs arising from the long range contact and long range voter pro- cesses. Probab. Theory Relat. Fields, 102:519-545, 1995.
A phase diagram for a stochastic reaction diffusion system. C Mueller, R Tribe, Probab. Theory Relat. Fields. 149C. Mueller and R. Tribe. A phase diagram for a stochastic reaction diffusion system. Probab. Theory Relat. Fields, 149:561-637, 2011.
Mathematical Biology I: An Introduction. J D Murray, Springer3rd editionJ.D. Murray. Mathematical Biology I: An Introduc- tion. Springer, 3rd edition, 2002.
An active pulse transmission line simulating nerve axon. J Nagumo, S Arimoto, S Yoshizawa, Proc. IRE. IRE50J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proc. IRE, 50:2061-2070, 1962.
How predation can slow, stop or reverse a prey invasion. M R Owen, M A Lewis, Bull. Math. Biol. 63M.R. Owen and M.A. Lewis. How predation can slow, stop or reverse a prey invasion. Bull. Math. Biol., 63:655-684, 2001.
Interfacial velocity corrections due to multiplicative noise. L Pechenik, H Levine, Phys. Rev. E. 594L. Pechenik and H. Levine. Interfacial velocity cor- rections due to multiplicative noise. Phys. Rev. E, 59(4):3893-3900, 1999.
Invasion and the evolution of speed in toads. B L Phillips, G P Brown, J K Webb, R Shine, Nature. 439803B.L. Phillips, G.P. Brown, J.K. Webb, and R. Shine. Invasion and the evolution of speed in toads. Nature, 439:803, 2006.
A geometric classification of traveling front propagation in the Nagumo equation with cutoff. N Popovic, MURPHYS 2010: Proceedings of the International Workshop on Multi-Rate Processes and Hysteresis. Pecs26812023N. Popovic. A geometric classification of traveling front propagation in the Nagumo equation with cut- off. In MURPHYS 2010: Proceedings of the Interna- tional Workshop on Multi-Rate Processes and Hys- teresis, Pecs, 2010, volume 268 of J. Phys. Conference Series, page (012023), 2011.
A geometric analysis of front propagation in an integrable Nagumo equation with a linear cutoff. N Popovic, Physica D. 241N. Popovic. A geometric analysis of front propagation in an integrable Nagumo equation with a linear cut- off. Physica D, 241:1976-1984, 2012.
Stochastic Equations in Infinite Dimensions. G Da Prato, J Zabczyk, Cambridge University PressG. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions. Cambridge University Press, 1992.
A. Rocco, J. Casademunt, U. Ebert, and W. van Saarloos. Diffusion coefficient of propagating fronts with multiplicative noise. Phys. Rev. E, 65:012102, 2001.
Kinematic reduction of reaction-diffusion fronts with multiplicative noise: derivation of stochastic sharpinterface equations. A Rocco, L Ramirez-Piscina, J Casademunt, Phys. Rev. E. 65056116A. Rocco, L. Ramirez-Piscina, and J. Casademunt. Kinematic reduction of reaction-diffusion fronts with multiplicative noise: derivation of stochastic sharp- interface equations. Phys. Rev. E, 65:(056116), 2002.
Mathematical analysis of the optimal habitat configurations for species persistence. L Roques, F Hamel, Math. Biosci. 210L. Roques and F. Hamel. Mathematical analysis of the optimal habitat configurations for species persistence. Math. Biosci., 210:34-59, 2007.
Recolonisation by diffusion can generate increasing rates of spread. L Roques, F Hamel, J Fayard, B Fady, E K Klein, J. Theor. Biol. 77L. Roques, F. Hamel, J. Fayard, B. Fady, and E.K. Klein. Recolonisation by diffusion can generate in- creasing rates of spread. J. Theor. Biol., 77:205-212, 2010.
Early-warning signals for critical transitions. M Scheffer, J Bascompte, W A Brock, V Brovkhin, S R Carpenter, V Dakos, H Held, E H Van Nes, M Rietkerk, G Sugihara, Nature. 461M. Scheffer, J. Bascompte, W.A. Brock, V. Brovkhin, S.R. Carpenter, V. Dakos, H. Held, E.H. van Nes, M. Rietkerk, and G. Sugihara. Early-warning signals for critical transitions. Nature, 461:53-59, 2009.
Catastrophic shifts in ecosystems. M Scheffer, S Carpenter, J A Foley, C Folke, B Walker, Nature. 413M. Scheffer, S. Carpenter, J.A. Foley, C. Folke, and B. Walker. Catastrophic shifts in ecosystems. Nature, 413:591-596, 2001.
Catastrophic regime shifts in ecosystems: linking theory to observation. M Scheffer, S R Carpenter, TRENDS in Ecol. and Evol. 1812M. Scheffer and S.R. Carpenter. Catastrophic regime shifts in ecosystems: linking theory to observation. TRENDS in Ecol. and Evol., 18(12):648-656, 2003.
Shallow lakes theory revisited: various alternative regimes driven by climate, nutrients, depth and lake size. M Scheffer, E H Van Nes, Hydrobiologia. 584M. Scheffer and E.H. van Nes. Shallow lakes theory re- visited: various alternative regimes driven by climate, nutrients, depth and lake size. Hydrobiologia, 584:455- 466, 2007.
Stationary states and the stability of the stepping stone model involving mutation and selection. T Shiga, K Uchiyama, Probab. Theory Related Fields. 73T. Shiga and K. Uchiyama. Stationary states and the stability of the stepping stone model involving mu- tation and selection. Probab. Theory Related Fields, 73:87-117, 1986.
Now you see them, now you dont! population crashes of established introduced species. D Simberloff, L Gibbons, Biological Invasions. 6D. Simberloff and L. Gibbons. Now you see them, now you dont! population crashes of established intro- duced species. Biological Invasions, 6:161-172, 2004.
Random dispersal in theoretical populations. J G Skellam, Biometrika. 381J.G. Skellam. Random dispersal in theoretical popu- lations. Biometrika, 38(1):196-218, 1951.
Noise induced phenomena in Lotka-Volterra systems. B Spagnolo, A Fiasconaro, D Valenti, Fluct. Noise Lett. 32B. Spagnolo, A. Fiasconaro, and D. Valenti. Noise induced phenomena in Lotka-Volterra systems. Fluct. Noise Lett., 3(2):177-185, 2003.
Ecological and evolutionary processes at expanding range margins. C D Thomas, E J Bodsworth, R J Wilson, A D Simmons, Z G Davies, M Musche1, L Conradt, Nature. 411C.D. Thomas, E.J. Bodsworth, R.J. Wilson, A.D. Simmons, Z.G. Davies, M. Musche1, and L. Conradt. Ecological and evolutionary processes at expanding range margins. Nature, 411:577-581, 2001.
A travelling wave solution to the Kolmogorov equation with noise. R Tribe, Stochastics. 563R. Tribe. A travelling wave solution to the Kol- mogorov equation with noise. Stochastics, 56(3):317- 340, 1996.
Front propagation into unstable states. W Van Saarloos, Physics Reports. 386W. van Saarloos. Front propagation into unstable states. Physics Reports, 386:29-222, 2003.
Recovery rates reflect distance to a tipping point in a living system. A J Veraart, E J Faassen, V Dakos, E H Van Nes, M Lurling, M Scheffer, Nature. 481A.J. Veraart, E.J. Faassen, V. Dakos, E.H. van Nes, M. Lurling, and M. Scheffer. Recovery rates reflect distance to a tipping point in a living system. Nature, 481:357-359, 2012.
Effects of noise in symmetric two-species competition. J M G Vilar, R V Solé, Phys. Rev. Lett. 8018J.M.G. Vilar and R.V. Solé. Effects of noise in sym- metric two-species competition. Phys. Rev. Lett., 80(18):4099-4102, 1998.
An introduction to stochastic partial differential equations. InÉcole d'été de probabilités de Saint-Flour, XIV -1984. J B Walsh, Lecture Notes in Math. 1180SpringerJ.B. Walsh. An introduction to stochastic partial differential equations. InÉcole d'été de probabilités de Saint-Flour, XIV -1984, volume 1180 of Lecture Notes in Math., pages 265-439. Springer, 1986.
An Introduction to Fronts in Random Media. J Xin, SpringerJ. Xin. An Introduction to Fronts in Random Media. Springer, 2009.
| []
|
[
"arXiv:physics/9812034v1 [physics.ins-det] Studies of 100 µm-thick silicon strip detector with analog VLSI readout",
"arXiv:physics/9812034v1 [physics.ins-det] Studies of 100 µm-thick silicon strip detector with analog VLSI readout"
]
| [
"T Hotta \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n",
"M Fujiwara \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n",
"T Kinashi \nDepartment of Physics\nYamagata University Yamagata\n990YamagataJapan\n",
"Y Kuno \nInstitute of Particle and Nuclear Studies (IPNS)\nHigh Energy Accelerator Research Organization (KEK)\n305TsukubaIbarakiJapan\n",
"M Kuss \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n",
"T Matsumura \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n",
"T Nakano \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n",
"S Sekikawa \nInstitute of Physics\nUniversity of Tsukuba\n305TsukubaIbarakiJapan\n",
"H Tajima \nDepartment of Physics\nUniversity of Tokyo\nBunkyo-ku113TokyoJapan\n",
"K Takanashi \nResearch Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan\n"
]
| [
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan",
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan",
"Department of Physics\nYamagata University Yamagata\n990YamagataJapan",
"Institute of Particle and Nuclear Studies (IPNS)\nHigh Energy Accelerator Research Organization (KEK)\n305TsukubaIbarakiJapan",
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan",
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan",
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan",
"Institute of Physics\nUniversity of Tsukuba\n305TsukubaIbarakiJapan",
"Department of Physics\nUniversity of Tokyo\nBunkyo-ku113TokyoJapan",
"Research Center for Nuclear Physics\nOsaka University\n567IbarakiOsakaJapan"
]
| []
| We evaluate the performances of a 100 µm-thick silicon strip detector (SSD) with a 300 MeV proton beam and a 90 Sr β-ray source. Signals from the SSD have been read out using a VLSI chip. Common-mode noise, signal separation efficiency and energy resolution are compared with those for the SSD's with a thickness of 300 µm and 500 µm. Energy resolution for minimum ionizing particles (MIP's) is improved by fitting the non-constant component in a common-mode noise with a linear function. | null | [
"https://arxiv.org/pdf/physics/9812034v1.pdf"
]
| 13,065,081 | physics/9812034 | b86f723a8fb4ead9c33b434bfcd0c6ef9d98be5e |
arXiv:physics/9812034v1 [physics.ins-det] Studies of 100 µm-thick silicon strip detector with analog VLSI readout
18 Dec 1998
T Hotta
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
M Fujiwara
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
T Kinashi
Department of Physics
Yamagata University Yamagata
990YamagataJapan
Y Kuno
Institute of Particle and Nuclear Studies (IPNS)
High Energy Accelerator Research Organization (KEK)
305TsukubaIbarakiJapan
M Kuss
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
T Matsumura
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
T Nakano
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
S Sekikawa
Institute of Physics
University of Tsukuba
305TsukubaIbarakiJapan
H Tajima
Department of Physics
University of Tokyo
Bunkyo-ku113TokyoJapan
K Takanashi
Research Center for Nuclear Physics
Osaka University
567IbarakiOsakaJapan
We evaluate the performances of a 100 µm-thick silicon strip detector (SSD) with a 300 MeV proton beam and a 90 Sr β-ray source. Signals from the SSD have been read out using a VLSI chip. Common-mode noise, signal separation efficiency and energy resolution are compared with those for the SSD's with a thickness of 300 µm and 500 µm. Energy resolution for minimum ionizing particles (MIP's) is improved by fitting the non-constant component in a common-mode noise with a linear function.
Introduction
A silicon strip detector (SSD) has the highest position resolution among the electronic tracking devices used in particle physics experiments. However, the error in measuring the track angle is dominated by multiple scattering for particles with low velocity. If this effect is reduced with a very thin SSD, new experiments that are impossible with present technology will become feasible.
One example is a search for T violation in the decay of B mesons [1], in which the T-violating transverse τ⁺ polarization in the decay B → Dτ⁺ν will be measured to a precision of 10⁻². In order to obtain the τ polarization, the decay vertices of B and τ must be measured separately. A simulation shows that the experiment will be feasible only with very thin SSD's at asymmetric-energy B factories.
In general, a thin SSD has a small signal-to-noise (S/N) ratio, because the energy deposit in the detector is proportional to the thickness while its large capacitance results in a large noise. Thus careful treatment of the noise in the off-line analysis is important.
In this paper, we evaluate the performance of a 100 µm-thick silicon strip detector. The performance is compared with that of 300 µm and 500 µm-thick silicon strip detectors.
Detector
Single-sided silicon detectors with dimensions of 1 cm × 1.3 cm have been fabricated by Hamamatsu Photonics. The strip pitch is 100 µm. The widths of the implantation strips and aluminum electrodes are 42 µm and 34 µm, respectively. Three detectors with different thicknesses (100 µm, 300 µm, and 500 µm) were tested. The 100 µm-thick SSD was made by etching a 300 µm-thick wafer. Analog VLSI chips (VA2) [2] are used as the readout circuit of the detectors. An SSD and a VLSI chip were mounted on a printed circuit board called a 'hybrid board'.
Experiment
Two different kinds of particles were used for the evaluation of the detector performance. A proton beam was used to measure the response of the detectors to baryons or heavy particles. To see the response to light, high-velocity particles which satisfy the minimum ionizing particle (MIP) condition (E/m > 3), electrons from a ⁹⁰Sr β-ray source were used.
The experiment was carried out with a proton beam at the Research Center for Nuclear Physics, Osaka University. Scattered protons from a 12 C target were momentum analyzed by a magnetic spectrometer. A detector system that consists of an SSD and two trigger plastic scintillation counters was placed at the focal plane of the spectrometer. The momentum of detected protons was 800 MeV/c with the momentum spread of < 0.05%. The energy loss for a proton with 800 MeV/c is 68 keV for the 100 µm-thick SSD, which is about 1.7 times larger than that for the minimum ionizing protons.
The readout system is schematically shown in Fig. 1. The hybrid board consisting of a silicon strip and a VA2 chip was connected to a "repeater card", which contained level converters for logic signals, buffer amplifier for analog output signal, and adjustable bias supply for the VA2 chip. The VA2 chip was controlled by a VME based timing board which received a trigger signal and generated clock pulses for VA2 and a VME based flash ADC board. Analog multiplexed output from VA2 was sent to a flash ADC through the repeater card. Two layers of trigger counters were placed in front of the SSD. The repeater card was connected to the hybrid board with a ribbon cable for both the analog and logic signals. The length of the ribbon cable was about 15 cm.
In order to compare the characteristics of the silicon strip detectors, the operation parameters of the VA2 readout chips were fixed to standard values without optimization for each measurement. The signal shaping time was about 700 ns. Signals were read out with a 4 µs clock period. The typical trigger rate was about 30 Hz.
In addition to the proton beam test, measurements with a ⁹⁰Sr β-ray source were also performed. The ⁹⁰Sr β-ray source was placed 15 mm from the SSD. A collimator with a size of 2 mm in diameter and 10 mm in thickness was used to irradiate electrons perpendicularly onto the SSD. In order to realize the minimum ionizing condition, the high-energy component of the β-rays was selected by a trigger scintillation counter placed behind the SSD. The operation parameters of the VA2 chip were the same as those at the proton beam test. The readout clock was 400 ns. The trigger rate at the β-ray source test was about 7 Hz.
Analysis and Results
The output from each strip has a different offset level. These differences have been trimmed in the first step of the off-line analysis. Solid lines in Fig. 2 show the maximum pulse height distributions after the pedestal trimming for the 100 µm, 300 µm, and 500 µm-thick SSD's at the proton beam test. Note that we have neglected the effect of charge division among adjacent strips. Dotted lines show the same distributions under the condition that no charged particle hit the detector. The noise peak and the proton signal peak overlap for the 100 µm-thick SSD, while the proton signals are clearly distinguished from the noise for the 500 µm and 300 µm-thick SSD's.
For the 100 µm-thick SSD, a strong noise-level correlation between non-adjacent channels has been observed. This indicates that the main component of the noise has a common phase and amplitude among the strips. This component, called common-mode noise (CMN), has been calculated as the pulse height averaged over all strips. In the calculation, channels with significantly large pulse heights (larger than 3 standard deviations (σ) of the noise distribution) have been excluded. Fig. 3 shows the maximum pulse height distribution after the CMN subtraction for the 100 µm SSD. Proton events are clearly separated from the noise.
We have investigated the characteristics of the noise more carefully. Fig. 4(a) shows the strip dependence of the noise width after the CMN subtraction. The width depends on the strip number, whereas the pulse-height differences between two adjacent strips, shown in Fig. 4(b), have a constant width of about 6. This indicates that the intrinsic σ of the noise is expected to be about 4.2 (= 6/√2) for all strips. Thus, we conclude that the CMN has a non-constant component. Instead of simply averaging the pulse heights, we fit them with a linear curve to obtain the CMN as a function of the channel number. Fig. 4(c) shows the noise widths after this method is applied. The widths are about 4.2 for all strips, as expected.
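The two CMN-subtraction procedures compared here amount to a per-event fit over the non-hit strips. The following Python/NumPy sketch (an illustration, not the original analysis code) excludes candidate hits above nsigma times the strip noise width, estimates the CMN either as a constant or as a linear function of the strip number, and subtracts it from every strip.

```python
import numpy as np

def subtract_cmn(pulse, noise_sigma, nsigma=3.0, linear=True):
    """Common-mode-noise subtraction for one event.
    pulse       : pedestal-trimmed pulse heights, one entry per strip
    noise_sigma : noise width(s) used for the hit veto (scalar or per strip)"""
    pulse = np.asarray(pulse, dtype=float)
    strips = np.arange(pulse.size)
    keep = np.abs(pulse) <= nsigma * noise_sigma       # exclude candidate hits
    if linear:
        slope, offset = np.polyfit(strips[keep], pulse[keep], 1)
        cmn = slope * strips + offset                   # non-constant CMN component
    else:
        cmn = np.full(pulse.size, pulse[keep].mean())   # constant-CMN assumption
    return pulse - cmn
```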
If the CMN is not removed correctly by assuming a constant CMN, the noise width depends on the strip number (Fig. 4(a)). This may cause a strip-dependent S/N separation, which is not desirable for any experiment. Fitting the CMN with a linear curve is particularly important for the detection of MIP's with a thin SSD, where the S/N ratio is small. The maximum pulse height distribution for electrons with the 100 µm-thick SSD after subtracting the CMN by linear-fitting is shown in Fig. 5(b), compared with that obtained with a constant CMN subtraction (Fig. 5(a)). Although electron events are not separated from the noise in either case, the separation of signals from noise is improved by the linear-fitting method. Fig. 5 indicates that there is a finite probability of misidentifying noise as a particle track when selecting the maximum pulse height. The detection efficiency and the signal misidentification probability for electrons with the 100 µm-thick SSD are plotted as a function of the threshold energy in Fig. 6. When the threshold level is set to detect electrons with an efficiency of more than 99%, the probability of misidentification obtained by linear-fitting of the CMN is 27% smaller than that obtained by the constant CMN-subtraction method. The S/N ratios obtained from the β-ray source tests for the 100 µm and 300 µm SSD's are summarized in Table 1. A better S/N ratio is obtained by fitting the CMN with a linear curve. The S/N ratios obtained with the assumption of a constant CMN for both the 100 µm and 300 µm SSD's are slightly worse. For a 300 µm SSD, the difference between the two methods of subtracting the CMN is not very important in an actual application because the S/N ratio is sufficiently large. The noise widths obtained in the ⁹⁰Sr β-ray source test and the proton beam test are summarized in Table 2 in energy units (keV). The width of the CMN at the β-ray source test is different from that at the proton beam test, but the noise after the CMN subtraction is almost the same.
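The efficiency and misidentification curves of Fig. 6 follow from a simple threshold scan over the maximum-pulse-height distributions. A sketch of that bookkeeping is given below; the per-event maxima are illustrative inputs, not the measured data.

```python
import numpy as np

def threshold_scan(signal_max, noise_max, thresholds):
    """For each threshold return the detection efficiency (fraction of signal
    events whose maximum pulse height exceeds it) and the misidentification
    probability (same fraction for noise-only events)."""
    signal_max = np.asarray(signal_max)
    noise_max = np.asarray(noise_max)
    eff = np.array([(signal_max > thr).mean() for thr in thresholds])
    misid = np.array([(noise_max > thr).mean() for thr in thresholds])
    return eff, misid
```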
There remains a possibility to improve the S/N ratio by considering the charge division among adjacent strips when finding a particle trajectory. The performance of the prototype detector might also be improved by optimizing its operating conditions.
Conclusion
An SSD with a thickness of 100 µm was tested with 800 MeV/c protons and β-rays from a ⁹⁰Sr source. Signals were read out using an analog VLSI chip, which allows the CMN to be removed. Assuming that the CMN is constant among all strips, proton signals are separated from the noise for the 100 µm, 300 µm and 500 µm-thick SSD's after the CMN subtraction. We found that a non-constant component in the CMN makes the energy resolution worse. For the 100 µm SSD, the signal-noise separation was improved by fitting the CMN with a linear curve in the β-ray source test. We conclude that a 100 µm SSD with analog VLSI readout can be used as a very thin tracking device in a future experiment. However, careful treatment of the common-mode noise in the off-line analysis is required.
Fig. 1. The schematic view of the readout system.
Fig. 2. The maximum pulse height for proton signals (solid lines) and noises (dotted lines) for the 100 µm, 300 µm, and 500 µm-thick SSD's.
Fig. 3. Maximum pulse height for protons (solid line) and noise (dotted line) after the CMN subtraction for the 100 µm-thick SSD.

Fig. 4. Strip dependence of the noise width, σ. (a) After subtracting the constant CMN, (b) width of the difference between adjacent strips, (c) after the CMN subtraction by linear-fitting.
Fig. 5. Maximum pulse height of electrons from the ⁹⁰Sr source for the 100 µm SSD after the CMN subtraction by the constant (a) and linear-fitting (b) methods.

Fig. 6. Detection efficiency for electrons (dotted curve) with the 100 µm-thick SSD. Solid and dashed curves indicate the fraction of the noise peak after the constant CMN-subtraction and the linear-fitting methods were applied, respectively.
Table 1
S/N ratios for β-ray electron signal.

SSD thickness             100 µm   300 µm
without CMN subtraction    4.91     17.1
constant CMN               7.45     28.7
linear-fitted CMN          7.88     29.7

Table 2
Noise width [keV] at the ⁹⁰Sr β-ray source test (and proton beam test).

SSD thickness             100 µm        300 µm        500 µm
no CMN subtraction        7.34 (27.7)   6.27 (4.77)   - (3.27)
constant CMN              4.83 (4.18)   3.73 (3.58)   - (2.89)
linear-fitted CMN         4.57 (4.14)   3.60 (3.56)   - (2.84)
Produced by Integrated Detector and Electronics AS (IDEAS), Oslo, Norway.
β-rays were irradiated at the central strips by using a collimator. It is expected that this improvement is clearly seen for the strips near the edge of the detector.
AcknowledgementsThis work has been supported by the Grant-in-Aid for General Science Research (No. 07454057 and No. 09640358) by the Ministry of Education, Science and Culture.
Y. Kuno, Chinese J. of Phys. 32 (1994) 1015.
O. Toker, S. Masciocchi, E. Nygård, A. Rudge and P. Weilhammer, Nucl. Instr. and Meth. A340 (1994) 572.
| []
|
[
"Regularity Results for Generalized Electro-Magnetic Problems",
"Regularity Results for Generalized Electro-Magnetic Problems"
]
| [
"Peter Kuhn ",
"Dirk Pauly "
]
| []
| []
| We prove regularity results up to the boundary for time independent generalized Maxwell equations on Riemannian manifolds with boundary using the calculus of alternating differential forms. We discuss homogeneous and inhomogeneous boundary data and show 'polynomially weighted' regularity in exterior domains as well. | 10.1524/anly.2010.1024 | [
"https://arxiv.org/pdf/1105.4091v1.pdf"
]
| 119,145,766 | 1105.4091 | 20eac5ac5090e5b07e1cc005abaeda157a243671 |
Regularity Results for Generalized Electro-Magnetic Problems
20 May 2011 2008
Peter Kuhn
Dirk Pauly
Regularity Results for Generalized Electro-Magnetic Problems
20 May 2011 2008regularityMaxwell's equationselectro-magnetic problems AMS MSC-Classifications 35Q6078A2578A30
We prove regularity results up to the boundary for time independent generalized Maxwell equations on Riemannian manifolds with boundary using the calculus of alternating differential forms. We discuss homogeneous and inhomogeneous boundary data and show 'polynomially weighted' regularity in exterior domains as well.
Introduction
Regularity theorems are important tools in almost all fields of partial differential equations. In our efforts to completely determine the low frequency behavior of the timeharmonic solutions of the generalized Maxwell's equations in exterior domains of R N [4,5,6,7,8] as well as to prove compactness results and trace theorems for Sobolev spaces of differential forms on N-dimensional Riemannian manifolds [2] we have been forced to show regularity results, which meet our needs. Here 'generalized' means using the calculus of alternating differential forms on Riemannian manifolds of arbitrary dimension, which is a convenient and well-known way to formulate Maxwell's equations and to emphasize their independence of the special choice of a coordinate system. Since these results are of particular interest of their own we will prove in the paper at hand results for the time independent case like the following:
Let M be a N-dimensional smooth Riemannian manifold and Ω ⊂ M be some connected open subset. If the exterior derivative of some differential form E from L 2 s (Ω) and the coderivative of εE belong to some suitable weighted Sobolev space H m s+1 (Ω) and the tangential trace ι * E belongs to the corresponding trace Sobolev space H m+1/2 (∂ Ω) as well, then E already belongs to the higher order Sobolev space H m+1 s (Ω) . (For details please see section 3.) Here ε is a real valued, symmetric, bounded and uniformly positive definite linear transformation (one may think of a matrix) on differential forms, ι denotes the natural embedding of the boundary, i.e. ι : ∂ Ω ֒→ Ω , and s ∈ R indicates some polynomially weight. For manifolds with compact closure, i.e. 'bounded domains', the weight s plays no role since then all results for s are equivalent to the special case s = 0 .
Regularity results as well as regularity estimates, which automatically will be shown within our proofs, presented here are flexibly usable in the context of time independent generalized Maxwell's equations. For example, if we consider (linear media and) the static generalized Maxwell equations
dE = G , δεE = f , ι * E = λ , δH = F , dµH = g , ι * µH = κ
or the time-harmonic generalized Maxwell equations (with frequency ω)
dE + i ωµH = G , ι * E = λ , δH + i ωεE = F , ι * µH = κ ,
e.g. arising from the full generalized Maxwell equations by Fourier's transformation with respect to time (or a time-harmonic ansatz), we get regularity of the solutions and corresponding estimates immediately or by induction, respectively. We should mention that the generalized Maxwell equations also comprise the system of linear acoustics and the 2-dimensional version of Maxwell's equations as well as periodic boundary conditions in a unified approach.
In the special classical case of bounded sub-domains of the Euclidian space R 3 and homogeneous boundary traces such results for Maxwell problems have been proved earlier by Weber [17].
Preliminaries and definitions
Let M be a N-dimensional smooth Riemannian manifold and Ω ⊂ M denote some connected open subset with compact closure in M . On • C ∞,q (Ω) , the vector space of all smooth (C ∞ ) differential forms of rank q (shortly q-forms) on Ω with compact support in Ω , we have a scalar product
$$\|E\|_{H^{m,q}(\Omega)} := \Big(\sum_{\ell}\sum_{I} \|E^{\ell}_{I}\|^{2}_{H^{m}(h_{\ell}(V_{\ell}\cap\Omega))}\Big)^{1/2} < \infty\,,$$
where E ℓ I denote the component functions of (h −1 ℓ ) * E = E ℓ I dx I (sum convention) with respect to Cartesian coordinates. Here we introduced an obvious (ordered) multi index notation dx I = dx i 1 ∧ · · · ∧ dx iq for I := (i 1 , . . . , i q ) ∈ {1, . . . , N} q . Transformation theorems and [22,Satz 4.1] for scalar functions show that this definition is independent of the chosen charts. Another covering yields the same Sobolev space but with an equivalent norm. Furthermore, for all m ∈ N 0 and any C m+1 -diffeomorphism τ :Ω → Ω there exists a constant c > 0 , such that
$$c^{-1}\,\|E\|_{H^{m,q}(\Omega)} \leq \|\tau^{*}E\|_{H^{m,q}(\Omega)} \leq c\,\|E\|_{H^{m,q}(\Omega)} \qquad (2.1)$$
holds for all E ∈ H m,q (Ω) .
Definition 2.1 Let m ∈ N 0 . We call ∂ Ω a 'C m -boundary', if ∂ Ω is a (N − 1)- dimensional C m -submanifold of M , i.e. for each x ∈ ∂ Ω there exists a C m -boundary chart (V, h) with h(x) = 0 and h(V ) = U 1 , such that h(∂ Ω ∩ V ) = U 0 1 , h(Ω ∩ V ) = U − 1 , h (M \ Ω) ∩ V = U + 1 and h • k −1 ∈ C m k(Ṽ ∩ V ), R N hold for all charts (for Ω) (Ṽ , k) of x ∈ ∂ Ω .
Here U r ⊂ R N denotes the open ball centered at the origin with radius r > 0 and we define
U ± r := x ∈ U r : ± x N > 0 , U 0 r := x ∈ U r : x N = 0 .
Using sufficiently smooth restricted boundary charts and following the ideas of the definition of H m,q (Ω) we may also introduce for all m ∈ [0, ∞) the Sobolev spaces H m,q (∂ Ω) .
We also define H −m,q (∂ Ω) for m ∈ (0, ∞) as the dual space of • H m,q (∂ Ω) = H m,q (∂ Ω) and introduce the exterior derivative, co-derivative and star-operator on H −m,q (∂ Ω) by weak formulations. Utilizing boundary charts, (2.1) and the corresponding results for scalar Sobolev spaces, e.g. [22,Satz 8.7,Satz 8.8], which will be applied componentwise to q-forms in R N , we obtain the following lemma: Lemma 2.2 Let m ∈ N and Ω possess a C m+1 -boundary. Moreover, let ι : ∂ Ω ֒→ Ω denote the natural embedding. Then there exists a linear and continuous tangential trace operator
γ t : H m,q (Ω) → H m−1/2,q (∂Ω) satisfying γ t Φ = ι * Φ and d ∂ Ω γ t Φ = γ t dΦ for all Φ ∈ C ∞,q (Ω)
, the vector space of all C ∞,q (M)-forms restricted to Ω . Moreover, γ t is surjective, i.e. there exists a linear and continuous tangential extension operator

γ̌ t : H m−1/2,q (∂Ω) → H m,q (Ω)

with the property γ t γ̌ t = id (right inverse).
By the star operator we define linear and continuous normal trace and extension operators by
γ n := (−1) (q−1)N * ∂ Ω γ t * : H m,q (Ω) −→ H m−1/2,q−1 (∂Ω) , γ n := (−1) q(N −q) * γ t * ∂ Ω : H m−1/2,q−1 (∂Ω) −→ H m,q (Ω) ,
which possess the corresponding properties. By Stokes' theorem we obtain
dE, H L 2,q+1 (Ω) + E, δH L 2,q (Ω) = γ t E, γ n H L 2,q (∂Ω) (2.2)
for (E, H) ∈ H 1,q,q+1 (Ω) , if Ω has a C 2 -boundary. It is well known that this suggests to define the tangential trace
γ t E ∈ H −1/2,q (∂Ω) of a q-form E ∈ D q (Ω) by γ t E(ϕ) = γ t E, ϕ H −1/2,q (∂Ω) := dE,γ n ϕ L 2,q+1 (Ω) + E, δγ n ϕ L 2,q (Ω) (2.3)
for all ϕ ∈ H 1/2,q (∂Ω) . Clearly acting on E ∈ H 1,q (Ω) it satisfies
γ t E, ϕ H −1/2,q (∂Ω) = γ t E, ϕ L 2,q (∂Ω) (2.4)
for all ϕ ∈ H 1/2,q (∂Ω) . Hence in this case we have γ t E = γ t E, · L 2,q (∂Ω) and we identify the continuous linear functional γ t E with the element γ t E ∈ H 1/2,q (∂Ω) . We note that γ t still commutes with the exterior derivative and that the mapping
γ t : D q (Ω) −→ D q (∂Ω) := e ∈ H −1/2,q (∂Ω) : d ∂ Ω e ∈ H −1/2,q+1 (∂Ω)
is continuous. Moreover, we have for all E ∈ D q (Ω)
γ t E = 0 ⇐⇒ E ∈ • D q (Ω) ,(2.5)
where we set

• D q (Ω) := closure of • C ∞,q (Ω) in D q (Ω) ,

and we note the characterization

• D q (Ω) = { E ∈ D q (Ω) : ∀ H ∈ ∆ q+1 (Ω)   dE, H L 2,q+1 (Ω) + E, δH L 2,q (Ω) = 0 } .

Definition 2.3 Let m ∈ N 0 . We call a transformation ε admissible, if and only if

• ε(x) is a linear mapping on q-forms for all x ∈ Ω ,

• ε possesses real L ∞ (Ω)-coefficients, i.e. the matrix representation of ε corresponding to an arbitrary chart basis {dh I } has L ∞ (Ω, R)-entries,

• ε is symmetric, i.e. for all E, H ∈ L 2,q (Ω) we have

εE, H L 2,q (Ω) = E, εH L 2,q (Ω) ,

• ε is uniformly positive definite, i.e.

∃ c > 0 ∀ E ∈ L 2,q (Ω)   εE, E L 2,q (Ω) ≥ c||E|| 2 L 2,q (Ω) .
We call ε C m -admissible, if and only if ε is admissible and has C m (Ω)-coefficients, which are bounded together with all their derivatives up to the boundary. Here we mean componentwise differentiation and write ∂ α ε for |α| ≤ m .
We note that admissible transformations ε generate an equivalent scalar product on L 2,q (Ω) by
(E, H) −→ εE, H L 2,q (Ω) .
Of course most of these concepts extend to manifolds, whose closures are not compact. Particularly we may consider the special case of M := R N as a smooth Riemannian manifold of dimension N ∈ N and an exterior domain Ω ⊂ R N , i.e. Ω is connected and R N \ Ω compact. The definitions of spaces carry over to exterior domains as long as the compactness of Ω is not necessary.
Using the weight function
ρ := (1 + r 2 ) 1/2 , r(x) := |x|
we introduce for m ∈ N 0 and s ∈ R the scalar weighted Sobolev spaces
H m s (Ω) := u ∈ L 2 loc (Ω) : ρ s+|α| ∂ α u ∈ L 2 (Ω) for all |α| ≤ m , ⊂ H m s (Ω) := u ∈ L 2 loc (Ω) : ρ s ∂ α u ∈ L 2 (Ω) for all |α| ≤ m
utilizing the usual multi index notation for partial derivatives. (To distinguish between these different polynomially weighted Sobolev spaces of exterior domains we will use roman and bold roman letters simultaneously.) Equipped with their natural scalar products these are Hilbert spaces. Now we have a global chart (Ω, id) and Ω becomes naturally a N-dimensional smooth Riemannian manifold with Cartesian coordinates {x 1 , . . . , x N } . As before with componentwise partial derivatives ∂ α u = (∂ α u I ) dx I , if u = u I dx I , we introduce for m ∈ N 0 and s ∈ R componentwise the Sobolev spaces H m,q s (Ω) resp. H m,q s (Ω) of q-forms. In the special case m = 0 we define L 2,q s (Ω) := H 0,q s (Ω) = H 0,q s (Ω) .
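As a rough guide to these weights (an elementary computation in polar coordinates): a form f with |f(x)| ∼ |x|^{−α} near infinity belongs to L 2,q s (Ω) if and only if
\[
\int_{1}^{\infty} \rho^{2s}\, r^{-2\alpha}\, r^{N-1}\, dr < \infty
\iff \alpha > s + \frac{N}{2} ,
\]
which makes explicit how the weight s shifts the admissible decay rates at infinity.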
Then for f = f I dx I , g = g I dx I ∈ L 2,q s (Ω) we have the scalar product
f, g L 2,q s (Ω) = Ω ρ 2s f ∧ * g =: * f,g q = Ω ρ 2s f, g q dλ = Ω ρ 2s f I g I dλ ,
where λ denotes Lebesgue's measure in R N . Furthermore, for s ∈ R we need some special weighted Sobolev spaces suited for the exterior derivative and co-derivative:
D q s (Ω) := E ∈ L 2,q s (Ω) : dE ∈ L 2,q+1 s+1 (Ω) ⊂ D q s (Ω) := E ∈ L 2,q s (Ω) : dE ∈ L 2,q+1 s (Ω) ∆ q s (Ω) := H ∈ L 2,q s (Ω) : δH ∈ L 2,q−1 s+1 (Ω) ⊂ ∆ q s (Ω) := H ∈ L 2,q s (Ω) : δH ∈ L 2,q−1 s (Ω)
Equipped with their natural graph norms these are all Hilbert spaces. To generalize the homogeneous tangential boundary condition we introduce again • D q s (Ω) resp. • D q s (Ω) as closures of • C ∞,q (Ω) with respect to the corresponding graph norm || · || D q s (Ω) , || · || D q s (Ω) , respectively. The spaces D q s (Ω) , ∆ q s (Ω) and even • D q s (Ω) are invariant under multiplication with bounded smooth functions. As in the last section a subscript 0 at the lower left corner indicates vanishing exterior derivative resp. co-derivative. The properties 'admissible' and 'C m -admissible' extend analogously to our exterior domain case as well. Nevertheless we need some additional decay properties of our transformations.
Definition 2.4 Let m ∈ N 0 and τ ≥ 0 . We call ε τ-C m -admissible of first resp. second kind, if and only if ε = ε 0 + ε̂ with some ε 0 > 0 is C m -admissible and the perturbation ε̂ satisfies

∀ |α| ≤ m   ∂ α ε̂ = O(r −τ ) resp. O(r −(τ +|α|) ) as r → ∞ .

In each case we call τ the order of decay of the perturbation ε̂ . Without loss of generality we may assume ε 0 = 1 , i.e. ε = id + ε̂ , throughout this paper.
We note that a transformation is 0-C m -admissible of first kind, if and only if it is C m -admissible.
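A concrete (hypothetical) example: taking the perturbation ε̂ := ρ^{−τ} id , i.e. ε = id + (1+r²)^{−τ/2} id , every derivative gains one extra power of decay,
\[
\partial^{\alpha}\hat{\varepsilon} = O\bigl(r^{-(\tau+|\alpha|)}\bigr) \quad (r\to\infty) \quad\text{for all } |\alpha|\le m ,
\]
so this ε is τ-C m -admissible of second kind for every m ; a bounded perturbation which merely oscillates like sin(r) r^{−τ} for large r is in general only of first kind, since differentiation does not improve its decay.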
Finally if the exterior domain Ω has got a C 2 -boundary there exist adequate trace and extension operators as well. By obvious restriction, extension by zero and cutting techniques we obtain linear and continuous tangential trace and extension operators
s∈R H m,q s (Ω) γt −→ H m−1/2,q (∂ Ω)γ t −→ s∈R H m,q s (Ω) , γ tγt = id ,
whereγ t even maps to compactly supported forms and γ t even operates on H m,q loc (Ω) . Here continuity is to be understood in the sense of
H m,q s (Ω) γt −→ H m−1/2,q (∂ Ω)γ t −→ H m,q s (Ω) (2.6)
for all s ∈ R . Again by the star operator we get the corresponding linear and continuous normal trace and extension operators γ n := ± * ∂ Ω γ t * ,γ n := ± * γ t * ∂ Ω . As indicated above by Stokes' theorem (2.2) we then get for all s ∈ R a linear and continuous tangential trace
operator γ t : D q s (Ω) −→ H −1/2,q (∂Ω) , which is (well) defined by γ t E(ϕ) = γ t E, ϕ H −1/2,q (∂Ω) := dE,γ n ϕ L 2,q+1 (Ω) + E, δγ n ϕ L 2,q (Ω)
for all E ∈ D q s (Ω) and ϕ ∈ H 1/2,q (∂Ω) . Once more for E ∈ H m,q s (Ω) we identify the continuous linear functional γ t E with the element γ t E ∈ H 1/2,q (∂Ω) and of course the mapping γ t : D q s (Ω) −→ D q (∂Ω) is continuous as well. We still have for all s ∈ R and all E ∈ D q s (Ω)
γ t E = 0 ⇐⇒ E ∈ • D q s (Ω) . (2.7)
3 Regularity theorems

Theorem 3.1 Let m ∈ N 0 , Ω be a connected open subset with compact closure and C m+2 -boundary of some smooth Riemannian manifold M as well as ε be some C m+1 -admissible transformation. Furthermore, let
E ∈ D q (Ω) ∩ ε −1 ∆ q (Ω) with dE ∈ H m,q+1 (Ω) , δεE ∈ H m,q−1 (Ω) , γ t E ∈ H m+1/2,q (∂ Ω) .
Then E ∈ H m+1,q (Ω) and there exists a positive constant c independent of E , such that
||E|| H m+1,q (Ω) ≤ c ||E|| L 2,q (Ω) + || dE|| H m,q+1 (Ω) + ||δεE|| H m,q−1 (Ω) + ||γ t E|| H m+1/2,q (∂ Ω) .
Theorem 3.2 Let s ∈ R , m ∈ N 0 , Ω ⊂ R N be an exterior domain with C m+2 -boundary and ε be some C m+1 -admissible transformation. Furthermore, let
E ∈ D q s (Ω) ∩ ε −1 ∆ q s (Ω) with γ t E ∈ H m+1/2,q (∂ Ω) . (i) Then dE ∈ H m,q+1 s (Ω) and δεE ∈ H m,q−1 s (Ω) imply E ∈ H m+1,q s
(Ω) and with some constant c > 0
||E|| H m+1,q s (Ω) ≤ c ||E|| L 2,q s (Ω) + || dE|| H m,q+1 s (Ω) + ||δεE|| H m,q−1 s (Ω) + ||γ t E|| H m+1/2,q (∂ Ω)
holds uniformly with respect to E .
(ii) If additionally ε is 0-C m+1 -admissible of second kind and τ -C 0 -admissible of first (or second) kind with some τ > 0 then dE ∈ H m,q+1 s+1 (Ω) and δεE ∈ H m,q−1 s+1 (Ω) imply E ∈ H m+1,q s (Ω) and there exists some positive constant c , such that the estimate
||E|| H m+1,q s (Ω) ≤ c ||E|| L 2,q s (Ω) + || dE|| H m,q+1 s+1 (Ω) + ||δεE|| H m,q−1 s+1 (Ω) + ||γ t E|| H m+1/2,q (∂ Ω)
holds uniformly with respect to E .
Remark 3.3
Utilizing the transformation E ↦ εE and/or the Hodge star-operator we obtain similar results for spaces like ε −1 D q (Ω) ∩ ∆ q (Ω) and/or with prescribed normal traces γ n .
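For instance (a sketch of the first substitution): if E ∈ ε −1 D q (Ω) ∩ ∆ q (Ω) , setting Ẽ := εE and ε̃ := ε −1 gives
\[
\tilde E = \varepsilon E \in D^{q}(\Omega), \qquad
\tilde\varepsilon\,\tilde E = E \in \Delta^{q}(\Omega) ,
\]
so Theorem 3.1 applied to Ẽ with the again C m+1 -admissible transformation ε̃ = ε −1 yields Ẽ ∈ H m+1,q (Ω) , provided γ t (εE) ∈ H m+1/2,q (∂ Ω) , and hence E = ε −1 Ẽ ∈ H m+1,q (Ω) as well, since multiplication by ε −1 preserves H m+1,q . The case of prescribed normal traces is handled analogously with the star operator, which interchanges γ t and γ n .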
4 Proofs
4.1 Riemannian manifolds with compact closure
Proof of Theorem 3.1 Extending the boundary form γ t E to Ω by Lemma 2.2 via

Ě := γ̌ t γ t E ∈ H m+1,q (Ω)

yields that Ẽ := E − Ě is an element of • D q (Ω) ∩ ε −1 ∆ q (Ω) and still satisfies dẼ ∈ H m,q+1 (Ω) , δεẼ ∈ H m,q−1 (Ω) .
Hence our problem is reduced to the discussion of forms with homogeneous tangential trace. The classical case N = 3 , q = 1 , where Ω is some bounded domain in R 3 , has been proved by Weber in [17] using the natural regularity of (q − 1 = 0)- resp. (q + 2 = 3)-forms, i.e. scalar functions. Here in the generalized case we have to deal with some additional difficulties.
Using a partition of unity we localize our problem and only consider the more difficult case of boundary charts. (A very simple proof of inner regularity utilizing Fourier's transformation is presented in section 4.2.) By (2.1) and Lemma A.8 we transform our problem to the special domain U − 1 ⊂ U 1 ⊂ R N using a C m+2 -boundary chart. Hence we have to show the following assertion for the model problem:
Lemma 4.1 Let ε be C m+1 -admissible (in U − 1 ) and E ∈ • D q (U − 1 ) ∩ ε −1 ∆ q (U − 1 ) with supp E ⊂ U − ̺ for some ̺ ∈ (0, 1) as well as dE ∈ H m,q+1 (U − 1 ) , δεE ∈ H m,q−1 (U − 1 )
.
Then E ∈ H m+1,q (U − 1 )
and there exists a positive constant c , such that
||E|| H m+1,q (U − 1 ) ≤ c ||E|| L 2,q (U − 1 ) + || dE|| H m,q+1 (U − 1 ) + ||δεE|| H m,q−1 (U − 1 )
holds uniformly with respect to E .
Proof First let us discuss the case N ≥ 3 by induction over q and m . Since we
have • D 0 (U − 1 ) = • H 1 (U − 1 ) (d acts as ∇ !) the case q = 0 is trivial. Moreover, because of ∆ N (U − 1 ) = H 1 (U − 1 ) (δ acts as ∇ !) the case q = N is trivial as well.
Thus we may assume 1 ≤ q ≤ N − 1 and that the assertion is valid for q − 1 . Let m = 0 . First we take care about the tangential derivatives and show
∂ i E ∈ L 2,q (U − 1 ) , || ∂ i E|| L 2,q (U − 1 ) ≤ c||E|| D q (U − 1 )∩ε −1 ∆ q (U − 1 ) (4.1)
for i = 1, . . . , N − 1 . By symmetry it is sufficient to consider i = 1 . We choose some θ ∈ (0, 1) satisfying ̺ + 4θ < 1 and put ̺ j := ̺ + jθ , j = 1, . . . , 4 . For 0 < |h| < θ we introduce the mappings
τ h : R N − −→ R N − x −→ (x 1 + h, x 2 , · · · , x N ) , δ h := 1 h (τ h − id) , where R N − := {x ∈ R N : x N < 0} .
The pullback δ * h of the latter operator acts componentwise as the differential quotient and commutates with d, * and thus also with δ . For all
F, G ∈ L 2,q (U − 1 ) with support in U − ̺ 3 we have, with some constant c > 0 independent of h or F ,

δ * h F, G L 2,q (U − 1 ) = − F, δ * −h G L 2,q (U − 1 ) ,
δ * h εF = εδ * h F + (δ h ε)τ * h F ,
||τ * h F || L 2,q (U − 1 ) ≤ c||F || L 2,q (U − 1 ) ,
||(δ h ε)F || L 2,q (U − 1 ) ≤ c||F || L 2,q (U − 1 ) , (4.2)

where (δ h ε)Φ(x) := (δ h ε J,I (x)) Φ I (x) dx J with Φ(x) = Φ I (x) dx I and the matrix entries ε I,J of ε . Following in straight lines [1, Theorem 3.13] we obtain for m ∈ N and all F ∈ H m,q (U − 1 ) supported in U − ̺ 3

||δ * h F || H m−1,q (U − 1 ) ≤ ||F || H m,q (U − 1 ) .
To show (4.1) by [1,Theorem 3.15] it suffices to prove
||δ * h E|| L 2,q (U − ̺ 1 ) ≤ c||E|| D q (U − 1 )∩ε −1 ∆ q (U − 1 )
, where c > 0 is independent of h , ̺ or E . In turn this estimate follows by the even stronger estimate
εδ * h E, Φ L 2,q (U − ̺ 1 ) ≤ c||E|| D q (U − 1 )∩ε −1 ∆ q (U − 1 ) ||Φ|| L 2,q (U − ̺ 1 ) (4.3) for all Φ ∈ L 2,q (U − ̺ 1 ) , where c > 0 is independent of h , ̺ , E or Φ . Therefore, let Φ ∈ L 2,q (U − ̺ 1 ) . According to Lemma A.1 we decompose Φ (actually the extension by zero to U − 1 of Φ) orthogonally in L 2,q (U − 1 ) Φ = Φ 1 + ε −1 Φ 2 , where Φ 1 ∈ d • D q−1 (U − 1 ) and Φ 2 ∈ δ∆ q+1 (U − 1 ) closures in L 2,q (U − 1 ) , since H q (U − 1 ) vanishes by [9, Satz 1, Satz 2] and thus ε H q (U − 1 ) = {0} as well. Moreover, by (A.4), (A.5) we may assume Φ 1 = dΨ 1 and Φ 2 = δΨ 2 with Ψ 1 ∈ • D q−1 (U − 1 ) ∩ 0 ∆ q−1 (U − 1 ) and Ψ 2 ∈ ∆ q+1 (U − 1 ) ∩ 0 • D q+1 (U − 1 ) . Furthermore, (A.3) yields a constant c > 0 independent of Φ , Φ ℓ , Ψ ℓ , such that ||Ψ 1 || D q−1 (U − 1 ) + ||Ψ 2 || ∆ q+1 (U − 1 ) ≤ c||Φ|| L 2,q (U − ̺ 1 ) holds. Let χ ∈ • C ∞ (U ̺ 2 ) with χ| U − ̺ 1 = 1 .
Then the assumption of the induction for ε = id yields Ψ 1 , χΨ 1 ∈ H 1,q−1 (U − 1 ) and
||χΨ 1 || H 1,q−1 (U − 1 ) ≤ c||Ψ 1 || H 1,q−1 (U − 1 ) ≤ c||Ψ 1 || D q−1 (U − 1 ) ≤ c||Φ|| L 2,q (U − ̺ 1 )
.
Clearly the form χΨ 2 possesses compact support in U − ̺ 2 ∪ U 0 ̺ 2 and by Lemma A.9 and (A.18) the extension by zero of S δ χΨ 2 to R N is an element of ∆ q+1 (R N ) . Hence we havẽ
Φ 2 := δS δ χΨ 2 ∈ 0 ∆ q (R N ) with suppΦ 2 ⊂ U ̺ 2 andΦ 2 U − ̺ 1 = Φ 2 .
Lemma A.10 yields some (q + 1)-form H ∈ H 1,q+1 (R N ) satisfying δH =Φ 2 and furthermore the estimate
||H|| H 1,q+1 (R N ) ≤ c||Φ|| L 2,q (U − ̺ 1 ) . Using Φ = dχΨ 1 + ε −1 δχH in U − ̺ 1 and (4.2) as well as δ * −h (χΨ 1 ) ∈ • D q−1 (U − 1 ) , E ∈ • D q (U − 1 ) we get εδ * h E, Φ L 2,q (U − ̺ 1 ) = δ * h (εE), Φ L 2,q (U − ̺ 1 ) − (δ h ε)τ * h E, Φ L 2,q (U − ̺ 1 ) = − εE, dδ * −h (χΨ 1 ) L 2,q (U − 1 ) − E, δδ * −h (χH) L 2,q (U − 1 ) − εE, (δ −h ε −1 )τ * −h δχH L 2,q (U − 1 ) − (δ h ε)τ * h E, Φ L 2,q (U − ̺ 1 ) = δεE, δ * −h (χΨ 1 ) L 2,q (U − 1 ) + dE, δ * −h (χH) L 2,q (U − 1 ) − εE, (δ −h ε −1 )τ * −h δχH L 2,q (U − 1 ) − (δ h ε)τ * h E, Φ L 2,q (U − ̺ 1 )
, which immediately implies (4.3). Hence (4.1) is proved. The normal partial derivative ∂ N E may be discussed as follows. By the usual formula
dE = d(E I dx I ) = ∂ j E I dx j ∧ dx I = (± ∂ j E I ) dx I+j we get ± ∂ N E I = (dE) I+N − N −1 I∋j=1 ± ∂ j E I+N −j ∈ L 2 (U − 1 ) (4.4)
for all I ∋ N and thus E τ ∈ H 1,q (U − 1 ) with the decomposition from (A. 19). The usual formula for the co-derivative reads
δH = δ(H I dx I ) = (± ∂ j H I ) * (dx j ∧ * dx I ) = (± ∂ j H I ) dx I−j . By ∂ i (εE) = (∂ i ε)E + ε ∂ i E we obtain ∂ i (εE) ∈ L 2,q (U − 1 ) for i = 1, . . . , N − 1 and hence ± ∂ N (εE) I = (δεE) I−N − N −1 I ∋j=1 ± ∂ j (εE) I−N +j ∈ L 2 (U − 1 ) (4.5)
for all I ∋ N . Therefore, (εE) ρ ∈ H 1,q (U − 1 ) . Now Lemma A.11 yields E ∈ H 1,q (U − 1 ) and the case m = 0 is proved. Let m ≥ 1 and our assertions be valid for m − 1 as well as the assumptions be given
for m . We consider E, εE ∈ H m,q (U − 1 ) with E ∈ • D q (U − 1 ) ∩ ε −1 ∆ q (U − 1 ) , supp E ⊂ U − ̺ and dE ∈ H m,q+1 (U − 1 ) , δεE ∈ H m,q−1 (U − 1 ) .
Moreover, we have the estimate
||E|| H m,q (U − 1 ) ≤ c ||E|| L 2,q (U − 1 ) + || dE|| H m−1,q+1 (U − 1 ) + ||δεE|| H m−1,q−1 (U − 1 )
.
For sufficiently small
h we have δ * h E ∈ • D q (U − 1 ) and δ * h E resp. δ * h dE converges weakly to ∂ 1 E resp. ∂ 1 dE in L 2,q (U − 1 ) resp. L 2,q+1 (U − 1 ) as h → 0 . Thus ∂ 1 E ∈ • D q (U − 1 ) and d∂ 1 E = ∂ 1 dE . Analogously we get ∂ i E ∈ • D q (U − 1 ) and d∂ i E = ∂ i dE for all indices i = 2, . . . , N − 1 . Hence all tangential derivatives ∂ i E ∈ • D q (U − 1 ) ∩ ε −1 ∆ q (U − 1 ) , i = 1, . . . , N − 1 , satisfy d∂ i E = ∂ i dE ∈ H m−1,q+1 (U − 1 ) , δε ∂ i E = ∂ i δεE − δ(∂ i ε)E ∈ H m−1,q−1 (U − 1 ) ,
which implies ∂ i E ∈ H m,q (U − 1 ) and also ∂ i (εE) ∈ H m,q (U − 1 ) by assumption. By (4.4) and (4.5) we obtain ∂ N E τ , ∂ N (εE) ρ ∈ H m,q (U − 1 ) and thus E τ , (εE) ρ ∈ H m+1,q (U − 1 ) as well. Finally we achieve by Lemma A.11 E ∈ H m+1,q (U − 1 ) , which completes the induction and hence the proof for N ≥ 3 .
The only non trivial remaining case is N = 2 , q = 1 . But this case can be proved similarly to the case N ≥ 3 without using Lemma A.10, since then Ψ 2 is even an element of ∆ 2 (U − 1 ) = H 1,2 (U − 1 ) .
4.2 Exterior domains
Proof of Theorem 3.2 As in the proof of Theorem 3.1 we may subtract the extension γ̌ t γ t E and thus assume without loss of generality γ t E = 0 , i.e. E ∈ • D q s (Ω) ∩ ε −1 ∆ q s (Ω) , since γ̌ t is continuous.
Let us assume for a moment that Theorem 3.2 holds in the special case Ω = R N . Moreover, let η denote a smooth cut-off function, which vanishes near ∂ Ω and equals 1 near infinity. Then by Theorem 3.2 in the whole space case ηE ∈ H m+1,q s (R N ) resp. ηE ∈ H m+1,q s (R N ) . Furthermore, Theorem 3.1 may be applied to the truncated form
(1 − η)E ∈ • D q (Ω b ) ∩ ε −1 ∆ q (Ω b ) with some adequate bounded subdomain Ω b ⊂ Ω yielding (1 − η)E ∈ H m+1,q (Ω b )
. Then extending (1 − η)E by zero into the whole of Ω leads to (1 − η)E ∈ H m+1,q s (Ω) . The estimates follow by induction. Hence, our proof is reduced to the following assertion for the special model case Ω = R N :

Lemma 4.2 Let s ∈ R , m ∈ N 0 and ε be some C m+1 -admissible transformation as well as E ∈ D q s (R N ) ∩ ε −1 ∆ q s (R N ) .

(i) Then dE ∈ H m,q+1 s (R N ) and δεE ∈ H m,q−1 s (R N ) imply E ∈ H m+1,q s (R N ) and with some constant c > 0
||E|| H m+1,q s (R N ) ≤ c ||E|| L 2,q s (R N ) + || dE|| H m,q+1 s (R N ) + ||δεE|| H m,q−1 s (R N )
holds uniformly with respect to E .
(ii) If additionally ε is a 0-C m+1 -admissible transformation of second kind and τ -C 0admissible of first (or second) kind with some τ > 0 then dE ∈ H m,q+1 s+1 (R N ) and δεE ∈ H m,q−1 s+1 (R N ) imply E ∈ H m+1,q s (R N ) and there exists some positive constant c , such that the estimate
||E|| H m+1,q s (R N ) ≤ c ||E|| L 2,q s (R N ) + || dE|| H m,q+1 s+1 (R N ) + ||δεE|| H m,q−1 s+1 (R N )
holds uniformly with respect to E .
Proof Our induction over m starts with m = 0 .
Lemma 4.3 Let ε be C 1 -admissible. Then D q (R N ) ∩ ε −1 ∆ q (R N ) = H 1,q (R N )
holds with equivalent norms depending on ε .
Proof Partial integration, i.e. Stokes' theorem, and the well known formula dδ + δ d = ∆ (Here the Laplacian ∆ acts componentwise with respect to Euclidian coordinates.) yield
∀ Φ ∈ • C ∞,q (R N ) N n=1 || ∂ n Φ|| 2 L 2,q (R N ) = || dΦ|| 2 L 2,q+1 (R N ) + ||δΦ|| 2 L 2,q−1 (R N )
.
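In more detail (a sketch of this standard computation, using the weak Stokes formula and dδ + δd = ∆ as stated above):
\[
\|\mathrm{d}\Phi\|^2_{L^{2,q+1}} + \|\delta\Phi\|^2_{L^{2,q-1}}
= -\langle (\delta\mathrm{d} + \mathrm{d}\delta)\Phi, \Phi\rangle_{L^{2,q}}
= -\langle \Delta\Phi, \Phi\rangle_{L^{2,q}}
= \sum_{n=1}^{N} \|\partial_n\Phi\|^2_{L^{2,q}} ,
\]
the last step being componentwise integration by parts for the Euclidean Laplacian.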
Together with Fourier's transformation, i.e. (A.15)-(A.17) and (A.7), this implies

D q (R N ) ∩ ∆ q (R N ) = H 1,q (R N ) (4.7)
with equal norms, since
• C ∞,q (R N ) is dense in H 1,q (R N ) . Now let E ∈ D q (R N ) ∩ ε −1 ∆ q (R N )
. By [10, Lemma 1, Lemma 7] (see also [14] as well as Appendix A.2 and A.3) we decompose E = dΦ + Ψ according to
L 2,q (R N ) = dD q−1 (R N ) ⊕ 0 ∆ q (R N ) = d D q−1 −1 (R N ) ∩ 0 ∆ q−1 −1 (R N ) ⊕ 0 ∆ q (R N )
observing dΨ = dE and δΨ = 0 . By (4.7) we obtain Ψ ∈ H 1,q (R N ) and the estimate ||Ψ|| H 1,q (R N ) ≤ c||E|| D q (R N ) with some constant c > 0 . Hence εΨ ∈ H 1,q (R N ) and Φ solves the elliptic system
δε dΦ = δεE − δεΨ =: F ∈ L 2,q−1 (R N ) , δΦ = 0 , where ||F || L 2,q (R N ) ≤ c||E|| D q (R N )∩ε −1 ∆ q (R N )
. Using the operators
τ h,i : R N −→ R N x −→ (x 1 , · · · , x i−1 , x i + h, x i+1 , · · · , x N ) , δ h,i := 1 h (τ h,i − id)
for i = 1, . . . , N and h > 0 defined on R N corresponding to τ h = τ h,1 and δ h = δ h,1 defined on R N − from the proof of Theorem 3.1 as well as ||τ * h,i φ|| L 2,q (R N ) = ||φ|| L 2,q (R N ) and the estimates
||δ * h,i φ|| L 2,q (R N ) ≤ || ∂ i φ|| L 2,q (R N ) , || dφ|| L 2,q+1 (R N ) ≤ N n=1 || ∂ n φ|| L 2,q (R N ) we get εδ * h,i dΦ, dφ L 2,q (R N ) = δε dΦ, δ * −h,i φ L 2,q−1 (R N ) − dΦ, (δ −h,i ε)τ * −h,i dφ L 2,q (R N )
and thus by (4.6) uniformly with respect to φ and h
εδ * h,i dΦ, dφ L 2,q (R N ) ≤ c||E|| D q (R N )∩ε −1 ∆ q (R N ) N n=1 || ∂ n φ|| L 2,q−1 (R N ) ≤ c||E|| D q (R N )∩ε −1 ∆ q (R N ) || dφ|| L 2,q (R N ) + ||δφ|| L 2,q−2 (R N ) for all φ ∈ • C ∞,q−1 (R N )
. By this estimate and since
• C ∞,q−1 (R N ) is a dense subset of D q−1 −1 (R N ) ∩ ∆ q−1 −1 (R N ) we obtain ||δ * h,i dΦ|| L 2,q (R N ) ≤ c||E|| D q (R N )∩ε −1 ∆ q (R N ) ,
where the constant c > 0 is independent of h . Therefore, dΦ ∈ H 1,q (R N ) and the esti-
mates || ∂ i dΦ|| L 2,q (R N ) ≤ c||E|| D q (R N )∩ε −1 ∆ q (R N )
, i = 1, . . . , N , hold, which completes the proof.
Now we may proceed with the induction start. Let
E ∈ D q s (R N ) ∩ ε −1 ∆ q s (R N ) . We have ρ s E ∈ L 2,q (R N ) and by (A.9) d(ρ s E) = ρ s d E + sρ s−2 RE ∈ L 2,q+1 (R N ) , δ(ρ s εE) = ρ s δεE + sρ s−2 T εE ∈ L 2,q−1 (R N ) .
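The first of these identities is just the product rule combined with the definition of R (a one-line check, cf. (A.9)): since dρ^s = sρ^{s−2} r dr ,
\[
\mathrm{d}(\rho^{s}E) = \mathrm{d}\rho^{s}\wedge E + \rho^{s}\,\mathrm{d}E
= s\rho^{s-2}\,r\,\mathrm{d}r\wedge E + \rho^{s}\,\mathrm{d}E
= \rho^{s}\,\mathrm{d}E + s\rho^{s-2}RE ,
\]
and the second identity follows analogously for δ via the star operator.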
Thus, using Lemma 4.3 ρ s E ∈ D q (R N ) ∩ ε −1 ∆ q (R N ) = H 1,q (R N ) follows and ∂ n (ρ s E) = ρ s ∂ n E + sρ s−2 X n E ∈ L 2,q (R N ) yields (i) with the desired estimates. Looking at
E ∈ D q s (R N ) ∩ ε −1 ∆ q s (R N ) ⊂ D q s (R N ) ∩ ε −1 ∆ q s (R N ) we obtain E ∈ H 1,q s (R N ) by (i).
Therefore, it only remains to show ∂ n E ∈ L 2,q s+1 (R N ) for n = 1, . . . , N . We choose a real smooth cut-off function ϕ with ϕ = 1 on (−∞, 1] and ϕ = 0 on [2, ∞) and set η t := ϕ(r/t) . Then we calculate with (4.6) or (4.7) uniformly with respect to t ∈ R +
∂ n (η t E) L 2,q s+1 (R N ) ≤ c ∂ n ( ρ s+1 η t E ∈H 1,q (R N ) ) L 2,q (R N ) + (s + 1)ρ s−1 X n η t E L 2,q (R N ) ≤ c d(ρ s+1 η t E) L 2,q+1 (R N ) + δ(ρ s+1 η t E) L 2,q−1 (R N ) + ||η t E|| L 2,q s (R N ) ≤ c ||η t E|| D q s (R N )∩ε −1 ∆ q s (R N ) + δ(η tε E) L 2,q−1 s+1 (R N ) ≤ c ||η t E|| D q s (R N )∩ε −1 ∆ q s (R N ) + N m=1 ∂ m (η t E) L 2,q s+1−τ (R N )
.
Since τ > 0 and decomposing
R N = U ϑ ∪ A ϑ we get for all ϑ ∈ R + ∂ m (η t E) 2 L 2,q s+1−τ (R N ) ≤ c ϑ ∂ m (η t E) 2 L 2,q s (R N ) + (1 + ϑ 2 ) −τ ∂ m (η t E) 2 L 2,q s+1 (R N ) (4.8)
with some constant c ϑ > 0 depending only on ϑ and τ . Here we have
A ϑ := R N \ U ϑ = x ∈ R N : |x| > ϑ .
A combination of the latter two estimates yields for some sufficient large ϑ and with (i)
N n=1 ∂ n (η t E) L 2,q s+1 (R N ) ≤ c ||η t E|| H 1,q s (R N ) + d(η t E) L 2,q+1 s+1 (R N ) + δ(η t εE) L 2,q−1 s+1 (R N ) ≤ c ||E|| D q s (R N )∩ε −1 ∆ q s (R N ) + ||t −1 r −1 RE|| L 2,q+1 s+1 (Z t,2t ) + ||t −1 r −1 T εE|| L 2,q−1 s+1 (Z t,2t ) , where Z t,T := A t ∩ U T = x ∈ R N : t < |x| < T . Using t −1 ≤ 2r −1 in Z t,2t we finally obtain the estimate N n=1 || ∂ n E|| L 2,q s+1 (Ut) ≤ N n=1 ∂ n (η t E) L 2,q s+1 (R N ) ≤ c||E|| D q s (R N )∩ε −1 ∆ q s (R N )
, which holds uniformly with respect to t . Thus letting t → ∞ the monotone convergence theorem implies E ∈ H 1,q s (R N ) and the desired estimate. Hence (ii) is proved and thus the case m = 0 is completed.
For the induction step we assume ε to be C m+1 -admissible and
d E ∈ H m,q+1 s (R N ) , δεE ∈ H m,q−1 s (R N ) .
The assertion for m − 1 yields E ∈ H m,q s (R N ) and the corresponding estimate. Then for
n = 1, . . . , N we get ∂ n E ∈ L 2,q s (R N ) , d ∂ n E ∈ H m−1,q+1 s (R N ) and δ(ε ∂ n E) = ∂ n δεE − δ (∂ n ε)E ∈ H m−1,q−1 s (R N ) .
Using once again the assumption for m − 1 we obtain ∂ n E ∈ H m,q s (R N ) and
|| ∂ n E|| H m,q s (R N ) ≤ c || ∂ n E|| L 2,q s (R N ) + || d ∂ n E|| H m−1,q+1 s (R N ) + δ(ε ∂ n E) H m−1,q−1 s (R N )
for n = 1, . . . , N . Hence E ∈ H m+1,q s (R N ) and
||E|| H m+1,q s (R N ) ≤ c ||E|| H m,q s (R N ) + N n=1 || ∂ n E|| H m,q s (R N ) ≤ c ||E|| H m,q s (R N ) + || d E|| H m,q+1 s (R N ) + ||δεE|| H m,q−1 s (R N )
.
This shows (i). Similarly we prove (ii) paying attention to the fact that the weights in the || · || H m,q s (R N ) -norms grow with the number of derivatives and that this effect is compensated by the decay properties ofε and its derivatives.
A.1 Density results
Let Ω ⊂ M be a connected open subset with compact closure of the N-dimensional smooth Riemannian manifold M . Using charts and the results and techniques known from the scalar cases, i.e. mollifiers, we get the following assertions for m ∈ N 0 :
C ∞,q (Ω) ∩ H m,q (Ω) is dense in H m,q (Ω) .
If Ω has the 'segment property', i.e. for each x ∈ ∂ Ω there exist a chart (V, h) , some ̺ ∈ (0, 1) and some vector v ∈ R N with h(x) = 0 , h(V ) = U 1 and
U ̺ ∩ h(Ω ∩ V ) + τ v ⊂ h(Ω ∩ V )
for all τ ∈ (0, 1) (compare [1, Definition 2.1] for the classical segment property; we note that manifolds with C 1 -boundary possess the segment property), we can adopt more properties from the scalar cases. For example,
C ∞,q (Ω) is dense in H m,q (Ω) (A.1)
as well as E ∈ • H m,q (Ω) for some E ∈ H m,q (Ω) , if and only if its extension by zero intõ Ω is an element of H m,q (Ω) for any open setΩ with Ω ⊂ Ω ⊂Ω ⊂Ω ⊂ M . The first assertion may be proved analogously to [22,Theorem 3.6] or [1, Theorem 2.1] and the second analogously to [22,Theorem 3.7]. The same techniques yield finally
C ∞,q (Ω) is dense in D q (Ω) resp. ∆ q (Ω) . (A.2)
Especially for Ω = M = R N we also have for all s ∈ R that
• C ∞,q (R N ) is dense in D q s (R N ) , D q s (R N ) , ∆ q s (R N ) , ∆ q s (R N ) , D q s (R N ) ∩ ∆ q s (R N ) , D q s (R N ) ∩ ∆ q s (R N ) , H m,q s (R N ) and H m,q s (R N ) .
A.2 Hodge-Helmholtz decompositions
By the projection theorem, the L 2,q (Ω)-orthogonality of d :
Lemma A.1
The following ε · , · L 2,q (Ω) -orthogonal (denoted by ⊕ ε ) decompositions hold for admissible transformations ε :
(i) L 2,q (Ω) = d • D q−1 (Ω) ⊕ ε ε −1 0 ∆ q (Ω) = 0 • D q (Ω) ⊕ ε ε −1 δ∆ q+1 (Ω) = ε −1 d • D q−1 (Ω) ⊕ ε 0 ∆ q (Ω) = ε −1 0 • D q (Ω) ⊕ ε δ∆ q+1 (Ω) (ii) L 2,q (Ω) = d • D q−1 (Ω) ⊕ ε ε H q (Ω) ⊕ ε ε −1 δ∆ q+1 (Ω) = ε −1 d • D q−1 (Ω) ⊕ ε ε −1 ε −1 H q (Ω) ⊕ ε δ∆ q+1 (Ω)
All closures are taken in L 2,q (Ω) .
Here we introduced the '(harmonic) Dirichlet forms' by
ε H q (Ω) := 0 • D q (Ω) ∩ ε −1 0 ∆ q (Ω)
and we denote them by H q (Ω) , if ε = id . An easy application of the latter lemma shows that the orthogonal projection
π : ν H q (Ω) −→ ε H q (Ω) on ε −1 0 ∆ q (Ω) along d • D q−1 (Ω)
is well defined, linear, continuous and injective. Therefore, by symmetry we obtain dim ν H q (Ω) = dim ε H q (Ω) and hence this dimension is independent of transformations, i.e. dim ε H q (Ω) = dim H q (Ω) .

A.3 Compact embedding

Definition A.2 Ω possesses the

(i) 'Maxwell compactness property' (MCP), if and only if the embeddings • D q (Ω) ∩ ∆ q (Ω) ↪ L 2,q (Ω) ,

(ii) 'Maxwell local compactness property' (MLCP), if and only if the embeddings • D q (Ω) ∩ ∆ q (Ω) ↪ L 2,q loc (Ω)

are compact for all q .

The MCP and MLCP are properties of the boundary. We will shortly present some results.
There exists a large amount of literature about the MCP, which can only hold for sub-manifolds Ω with compact closure, which may be assumed by now. The first idea was to use Gaffney's inequality, i.e. to estimate the H 1,q (Ω)-norm by the D q (Ω) ∩ ∆ q (Ω)-norm, and then to apply Rellich's selection theorem. To do this one needs smooth boundaries, which for instance may be seen in [3,Theorem 8.6]. If q = 0 we even have
• D 0 (Ω) ∩ ∆ 0 (Ω) = • D 0 (Ω) = • H 1,0 (Ω) .
In 1972 Weck [18] resp. [19] presented for the first time a proof of the MCP for manifolds with nonsmooth boundaries ('cone-property'). Further proofs of the MCP were given by Picard [13] ('Lipschitz-domains') and in the classical case by Weber [16] (another 'coneproperty') and Witsch [21] ('p-cusp-property'). A proof of the MCP in the classical case for bounded domains handling the largest known class of boundaries was given by Picard, Weck and Witsch in [15]. They combine the techniques from [19], [13] and [21]. We note that the MCP is independent of transformations. More precisely: Let ε q be admissible transformations for all q . Then Ω possesses the MCP, if and only if the embeddings • D q (Ω) ∩ ε −1 q ∆ q (Ω) ֒→ L 2,q (Ω) are compact for all q . Moreover, the MCP yields the finite dimension of the space of Dirichlet forms H q (Ω) . In fact, the dimension is determined by topological properties of Ω , i.e. dim H q (Ω) = β N −q , the (N − q)-th Betti number of Ω . Furthermore, for admissible transformations the MCP implies (by an indirect argument) the existence of a positive constant c , such that the estimate
||E|| L 2,q (Ω) ≤ c || dE|| L 2,q+1 (Ω) + ||δεE|| L 2,q−1 (Ω) (A.3) holds uniformly with respect to E ∈ • D q (Ω) ∩ ε −1 ∆ q (Ω) ∩ ε H q (Ω) ⊥ .
Here we denote the orthogonality with respect to the · , · L 2,q (Ω) -scalar product by ⊥ . This estimate implies the closedness of d • D q (Ω) resp. δ∆ q (Ω) in L 2,q+1 (Ω) resp. L 2,q−1 (Ω) . We even have
d • D q (Ω) = d • D q (Ω) = d • D q (Ω) ∩ ε −1 0 ∆ q (Ω) ∩ ε H q (Ω) ⊥ν , (A.4) δ∆ q (Ω) = δ∆ q (Ω) = δ ∆ q (Ω) ∩ ε −1 0 • D q (Ω) ∩ ε −1 H q (Ω) ⊥ν , (A.5)
which was shown in [10] in the case ε = ν = id . Here we denote the orthogonality with respect to the ν · , · L 2,q (Ω) -scalar product by ⊥ ν and put ⊥ := ⊥ id .
For an exterior domain Ω with the MLCP we have similar results. We will present them in the following.

Lemma A.3 The following assertions are equivalent:

(i) Ω possesses the MLCP.

(ii) Ω ∩ U r possesses the MCP for all r ≥ r 0 with R N \ Ω ⊂ U r 0 .

(iii) The embeddings • D q s (Ω) ∩ ∆ q s (Ω) ↪ L 2,q t (Ω) are compact for all t, s ∈ R with t < s and all q .

(iv) For all t, s ∈ R with t < s , all q and all admissible transformations ε q the embeddings
• D q s (Ω) ∩ ε −1 q ∆ q s (Ω) ֒→ L 2,q t (Ω)
are compact.
From [10] and [12] we obtain dim H q (Ω) = dim H q −1 (Ω) = β N −q < ∞ . Here we introduced the '(weighted harmonic) Dirichlet forms'
ε H q t (Ω) := 0 • D q t (Ω) ∩ ε −1 0 ∆ q t (Ω)
and again neglect the transformation or the weight in the notation for ε = id or t = 0 . Now let ε be an admissible transformation, which is τ -C 1 -admissible of second kind in A r for an arbitrary r ≥ r 0 with some order of decay τ > 0 (and r 0 from Lemma A.3).
We need a fundamental Poincare-like estimate:
Lemma A.4 There exists some constant c > 0 and some compact set K ⊂ R N , such that ||E|| L 2,q −1 (Ω) ≤ c || dE|| L 2,q+1 (Ω) + ||δεE|| L 2,q−1 (Ω) + ||E|| L 2,q (Ω∩K) holds true for all E ∈ D q −1 (Ω) ∩ ε −1 ∆ q −1 (Ω) .
Proof By a usual cutting technique we may restrict our considerations to the special case Ω = R N and ε is τ -C 1 -admissible of second kind in R N . Picking some E from
D q −1 (R N ) ∩ ε −1 ∆ q −1 (R N ) by Lemma 4.2 (ii) we get E ∈ H 1,q −t (R N )
for all t ≥ 1 and the estimate (with c depending on t but not on E)
||E|| H 1,q −t (R N ) ≤ c ||E|| L 2,q −t (R N ) + || dE|| L 2,q+1 1−t (R N ) + ||δεE|| L 2,q−1 1−t (R N )
.
(A.6)
From [10], Lemma 5 we receive a compact set K , such that
||E|| L 2,q −1 (R N ) ≤ c || dE|| L 2,q+1 (R N ) + ||δE|| L 2,q−1 (R N ) + ||E|| L 2,q (K)
.
Then (A.6) (for t = 1) and the latter estimate yield with id = ε −ε
||E|| H 1,q −1 (R N ) ≤ c || dE|| L 2,q+1 (R N ) + ||δεE|| L 2,q−1 (R N ) + ||E|| L 2,q (K) + ||E|| H 1,q −1−τ (R N )
.
Again utilizing (A.6) (for t = 1+τ ) the term ||E|| H 1,q −1−τ (R N ) may be replaced by ||E|| L 2,q −1−τ (R N ) . Since τ > 0 , using the trick from (4.8) this term can be absorbed into the left hand side, which might produce some other compact set K̃ ⊃ K .
We note that we did not need the MLCP for the proof of this lemma. But this lemma and the MLCP yield directly (by an indirect argument):

Corollary A.5 ε H q −1 (Ω) is finite dimensional and there exists some positive constant c , such that
||E|| L 2,q −1 (Ω) ≤ c || dE|| L 2,q+1 (Ω) + ||δεE|| L 2,q−1 (Ω) holds for all E ∈ • D q −1 (Ω) ∩ ε −1 ∆ q −1 (Ω) ∩ ε H q −1 (Ω) ⊥ −1,ν .
Here we denote by ⊥ −1,ν the orthogonality with respect to the νρ −1 · , ρ −1 · Ω -scalar product.
Corollary A.6 With closures taken in L 2,q±1 (Ω) we have
(i) d • D q (Ω) = d • D q −1 (Ω) = d • D q −1 (Ω) ∩ ε −1 0 ∆ q −1 (Ω) ∩ ε H q −1 (Ω) ⊥ −1,ν , (ii) δ∆ q (Ω) = δ∆ q −1 (Ω) = δ ∆ q −1 (Ω) ∩ ε −1 0 • D q −1 (Ω) ∩ ε −1 H q −1 (Ω) ⊥ −1,ν .
Proof The proof is analogous to the one of [10, Lemma 7]. Nevertheless, let us briefly indicate how to prove (i). The other assertion follows similarly. Let (E n ) n∈N ⊂ • D q (Ω) be some sequence with dE n n→∞ − −− → G in L 2,q+1 (Ω) . Using Lemma A.1 we may assume without loss of generality E n ∈ • D q (Ω) ∩ ε −1 0 ∆ q (Ω) . Moreover, by the projection theorem applied in L 2,q −1 (Ω) we may further assume
E n ∈ • D q −1 (Ω) ∩ ε −1 0 ∆ q −1 (Ω) ∩ ε H q −1 (Ω) ⊥ −1,ν .
By Corollary A.5 (E n ) n∈N is a L 2,q −1 (Ω)-Cauchy sequence and the limit E ∈ L 2,q −1 (Ω) even is an element of
• D q −1 (Ω) ∩ ε −1 0 ∆ q −1 (Ω) ∩ ε H q −1 (Ω) ⊥ −1,ν , which completes the proof.
Finally we note an immediate and easy conclusion of Corollary A.6, i.e. an electromagneto static solution theory handling homogeneous tangential boundary data.
Theorem A.7 Let d q := dim ε H q −1 (Ω) and continuous linear functionals Φ ℓ ε on the space D q −1 (Ω) ∩ ε −1 ∆ q −1 (Ω) with ε H q −1 (Ω) ∩ ⋂ ℓ=1,...,d q N(Φ ℓ ε ) = {0}
be given. Then with Φ ε := (Φ 1 ε , . . . , Φ d q ε ) the linear operator
Max ε :
• D q −1 (Ω) ∩ ε −1 ∆ q −1 (Ω) −→ δ∆ q (Ω) × d • D q (Ω) × C d q E −→ δεE, dE, Φ ε (E)
is a topological isomorphism. Here N(Φ ℓ ε ) denotes the kernel of Φ ℓ ε .
A.4 Linear transformations

Straightforward calculations and estimates show for open subsets Ω , Ω̃ with compact closure of smooth Riemannian manifolds M , M̃ :

Lemma A.8 Let τ : Ω̃ → Ω be a C 2 -diffeomorphism respecting orientation and ε be a linear transformation. Then the transformation ε is admissible, if and only if the transformation

ε τ := (−1) q(N −q) * τ * * ε(τ * ) −1
is admissible. In particular id τ = (−1) q(N −q) * τ * * (τ * ) −1 is admissible. Furthermore:
(i) E ∈ D q (Ω) resp. E ∈ • D q (Ω) , if and only if τ * E ∈ D q (Ω) resp. τ * E ∈ • D q (Ω) . Moreover, dτ * E = τ * dE and there exists a constant c > 0 independent of E , such that c −1 ||E|| D q (Ω) ≤ ||τ * E|| D q (Ω) ≤ c||E|| D q (Ω) .
(ii) E ∈ ε −1 ∆ q (Ω) , if and only if τ * E ∈ ε −1 τ ∆ q (Ω) . Moreover, δε τ τ * E = id τ τ * δεE holds and there exists some c > 0 independent of E or ε τ , such that
c −1 ||E|| ε −1 ∆ q (Ω) ≤ ||τ * E|| ε −1 τ ∆ q (Ω) ≤ c||E|| ε −1 ∆ q (Ω) .
A.5 Fourier transformation for differential forms
In the special case M = R N we have some useful operators from the spherical calculus developed in [20]. For Euclidean coordinates {x 1 , . . . , x N } we introduce the pointwise linear operators R , T on q-forms by
RE := x n dx n ∧ E = r dr ∧ E , T E := (−1) (q−1)N * R * E
and recall the formulas
RR = 0 , T T = 0 , RT + T R = r 2 (A.7)
as well as for q-forms E and (q + 1)-forms H
RE ∧ * H = E ∧ * T H , T H ∧ * E = H ∧ * RE , (A.8)
i.e. RE, H q+1 = E, T H q using the pointwise scalar product for differential forms. Then, for example, the differential d resp. δ corresponds to R resp. T in the sense that
C d,ϕ(r) E = ϕ ′ (r)r −1 RE resp. C δ,ϕ(r) E = ϕ ′ (r)r −1 T E (A.9) holds for ϕ ∈ C 1 (R) and E ∈ D q (R N ) resp. E ∈ ∆ q (R N ) .
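Before turning to the Fourier transformation we note that the algebraic relations (A.7) can be checked directly; for instance, since dr ∧ dr = 0 ,
\[
RRE = r\,\mathrm{d}r\wedge\bigl(r\,\mathrm{d}r\wedge E\bigr)
= r^{2}\,\mathrm{d}r\wedge \mathrm{d}r\wedge E = 0 ,
\]
T T = 0 then follows by conjugation with the star operator, and RT + T R = r² is essentially the standard anticommutation relation between exterior and interior multiplication by the 1-form r dr .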
But there is at least one more connection between these operators. Let us present the componentwise (with respect to Euclidean coordinates) Fourier transformation on q-forms F , which is a unitary mapping on L 2,q (R N ) . With X (x) := x and the well known formula
F(∂ α u) = i |α| X α F(u)
for scalar distributions u we get some formulas for F operating on q-forms E :

F * E = * FE (A.10)
F(∂ α E) = i |α| X α F(E) , ∂ α F(E) = (− i) |α| F(X α E) (A.11)
F(dE) = i RF(E) , dF(E) = − i F(RE) (A.12)
F(δE) = i T F(E) , δF(E) = − i F(T E) (A.13)
F(∆E) = −r 2 F(E) , ∆F(E) = −F(r 2 E) (A.14)

These formulas may be checked for smooth forms from Schwartz' space and hence remain valid for distributional q-forms, i.e. extend to our weak calculus. We note dδ + δ d = ∆ , where the Laplacian ∆ acts on each Euclidean component of E . Utilizing these formulas some Sobolev spaces can be characterized with the aid of the Fourier transformation. We easily get:

H m,q (R N ) = { E ∈ L 2,q (R N ) : F(E) ∈ L 2,q m (R N ) } , m ∈ N (A.15)
D q (R N ) = { E ∈ L 2,q (R N ) : RF(E) ∈ L 2,q+1 (R N ) } (A.16)
∆ q (R N ) = { E ∈ L 2,q (R N ) : T F(E) ∈ L 2,q−1 (R N ) } (A.17)

In this sense we also may define H s,q (R N ) , if s ∈ R .
A.6 Some technical lemmas Lemma A.9 Let r > 0 , x ′ := (x 1 , · · · , x N −1 ) and
τ : U + r −→ U − r x −→ (x ′ , −x N ) .
Then the mirror operator S d : D q (U − r ) → D q (U r ) defined by S d E| U − r := E and S d E| U + r := τ * E is well defined, linear and continuous. S d commutes with d and ||S d E|| L 2,q (Ur) = √ 2||E|| L 2,q (U − r ) holds; √ 2/2 · S d is even an isometry. Moreover, if supp E ⊂ U − ̺ for some ̺ < r , then supp S d E ⊂ U ̺ . The dual mirror operator
S δ := (−1) q(N −q) * S d * : ∆ q (U − r ) → ∆ q (U r ) (A.18)
has the corresponding properties.
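For q = 0 this is a familiar picture (a sketch of the simplest case): S d is just even reflection across {x N = 0} ,
\[
(S_{d}u)(x',x_{N}) = u(x',-|x_{N}|) \quad\text{for } u\in D^{0}(U_{r}^{-}) = H^{1}(U_{r}^{-}) ,
\]
and dS d u = S d du expresses the classical fact that the even extension of an H 1 -function is again weakly differentiable, the tangential derivatives being extended evenly and the normal derivative oddly; S δ plays the dual role for the normal components.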
Proof By density it is enough to show S d E ∈ D q (U r ) and dS d E = S d dE for some E ∈ C ∞,q (U − r ) . The assertions about the continuity and the support follow directly. Let ι : U 0 r ֒→ U − r denote the natural embedding. Observing that τ changes the orientation we get from Stokes' theorem for Φ ∈ • C ∞,q+1 (U r ) (Clearly we identify Φ with its restrictions on U ± r .)
S d E, δΦ L 2,q (Ur) = (−1) q 2
U − r E ∧ (d * Φ) + (−1) q 2 U + r (τ * E) ∧ (d * Φ) = (−1) q U − r E ∧ d * Φ − (τ −1 ) * * Φ = − U − r (dE) ∧ * Φ − (τ −1 ) * * Φ + U 0 r (ι * E) ∧ ι * − ι * (τ −1 ) * * Φ .
By ι − τ −1 ∘ ι = 0 the boundary integral vanishes and we obtain

S d E, δΦ L 2,q (Ur) = − S d dE, Φ L 2,q+1 (Ur) ,

i.e. S d E ∈ D q (U r ) with dS d E = S d dE .

This theorem may be regarded as a generalization to inhomogeneous boundary data of [17], whereas the next theorem represents a new result even in the classical context.
Theorem B.2 Let s ∈ R , m ∈ N 0 , Ω ⊂ R 3 be an exterior domain with C m+2 -boundary and ε be some C m+1 -admissible matrix. Furthermore, let E ∈ H s (curl, Ω) ∩ ε −1 H s (div, Ω) with ν × E ∈ H m+1/2 (∂ Ω) .

(i) Then curl E ∈ H m s (Ω) and divεE ∈ H m s (Ω) imply E ∈ H m+1 s (Ω) and with some constant c > 0

||E|| H m+1 s (Ω) ≤ c ( ||E|| L 2 s (Ω) + || curl E|| H m s (Ω) + ||divεE|| H m s (Ω) + ||ν × E|| H m+1/2 (∂ Ω) )

holds uniformly with respect to E .

(ii) If additionally ε is 0-C m+1 -admissible of second kind and τ -C 0 -admissible of first (or second) kind with some τ > 0 then curl E ∈ H m s+1 (Ω) and divεE ∈ H m s+1 (Ω) imply E ∈ H m+1 s (Ω) and there exists some positive constant c , such that the estimate

||E|| H m+1 s (Ω) ≤ c ( ||E|| L 2 s (Ω) + || curl E|| H m s+1 (Ω) + ||divεE|| H m s+1 (Ω) + ||ν × E|| H m+1/2 (∂ Ω) )

holds uniformly with respect to E .

Here we denoted by ν the exterior normal unit vector at ∂ Ω .

Remark B.3 Similar results hold for spaces like ε −1 H(curl, Ω) ∩ H(div, Ω) and/or with prescribed normal traces ν · E resp. ν · εE .
0 • D q (Ω) := • D q (Ω) ∩ 0 D q (Ω) .
Remark 4. 4
4Lemma 4.3 and Lemma 4.2 as well as an obvious cutting technique easily yield inner regularity results. These even include weighted inner regularity in exterior domains.A AppendixAs before let M be a N-dimensional smooth Riemannian manifold and let Ω ⊂ M denote some connected open subset with compact closure in M or some exterior domain of M = R N . Moreover, throughout this appendix ν denotes some admissible transformation.
(Ω) as well as δ∆ q+1 (Ω) ⊂ 0 ∆ q (Ω) we get the following Hodge-Helmholtz decompositions For details please see[10, Lemma 1],[14, Lemma 1] or in the classical case[11, p. 168],[15, Lemma 3.13].
Lemma A.10 Let N ≥ 3 and ̺ > 0 . There exists a constant c > 0 , such that for all E ∈ 0 ∆ q (R N ) with supp E ⊂ U ̺ there exists some H ∈ H 1,q+1 (R N ) satisfying

δH = E ,   ||H|| H 1,q+1 (R N ) ≤ c||E|| L 2,q (R N ) .

B Translation to the classical electro-magnetic language

Finally we present our results in the classical language of vector analysis, i.e. M = R 3 . By the usual identifications we have to deal with the usual Sobolev spaces H m (Ω) and H(curl, Ω) , H(div, Ω) as well as the trace spaces H m−1/2 (∂ Ω) according to the following table. Moreover, we have the weighted Sobolev spaces H m s (Ω) , H m s (Ω) as well as, for ♦ ∈ {curl, div} ,

H s (♦, Ω) := { E ∈ L 2 s (Ω) : ♦E ∈ L 2 s (Ω) } .

q            0            1               2               3
D q s (Ω)    H 1 s (Ω)    H s (curl, Ω)   H s (div, Ω)    L 2 s (Ω)
∆ q s (Ω)    L 2 s (Ω)    H s (div, Ω)    H s (curl, Ω)   H 1 s (Ω)

Theorem B.1 Let m ∈ N 0 , Ω be a bounded domain in R 3 with C m+2 -boundary as well as ε be some C m+1 -admissible matrix. Furthermore, let

E ∈ H(curl, Ω) ∩ ε −1 H(div, Ω) with curl E ∈ H m (Ω) , divεE ∈ H m (Ω) , ν × E ∈ H m+1/2 (∂ Ω) .

Then E ∈ H m+1 (Ω) and there exists a positive constant c independent of E , such that

||E|| H m+1 (Ω) ≤ c ( ||E|| L 2 (Ω) + || curl E|| H m (Ω) + ||divεE|| H m (Ω) + ||ν × E|| H m+1/2 (∂ Ω) ) .
Proof of Lemma A.10 Let E ∈ 0 ∆ q (R N ) with supp E ⊂ U ̺ . By Fourier's transformation the Euclidean components of E = E I dx I are given by integrals over U ̺ with respect to Lebesgue's measure λ ; hence all components of FE are bounded. Let H be defined by Ĥ := r −2 RFE with Ĥ(0) := 0 . An elementary estimate, valid for all x ∈ R N \ {0} and all indices J , implies X n Ĥ ∈ L 2,q+1 (R N ) . Hence H ∈ H 1,q+1 (R N ) together with the asserted norm bound. Using (A.13) as well as (A.7) we finally obtain the assertion.

To prepare the final lemma of the appendix let U ⊂ R N and let E τ resp. E ρ denote the tangential resp. normal part of a q-form E on U with respect to the decomposition (A.19).

Lemma A.11 Let U ⊂ R N , m ∈ N , E ∈ L 2,q (U) and ε be a C m -admissible transformation. Furthermore, let E τ and (εE) ρ be elements of H m,q (U) . Then E belongs to H m,q (U) as well.

Proof We have (εE ρ ) ρ = (εE) ρ − (εE τ ) ρ ∈ H m,q (U) . Since the restriction ε ρ,ρ of ε acting on the normal parts, i.e. ε ρ,ρ E ρ = (εE ρ ) ρ , is pointwise invertible with C m (U) entries we obtain E ρ ∈ H m,q (U) .

The authors are particularly indebted to their academic teachers Norbert Weck and Karl-Josef Witsch for introducing them to the field.
References

[1] Agmon, Sh., Lectures on Elliptic Boundary Value Problems, van Nostrand, New York, (1965).
[2] Kuhn, P., 'Die Maxwellgleichung mit wechselnden Randbedingungen', Dissertation, Essen, (1999), available from Shaker Verlag.
[3] Leis, R., Initial Boundary Value Problems in Mathematical Physics, Teubner, Stuttgart, (1986).
[4] Pauly, D., 'Niederfrequenzasymptotik der Maxwell-Gleichung im inhomogenen und anisotropen Außengebiet', Dissertation, Duisburg-Essen, (2003), available from http://duepublico.uni-duisburg-essen.de.
[5] Pauly, D., 'Low Frequency Asymptotics for Time-Harmonic Generalized Maxwell Equations in Nonsmooth Exterior Domains', Adv. Math. Sci. Appl., 16 (2), (2006), 591-622.
[6] Pauly, D., 'Generalized Electro-Magneto Statics in Nonsmooth Exterior Domains', Analysis (Munich), 27 (4), (2007), 425-464.
[7] Pauly, D., 'Hodge-Helmholtz Decompositions of Weighted Sobolev Spaces in Irregular Exterior Domains with Inhomogeneous and Anisotropic Media', Math. Methods Appl. Sci., 31, (2008), 1509-1543.
[8] Pauly, D., 'Complete Low Frequency Asymptotics for Time-Harmonic Generalized Maxwell Equations in Nonsmooth Exterior Domains', Asymptot. Anal., 60 (3-4), (2008), 125-184.
[9] Picard, R., 'Zur Theorie der harmonischen Differentialformen', Manuscripta Math., 27, (1979), 31-45.
[10] Picard, R., 'Randwertaufgaben der verallgemeinerten Potentialtheorie', Math. Methods Appl. Sci., 3, (1981), 218-228.
[11] Picard, R., 'On the boundary value problems of electro- and magnetostatics', Proc. Roy. Soc. Edinburgh Sect. A, 92, (1982), 165-174.
[12] Picard, R., 'Ein Hodge-Satz für Mannigfaltigkeiten mit nicht-glattem Rand', Math. Methods Appl. Sci., 5, (1983), 153-161.
[13] Picard, R., 'An Elementary Proof for a Compact Imbedding Result in Generalized Electromagnetic Theory', Math. Z., 187, (1984), 151-164.
[14] Picard, R., 'Some decomposition theorems and their applications to non-linear potential theory and Hodge theory', Math. Methods Appl. Sci., 12, (1990), 35-53.
[15] Picard, R., Weck, N., Witsch, K. J., 'Time-Harmonic Maxwell Equations in the Exterior of Perfectly Conducting, Irregular Obstacles', Analysis (Munich), 21, (2001), 231-263.
[16] Weber, C., 'A local compactness theorem for Maxwell's equations', Math. Methods Appl. Sci., 2, (1980), 12-25.
[17] Weber, C., 'Regularity Theorems for Maxwell's Equations', Math. Methods Appl. Sci., 3, (1981), 523-536.
[18] Weck, N., 'Eine Lösungstheorie für die Maxwellschen Gleichungen auf Riemannschen Mannigfaltigkeiten mit nicht-glattem Rand', Habilitationsschrift, Bonn, (1972).
[19] Weck, N., 'Maxwell's boundary value problems on Riemannian manifolds with nonsmooth boundaries', J. Math. Anal. Appl., 46, (1974), 410-437.
[20] Weck, N., Witsch, K. J., 'Generalized Spherical Harmonics and Exterior Differentiation in Weighted Sobolev Spaces', Math. Methods Appl. Sci., 17, (1994), 1017-1043.
[21] Witsch, K. J., 'A Remark on a Compactness Result in Electromagnetic Theory', Math. Methods Appl. Sci., 16, (1993), 123-129.
[22] Wloka, J., Partielle Differentialgleichungen, Teubner, Stuttgart, (1982).
title: Pseudospin Transfer Torques in Semiconductor Electron Bilayers
author: Youngseok Kim; Matthew J Gilbert; A H Macdonald
authoraffiliation: Department of Electrical and Computer Engineering and Department of Physics, University of Illinois, Urbana, IL 61801; University of Texas at Austin, Austin, Texas 78712
abstract: We use self-consistent quantum transport theory to investigate the influence of electron-electron interactions on interlayer transport in semiconductor electron bilayers in the absence of an external magnetic field. We conclude that, even though spontaneous pseudospin order does not occur at zero field, interaction-enhanced quasiparticle tunneling amplitudes and pseudospin transfer torques do alter tunneling I-V characteristics, and can lead to time-dependent response to a dc bias voltage.
doi: 10.1103/physrevb.85.165424
pdfurls: https://arxiv.org/pdf/1201.5569v2.pdf
corpusid: 9329417
arxivid: 1201.5569
pdfsha: 007e9002ab3d3a7c6727988f8392d8bff2675328
Pseudospin Transfer Torques in Semiconductor Electron Bilayers

Youngseok Kim and Matthew J Gilbert
Department of Electrical and Computer Engineering and Department of Physics, University of Illinois, Urbana, IL 61801

A H Macdonald
University of Texas at Austin, Austin, Texas 78712

27 Jan 2012 (Dated: December 21, 2013)
We use self-consistent quantum transport theory to investigate the influence of electron-electron interactions on interlayer transport in semiconductor electron bilayers in the absence of an external magnetic field. We conclude that, even though spontaneous pseudospin order does not occur at zero field, interaction-enhanced quasiparticle tunneling amplitudes and pseudospin transfer torques do alter tunneling I-V characteristics, and can lead to time-dependent response to a dc bias voltage.
I. INTRODUCTION
In this paper we address interlayer transport in separately contacted nanometer length scale semiconductor bilayers, with a view toward the identification of possible interaction-induced collective transport effects. When interaction effects are neglected a nanoscale conductor always reaches a steady state 1 in which current increases smoothly with bias voltage. Nanoscale transport theory has at its heart the evaluation of the density-matrix of non-interacting electrons in contact with two or more reservoirs whose chemical potentials differ. This problem is efficiently solved using Green's function techniques, for example by using the non-equilibrium Green's function (NEGF) method 1 . Real electrons interact, of course, and the free-fermion degrees of freedom which appear in this type of theory should always be thought of as Fermi liquid theory 2-4 quasiparticles. The effective single-particle Hamiltonian therefore depends on the microscopic configuration of the system. In practice the quasiparticle Hamiltonian is often 5;6 calculated from a self-consistent mean-field theory like Kohn-Sham density-functional theory (DFT). DFT, spin-density functional theory, current-density functional theory, Hartree theory, and Hartree-Fock theory all function as useful fermion self-consistent-field theories. Since a bias voltage changes the system density matrix, it inevitably changes the quasiparticle Hamiltonian. Determination of the steady state density matrix therefore requires a self-consistent calculation.
Self-consistency is included routinely in NEGF simulations [7][8][9] of transport at the Hartree theory level in nanoscale semiconductor systems and in simulations [10][11][12][13][14] of molecular transport. The collective behavior captured by self-consistency can sometimes change the current-voltage relationship in a qualitative way, leading to an I-V curve that is not smooth or even to circumstances in which there is no steady state response to a time-independent bias voltage. Some of the most useful and interesting examples of this type of effect occur in ferromagnetic metal spintronics. The quasiparticle Hamiltonian in a ferromagnetic metal has a large spin-splitting term which lowers the energy of quasiparticles whose spins are aligned with the magnetization (majority-spin quasiparticles) relative to those quasiparticles whose spins are aligned opposite to the magnetization (minority-spin quasiparticles). When current flows in a ferromagnetic metal the magnetization direction is altered. 15 The resulting change in the quasiparticle Hamiltonian 15 is responsible for the rich variety of so-called spin-transfer torque collective transport effects which occur in magnetic metals and semiconductors. These spin-transfer torques 16;17 are often understood macroscopically 18 as the reaction counterpart of the torques which act on the quasiparticle spins that carry current through a non-collinear ferromagnet. Spin-transfer torques 16 can lead to discontinuous I-V curves and to oscillatory 19 or chaotic response to a time-independent bias voltage. Similar phenomena 20 occur in semiconductor bilayers for certain ranges of external magnetic field over which the ground state has spontaneous 21-23 interlayer phase coherence or (equivalently) exciton condensation. Indeed when bilayer exciton condensate ordered states are viewed as pseudospin ferromagnets, the spectacular transport anomalies 23-27 they exhibit have much in common with those of ferromagnetic metals.
In this article we consider semiconductor bilayers at zero magnetic field, using a pseudospin language 28;29 in which top layer electrons are said to have pseudospin up (| ↑ ) and bottom layer electrons are said to have pseudospin down (| ↓ ). Although inter-layer transport in semiconductor bilayers has been studied extensively in the strong field quantum Hall regime, work on the zero field limit has been relatively sparse and has focused on studies of interlayer drag, 30-33 counterflow, 34 and on speculations about possible broken symmetry states. 35;36 We concur with the consensus view that pseudospin ferromagnetism is not expected in conduction band two-dimensional electron systems 37;38 except possibly 39 at extremely low carrier densities. There are nevertheless pseudospin-dependent interaction contributions to the quasiparticle Hamiltonian. When a current flows the pseudospin orientation of transport electrons is altered, just as in metal spintronics, and some of the same phenomena can occur. The resulting change in the quasiparticle Hamiltonian is responsible for a current-induced pseudospin-torque which alters the state of non-transport electrons well away from the Fermi energy. Indeed although the spin-splitting field appears spontaneously in ferromagnetic metals, spintronics phenomena usually depend on an interplay between the spontaneous exchange field and effective magnetic fields due to magnetic-dipole interactions, spin-orbit coupling, and external magnetic fields. In semiconductor bilayers the pseudospin external field is due to the single-particle interlayer tunneling amplitude and is typically of the same order 40 as the interaction contribution to the pseudospin-splitting field.

FIG. 1. Al0.9Ga0.1As barrier thickness is 60 nm, GaAs quantum well thickness is 15 nm, and Al0.9Ga0.1As barrier thickness is 1 nm. A channel is 1.2 µm long and 7.5 µm wide. An assumed electron density in the 2-dimensional electron gas (2DEG) is 2.0 × 10 10 cm −2 .
II. BILAYER PSEUDOSPIN TORQUES
A. Interacting Bilayer Model
We begin this section by briefly describing the approximations that we use to model semiconductor bilayer tunneling I-V characteristics. We consider an Al 0.9 Ga 0.1 As/GaAs bilayer heterostructure with 60 nm top and bottom Al 0.9 Ga 0.1 As barriers which act to isolate the coupled quantum wells from electrostatic gates, as illustrated in Fig. 1. The bilayer consists of two 15 nm GaAs quantum wells, each with an assumed 2DEG electron density of 2.0 × 10 10 cm −2, separated by a 1 nm Al 0.9 Ga 0.1 As barrier. The quantum wells are 1.2 µm long and 7.5 µm wide, with the splitting between the symmetric and antisymmetric states set to a small value ∆ SAS = 2t = 2 µeV. We define the z-axis as the growth direction, the x-axis as the longitudinal (transport) direction, and the y-axis as the direction across the transport channel, as shown schematically in Fig. 1. We connect ideal contacts to the inputs and outputs of both layers. The contacts inject and extract current and enter into the Hamiltonian via appropriate self-energy terms. 1 We construct the system Hamiltonian from a model with a single-band effective-mass Hamiltonian for the top and bottom layers and a phenomenological single-particle inter-layer tunneling term:
$$H = \begin{pmatrix} H_{TL} & 0 \\ 0 & H_{BL} \end{pmatrix} + \sum_{\mu=x,y,z} (\hat{\mu} \cdot \mathbf{\Delta}) \otimes \sigma_{\mu}. \qquad (1)$$
The first term on the right-hand side of Eq. (1) is the single-particle non-interacting term while the second term is a mean-field interaction term. In the second term, $\sigma_{\mu}$ represents the Pauli spin matrices in each of the three spatial directions $\mu = x, y, z$, $\otimes$ represents the Kronecker product, and $\mathbf{\Delta}$ is a pseudospin effective magnetic field which will be discussed in more detail later in this section. To explore interaction physics in bilayer transport qualitatively, we use a local density approximation in which the interaction contribution to the quasiparticle Hamiltonian is proportional to the pseudospin magnetization at each point in space. If we take the top layer as the pseudospin up state (| ↑ ⟩) and the bottom layer as the pseudospin down state (| ↓ ⟩), the single-particle interlayer tunneling term contributes a pseudospin effective field of magnitude $t = \Delta_{SAS}/2$ and direction $\hat{x}$. In real-spin ferromagnetic systems, interactions between spin-polarized electrons lead to an effective magnetic field in the direction of spin polarization. Bilayers with pseudospin polarization due to tunneling have a similar interaction contribution to the quasiparticle Hamiltonian. Including both single-particle and many-body interaction contributions, the pseudospin effective field $\mathbf{\Delta}$ in the quasiparticle Hamiltonian is 28;41
$$\mathbf{\Delta} = (t + U m^{x}_{ps})\,\hat{x} + U m^{y}_{ps}\,\hat{y}, \qquad (2)$$
where the pseudospin-magnetization m ps is defined by
$$\mathbf{m}_{ps} = \tfrac{1}{2}\,\mathrm{Tr}[\rho_{ps}\,\boldsymbol{\tau}]. \qquad (3)$$
In Eq. (3), $\boldsymbol{\tau} = (\sigma_x, \sigma_y, \sigma_z)$ is the vector of Pauli spin matrices, and $\rho_{ps}$ is the 2 × 2 Hermitian pseudospin density matrix which we define as
$$\rho_{ps} = \begin{pmatrix} \rho_{\uparrow\uparrow} & \rho_{\uparrow\downarrow} \\ \rho_{\downarrow\uparrow} & \rho_{\downarrow\downarrow} \end{pmatrix}. \qquad (4)$$
The diagonal terms of the pseudospin density matrix ($\rho_{\uparrow\uparrow}$, $\rho_{\downarrow\downarrow}$) are the electron densities of the top and bottom layers. In Eq. (2), we have dropped the exchange potential associated with the $\hat{z}$ component of pseudospin because it is dominated by the electric potential difference between layers induced by the inter-layer bias voltage (see below). From the definitions in Eqs. (3) and (4), the pseudospin magnetization components in the $\hat{x}$, $\hat{y}$, $\hat{z}$ directions are expressed in terms of the density matrix as
$$m^{x}_{ps} = \tfrac{1}{2}(\rho_{\uparrow\downarrow} + \rho_{\downarrow\uparrow}), \qquad (5)$$
$$m^{y}_{ps} = \tfrac{1}{2}(-i\rho_{\uparrow\downarrow} + i\rho_{\downarrow\uparrow}), \qquad (6)$$
$$m^{z}_{ps} = \tfrac{1}{2}(\rho_{\uparrow\uparrow} - \rho_{\downarrow\downarrow}). \qquad (7)$$
As a result, we may express the system Hamiltonian in terms of pseudospin field contributions,
$$H = \begin{pmatrix} H_{TL} + \Delta_z & \Delta_x - i\Delta_y \\ \Delta_x + i\Delta_y & H_{BL} - \Delta_z \end{pmatrix}. \qquad (8)$$
Here ∆ z is the electric potential difference between the two layers which we evaluate in a Hartree approximation, disregarding its exchange contribution. The planar pseudospin angle which figures prominently in the discussion below is defined by
$$\phi_{ps} = \tan^{-1}\!\left(\frac{m^{y}_{ps}}{m^{x}_{ps}}\right). \qquad (9)$$
This angle corresponds physically to the phase difference between electrons in the two layers.
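To make the pseudospin bookkeeping above concrete, the following minimal Python sketch evaluates Eqs. (2), (3), and (9) for a single 2 × 2 pseudospin density matrix. It is only an illustration of the definitions, not part of the transport code, and the numerical values chosen for the density matrix, t, and U are arbitrary assumptions.

```python
# Minimal sketch (not the authors' code): pseudospin magnetization and the
# effective field of Eqs. (2), (3), and (9) for one 2x2 pseudospin density matrix.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pseudospin_magnetization(rho_ps):
    """m_ps = (1/2) Tr[rho_ps * tau], Eq. (3)."""
    return np.real(np.array([0.5 * np.trace(rho_ps @ s) for s in (sx, sy, sz)]))

def effective_field(m_ps, t, U):
    """Delta = (t + U m_x) x_hat + U m_y y_hat, Eq. (2)."""
    return np.array([t + U * m_ps[0], U * m_ps[1], 0.0])

# Example: a density matrix with some interlayer coherence (assumed values).
rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])
m = pseudospin_magnetization(rho)
Delta = effective_field(m, t=1e-6, U=0.5)    # arbitrary units
phi_ps = np.arctan2(m[1], m[0])              # planar pseudospin angle, Eq. (9)
print(m, Delta, phi_ps)
```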
B. Enhanced Interlayer Tunneling
With the quasiparticle Hamiltonian for our semiconductor bilayer defined, we now address the strength of the interlayer interactions present in the system. The interaction parameter U in Eq. (2) is chosen so that the local density approximation for interlayer exchange reproduces a prescribed value for the interaction enhancement of the interlayer tunneling amplitude. In equilibrium the pseudospin magnetization will be oriented in the $\hat{x}$ direction and the quasiparticles will have either symmetric or antisymmetric bilayer states, with pseudospins in the $\hat{x}$ and $-\hat{x}$ directions respectively. The majority and minority pseudospin states differ in energy at a given momentum by $2t_{eff}$, where
$$t_{eff} = t + U\,\frac{N_s - N_a}{2}. \qquad (10)$$
The population difference between symmetric and antisymmetric states may be evaluated from the difference in their Fermi radii illustrated in Fig. 2:
$$\frac{N_s - N_a}{2} = \nu_0\, t_{eff}, \qquad (11)$$
where ν 0 is the density-of-states of a single layer. Combining Eq. (10) and Eq. (11) we can relate U to S, the interaction enhancement factor for the interlayer tunneling amplitude:
$$t_{eff} = \frac{t}{1 - U\nu_0} \equiv S\, t. \qquad (12)$$
The physics of S is similar to that responsible for the interaction enhancement of the Pauli susceptibility in metals. According to microscopic theory 40 a typical value for S is around 2. We choose to use S rather than U as a parameter in our calculations and therefore set
$$U = \frac{1 - S^{-1}}{\nu_0} \simeq \frac{1}{2\nu_0}. \qquad (13)$$
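The relation between S, U, and $t_{eff}$ in Eqs. (10)-(13) can be checked with a few lines of Python. The density of states $\nu_0$ and the bare amplitude t used below are assumed illustrative numbers, not values taken from the paper's microscopic calculation.

```python
# Minimal sketch (illustrative): the relation between the enhancement factor S,
# the contact interaction U, and the effective tunneling amplitude t_eff,
# Eqs. (10)-(13). nu0 and t are assumed values.
def interaction_from_enhancement(S, nu0):
    """U = (1 - 1/S) / nu0, Eq. (13)."""
    return (1.0 - 1.0 / S) / nu0

def effective_tunneling(t, S):
    """t_eff = S * t, Eq. (12)."""
    return S * t

nu0 = 1.4e10      # assumed single-layer density of states, states / (meV cm^2)
t = 1e-3          # bare tunneling amplitude in meV (Delta_SAS = 2 ueV)
for S in (1.0, 2.0, 4.0):
    print(S, interaction_from_enhancement(S, nu0), effective_tunneling(t, S))
```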
C. Pseudospin Transfer Torques
In the following section we report on simulations in which we drive an interlayer current by keeping the top left and top right contacts grounded and applying identical interlayer voltages, V IN T , at the bottom left and bottom right contacts.
We choose this bias configuration so as to focus on interlayer currents that are relatively uniform. The transport properties depend only on the quasiparticle Hamiltonian and on the chemical potentials in the leads. Because the pseudospin effective field is the only term in the quasiparticle Hamiltonian which does not conserve theẑ component of the pseudospin, it follows that every quasiparticle wavefunction in the system must satisfy 42
$$\partial_t m^{z}_{ps} = -\nabla \cdot \mathbf{j}^{z} - \frac{2}{\hbar}\,(\mathbf{m}_{ps} \times \mathbf{\Delta})_z = 0, \qquad (14)$$
where $\mathbf{j}^{z}$ is the $\hat{z}$ component of the pseudospin current contribution from that orbital, i.e. the difference between bottom- and top-layer number currents, and $\mathbf{m}_{ps}$ is the pseudospin magnetization of that orbital. For steady-state transport, the quasiparticles satisfy time-independent Schrödinger equations so that, summing over all quasiparticle orbitals, we find
$$\frac{2}{\hbar}\,|\mathbf{m}_{ps}||\mathbf{\Delta}|\sin(\phi_{ps} - \phi_{\Delta}) = \frac{2t}{\hbar}\, m^{y}_{ps} = \nabla \cdot \mathbf{j}^{z}. \qquad (15)$$
In Eq. (15), $\phi_{\Delta}$ is the planar orientation of $\mathbf{\Delta}$. The first equality in Eq. (15) follows from Eq. (2). The pseudospin orbitals do not align with the effective field they experience because they must precess between layers as they traverse the sample. The realignment of transport orbital pseudospin orientations alters the total pseudospin and therefore the interaction contribution to $\mathbf{\Delta}$. The change in $\mathbf{m}_{ps} \times \mathbf{\Delta}$ due to transport currents is referred to here as the pseudospin transfer torque, in analogy with the terminology commonly found in metal spintronics. Integrating Eq. (15) across the sample from left to right and accounting for spin degeneracy, we find that
$$\frac{4etA}{\hbar}\,\langle m^{y}_{ps} \rangle = I_L + I_R = I, \qquad (16)$$
where A is the 2D layer area, the angle brackets denote a spatial average, and I L and I R are the currents flowing from top to bottom at the left and right contacts.
(The pseudospin current $\mathbf{j}^{z}$ flows to the right and is positive on the right side of the sample, but flows to the left and is negative on the left side of the sample.) If the bias voltage can drive an interlayer current I larger than $4etA\langle m^{y}_{ps}\rangle/\hbar$, it will no longer be possible to achieve a transport steady state. Under these circumstances the interlayer current will oscillate in sign and the time-averaged current will be strongly reduced. In the next section we use a numerical simulation to assess the possibility of achieving currents of this size.
A similar conclusion can be reached following a different line of argument. The microscopic operator $\hat{J}$ describing the net current flowing from the top layer (↑) to the bottom layer (↓) is given by 33;43
$$\hat{J} = -\frac{iet}{\hbar}\sum_{k,\sigma}\left(c^{\dagger}_{k,\sigma,\uparrow} c_{k,\sigma,\downarrow} - c^{\dagger}_{k,\sigma,\downarrow} c_{k,\sigma,\uparrow}\right) = -\frac{2iet}{\hbar}\sum_{k}\left(c^{\dagger}_{k,\uparrow} c_{k,\downarrow} - c^{\dagger}_{k,\downarrow} c_{k,\uparrow}\right) = \frac{4et}{\hbar}\,\hat{m}^{y}_{ps}, \qquad (17)$$
where k, σ are the momentum and spin indices respectively. Here we have introduced the common notation $\Delta_{SAS} = 2t$ for the pseudospin splitting between symmetric and antisymmetric states in the absence of interactions and added a factor of 2 to account for spin degeneracy. The pseudospin density operator $\hat{m}^{y}_{ps}$ in Eq. (17) is given by
$$\hat{m}^{y}_{ps} = -\frac{i}{2}\sum_{k}\left(c^{\dagger}_{k,\uparrow} c_{k,\downarrow} - c^{\dagger}_{k,\downarrow} c_{k,\uparrow}\right). \qquad (18)$$
As a result, current density flowing in the given system may be written as
$$\langle\hat{J}\rangle = \frac{4et}{\hbar}\,\langle m^{y}_{ps}\rangle = \frac{4et}{\hbar}\,|m_{xy}|\sin\phi_{ps} \le \frac{4et}{\hbar}\,|m_{xy}| = J_c, \qquad (19)$$
where $|m_{xy}| = \sqrt{m_x^2 + m_y^2}$, $\phi_{ps}$ is defined in Eq. (9), and $J_c$ is the critical current density. Assuming the current flow is uniform across the device, it is possible to obtain the critical current simply by multiplying Eq. (19) by the system area A to obtain
$$I_c = J_c \times A = \frac{4etA}{\hbar}\,|m_{xy}|. \qquad (20)$$
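As a rough order-of-magnitude check of Eq. (20), the sketch below evaluates the critical current for a channel of the size quoted in Sec. II A, assuming a single-layer density of states of roughly 10^10 cm^-2 meV^-1 and S = 2 (both assumptions, not values quoted in the text). The result comes out at a few nA, the same order as the maximum steady-state current of about 4 nA found in Sec. III.

```python
# Minimal order-of-magnitude sketch (not the authors' code) of the critical
# current I_c = 4 e t A |m_xy| / hbar, Eq. (20). nu0 and S are assumed values.
e_charge = 1.602e-19          # C
hbar = 1.055e-34              # J s
meV = 1.602e-22               # J

t = 1e-3 * meV                # bare tunneling amplitude, Delta_SAS/2 = 1 ueV
area = 1.2e-6 * 7.5e-6        # m^2, 1.2 um x 7.5 um channel
nu0 = 1.4e14 / meV            # single-layer DOS per m^2 per J (~1.4e10 cm^-2 meV^-1, assumed)
S = 2.0                       # assumed exchange enhancement

m_xy = nu0 * S * t            # equilibrium in-plane pseudospin density m_0 = nu0 * S * t
I_c = 4 * e_charge * t * area * m_xy / hbar
print(f"critical current ~ {I_c * 1e9:.1f} nA")
```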
D. Simulation detail
Because we neglect disorder in this simulation we may write the Hamiltonian in the form of decoupled 1D longitudinal channels in the transport ($\hat{x}$) direction, taking proper account of the eigenenergy of transverse ($\hat{y}$) direction motion 44. The calculation strategy follows a standard self-consistent field procedure. Given the density matrix of the 2D system, we can evaluate the mean-field quasiparticle Hamiltonian. For the interlayer tunneling part of the Hamiltonian we use the local-density approximation outlined in subsection A. The electrostatic potential in each layer is calculated from the charge density in each of the layers by solving a 2D Poisson equation using an alternating direction implicit method 45 with appropriate boundary conditions. The boundary conditions employed in this situation were hard-wall boundaries on the top and sides of the simulation domain and Neumann boundaries at the points where current is injected, to ensure charge neutrality 46. Given the mean-field quasiparticle Hamiltonian and the voltages in the leads, we can solve for the steady-state density matrix of the two-dimensional bilayer using the NEGF method with a real-space basis. The density matrix obtained from the quantum transport calculation is updated at each step in the iteration process. The updated density is then fed back into the Poisson solver and the on-site potential is updated using the Broyden method 47 to accelerate self-consistency. The effective interlayer tunneling amplitudes are also updated and the loop proceeds until a desired level of self-consistency is achieved.
The transport properties are then calculated by applying the Landauer formula, i.e. by using
$$I(V_{sd}) = \frac{2e}{h}\int dE\; T(E)\,[f_s(E) - f_d(E)]. \qquad (21)$$
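The structure of the self-consistent loop described above can be summarized schematically. The sketch below is purely illustrative: poisson_solve and negf_density are trivial stand-ins for the actual ADI Poisson solver and NEGF density calculation, and simple linear mixing replaces the Broyden update of Ref. 47.

```python
# Schematic of the self-consistent field loop (illustrative stand-ins only).
import numpy as np

def poisson_solve(density):
    # Stand-in for the 2D alternating-direction-implicit Poisson solve.
    return 0.3 * (density - density.mean())

def negf_density(potential):
    # Stand-in for the NEGF steady-state density at fixed quasiparticle Hamiltonian.
    return 1.0 / (1.0 + np.exp(potential))

density = np.linspace(0.3, 0.7, 32)   # assumed initial guess
mixing = 0.2
for iteration in range(200):
    potential = poisson_solve(density)           # Hartree potential from charge density
    new_density = negf_density(potential)        # transport solve at fixed potential
    if np.max(np.abs(new_density - density)) < 1e-10:
        break
    density = (1 - mixing) * density + mixing * new_density   # linear mixing step
print("converged after", iteration, "iterations")
```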
III. NUMERICAL RESULTS
A. Linear response
In Fig. 3, we plot the interlayer conductance at temperature T = 0 as a function of the interlayer exchange enhancement S, obtained by setting $V_B = V_{INT}$ and $V_T = 0$ in Fig. 1. The $S^2$ dependence demonstrates that the conductance in our nanodevice is proportional to the square of the quasiparticle tunneling amplitude, as in bulk samples 30-33;48;49, and that the quasiparticle tunneling amplitude is approximately uniformly enhanced even though the finite-size system is not perfectly uniform. The range of S used in this figure corresponds to the relatively modest enhancement factors that we expect in bilayer systems in which the individual layers have the same sign of mass. For systems in which the quasiparticle masses are opposite in the two layers we expect values of S that are significantly larger. Spontaneous interlayer coherence, which occurs in a magnetic field 24;50-53 but is not expected in the absence of a field, would be signaled by a divergence in S. The physics of the results illustrated in Fig. 3 can be understood qualitatively by ignoring the finite-size-related spatial inhomogeneities present in our simulations and considering the simpler case in which there is a single bottom-layer source contact and a single top-layer drain contact on opposite ends of the transport channels. The interlayer tunneling is then diagonal in the transverse channel index, and the transmission probability from top layer to bottom layer in channel k is
$$T_{interlayer} = \sin^2\!\left(\frac{\delta_k L}{2}\right). \qquad (22)$$
In Eq. (22), $\delta_k$ is the difference between the current-direction wavevectors of the symmetric and antisymmetric states and L is the system length in the transport direction. We may simplify the expression in Eq. (22) by rewriting it in terms of the transport-direction Fermi velocity $v_f$ and making use of the small-angle expansion of the sine function to obtain
$$T_{interlayer} = \left(\frac{t_{eff}\, L}{\hbar v_f}\right)^{2}, \qquad (23)$$
where $t_{eff}$ is the quasiparticle tunneling amplitude, proportional to the interlayer exchange enhancement S, resulting in the power-law dependence seen in Fig. 3. As this approximate expression suggests, we find that transport at low interlayer bias voltages is dominated by the highest-energy transverse channel, which has the smallest transport-direction Fermi velocity.
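A short numerical check of Eqs. (22) and (23): for assumed values of $t_{eff}$, L, and the transport-direction Fermi velocity, the small-angle form reproduces the exact $\sin^2$ expression when the precession angle is small. All the numbers below are illustrative assumptions.

```python
# Minimal sketch comparing the exact channel transmission of Eq. (22) with the
# small-angle form of Eq. (23). t_eff, L, and v_f are assumed values.
import numpy as np

hbar = 1.055e-34              # J s
meV = 1.602e-22               # J

L = 1.2e-6                    # channel length, m
v_f = 1.0e5                   # transport-direction Fermi velocity, m/s (assumed)
t_eff = 2e-3 * meV            # enhanced tunneling amplitude, 2 ueV (assumed S = 2)

delta_k = 2 * t_eff / (hbar * v_f)        # wavevector splitting of symmetric/antisymmetric states
T_exact = np.sin(delta_k * L / 2) ** 2    # Eq. (22)
T_small = (t_eff * L / (hbar * v_f)) ** 2 # Eq. (23)
print(T_exact, T_small)
```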
B. Transport beyond linear response
We have seen that at small bias voltages the interlayer current is enhanced by interlayer exchange interactions. In Fig. 4, we compare our interlayer transport result with the S = 1 case. We see that all of the curves are sharply peaked near zero bias, as in bulk 2D-to-2D tunneling 30-33;48;49. In all cases the decrease and change in sign of the differential conductance at higher bias voltages is due to the build-up of a Hartree potential difference ($\Delta_z$) between the layers which moves the bilayer away from its resonance condition. The peak near zero bias is sharper for the enhanced interlayer tunneling cases (filled triangles and circles in Fig. 4). To get a clearer picture of S > 1 transport properties in the nonlinear regime, we examine the influence of bias voltage on steady-state pseudospin configurations. Initially, all orbitals are aligned with the external pseudospin field, the inter-layer tunneling, and it follows from Eq. (16) that $\phi_{ps} = 0$. The enhanced magnitude of this equilibrium pseudospin polarization is $m_0 = \nu_0 S t = \nu_0 t_{eff}$. When current flows between the layers there must be a $\hat{y}$-component pseudospin, as we have explained previously, whose spatially averaged value is proportional to the current. If it is possible to drive a current that is larger than allowed by Eq. (20), it will no longer be possible to sustain a time-independent steady state.
In Figs. 5 and 6 we plot the magnitudes of the $\hat{x}$, $\hat{y}$, and $\hat{z}$ directed pseudospin fields, along with $|m_{xy}| = \sqrt{m_x^2 + m_y^2}$, evaluated at the center of the device, as a function of current and bias voltage, respectively. The pseudospin densities are plotted in units of their equilibrium values. The fifteen data points for each of these curves correspond to the 15 data points in the current vs. bias potential plot in Fig. 4, so that the maximum current corresponds to a bias voltage of ∼ 20 µV and the last data point corresponds to a bias voltage of ∼ 100 µV. The first thing to notice in this plot is that the $\hat{z}$ pseudospin component increases monotonically with bias voltage, as a potential difference between layers builds up. This is the effect which eventually causes the current to begin to decrease. The $\hat{x}$ component of pseudospin increases very slowly with bias voltage in the regime where the current is increasing, but drops rapidly when the current is decreasing. At the same time the $\hat{y}$ component of pseudospin evaluated at the device center rises steadily with current until the maximum current is reached and then remains approximately constant. When the two effects are combined, $|m_{xy}|$ first increases slowly with bias voltage and then decreases slightly more rapidly. The relatively weak dependence of $|m_{xy}|$ on current can be understood as a competition between two effects. Because of the Coulomb potential which builds up and lifts the degeneracy between states localized in opposite layers, the direction of the pseudospin tilts toward the $\hat{z}$ direction, decreasing the $\hat{x}$-$\hat{y}$ plane component of each state. At the same time the total magnitude of the pseudospin field increases, increasing the pseudospin polarization. In a uniform system the two effects cancel when a $\hat{z}$-direction pseudospin field is added. Our simulations suggest the following scenario for how the critical current might be reached in bilayers. The width of the linear-response regime is limited by the lifetime of Bloch states, which is set by disorder in bulk systems and, in our finite-size system, by the time for escape into the contacts. In the linear-response regime the current is enhanced by a factor of $S^2$ by inter-layer exchange interactions. The maximum current which can be supported in the steady state is however proportional to the bare inter-layer tunneling and to the $\hat{x}$-$\hat{y}$ pseudospin polarization and is therefore enhanced only by a factor of S. From this comparison we can conclude that a more strongly enhanced quasiparticle inter-layer tunneling amplitude (larger S) increases the chances of reaching a critical current beyond which the system's current response is dynamic. For the parameters of our simulation, the maximum current of around 4 nA is reached at a bias voltage of around 10 µV. At this bias voltage the average $\hat{x}$-$\hat{y}$ plane angle of the pseudospin field is still less than 90°, and steady-state response occurs. As the bias voltage increases further the total current decreases, but the pseudospins become strongly polarized in the $\hat{z}$ direction. We illustrate this behavior in Fig. 7 by plotting
$$\sigma = \Delta_z \big/ \sqrt{\Delta_z^2 + \Delta_c^2} \qquad (24)$$
vs. bias voltage, where $\Delta_z$ is the $\hat{z}$-direction pseudospin field and $\Delta_c$ is a constant beyond which the charge imbalance of the system becomes significant 54. The magnitude of the $\hat{x}$-$\hat{y}$ direction pseudospin polarization consequently decreases. In our simulations the critical current decreases more rapidly than the current in this regime. As shown in Fig. 5, $\phi_{ps}$ approaches π/2 ($m_x \to 0$) at the largest bias voltages (∼ 100 µV) for which we are able to obtain a steady-state solution of the non-equilibrium self-consistent equations.
IV. DISCUSSIONS AND CONCLUSIONS
For a balanced double quantum well system, the equilibrium electronic eigenstates have symmetric or antisymmetric bilayer wavefunctions. When the bilayer is described using a pseudospin language, the difference between symmetric and antisymmetric populations corresponds to an $\hat{x}$-direction pseudospin polarization. When the bilayer system is connected to reservoirs that drive current between layers, it is easy to show that the nonequilibrium pseudospin polarization must tilt toward the $\hat{y}$-direction. The total current that flows between layers is in fact simply related to the total $\hat{y}$-direction pseudospin polarization. These properties suggest the possibility that electron-electron interactions can qualitatively alter interlayer transport under some circumstances. In equilibrium, interactions enhance the quasiparticle interlayer tunneling amplitude, but not the total current that can be carried between layers for a given $\hat{y}$-direction pseudospin polarization. If the inter-layer quasiparticle current can be driven to a value that is larger than can be supported by the inter-layer tunneling amplitude, the self-consistent equations for the transport steady state have no solution and a time-dependent current response is expected. This effect has not yet been observed, but is partially analogous to spin-transfer-torque oscillators in circuits containing magnetic metals. We have demonstrated by explicit calculation for a model bilayer system that it is in principle possible to induce this dynamic instability in semiconductor bilayers.
The current-voltage relationship in semiconductor bilayers is characterized by a sharp peak in dI/dV at small bias voltages, followed by a regime of negative differential conductance at larger bias voltages. In our simulations the pseudospin instability occurs in the regime of negative differential conductance where dynamic responses might also occur simply due to normal electrical instabilities. It should be possible to distinguish these two effects experimentally, by varying the circuit resistance that is in series with the bilayer system. The interaction effect might also be more easily realized in bilayers in which the resonant interlayer tunneling conductance is broadened by intentionally adding disorder.
FIG. 1. Device cross-section in the x-z plane. The top and bottom Al 0.9 Ga 0.1 As barrier thickness is 60 nm, the GaAs quantum well thickness is 15 nm, and the middle Al 0.9 Ga 0.1 As barrier thickness is 1 nm. The channel is 1.2 µm long and 7.5 µm wide. The assumed electron density in the two-dimensional electron gas (2DEG) is 2.0 × 10 10 cm −2.
FIG. 2. Schematic of the Fermi surfaces of the bilayer system containing populations of symmetric and antisymmetric states separated in energy by the single-particle tunneling term ∆ SAS.
FIG. 3. Plot of the height of the interlayer conductance as a function of the interlayer exchange enhancement S at an interlayer bias V T − V B = 10 nV. In the inset, we plot the height of the interlayer conductance normalized to the noninteracting conductance (S = 1) as a function of S with the same x axis. The height of the interlayer conductance clearly follows an S 2 dependence. The single-particle tunneling amplitude is t = ∆ SAS /2 = 1 µeV.
FIG. 4. Plot of the differential conductance of the bilayer system at S = 4 (orange triangles), S = 2 (blue circles), and S = 1 (open rectangles) as a function of the interlayer bias V B − V T at 0 K. In the inset, the same plot at negative interlayer bias is shown with the same y axis and the x axis in log scale. In a small bias window, an almost constant interlayer transconductance is observed. After the interlayer bias reaches V INT ≈ 10 µV, an abrupt drop in transconductance can be seen in the interacting-electron cases S = 2, 4. We plot the current vs. bias voltage for each enhancement factor in the inset.
FIG. 5. Pseudospin polarization vs. current. The quantities are expressed in units of the value of mx at zero bias (m0). The points in this plot correspond to the same points that appear in the inset of current vs. bias voltage in Fig. 4. At zero current, the pseudospin is in the x̂ direction and has a value close to m0 = ν0St. mx decreases monotonically, mz increases monotonically, and my increases nearly monotonically before saturating at the largest field values. The arrow in the plot indicates the direction in which the bias voltage increases.
FIG. 6. Pseudospin polarization vs. bias voltage. The quantities are expressed in units of the value of mx at zero bias (m0). The inset of this plot clearly shows that mz increases monotonically with bias voltage, which eventually causes the current to decrease. |mxy| decreases only slowly with bias voltage because of the compensating effects of pseudospin rotation toward the ẑ direction and an increase in the overall pseudospin polarization. There is no steady-state solution to the self-consistent transport equations beyond the largest bias voltage plotted here, because the pseudospin orientation φps has reached π/2.
FIG. 7. Quasiparticle layer-polarization parameter σ as a function of interlayer bias. When the source-drain bias exceeds around 40 µV, the mismatch between subband energy levels exceeds the width of the conductance peak, σ increases rapidly, and the x̂-ŷ-plane pseudospin polarization and critical current decrease. The inset shows the same data in log scale to emphasize the low-bias behavior.
ACKNOWLEDGMENTS

We acknowledge support for the Center for Scientific Computing from the CNSI, MRL: an NSF MRSEC (DMR-1121053) and NSF CNS-0960316 and Hewlett-Packard. YK is supported by a Fulbright Science and Technology Award. YK and MJG are supported by the ARO under contract number W911NF-09-1-0347. AHM was supported by the NRI SWAN program and by the Welch Foundation under grant TB-F1473.
* Micro and Nanotechnology Laboratory, University of Illinois, Urbana, IL 61801
1. S. Datta, Superlattices and Microstructures 28, 253 (2000).
2. R. Shankar, Rev. Mod. Phys. 66, 129 (1994).
3. D. Pines and P. Nozieres, The Theory of Quantum Liquids (Addison-Wesley, New York, 1966).
4. G. F. Giuliani and G. Vignale, Quantum Theory of the Electron Liquid (Cambridge University Press, Cambridge, 2005).
5. C. A. Ullrich and G. Vignale, Phys. Rev. B 65, 245102 (2002).
6. M. Di Ventra and N. D. Lang, Phys. Rev. B 65, 045402 (2001).
7. M. J. Gilbert and D. K. Ferry, Journal of Applied Physics 95, 7954 (2004).
8. M. J. Gilbert, R. Akis, and D. K. Ferry, Journal of Applied Physics 98, 094303 (2005).
9. J. Wang, E. Polizzi, and M. Lundstrom, Journal of Applied Physics 96, 2192 (2004).
10. N. Sergueev, A. A. Demkov, and H. Guo, Phys. Rev. B 75, 233418 (2007).
11. J. Wu, B. Wang, J. Wang, and H. Guo, Phys. Rev. B 72, 195324 (2005).
12. D. Waldron, P. Haney, B. Larade, A. MacDonald, and H. Guo, Phys. Rev. Lett. 96, 166804 (2006).
13. C. Toher and S. Sanvito, Phys. Rev. Lett. 99, 056801 (2007).
14. A. R. Rocha, V. M. Garcia-Suarez, S. W. Bailey, C. J. Lambert, J. Ferrer, and S. Sanvito, Nature Materials 4, 335 (2005).
15. A. S. Nunez and A. H. MacDonald, Solid State Communications 139, 31 (2006).
16. P. Haney, R. Duine, A. Nunez, and A. MacDonald, Journal of Magnetism and Magnetic Materials 320, 1300 (2008).
17. P. M. Haney, D. Waldron, R. A. Duine, A. S. Nunez, H. Guo, and A. H. MacDonald, Phys. Rev. B 76, 024404 (2007).
18. The microscopic and macroscopic pictures are essentially equivalent in ferromagnets because the spin-splitting term is much larger than other spin-dependent terms in the quasiparticle Hamiltonian.
19. W. H. Rippard, M. R. Pufall, and S. E. Russek, Phys. Rev. B 74, 224409 (2006).
20. E. Rossi, O. G. Heinonen, and A. H. MacDonald, Phys. Rev. B 72, 174412 (2005).
21. X. G. Wen and A. Zee, Phys. Rev. Lett. 69, 1811 (1992).
22. K. Moon, H. Mori, K. Yang, S. M. Girvin, A. H. MacDonald, L. Zheng, D. Yoshioka, and S.-C. Zhang, Phys. Rev. B 51, 5138 (1995).
23. J. P. Eisenstein and A. H. MacDonald, Nature 432, 691 (2004).
24. I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 84, 5808 (2000).
25. O. Gunawan, Y. P. Shkolnikov, E. P. De Poortere, E. Tutuc, and M. Shayegan, Phys. Rev. Lett. 93, 246603 (2004).
26. R. D. Wiersma, J. G. S. Lok, S. Kraus, W. Dietsche, K. von Klitzing, D. Schuh, M. Bichler, H.-P. Tranitz, and W. Wegscheider, Phys. Rev. Lett. 93, 266805 (2004).
27. M. Kellogg, I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 88, 126804 (2002).
28. A. H. MacDonald, Physica B: Condensed Matter 298, 129 (2001).
29. T. Jungwirth and A. H. MacDonald, Phys. Rev. Lett. 87, 216801 (2001).
30. S. Misra, N. C. Bishop, E. Tutuc, and M. Shayegan, Phys. Rev. B 77, 161301 (2008).
31. J. P. Eisenstein, T. J. Gramila, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 44, 6511 (1991).
32. J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Applied Physics Letters 58, 1497 (1991).
33. L. Zheng and A. H. MacDonald, Phys. Rev. B 47, 10619 (1993).
34. J.-J. Su and A. H. MacDonald, Nat. Phys. 4, 799 (2008).
35. A. H. MacDonald, Phys. Rev. B 37, 4792 (1988).
36. P. P. Ruden and Z. Wu, Applied Physics Letters 59, 2165 (1991).
37. Pseudospin ferromagnetism (or equivalently exciton condensation) is, however, predicted in systems with equal densities of conduction band electrons and valence band holes in separate quantum wells.
38. G. Senatore and S. D. Palo, Contributions to Plasma Physics 43, 363 (2003).
39. A. Stern, S. D. Sarma, M. P. A. Fisher, and S. M. Girvin, Phys. Rev. Lett. 84, 139 (2000).
40. L. Swierkowski and A. H. MacDonald, Phys. Rev. B 55, R16017 (1997).
41. M. J. Gilbert, Phys. Rev. B 82, 165408 (2010).
42. A. S. Nunez and A. H. MacDonald, Solid State Communications 139, 31 (2006).
43. K. Park and S. Das Sarma, Phys. Rev. B 74, 035338 (2006).
44. R. Venugopal, Z. Ren, S. Datta, M. S. Lundstrom, and D. Jovanovic, Journal of Applied Physics 92, 3730 (2002).
45. H. L. Stone, SIAM Journal on Numerical Analysis 5, 530 (1968).
46. A. Rahman, J. Guo, S. Datta, and M. Lundstrom, IEEE Transactions on Electron Devices 50, 1853 (2003).
47. D. D. Johnson, Phys. Rev. B 38, 12807 (1988).
48. N. Turner, J. T. Nicholls, E. H. Linfield, K. M. Brown, G. A. C. Jones, and D. A. Ritchie, Phys. Rev. B 54, 10614 (1996).
49. S. K. Lyo, Phys. Rev. B 61, 8316 (2000).
50. H. Min, R. Bistritzer, J.-J. Su, and A. H. MacDonald, Phys. Rev. B 78, 121401 (2008).
51. J. A. Seamons, D. R. Tibbetts, J. L. Reno, and M. P. Lilly, Applied Physics Letters 90, 052103 (2007).
52. C.-H. Zhang and Y. N. Joglekar, Phys. Rev. B 77, 233405 (2008).
53. A. V. Balatsky, Y. N. Joglekar, and P. B. Littlewood, Phys. Rev. Lett. 93, 266801 (2004).
54. J. Bourassa, B. Roostaei, R. Côté, H. A. Fertig, and K. Mullen, Phys. Rev. B 74, 195320 (2006).
| []
|
[
"Spin relaxation due to the Bir-Aronov-Pikus mechanism in intrinsic and p-type GaAs quantum wells from a fully microscopic approach",
"Spin relaxation due to the Bir-Aronov-Pikus mechanism in intrinsic and p-type GaAs quantum wells from a fully microscopic approach"
]
| [
"J Zhou ",
"M W Wu ",
"\nHefei National Laboratory for Physical Sciences at Microscale\nDepartment of Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n",
"\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina\n"
]
| [
"Hefei National Laboratory for Physical Sciences at Microscale\nDepartment of Physics\nUniversity of Science and Technology of China\n230026HefeiAnhuiChina",
"University of Science and Technology of China\n230026HefeiAnhuiChina"
]
| []
| We study the electron spin relaxation in intrinsic and p-type (001) GaAs quantum wells by constructing and numerically solving the kinetic spin Bloch equations. All the relevant scatterings are explicitly included, especially the spin-flip electron-heavy hole exchange scattering which leads to the Bir-Aronov-Pikus spin relaxation. We show that, due to the neglect of the nonlinear terms in the electron-heavy hole exchange scattering in the Fermi-golden-rule approach, the spin relaxation due to the Bir-Aronov-Pikus mechanism is greatly exaggerated at moderately high electron density and low temperature in the literature. We compare the spin relaxation time due to the Bir-Aronov-Pikus mechanism with that due to the D'yakonov-Perel' mechanism which is also calculated from the kinetic spin Bloch equations with all the scatterings, especially the spin-conserving electron-electron and electron-heavy hole scatterings, included. We find that, in intrinsic quantum wells, the effect from the Bir-Aronov-Pikus mechanism is much smaller than that from the D'yakonov-Perel' mechanism at low temperature, and it is smaller by no more than one order of magnitude at high temperature. In p-type quantum wells, the spin relaxation due to the Bir-Aronov-Pikus mechanism is also much smaller than the one due to the D'yakonov-Perel' mechanism at low temperature, and the two become comparable at higher temperature when the hole density and the width of the quantum well are large enough. We claim that, unlike in bulk samples, the Bir-Aronov-Pikus mechanism hardly dominates the spin relaxation in two-dimensional samples.
"https://arxiv.org/pdf/0705.0216v3.pdf"
]
| 119,320,653 | 0705.0216 | 22910f21cf6e921f15cb746479358200dee0d4a5 |
Spin relaxation due to the Bir-Aronov-Pikus mechanism in intrinsic and p-type GaAs quantum wells from a fully microscopic approach
28 Dec 2007 (Dated: February 1, 2008)
J Zhou
M W Wu
Hefei National Laboratory for Physical Sciences at Microscale
Department of Physics
University of Science and Technology of China
230026 Hefei, Anhui, China
University of Science and Technology of China
230026 Hefei, Anhui, China
Spin relaxation due to the Bir-Aronov-Pikus mechanism in intrinsic and p-type GaAs quantum wells from a fully microscopic approach
28 Dec 2007 (Dated: February 1, 2008)
PACS numbers: 72.25.Rb, 71.10.-w, 67.57.Lm, 78.47.+p
We study the electron spin relaxation in intrinsic and p-type (001) GaAs quantum wells by constructing and numerically solving the kinetic spin Bloch equations. All the relevant scatterings are explicitly included, especially the spin-flip electron-heavy hole exchange scattering which leads to the Bir-Aronov-Pikus spin relaxation. We show that, due to the neglect of the nonlinear terms in the electron-heavy hole exchange scattering in the Fermi-golden-rule approach, the spin relaxation due to the Bir-Aronov-Pikus mechanism is greatly exaggerated at moderately high electron density and low temperature in the literature. We compare the spin relaxation time due to the Bir-Aronov-Pikus mechanism with that due to the D'yakonov-Perel' mechanism which is also calculated from the kinetic spin Bloch equations with all the scatterings, especially the spin-conserving electron-electron and electron-heavy hole scatterings, included. We find that, in intrinsic quantum wells, the effect from the Bir-Aronov-Pikus mechanism is much smaller than that from the D'yakonov-Perel' mechanism at low temperature, and it is smaller by no more than one order of magnitude at high temperature. In p-type quantum wells, the spin relaxation due to the Bir-Aronov-Pikus mechanism is also much smaller than the one due to the D'yakonov-Perel' mechanism at low temperature, and the two become comparable at higher temperature when the hole density and the width of the quantum well are large enough. We claim that, unlike in bulk samples, the Bir-Aronov-Pikus mechanism hardly dominates the spin relaxation in two-dimensional samples.
I. INTRODUCTION
Much attention has been given to semiconductor spintronics both theoretically and experimentally due to the great prospect of potential applications. 1,2,3 The study of spin relaxation/dephasing (R/D) in semiconductors contains rich physics and is of great importance for device applications. Three spin R/D mechanisms have long been proposed in zinc-blende semiconductors, i.e., the Elliott-Yafet (EY) mechanism, 4 caused by the spin-flip electron-impurity scattering due to the spin-orbit coupling; the D'yakonov-Perel' (DP) mechanism, 5 which is due to the momentum-dependent spin splitting in crystals without a center of symmetry; and the Bir-Aronov-Pikus (BAP) mechanism, 6 which originates from the spin-flip electron-hole exchange interaction. Previous research has shown that, in bulk systems, the EY mechanism is important in narrow-band-gap and highly impure semiconductors; the DP mechanism is dominant in n-type semiconductors; and the BAP mechanism can have a significant effect in p-doped semiconductors. 7,8,9 It is known that, in heavily p-doped bulk samples, the BAP mechanism is dominant at low temperature whereas the DP mechanism is dominant at high temperature, with the crossover temperature determined by the doping level. In bulk samples with low hole density, the BAP mechanism has been shown to be irrelevant. 7,8,9 In addition, the hyperfine-interaction-induced spin relaxation is another possible mechanism. 10 In contrast to the bulk systems, the relative importance of the BAP and DP mechanisms for the electron spin R/D in two-dimensional (2D) systems, especially in p-type 2D systems, is still not very clear, sometimes even confusing. In Ref. [11], an extremely long spin relaxation time (SRT), which is two orders of magnitude longer than that in the bulk sample with corresponding acceptor concentrations, was reported by Wagner et al. in p-type GaAs quantum wells (QWs). The authors argued that the BAP mechanism is dominant at low temperature. However, in Ref. [12] the SRT in p-type QWs was reported by Damen et al. to be a factor of 4 shorter than that in comparable bulk GaAs at low temperature. The authors also referred to the BAP mechanism as a cause for the decrease of the SRT. Hence, two opposite experimental results lead to the same conclusion regarding the importance of the BAP mechanism. Moreover, Gotoh et al. further pointed out that the BAP mechanism should not be ignored even at room temperature. 13 They investigated the electric-field dependence of the SRT and found that the SRT decreases with the increase of the bias. They concluded that the decrease is from the BAP mechanism as the SRT due to the DP mechanism does not change with electric field. Actually, they overlooked the fact that the Rashba spin-orbit coupling 14 can also lead to spin R/D due to the DP mechanism. Therefore, we believe that the decrease of the SRT in their experiment cannot be a proof of the importance of the BAP mechanism. Very recently it was shown that the SRT at room temperature can be increased at the (100) GaAs surface due to the relatively lower concentration of holes at the surface, and the mechanism for the SRT was referred to as the BAP mechanism. 15
Theoretically, Maialle 16 pointed out that the effect of the BAP mechanism in 2D systems is a little smaller than that of the DP mechanism at zero temperature by using the Fermi golden rule to calculate the SRT, in which the elastic scattering approximation was applied and consequently the nonlinear terms of the electron-hole Coulomb scattering were neglected. The SRT due to the DP mechanism ($\tau_{DP}$) was also calculated by using the single-particle approach. 1,5 The author compared $\tau_{DP}$ and $\tau_{BAP}$ for different electron momenta (kinetic energies), and showed that these two SRTs have nearly the same order of magnitude in heavily doped QWs. However, $\tau_{DP}$ calculated in Ref. [16] is quite cursory because, under the framework of the single-particle theory, the carrier-carrier Coulomb scattering, which is very important to spin R/D, 17,18,19,20,21 is not included. Also, the counter effect of the scattering on the spin R/D is not fully accounted for. 18,19,20,22,23 Moreover, it is also important to calculate the spin-flip electron-hole exchange scattering explicitly in order to find out the effect of the nonlinear terms ignored in the Fermi golden rule approach by Maialle et al. 16,26,27 We also want to find out the temperature dependence of the relative importance of both mechanisms in 2D systems, which to the best of our knowledge is still absent in the literature.
In order to accurately investigate the relative importance of the DP and the BAP mechanisms beyond the single-particle Fermi golden rule approach, we use the fully microscopic approach established by Wu et al. 24 by constructing and numerically solving the kinetic spin Bloch equations. 17,18,19,20,22,23,25 In this approach, all the relevant scatterings, such as the electron-acoustic (AC) phonon, electron-longitudinal optical (LO) phonon, electron-nonmagnetic impurity, and electron-electron Coulomb scatterings, are explicitly included. The results/predictions obtained from this approach are in very good agreement with various experiments. 20,28,29,30 It was previously pointed out that, in the presence of inhomogeneous broadening, any type of scattering, including the Coulomb scattering, can give rise to spin R/D. 17,18,19,20,23 In this paper, in addition to all the above-mentioned scatterings in n-type QWs as considered in Ref. [20], we further add the spin-conserving and spin-flip electron-heavy hole Coulomb scatterings, both contributing to the DP mechanism and the latter further leading to the spin R/D due to the BAP mechanism. By solving the kinetic spin Bloch equations self-consistently, we obtain the SRT from the BAP mechanism in a fully microscopic fashion. We further investigate the relative importance of the BAP and DP mechanisms in 2D systems.
This paper is organized as follows. In Sec. II, we construct the kinetic spin Bloch equations and present the scattering terms from the spin-conserving and spin-flip electron-hole Coulomb scatterings. We also discuss the SRTs due to the BAP mechanism from different approaches. Then we present our numerical results in Sec. III. We study the SRT due to both the DP and the BAP mechanisms under various conditions such as temperatures, electron/hole densities, impurity densities, and well widths. We conclude in Sec. IV.
II. KINETIC SPIN BLOCH EQUATIONS
We construct the kinetic spin Bloch equations in intrinsic and p-type (001) GaAs QWs by using the nonequilibrium Green's function method: 31
$$\dot{\rho}_{\mathbf{k},\sigma\sigma'} = \dot{\rho}_{\mathbf{k},\sigma\sigma'}\big|_{coh} + \dot{\rho}_{\mathbf{k},\sigma\sigma'}\big|_{scatt}, \qquad (1)$$
with $\rho_{\mathbf{k},\sigma\sigma'}$ representing the single-particle density matrix elements. The diagonal and off-diagonal elements of $\rho_{\mathbf{k},\sigma\sigma'}$ give the electron distribution functions $f_{\mathbf{k}\sigma}$ and the spin coherence $\rho_{\mathbf{k},\sigma-\sigma}$, respectively. The coherent terms $\dot{\rho}_{\mathbf{k},\sigma\sigma'}\big|_{coh}$ describe the precession of the electron spin due to the effective magnetic field from the Dresselhaus term 32 $\mathbf{\Omega}(\mathbf{k})$ and the Hartree-Fock Coulomb interaction. The expression of the coherent terms can be found in Appendix A (and also Ref. [18]). The Dresselhaus term can be written as 33
$$\Omega_x(\mathbf{k}) = \gamma k_x (k_y^2 - \langle k_z^2 \rangle), \qquad (2)$$
$$\Omega_y(\mathbf{k}) = \gamma k_y (\langle k_z^2 \rangle - k_x^2), \qquad (3)$$
$$\Omega_z(\mathbf{k}) = 0, \qquad (4)$$
in which $\langle k_z^2 \rangle$ represents the average of the operator $-(\partial/\partial z)^2$ over the electronic state of the lowest subband, 20 and γ is the spin-splitting parameter, 1 which is chosen to be 11.4 eV·Å³ throughout the paper. 34 The terms $\dot{\rho}_{\mathbf{k},\sigma\sigma'}\big|_{scatt}$ in Eq. (1) denote the electron-LO-phonon, electron-AC-phonon, electron-nonmagnetic impurity, and electron-electron Coulomb scatterings, whose expressions are given in detail in Appendix A (see also Refs. [18,19,20]). All these scatterings are calculated explicitly without any relaxation time approximation. Moreover, we further include the spin-conserving and spin-flip electron-heavy hole scatterings as follows.
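Before turning to the electron-hole terms, the Dresselhaus field of Eqs. (2)-(4) can be illustrated with a few lines of Python. The sketch assumes an infinite-well value ⟨k_z²⟩ = (π/a)² for the lowest subband and a representative wavevector; the actual calculation uses the envelope function described above, so this is only indicative.

```python
# Illustrative sketch (not the authors' code) of the Dresselhaus field,
# Eqs. (2)-(4), assuming an infinite-well lowest subband: <k_z^2> = (pi/a)^2.
import numpy as np

gamma = 11.4          # eV Angstrom^3, spin-splitting parameter used in the paper
a = 200.0             # well width in Angstrom (20 nm)
kz2 = (np.pi / a) ** 2

def dresselhaus_field(kx, ky):
    """Omega(k) in eV, with k in 1/Angstrom."""
    ox = gamma * kx * (ky ** 2 - kz2)
    oy = gamma * ky * (kz2 - kx ** 2)
    return np.array([ox, oy, 0.0])

print(dresselhaus_field(0.01, 0.0))   # assumed representative wavevector
```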
The Hamiltonian of the electron-heavy hole interaction is given by
$$H_{eh} = \sum_{\mathbf{k},\mathbf{k}',\mathbf{q},\sigma=\pm1,\sigma'=\pm1} V_{eh,\mathbf{q}}\, c^{\dagger}_{\mathbf{k}+\mathbf{q},\frac{\sigma}{2}} c_{\mathbf{k},\frac{\sigma}{2}}\, b^{\dagger}_{\mathbf{k}'-\mathbf{q},\frac{3\sigma'}{2}} b_{\mathbf{k}',\frac{3\sigma'}{2}}, \qquad (5)$$
where c ($c^{\dagger}$) and b ($b^{\dagger}$) are the annihilation (creation) operators of electrons in the conduction (heavy-hole valence) band, respectively. We denote σ (σ′) to be ±1 throughout the paper. The screened Coulomb potential under the random-phase approximation reads 31
$$V_{eh,\mathbf{q}} = \sum_{q_z} \frac{v_Q\, f_{eh}(q_z)}{\epsilon(\mathbf{q})}, \qquad (6)$$
with the bare Coulomb potential v Q = 4πe 2 /Q 2 and
$$\epsilon(\mathbf{q}) = 1 - \sum_{q_z} v_Q f_e(q_z) \sum_{\mathbf{k},\sigma} \frac{f_{\mathbf{k}+\mathbf{q},\sigma} - f_{\mathbf{k},\sigma}}{\varepsilon^e_{\mathbf{k}+\mathbf{q}} - \varepsilon^e_{\mathbf{k}}} - \sum_{q_z} v_Q f_h(q_z) \sum_{\mathbf{k}',\sigma} \frac{f^h_{\mathbf{k}'+\mathbf{q},\sigma} - f^h_{\mathbf{k}',\sigma}}{\varepsilon^h_{\mathbf{k}'+\mathbf{q}} - \varepsilon^h_{\mathbf{k}'}} \qquad (7)$$
is the electron-hole plasma screening. 35 In these equations $Q^2 = q^2 + q_z^2$, and $f^h_{\mathbf{k},\sigma}$ ($f_{\mathbf{k},\sigma}$) denotes the heavy-hole (electron) distribution function with spin $\frac{3}{2}\sigma$ ($\frac{1}{2}\sigma$). The form factors can be written as:
$$f_e(q_z) = \int dz\, dz'\, \xi_c(z)\xi_c(z')\, e^{iq_z(z-z')}\, \xi_c(z')\xi_c(z), \qquad (8)$$
$$f_h(q_z) = \int dz\, dz'\, \eta_h(z)\eta_h(z')\, e^{iq_z(z-z')}\, \eta_h(z')\eta_h(z), \qquad (9)$$
$$f_{eh}(q_z) = \int dz\, dz'\, \xi_c(z)\eta_h(z')\, e^{iq_z(z-z')}\, \eta_h(z')\xi_c(z), \qquad (10)$$
where ξ c (z) (η h (z)) is the envelope function of the electron (heavy hole) along the growth direction z. 20 The scattering term of this spin-conserving electron-hole Coulomb scattering can be written as:
$$\left.\frac{\partial f_{\mathbf{k},\sigma}}{\partial t}\right|_{eh} = -2\pi \sum_{\mathbf{k}',\mathbf{q},\sigma'} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, V^2_{eh,\mathbf{q}} \Big\{ (1 - f^h_{\mathbf{k}',\sigma'}) f^h_{\mathbf{k}'-\mathbf{q},\sigma'} \big[ f_{\mathbf{k},\sigma}(1 - f_{\mathbf{k}-\mathbf{q},\sigma}) - \mathrm{Re}(\rho_{\mathbf{k}}\rho^*_{\mathbf{k}-\mathbf{q}}) \big] - f^h_{\mathbf{k}',\sigma'}(1 - f^h_{\mathbf{k}'-\mathbf{q},\sigma'}) \big[ f_{\mathbf{k}-\mathbf{q},\sigma}(1 - f_{\mathbf{k},\sigma}) - \mathrm{Re}(\rho_{\mathbf{k}}\rho^*_{\mathbf{k}-\mathbf{q}}) \big] \Big\}, \qquad (11)$$
$$\left.\frac{\partial \rho_{\mathbf{k}}}{\partial t}\right|_{eh} = -\pi \sum_{\mathbf{k}',\mathbf{q},\sigma,\sigma'} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, V^2_{eh,\mathbf{q}} \Big\{ (1 - f^h_{\mathbf{k}',\sigma'}) f^h_{\mathbf{k}'-\mathbf{q},\sigma'} \big[ (1 - f_{\mathbf{k}-\mathbf{q},\sigma})\rho_{\mathbf{k}} - f_{\mathbf{k},\sigma}\rho_{\mathbf{k}-\mathbf{q}} \big] + f^h_{\mathbf{k}',\sigma'}(1 - f^h_{\mathbf{k}'-\mathbf{q},\sigma'}) \big[ f_{\mathbf{k}-\mathbf{q},\sigma}\rho_{\mathbf{k}} - (1 - f_{\mathbf{k},\sigma})\rho_{\mathbf{k}-\mathbf{q}} \big] \Big\}, \qquad (12)$$
where $\rho_{\mathbf{k}} \equiv \rho_{\mathbf{k},\frac{1}{2},-\frac{1}{2}} \equiv \rho^*_{\mathbf{k},-\frac{1}{2},\frac{1}{2}}$. This spin-conserving scattering only enhances the total scattering strength moderately and contributes to the spin R/D due to the DP mechanism.
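The form factors of Eqs. (8)-(10) can be evaluated numerically once the envelope functions are known. The sketch below assumes an infinite-well envelope function $\xi_c(z) = \sqrt{2/a}\,\sin(\pi z/a)$ purely for illustration; the envelopes used in the paper follow Ref. [20], so the numbers here are not the paper's values.

```python
# Minimal numerical sketch of the form factor f_e(q_z) of Eq. (8), assuming an
# infinite-well envelope function (an assumption, not the paper's envelope).
import numpy as np

a = 20.0                                   # well width, nm
z = np.linspace(0.0, a, 2001)
xi2 = (2.0 / a) * np.sin(np.pi * z / a) ** 2    # |xi_c(z)|^2, normalized to 1

def form_factor_fe(qz):
    """f_e(q_z) = |integral of |xi_c(z)|^2 exp(i q_z z) dz|^2."""
    phase = np.trapz(xi2 * np.exp(1j * qz * z), z)
    return np.abs(phase) ** 2

for qz in (0.0, 0.1, 0.5, 1.0):            # q_z in 1/nm
    print(qz, form_factor_fe(qz))
```

As expected, the form factor equals 1 at q_z = 0 and falls off once q_z exceeds roughly the inverse well width, which is how the finite well width weakens the effective interaction.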
The Hamiltonian of the spin-flip electron-heavy hole exchange interaction reads
$$H_{BAP} = \sum_{\mathbf{k},\mathbf{k}',\mathbf{q},\sigma} M_{\sigma}(\mathbf{k},\mathbf{k}')\, c^{\dagger}_{\mathbf{k}+\mathbf{q},\frac{\sigma}{2}}\, b^{\dagger}_{\mathbf{k}'-\mathbf{q},-\frac{3\sigma}{2}}\, c_{\mathbf{k},-\frac{\sigma}{2}}\, b_{\mathbf{k}',\frac{3\sigma}{2}}, \qquad (13)$$
with
$$M_{\sigma}(\mathbf{k},\mathbf{k}') = \frac{3}{8}\, \frac{\Delta E_{LT}}{|\phi_{3D}(0)|^2} \sum_{q_z} \frac{f_{ex}(q_z)\, (k^2_{\sigma} + k'^2_{\sigma})}{q_z^2 + |\mathbf{k} + \mathbf{k}'|^2}, \qquad (14)$$
where $\Delta E_{LT}$ is the longitudinal-transverse splitting in bulk, $|\phi_{3D}(0)|^2 = 1/(\pi a_0^3)$ is the probability density of the 3D exciton state at zero relative distance, and $k_{\sigma} = k_x + i\sigma k_y$. For GaAs, $\Delta E_{LT} = 0.08$ meV and $a_0 = 146.1$ Å. 37 The form factor can be written as:
$$f_{ex}(q_z) = \int dz\, dz'\, \xi_c(z')\eta_h(z')\, e^{iq_z(z-z')}\, \eta_h(z)\xi_c(z). \qquad (15)$$
The scattering term from this Hamiltonian reads
$$\left.\frac{\partial f_{\mathbf{k},\sigma}}{\partial t}\right|_{BAP} = -2\pi \sum_{\mathbf{k}',\mathbf{q}} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, M_{\sigma}(\mathbf{k}-\mathbf{q},\mathbf{k}')\, M_{-\sigma}(\mathbf{k},\mathbf{k}'-\mathbf{q}) \big[ (1 - f^h_{\mathbf{k}',\sigma}) f^h_{\mathbf{k}'-\mathbf{q},-\sigma}\, f_{\mathbf{k},\sigma} (1 - f_{\mathbf{k}-\mathbf{q},-\sigma}) - f^h_{\mathbf{k}',\sigma} (1 - f^h_{\mathbf{k}'-\mathbf{q},-\sigma}) (1 - f_{\mathbf{k},\sigma}) f_{\mathbf{k}-\mathbf{q},-\sigma} \big], \qquad (16)$$
$$\left.\frac{\partial \rho_{\mathbf{k}}}{\partial t}\right|_{BAP} = -\pi \sum_{\mathbf{k}',\mathbf{q},\sigma} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, M_{\sigma}(\mathbf{k}-\mathbf{q},\mathbf{k}')\, M_{-\sigma}(\mathbf{k},\mathbf{k}'-\mathbf{q}) \big[ (1 - f^h_{\mathbf{k}',\sigma}) f^h_{\mathbf{k}'-\mathbf{q},-\sigma} (1 - f_{\mathbf{k}-\mathbf{q},-\sigma})\rho_{\mathbf{k}} + f^h_{\mathbf{k}',\sigma} (1 - f^h_{\mathbf{k}'-\mathbf{q},-\sigma}) f_{\mathbf{k}-\mathbf{q},\sigma}\rho_{\mathbf{k}} \big]. \qquad (17)$$
If we denote K = k + k ′ as the center-of-mass momentum of the electron-hole pair, the product of the matrix elements in Eqs. (16) and (17) can be reduced to:
$$|M(\mathbf{K}-\mathbf{q})|^2 = M_{\sigma}(\mathbf{k}-\mathbf{q},\mathbf{k}')\, M_{-\sigma}(\mathbf{k},\mathbf{k}'-\mathbf{q}) = \frac{9\,\Delta E^2_{LT}}{16\,|\phi_{3D}(0)|^4} \left[ \sum_{q_z} \frac{f_{ex}(q_z)\, (\mathbf{K}-\mathbf{q})^2}{q_z^2 + (\mathbf{K}-\mathbf{q})^2} \right]^2. \qquad (18)$$
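The q_z sum in Eq. (18) can be evaluated on a grid. The sketch below uses the same infinite-well stand-in for $f_{ex}(q_z)$ as above and reports only the dependence on the in-plane transfer |K − q|; the absolute normalization of the discretized sum is not meaningful here. The growth of the matrix element with |K − q| is what drives the stronger BAP scattering at larger center-of-mass momenta discussed in Sec. III.

```python
# Illustrative sketch of the momentum dependence of the bracketed q_z sum in
# Eq. (18). The exchange form factor is an infinite-well stand-in (assumption);
# only the trend with |K - q| is meaningful, not absolute values.
import numpy as np

a = 20.0                                   # well width, nm
z = np.linspace(0.0, a, 1001)
xi2 = (2.0 / a) * np.sin(np.pi * z / a) ** 2

qz_grid = np.linspace(-3.0, 3.0, 401)      # 1/nm
phases = np.trapz(xi2[None, :] * np.exp(1j * np.outer(qz_grid, z)), z, axis=1)
f_ex = np.abs(phases) ** 2                 # stand-in exchange form factor

def exchange_shape(Kq):
    """[sum_qz f_ex(q_z) Kq^2 / (q_z^2 + Kq^2)]^2, without the 9 dE_LT^2 / (16 |phi(0)|^4) prefactor."""
    return np.sum(f_ex * Kq ** 2 / (qz_grid ** 2 + Kq ** 2)) ** 2

for Kq in (0.01, 0.05, 0.1, 0.2):          # |K - q| in 1/nm
    print(Kq, exchange_shape(Kq))
```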
It is noted that the spin R/D of the photo-excited holes is very fast 23 and the electron-hole recombination is very slow compared to the electron spin R/D. Therefore, we take the hole distribution to be the equilibrium Fermi distribution with $f^h_{\mathbf{k}\sigma} = f^h_{\mathbf{k}-\sigma} \equiv f^h_{\mathbf{k}}$. Further, by subtracting $\partial f_{\mathbf{k},-1}/\partial t\big|_{BAP}$ from $\partial f_{\mathbf{k},+1}/\partial t\big|_{BAP}$ in Eq. (16), one obtains:
$$\left.\frac{\partial \Delta f_{\mathbf{k}}}{\partial t}\right|_{BAP} = \left.\frac{\partial (f_{\mathbf{k},+1} - f_{\mathbf{k},-1})}{\partial t}\right|_{BAP} = -2\pi \sum_{\mathbf{k}',\mathbf{q}} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, |M(\mathbf{K}-\mathbf{q})|^2 \Big\{ \Delta f_{\mathbf{k}} \big[ (1 - f^h_{\mathbf{k}'}) f^h_{\mathbf{k}'-\mathbf{q}} + \tfrac{1}{2} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k}-\mathbf{q},+1} + f_{\mathbf{k}-\mathbf{q},-1}) \big] + \Delta f_{\mathbf{k}-\mathbf{q}} \big[ f^h_{\mathbf{k}'} (1 - f^h_{\mathbf{k}'-\mathbf{q}}) - \tfrac{1}{2} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k},+1} + f_{\mathbf{k},-1}) \big] \Big\}. \qquad (19)$$
In the above equation, the terms $\Delta f_{\mathbf{k}} \big[ (1 - f^h_{\mathbf{k}'}) f^h_{\mathbf{k}'-\mathbf{q}} + \tfrac{1}{2} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k}-\mathbf{q},+1} + f_{\mathbf{k}-\mathbf{q},-1}) \big]$ describe the forward scattering and, correspondingly, the terms $\Delta f_{\mathbf{k}-\mathbf{q}} \big[ f^h_{\mathbf{k}'} (1 - f^h_{\mathbf{k}'-\mathbf{q}}) - \tfrac{1}{2} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k},+1} + f_{\mathbf{k},-1}) \big]$ describe the backward scattering. The SRT due to the BAP mechanism from the Fermi golden rule 16 can be recovered from Eq. (19) by applying the elastic scattering approximation:
$\varepsilon^e_{\mathbf{k}-\mathbf{q}} \approx \varepsilon^e_{\mathbf{k}}$ and $\varepsilon^h_{\mathbf{k}'} \approx \varepsilon^h_{\mathbf{k}'-\mathbf{q}}$.
Under this approximation, the nonlinear terms (in the sense of the electron distribution function) $\tfrac{1}{2}\Delta f_{\mathbf{k}} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k}-\mathbf{q},+1} + f_{\mathbf{k}-\mathbf{q},-1})$ in the forward scattering and $\tfrac{1}{2}\Delta f_{\mathbf{k}-\mathbf{q}} (f^h_{\mathbf{k}'} - f^h_{\mathbf{k}'-\mathbf{q}}) (f_{\mathbf{k},+1} + f_{\mathbf{k},-1})$ in the backward scattering tend to zero. In the remaining linear terms, $\Delta f_{\mathbf{k}} = -\frac{\partial f_{0\mathbf{k}}}{\partial \varepsilon_{\mathbf{k}}} (\phi_{1/2} - \phi_{-1/2}) \approx \Delta f_{\mathbf{k}-\mathbf{q}}$ with $f_{0\mathbf{k}} = \frac{1}{e^{\beta(\varepsilon_{\mathbf{k}} - \mu)} + 1}$, by choosing $f_{\mathbf{k},\sigma} = \frac{1}{e^{\beta(\varepsilon_{\mathbf{k}} - \mu - \phi_{\sigma})} + 1}$.
Therefore, one recovers the SRT due to the BAP mechanism from the Fermi golden rule approach: 16
$$\frac{1}{2\tau^{1}_{BAP}(\mathbf{k})} = 2\pi \sum_{\mathbf{k}',\mathbf{q}} \delta(\varepsilon^e_{\mathbf{k}-\mathbf{q}} - \varepsilon^e_{\mathbf{k}} + \varepsilon^h_{\mathbf{k}'} - \varepsilon^h_{\mathbf{k}'-\mathbf{q}})\, |M(\mathbf{K}-\mathbf{q})|^2\, (1 - f^h_{\mathbf{k}'}) f^h_{\mathbf{k}'-\mathbf{q}}. \qquad (20)$$
In the next section, we will discuss the applicability of the above equation, which relies on the elastic scattering approximation.
In this work, we do not use the SRTs from the single-particle approach for either the BAP or the DP mechanism.
Instead, we solve the kinetic spin Bloch equations self-consistently with all the scatterings explicitly included. The details of the numerical scheme are given in Refs. [19,20]. The spin relaxation and dephasing times can be obtained from the temporal evolutions of the electron distribution functions $f_{\mathbf{k},\sigma}$ and the spin coherence $\rho_{\mathbf{k},\sigma-\sigma}$, respectively. 25,36 We will show that the SRT due to the BAP mechanism obtained from the kinetic spin Bloch approach can give markedly different results compared to the one calculated from Eq. (20) using the elastic scattering approximation, similar to the situation of the SRT due to the DP mechanism, which has been discussed in great detail in our previous works. 18,19,20
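Once the kinetic spin Bloch equations have been integrated in time, the SRT is read off from the decay of the total spin polarization. The sketch below illustrates only this extraction step, on synthetic single-exponential data; it is not the numerical scheme of Refs. [19,20], and the numbers are assumed.

```python
# Minimal sketch (synthetic data, not the authors' code) of extracting a spin
# relaxation time from the temporal evolution of the spin polarization.
import numpy as np

t = np.linspace(0.0, 500.0, 501)                 # time in ps
true_tau = 120.0                                  # assumed SRT used to build the data
polarization = 0.04 * np.exp(-t / true_tau)       # stand-in for sum_k (f_{k,+1} - f_{k,-1})

# Fit the logarithm of the polarization with a straight line; the slope gives -1/tau.
slope, _ = np.polyfit(t, np.log(polarization), 1)
print("extracted SRT:", -1.0 / slope, "ps")
```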
III. NUMERICAL RESULTS AND ANALYSIS
The SRTs calculated from the kinetic spin Bloch equations are plotted in Figs. 1 to 6. In these figures, the solid curves represent the SRTs due to the BAP mechanism ($\tau_{BAP}$), which are calculated from the kinetic spin Bloch equations by setting the DP term $\mathbf{\Omega}(\mathbf{k}) = 0$; the dashed curves are the SRTs due to the DP mechanism ($\tau_{DP}$), which are calculated by setting $\partial \rho_{\mathbf{k},\sigma\sigma'}/\partial t\big|_{BAP} = 0$; and the dash-dotted curves represent the total SRTs ($\tau_{total}$) obtained from Eq. (1) with all the terms explicitly included. We always use different colors and widths of curves for different conditions. We first discuss the SRT in an intrinsic GaAs QW confined by Al 0.4 Ga 0.6 As barriers. In Fig. 1(a), we plot the temperature dependence of the SRT for a QW with well width a = 20 nm. The electron (heavy-hole) density n (p) is 2 × 10 11 cm −2 and the impurity density is n i = n. It is seen from the figure that the SRT due to the BAP mechanism is much larger than that due to the DP mechanism. Moreover, $\tau_{BAP}$ decreases dramatically with T at low temperature, followed by a more moderate decrease at high temperature. The temperature dependence of $\tau_{BAP}$ can be understood as follows. When the temperature increases, more electrons and holes occupy larger momenta, hence a larger center-of-mass momentum $\mathbf{K}$. This leads to a larger matrix element in Eq. (18), and consequently a larger scattering rate. Furthermore, the Pauli blocking which suppresses the scattering decreases with the increase of temperature. Both effects lead to the decrease of the SRT due to the BAP mechanism. The temperature dependence of the SRT due to the DP mechanism has been well discussed in Refs. [18,19,20,22]; therefore we will not discuss the DP mechanism in detail in this paper. In order to see the difference between the SRT due to the BAP mechanism calculated from the full spin-flip scattering [Eq. (17)] and the one from the Fermi golden rule [Eq. (20)], i.e., neglecting the nonlinear terms in Eq. (17), we plot the BAP SRT calculated from the Bloch equations with only the linear terms in the spin-flip scattering as dotted curves for two different electron (hole) densities in Fig. 1(b). It is noted that for high electron density, the SRT due to the BAP mechanism from the Fermi golden rule is much smaller than $\tau_{BAP}$ at low temperature. Furthermore, the lower the temperature and/or the larger the electron density, the larger the difference, due to the "breakdown" of the elastic scattering approximation at low temperature and/or high density. This is in good agreement with the validity condition of the elastic scattering approximation. The difference can be very small when the electron density is smaller than 5 × 10 10 cm −2, according to our calculation. Consequently, the SRT for high electron density obtained in Ref. [16] at zero temperature is much smaller than the actual one. Therefore, the effect of the BAP mechanism for high electron density at very low temperature is smaller than that claimed by Maialle et al. In fact, it can even be ignored. We further stress that the effect of the BAP mechanism at low temperature and high electron density is far exaggerated in the literature due to the neglect of the nonlinear terms in the spin-flip electron-hole exchange scattering.
In addition, in the presence of inhomogeneous broadening, any scattering can give rise to spin R/D. 17,18,19,23,24 It is intuitive that the SRTs should satisfy:
$$\frac{1}{\tau_{total}} = \frac{1}{\tau'_{DP}} + \frac{1}{\tau_{BAP}} = \frac{1}{\tau_{DP}} + \frac{1}{\tau_{BAP}} + \frac{1}{\tau_{differ}}, \qquad (21)$$
where $\tau_{BAP}$ is directly caused by the spin-flip electron-hole exchange interaction, $\tau_{DP}$ is from the inhomogeneous broadening when there is no spin-flip electron-hole exchange interaction, and $\tau'_{DP}$ corresponds to the case with the spin-flip electron-hole exchange scattering present. The difference between $1/\tau_{DP}$ and $1/\tau'_{DP}$ is denoted as $1/\tau_{differ}$. In our calculation we found that $1/\tau_{differ}$ is so small that it can be totally ignored. This is because the spin-flip electron-hole scattering is much weaker than the other scatterings. Then, we discuss the temperature dependence for different electron densities in intrinsic QWs in Fig. 2. One can see that $\tau_{BAP}$ decreases with increasing density at high temperature but behaves oppositely at low temperature. On the other hand, $\tau_{DP}$ decreases with increasing density at all temperatures. We again interpret the density dependence of the BAP mechanism by using the previous arguments: in the low-temperature regime, i.e., in the degenerate limit, the Pauli blocking is enhanced by increasing the carrier density and/or lowering the temperature. Therefore, the scattering can be suppressed by increasing the density. This causes an increase of $\tau_{BAP}$. In the high-temperature regime, i.e., in the nondegenerate case, higher momentum states are occupied for larger density. This leads to stronger scattering and hence $\tau_{BAP}$ decreases with electron density. From this, we find that the relative importance of the DP and the BAP mechanisms does not change much with the electron density.
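Equation (21) amounts to a simple decomposition of relaxation rates; the sketch below shows how $1/\tau_{differ}$ would be obtained from computed values of $\tau_{total}$, $\tau_{DP}$, and $\tau_{BAP}$. The numbers used are assumed, for illustration only.

```python
# Trivial numerical check of Eq. (21) with assumed illustrative SRT values.
tau_DP, tau_BAP = 50.0, 800.0        # ps, assumed
tau_total = 47.2                      # ps, assumed close to the DP-limited value

rate_differ = 1.0 / tau_total - 1.0 / tau_DP - 1.0 / tau_BAP
print("1/tau_differ =", rate_differ, "1/ps")
```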
In Fig. 3, we plot the temperature dependence of the SRTs in intrinsic QWs for different impurity densities and well widths. It is clear that $\tau_{BAP}$ does not depend on the impurity density; in other words, the curves corresponding to different impurity concentrations exactly coincide. However, $\tau_{DP}$ can be enhanced due to the increased impurity scattering strength. If we enlarge the well width, both $\tau_{DP}$ and $\tau_{BAP}$ become larger. This is because, for wider QWs, the electron-hole exchange strength in the BAP mechanism is weakened by the form factor of Eq. (15) in the scattering matrix elements, and the leading (linear) term of the Dresselhaus spin-orbit coupling in Eqs. (2)-(4) is smaller in the DP mechanism. The variation of $\tau_{DP}$ is larger than that of $\tau_{BAP}$; that is to say, the relative influence of the BAP mechanism becomes more important for wider QWs. From our detailed investigations, we conclude that $\tau_{BAP}$ in intrinsic GaAs QWs is always larger than $\tau_{DP}$. At very low temperatures, the BAP mechanism can be ignored. However, it should be considered at higher temperatures for accurate calculations. Moreover, the relative importance of the BAP mechanism is increased by raising the impurity density and the well width.
We now turn to study the SRT in p-type QWs. In Fig. 4, we choose the well width a = 20 nm, n = 0.5 × 10^11 cm^-2, p = n + p 0 = n + 4 × 10^11 cm^-2, and n i = n. One can see that the magnitudes of τ DP and τ BAP are very close around T = 150 K. In p-type QWs, both the spin-conserving and spin-flip electron-hole scatterings are greatly enhanced by increasing the hole density. The former gives rise to the increase of τ DP in the strong scattering limit 20,23 and the latter gives rise to the decrease of τ BAP. Therefore the two SRTs become closer for larger hole concentrations. In the case of Fig. 4, the contributions from the DP and BAP mechanisms are nearly the same around 150 K, and at lower and higher temperatures the contribution from the DP mechanism is no more than one order of magnitude larger than the BAP one. In addition, 1/τ differ is still very small and can be totally ignored.
We now analyze the temperature dependence of the SRT for different electron and hole densities in p-type QWs. In Fig. 5, the calculated SRTs for different electron and hole densities are shown. In Fig. 6, a similar analysis is made for different well widths and impurity densities. The general features can be understood as follows. When the electron density becomes larger, both τ DP and τ BAP become smaller with similar amplitude. (Note that n = 0.5n 0 and n 0 are both within the nondegenerate limit.) When the hole density gets larger, both τ DP and τ BAP become smaller, with the amplitude of the latter being larger than that of the former (i.e., the importance of the BAP mechanism is increased). This is because the electron-heavy-hole scattering is markedly enhanced with the hole density. As the BAP mechanism is determined by the hole density, τ BAP is very sensitive to the hole density. Nevertheless, τ DP is less sensitive, as it is also determined by all the other scatterings. When the well width gets larger, τ DP is enhanced with a large amplitude at low temperature and with a small amplitude at high temperature, whereas τ BAP becomes moderately larger. These results are similar to Fig. 3. Consequently the BAP mechanism becomes more important, especially around T = 150 K in the present case. When the impurity density gets larger, τ DP becomes larger and τ BAP does not change. This makes the relative effect of the BAP mechanism larger.
From the above features, we emphasize that the BAP mechanism is important in p-type QWs, especially for large well widths and/or large hole densities (i.e., heavy doping) and large impurity densities. This is very different from bulk systems, in which the BAP mechanism is absolutely dominant at low temperature. Therefore, both the BAP and the DP mechanisms should be considered to obtain the correct SRT in QWs.
IV. SUMMARY
In summary, we have investigated the SRT due to both the DP and BAP mechanisms in intrinsic and p-type GaAs (001) QWs by constructing and numerically solving the fully microscopic kinetic spin Bloch equations. We consider all the relevant scatterings, such as the electron-AC phonon, electron-LO phonon, electron-nonmagnetic impurity, and electron-electron Coulomb scattering. Furthermore, the spin-conserving electron-heavy hole scattering, which enhances the total scattering strength and therefore τ DP, and the spin-flip electron-hole exchange scattering, which induces the BAP SRT, are also included.
We stress that it is very important to calculate the SRT from our fully microscopic approach, especially at high electron density and low temperature where the nonlinear terms in the electron-hole exchange scattering become very important. The SRT obtained from our fully microscopic approach is much larger than that from the Fermi golden rule. This means that the BAP mechanism is negligible at very low temperature and high electron density. We speculate this is also true in the bulk case. This is very different from the predictions in the literature.
We investigate the temperature dependence of the SRTs: the SRT due to the BAP mechanism, τ BAP, decreases rapidly with increasing temperature at very low temperature and slowly at higher temperature for both intrinsic and p-type QWs. It also decreases with electron density for both intrinsic and p-type QWs. For p-type samples, it further decreases with hole density. We also compare the relative importance of the SRTs from the BAP and DP mechanisms. The SRT from the DP mechanism is also calculated from the kinetic spin Bloch equations, which yield an SRT quite different from that of the single-particle approach, as discussed extensively in our previous works. 18,19,20,24,28 We find that in intrinsic QWs, the effect of the BAP mechanism is much smaller than that of the DP mechanism at low temperature and is smaller by nearly one order of magnitude at higher temperature; in p-type QWs, the SRT from the BAP mechanism is comparable to the one from the DP mechanism around a certain temperature (such as 150 K in the case we study), especially when the hole density and/or the width of the QW is large. For both the intrinsic and p-type QWs, the contribution from the BAP mechanism at very low temperature is negligible. We conclude that the spin R/D in QWs is very different from that in bulk samples: in the 2D case the BAP mechanism hardly dominates the spin relaxation; instead, it is either smaller than or comparable to the DP mechanism.

The coherent terms can be written as
$$\left.\frac{\partial f_{{\bf k},\sigma}}{\partial t}\right|_{\rm coh} = -\sigma\Big[\Omega_x({\bf k})\,{\rm Im}\,\rho_{\bf k} + \Omega_y({\bf k})\,{\rm Re}\,\rho_{\bf k}\Big] + 2\sigma\,{\rm Im}\sum_{\bf q} V_{ee,{\bf q}}\,\rho^{\ast}_{{\bf k}+{\bf q}}\rho_{\bf k}, \qquad (A1)$$

$$\left.\frac{\partial \rho_{\bf k}}{\partial t}\right|_{\rm coh} = \frac{1}{2}\big[i\Omega_x({\bf k}) + \Omega_y({\bf k})\big](f_{{\bf k},+1} - f_{{\bf k},-1}) + i\sum_{\bf q} V_{ee,{\bf q}}\big[(f_{{\bf k}+{\bf q},+1} - f_{{\bf k}+{\bf q},-1})\rho_{\bf k} - \rho_{{\bf k}+{\bf q}}(f_{{\bf k},+1} - f_{{\bf k},-1})\big], \qquad (A2)$$
where $V_{ee,{\bf q}} = \sum_{q_z} v_Q f_e(q_z)/\epsilon(q)$. The electron-impurity scattering terms read
$$\left.\frac{\partial f_{{\bf k},\sigma}}{\partial t}\right|_{\rm im} = -2\pi n_i \sum_{\bf q} U_q^2\, \delta(\varepsilon^e_{\bf k} - \varepsilon^e_{{\bf k}-{\bf q}}) \Big[ f_{{\bf k},\sigma}(1 - f_{{\bf k}-{\bf q},\sigma}) - {\rm Re}(\rho_{\bf k}\rho^{\ast}_{{\bf k}-{\bf q}}) \Big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q}\big\}, \qquad (A3)$$

$$\left.\frac{\partial \rho_{\bf k}}{\partial t}\right|_{\rm im} = \pi n_i \sum_{\bf q} U_q^2\, \delta(\varepsilon^e_{\bf k} - \varepsilon^e_{{\bf k}-{\bf q}}) \Big[ (f_{{\bf k},+1} + f_{{\bf k},-1})\rho_{{\bf k}-{\bf q}} - (2 - f_{{\bf k}-{\bf q},+1} - f_{{\bf k}-{\bf q},-1})\rho_{\bf k} \Big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q}\big\}, \qquad (A4)$$
in which $\{{\bf k} \leftrightarrow {\bf k}-{\bf q}\}$ stands for the same terms as before but with ${\bf k}$ and ${\bf k}-{\bf q}$ interchanged. In these equations $U_q^2 = \sum_{q_z} \big(Z_i v_Q/\epsilon(q)\big)^2 f_e(q_z)$, with $Z_i$ (assumed to be 1 in our calculation) the charge number of the impurity. The electron-phonon scattering terms are
$$\left.\frac{\partial f_{{\bf k},\sigma}}{\partial t}\right|_{\rm ph} = -2\pi \sum_{{\bf q}q_z,\lambda} g^2_{{\bf q}q_z,\lambda}\, \delta(\varepsilon^e_{\bf k} - \varepsilon^e_{{\bf k}-{\bf q}} - \Omega_{{\bf q}q_z,\lambda}) \big[ N_{{\bf q}q_z,\lambda}(f_{{\bf k},\sigma} - f_{{\bf k}-{\bf q},\sigma}) + f_{{\bf k},\sigma}(1 - f_{{\bf k}-{\bf q},\sigma}) - {\rm Re}(\rho_{\bf k}\rho^{\ast}_{{\bf k}-{\bf q}}) \big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q}\big\}, \qquad (A5)$$

$$\left.\frac{\partial \rho_{\bf k}}{\partial t}\right|_{\rm ph} = \pi \sum_{{\bf q}q_z,\lambda} g^2_{{\bf q}q_z,\lambda}\, \delta(\varepsilon^e_{\bf k} - \varepsilon^e_{{\bf k}-{\bf q}} - \Omega_{{\bf q}q_z,\lambda}) \big[ \rho_{{\bf k}-{\bf q}}(f_{{\bf k},+1} + f_{{\bf k},-1}) + (f_{{\bf k}-{\bf q},+1} + f_{{\bf k}-{\bf q},-1} - 2)\rho_{\bf k} - 2N_{{\bf q}q_z,\lambda}(\rho_{\bf k} - \rho_{{\bf k}-{\bf q}}) \big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q}\big\}, \qquad (A6)$$
where $\lambda$ represents the phonon mode. For the electron-longitudinal-optic-phonon (LO) scattering, the matrix element is $g^2_{{\bf Q},{\rm LO}} = \{2\pi^2\Omega_{\rm LO}/[(q^2+q_z^2)]\}(\kappa_\infty^{-1}-\kappa_0^{-1})f_e(q_z)$; for electron-acoustic-phonon scattering due to the deformation potential, $g^2_{{\bf Q},{\rm def}} = [\Xi^2 Q/(2 d v_{sl})] f_e(q_z)$; and for that due to the piezoelectric coupling, $g^2_{{\bf Q},{\rm pl}} = [32\pi^2 e^2 e_{14}^2/(\kappa_0^2 d v_{sl} Q^7)](3q_xq_yq_z)^2 f_e(q_z)$ for the longitudinal phonon and $g^2_{{\bf Q},{\rm pt}} = [32\pi^2 e^2 e_{14}^2/(\kappa_0^2 d v_{st} Q^5)]\big[q_x^2q_y^2 + q_y^2q_z^2 + q_z^2q_x^2 - (3q_xq_yq_z)^2/Q^2\big] f_e(q_z)$ for the transverse phonon. Here $\Xi = 8.5$ eV is the deformation potential; $d = 5.31$ g/cm$^3$ is the mass density of the crystal; $v_{sl} = 5.29\times10^{3}$ m/s ($v_{st} = 2.48\times10^{3}$ m/s) is the velocity of the longitudinal (transverse) sound wave; $\kappa_0 = 12.9$ denotes the static dielectric constant and $\kappa_\infty = 10.8$ the optical dielectric constant; and $e_{14} = 1.41\times10^{9}$ V/m is the piezoelectric constant. $\Omega_{\rm LO} = 35.4$ meV is the LO phonon frequency, and the AC phonon spectra $\Omega_{{\bf Q}\lambda}$ are given by $\Omega_{{\bf Q}l} = v_{sl}Q$ for the longitudinal mode and $\Omega_{{\bf Q}t} = v_{st}Q$ for the transverse mode. 38 $N_{{\bf q}q_z,\lambda} = [\exp(\beta\Omega_{{\bf q}q_z,\lambda}) - 1]^{-1}$ represents the Bose distribution.
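To keep track of the many material constants quoted above, the following short Python snippet is given purely as a bookkeeping illustration (it is not part of the original calculation). It collects the GaAs parameters listed in the text and evaluates the Bose occupation N = [exp(ħΩ/k_B T) − 1]^{-1} of the LO phonon, the quantity entering the electron-phonon terms; the temperatures in the usage loop are arbitrary.

```python
import math

# GaAs parameters as quoted in the text above
XI_EV = 8.5              # deformation potential (eV)
DENSITY = 5.31e3         # mass density (kg/m^3), i.e. 5.31 g/cm^3
V_SL = 5.29e3            # longitudinal sound velocity (m/s)
V_ST = 2.48e3            # transverse sound velocity (m/s)
KAPPA_0 = 12.9           # static dielectric constant
KAPPA_INF = 10.8         # optical dielectric constant
E_14 = 1.41e9            # piezoelectric constant (V/m)
OMEGA_LO_MEV = 35.4      # LO phonon energy (meV)

K_B_MEV = 8.617e-2       # Boltzmann constant in meV/K

def bose_occupation(energy_mev, temperature_k):
    """Bose distribution N = 1/(exp(E/kT) - 1) for a phonon of energy E."""
    return 1.0 / math.expm1(energy_mev / (K_B_MEV * temperature_k))

for T in (20, 120, 300):  # arbitrary illustrative temperatures (K)
    print(T, bose_occupation(OMEGA_LO_MEV, T))
```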
The spin-conserving electron-electron Coulomb scattering terms are given by
$$\left.\frac{\partial f_{{\bf k},\sigma}}{\partial t}\right|_{\rm ee} = -2\pi \sum_{{\bf q},{\bf k}',\sigma'} V^2_{ee,{\bf q}}\, \delta(\varepsilon^e_{{\bf k}-{\bf q}} - \varepsilon^e_{\bf k} + \varepsilon^e_{{\bf k}'} - \varepsilon^e_{{\bf k}'-{\bf q}}) \Big[ (1 - f_{{\bf k}-{\bf q},\sigma})f_{{\bf k},\sigma}(1 - f_{{\bf k}',\sigma'})f_{{\bf k}'-{\bf q},\sigma'} + \tfrac{1}{2}\rho_{\bf k}\rho^{\ast}_{{\bf k}-{\bf q}}(f_{{\bf k}',\sigma'} - f_{{\bf k}'-{\bf q},\sigma'}) + \tfrac{1}{2}\rho_{{\bf k}'}\rho^{\ast}_{{\bf k}'-{\bf q}}(f_{{\bf k}-{\bf q},\sigma} - f_{{\bf k},\sigma}) \Big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q},\ {\bf k}' \leftrightarrow {\bf k}'-{\bf q}\big\}, \qquad (A7)$$

$$\left.\frac{\partial \rho_{\bf k}}{\partial t}\right|_{\rm ee} = -\pi \sum_{{\bf q},{\bf k}',\sigma'} V^2_{ee,{\bf q}}\, \delta(\varepsilon^e_{{\bf k}-{\bf q}} - \varepsilon^e_{\bf k} + \varepsilon^e_{{\bf k}'} - \varepsilon^e_{{\bf k}'-{\bf q}}) \Big[ (f_{{\bf k}-{\bf q},+1}\rho_{\bf k} + f_{{\bf k},-1}\rho_{{\bf k}-{\bf q}})(f_{{\bf k}',\sigma'} - f_{{\bf k}'-{\bf q},\sigma'}) + \rho_{\bf k}\big[(1 - f_{{\bf k}',\sigma'})f_{{\bf k}'-{\bf q},\sigma'} - {\rm Re}(\rho_{{\bf k}'}\rho^{\ast}_{{\bf k}'-{\bf q}})\big] - \rho_{{\bf k}-{\bf q}}\big[f_{{\bf k}',\sigma'}(1 - f_{{\bf k}'-{\bf q},\sigma'}) - {\rm Re}(\rho^{\ast}_{{\bf k}'}\rho_{{\bf k}'-{\bf q}})\big] \Big] - \big\{{\bf k} \leftrightarrow {\bf k}-{\bf q},\ {\bf k}' \leftrightarrow {\bf k}'-{\bf q}\big\}. \qquad (A8)$$
FIG. 1: (a) SRT due to the BAP (solid curve) and DP (dashed curve) mechanisms and the total SRT (dash-dotted curve) vs. temperature T in an intrinsic QW with a = 20 nm, electron and hole densities n = p = 2n0, and impurity density ni = n. (b) (color online) SRT due to the BAP mechanism with the full spin-flip electron-hole exchange scattering (solid curves) and with only the linear terms in the spin-flip electron-hole exchange scattering (dotted curves) at different electron densities against temperature T. n0 = 10^11 cm^-2.

FIG. 2: (color online) SRT due to the BAP (solid curves) and DP (dashed curves) mechanisms and the total SRT (dash-dotted curves) vs. temperature T in intrinsic QWs at different densities (n = p = 2, 4, 6n0) when a = 20 nm and ni = n. n0 = 10^11 cm^-2.

FIG. 3: (color online) SRT due to the BAP (solid curves) and DP (dashed curves) mechanisms and the total SRT (dash-dotted curves) vs. temperature T in intrinsic QWs for different well widths (a = 10 and 20 nm) with n = p = 2n0 and impurity densities (ni = 0.5n and n). Note that the solid curves with the same well width but different impurity densities exactly coincide with each other. n0 = 10^11 cm^-2.

FIG. 4: SRT due to the BAP (solid curve) and DP (dashed curve) mechanisms and the total SRT (dash-dotted curve) vs. temperature T in a p-type QW with a = 20 nm, n = 0.5n0, p0 = 4n0, and ni = n. n0 = 10^11 cm^-2.

FIG. 5: (color online) SRT due to the BAP (solid curves) and DP (dashed curves) mechanisms and the total SRT (dash-dotted curves) vs. temperature T in p-type QWs with a = 20 nm at different electron densities (n = 0.5 and 1n0) and hole densities (p0 = 2 and 4n0). ni = n. n0 = 10^11 cm^-2.

FIG. 6: (color online) SRT due to the BAP (solid curves) and DP (dashed curves) mechanisms and the total SRT (dash-dotted curves) vs. temperature T in p-type QWs with n = 0.5n0, p0 = 4n0 at different well widths (a = 10 and 20 nm) and impurity densities (ni = n and 2n). n0 = 10^11 cm^-2.
The matrix elements in the Hamiltonian are given by Ref. 27.
* Author to whom all correspondence should be addressed; Electronic address: [email protected]. † Mailing address.
Optical Orientation. F. Meier and B. P. ZakharchenyaNorth-Holland, AmsterdamOptical Orientation, edited by F. Meier and B. P. Za- kharchenya, (North-Holland, Amsterdam, 1984).
. I Zutic, J Fabian, S. Das Sarma, Rev. Mod. Phys. 76323and references thereinI. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004), and references therein.
. Y Yafet, Phys. Rev. 85478Y. Yafet, Phys. Rev. 85, 478 (1952);
. R J Elliot, Phys. Rev. 96266R. J. Elliot, Phys. Rev. 96, 266 (1954).
. M I , V I Perel, Zh.Éksp. Teor. Fiz. 601053Sov. Phys. JEPTM. I. D'yakonov and V. I. Perel', Zh.Éksp. Teor. Fiz. 60 1954 (1971). [Sov. Phys. JEPT 33, 1053 (1971)].
. G L Bir, A G Aronov, G E Pikus, Zh.Éksp. Teor. Fiz. 69705Sov. Phys. JETPG. L. Bir, A. G. Aronov, and G. E. Pikus, Zh.Éksp. Teor. Fiz. 69, 1382 (1975) [Sov. Phys. JETP 42, 705 (1975)].
. P H Song, K M Kim, Phys. Rev. B. 6635207P. H. Song and K. M. Kim, Phys. Rev. B 66, 035207 (2002).
. A G Aronov, G E Pikus, A N Titkov, Zh.Éksp. Teor. Fiz. 84680Sov. Phys. JETPA. G. Aronov, G. E. Pikus, and A. N. Titkov, Zh.Éksp. Teor. Fiz. 84, 1170 (1983) [Sov. Phys. JETP 57, 680 (1983)].
. K Zerrouati, F Fabre, G Bacquet, J Bandet, J Frandon, G Lampel, D Paget, Phys. Rev. B. 371334K. Zerrouati, F. Fabre, G. Bacquet, J. Bandet, J. Frandon, G. Lampel, and D. Paget, Phys. Rev. B 37, 1334 (1987).
. Y V Pershin, V Privman, Nano Lett, 3695Y. V. Pershin and V. Privman, Nano Lett. 3, 695 (2003).
. J Wagner, H Schneider, D Richards, A Fischer, K Ploog, Phys. Rev. B. 474786J. Wagner, H. Schneider, D. Richards, A. Fischer, and K. Ploog, Phys. Rev. B 47, 4786 (1992).
. T C Damen, L Viña, J E Cunningham, J Shah, L J Sham, Phys. Rev. Lett. 673432T. C. Damen, L. Viña, J. E. Cunningham, J. Shah, and L. J. Sham, Phys. Rev. Lett. 67, 3432 (1991).
. H Gotoh, H Ando, T Sogawa, H Kamada, T Kagawa, H Iwamura, J. Appl. Phys. 873394H. Gotoh, H. Ando, T. Sogawa, H. Kamada, T. Kagawa, and H. Iwamura, J. Appl. Phys. 87, 3394 (1999).
. Y A Bychkov, E Rashba, Zh.Éksp. Teor. Fiz. 39Sov. Phys. JETPY. A. Bychkov and E. Rashba, Zh.Éksp. Teor. Fiz. 39, 66 (1984) [Sov. Phys. JETP 39, 78 (1984)].
. H C Schneider, J P Wüstenberg, O Andreyev, K Hiebbner, L Guo, J Lange, L Schreiber, B Beschoten, M Bauer, M Aeschlimann, Phys. Rev. B. 7381302H. C. Schneider, J. P. Wüstenberg, O. Andreyev, K. Hiebb- ner, L. Guo, J. Lange, L. Schreiber, B. Beschoten, M. Bauer, and M. Aeschlimann, Phys. Rev. B 73, 081302 (2006).
. M Z Maialle, Phys. Rev. B. 541967M. Z. Maialle, Phys. Rev. B 54, 1967 (1995).
. M W Wu, C Z Ning, Eur. Phys. J. B. 18373M. W. Wu and C. Z. Ning, Eur. Phys. J. B 18, 373 (2000).
. M Q Weng, M W Wu, Phys. Rev. B. 68E75312M. Q. Weng and M. W. Wu, Phys. Rev. B 68, 075312 (2003); 71, 199902(E) (2005).
. M Q Weng, M W Wu, L Jiang, Phys. Rev. B. 69245320M. Q. Weng, M. W. Wu, and L. Jiang, Phys. Rev. B 69, 245320 (2004).
. J Zhou, J L Cheng, M W Wu, Phys. Rev. B. 7545305J. Zhou, J. L. Cheng, and M. W. Wu, Phys. Rev. B 75, 045305 (2007).
. M M Glazov, E L Ivchenko, JETP Lett. 75403M. M. Glazov and E. L. Ivchenko, JETP Lett. 75, 403 (2002).
. M Q Weng, M W Wu, Phys. Rev. B. 70195318M. Q. Weng and M. W. Wu, Phys. Rev. B 70, 195318 (2004).
. C Lü, J L Cheng, M W Wu, Phys. Rev. B. 73125314C. Lü, J. L. Cheng, and M. W. Wu, Phys. Rev. B 73, 125314 (2006).
Chemistry and Application of Nanostructures: Reviews and Short Notes to Nanomeeting. M W Wu, M Q Weng, J L Cheng, World Scientific. V. E. Borisenko, V. S. Gurin, and S. V. Gaponenko14Physics. and references thereinM. W. Wu, M. Q. Weng, and J. L. Cheng, in Physics, Chemistry and Application of Nanostructures: Reviews and Short Notes to Nanomeeting 2007, eds. V. E. Borisenko, V. S. Gurin, and S. V. Gaponenko (World Sci- entific, Singapore, 2007), pp. 14, and references therein.
. M W Wu, H Metiu, Phys. Rev. B. 612945M. W. Wu and H. Metiu, Phys. Rev. B 61, 2945 (2000);
. M W Wu, J. Supercond. 14245M. W. Wu, J. Supercond. 14, 245 (2001).
. M Z Maialle, M H Degani, Phys. Rev. B. 5513371M. Z. Maialle and M. H. Degani, Phys. Rev. B 55, 13371 (1996).
. M Z Maialle, D A De Andrada E Silva, L J Sham, Phys. Rev. B. 4715776M. Z. Maialle, D. A. de Andrada e Silva, and L. J. Sham, Phys. Rev. B 47, 15776 (1993).
. M Q Weng, M W Wu, Chin. Phys. Lett. 22671M. Q. Weng and M. W. Wu, Chin. Phys. Lett. 22, 671 (2005).
. D Stich, J Zhou, T Korn, R Schulz, D Schuh, W Wegscheider, M W Wu, C Schüller, Phys. Rev. Lett. 98176401D. Stich, J. Zhou, T. Korn, R. Schulz, D. Schuh, W. Wegscheider, M. W. Wu, and C. Schüller, Phys. Rev. Lett. 98, 176401 (2007);
. Phys. Rev. B. 76205301Phys. Rev. B 76, 205301 (2007).
. D Stich, J H Jiang, T Korn, R Schulz, D Schuh, W Wegscheider, M W Wu, C Schüller, Phys. Rev. B. 7673309D. Stich, J. H. Jiang, T. Korn, R. Schulz, D. Schuh, W. Wegscheider, M. W. Wu, and C. Schüller, Phys. Rev. B 76, 073309 (2007).
H Haug, A P Jauho, Quantum Kinetics in Transport and Optics of Semiconductor. BerlinSpinger-VerlagH. Haug and A. P. Jauho, Quantum Kinetics in Trans- port and Optics of Semiconductor (Spinger-Verlag, Berlin, 1996).
. G Dresselhaus, Phys. Rev. 100580G. Dresselhaus, Phys. Rev. 100, 580 (1955).
. M I , V Y Kachorovskii, Fiz. Tekh. Poluprovodn. 20Sov. Phys. Semicond.M. I. D'yakonov and V. Y. Kachorovskii, Fiz. Tekh. Poluprovodn. 20, 178 (1986) [Sov. Phys. Semicond. 20, 110 (1986)].
See the discussion of the spin-splitting parameter in Ref. 20See the discussion of the spin-splitting parameter in Ref. [20].
It is noted that the screening in the Hartree-Fock terms in ∂ρ k,σσ ′ /∂t| coh in Eq. (1) is also updated by the current one with the contributions from the heavy holes. It is noted that the screening in the Hartree-Fock terms in ∂ρ k,σσ ′ /∂t| coh in Eq. (1) is also updated by the current one with the contributions from the heavy holes.
. C Lü, J L Cheng, M W Wu, I C Da Cunha, Lima, Phys. Lett. A. 365501C. Lü, J. L. Cheng, M. W. Wu, and I. C. da Cunha Lima, Phys. Lett. A 365, 501 (2007);
. J L Cheng, M W Wu, J. Appl. Phys. 9983704J. L. Cheng and M. W. Wu, J. Appl. Phys. 99, 083704 (2006).
. W Ekardt, K Lösch, D Bimberg, Phys. Rev. B. 203303W. Ekardt, K. Lösch, and D. Bimberg, Phys. Rev. B 20, 3303 (1979).
. Semiconductors, Landolt-Börnstein, New Series. O. Madelung17SpringerSemiconductors, Landolt-Börnstein, New Series, Vol. 17a, edited by O. Madelung (Springer, Berlin, 1987).
Arbitrary p-Gradient Values

Nathaniel Pappas
University of Virginia

19 Jan 2013 / May 2, 2014 · arXiv:1207.4650 · doi:10.1515/jgt-2013-0001
Keywords: rank gradient, p-gradient, mod-p homology gradient, finitely generated groups, profinite groups
For any prime number p and any positive real number α, we construct a finitely generated group Γ with p-gradient equal to α. This construction is used to show that there exist uncountably many pairwise non-commensurable groups that are finitely generated, infinite, torsion, non-amenable, and residually-p.
Introduction
Let G be a finitely generated group and d(G) denote the minimum number of generators of G. Recall the Schreier index formula: if H is a finite index subgroup of a finitely generated group G, then d(H) − 1 ≤ (d(G) − 1)[G : H], with equality when G is free of finite rank. The rank gradient of a finitely generated group is, in a sense, a measure of how far the Schreier index formula is from being an equality rather than an inequality. Though this is an interesting question from a group-theoretic standpoint, Mark Lackenby first introduced the rank gradient as a means to study 3-manifold groups [5].
The absolute rank gradient of G is defined by RG(G) = inf (d(H) − 1)/[G : H], where the infimum is taken over all finite index subgroups H of G.
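For intuition, consider a free group F_r of rank r: by the Schreier index formula recalled above (with equality for free groups), every subgroup of index n is free of rank n(r − 1) + 1, so every quotient (d(H) − 1)/[G : H] equals r − 1 and hence RG(F_r) = r − 1. The short Python sketch below is purely illustrative and just evaluates these quotients.

```python
def schreier_rank(r, n):
    """Rank of an index-n subgroup of a free group of rank r (Nielsen-Schreier)."""
    return n * (r - 1) + 1

def rank_gradient_quotient(r, n):
    """The quantity (d(H) - 1)/[G : H] appearing in the definition of RG."""
    return (schreier_rank(r, n) - 1) / n

# Every quotient equals r - 1, so the infimum (the rank gradient) is r - 1.
print([rank_gradient_quotient(3, n) for n in (1, 2, 5, 10)])  # -> [2.0, 2.0, 2.0, 2.0]
```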
(This paper is to appear in J. Group Theory.)

It will be evident later that the rank gradient of a group is sometimes difficult both to work with and to calculate. It is often more convenient to compute the rank gradient of the pro-p completion G p of the group G for some fixed prime p. When dealing with profinite groups we use the notion of topologically finitely generated instead of (abstractly) finitely generated. The p-gradient of the group G, denoted RG p (G), can be defined as the rank gradient of G p . The p-gradient is also referred to in the literature as the mod-p rank gradient or mod-p homology gradient. A more explicit definition of p-gradient is provided in Section 2.
Since Lackenby first defined rank gradient of a finitely generated group [5], the following conjecture has remained open:
Conjecture. For every real number α > 0 there exists a finitely generated group Γ such that RG(Γ) = α.
The aim of this paper is to prove the analogous question for p-gradient:
Main Result. For every real number α > 0 and any prime p, there exists a finitely generated group Γ such that RG p (Γ) = α.
Given a prime p and an α > 0 ∈ R, consider a free group F of finite rank greater than α + 1. Take the set of all residually-p groups that are homomorphic images of F that have p-gradient greater than or equal to α and partially order this set by G 1 G 2 if G 1 surjects onto G 2 . Then by a Zorn's Lemma argument this set has a minimal element, Γ. We show RG p (Γ) = α by contradiction by constructing an element which is less than Γ with respect to the partial order. To construct this new smaller element, Theorem 3.2 is used, which was proved using slightly different language and a different method by Barnea and Schlage-Puchta in [3], but is formulated and proved independently here as well.
The methods used to prove this result require and are similar to those used by Schlage-Puchta in his work on p-deficiency and p-gradient [10] and Osin in his work on rank gradient [7]. To prove the above set has a minimal element, we will use direct limits of groups and show the relationship between the p-gradient of each group in the direct limit and the p-gradient of the limit group. This idea (Lemma 3.5) was inspired by Pichot's similar result for L 2 -Betti numbers [8]. It is known that for a finitely generated, residually finite, infinite group the rank gradient is always greater than or equal to the L 2 -Betti number, which provides a useful relationship between these two group invariants [7].
One of the primary goals of Osin's [7] and Schlage-Puchta's [10] papers was to provide a simple construction of non-amenable, torsion, residually finite groups. The construction given in this paper shows that there exist such groups with arbitrary p-gradient (Theorem 3.7). A simple consequence of this result is that there exist uncountably many pairwise noncommensurable groups that are finitely generated, infinite, torsion, nonamenable, and residually-p. The fact that the groups are non-commensurable uses the p-gradient and is almost immediate from the construction, which shows another way in which the p-gradient can be a useful tool.
Rank Gradient and p-Gradient
In this section, some useful results concerning rank gradient and p-gradient are collected, which will be used to prove the main result. If G is finite, then using H = {1} implies RG(G) = −1/|G|. As the following proposition shows, it is not difficult to produce groups with rational rank gradient. Whether an irrational number can be the rank gradient of some finitely generated group remains an open question. We will show later that for every prime p, every positive real number is the p-gradient for some finitely generated group.
Proposition 2.2. Let q > 0 ∈ Q. There exists a finitely presented group G such that RG(G) = q.
Proof. Write q = m n . Let F m+1 be a non-abelian free group of rank m + 1 and let A be any group of order n. Consider G = F m+1 × A. Let ϕ : G → A be the projection onto the second component and let H = ker ϕ. Then As stated earlier, we can define RG p (G) = RG(G p ). However, a more explicit definition of the p-gradient can be stated.
Definition. Let p be a prime. The p-gradient (also called mod-p homology gradient) of G is defined by
RG_p(G) = inf (d_p(H) − 1)/[G : H],
where d_p(G) = d(G/[G, G]G^p) and the infimum is taken over all normal subgroups H such that [G : H] = p^k for some k ∈ Z≥0.
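The quantity d_p(H) is just the dimension of the mod-p abelianization H/[H,H]H^p as a vector space over F_p. For a finitely generated abelian group written as Z^r × Z/n_1 × ... × Z/n_k this dimension is r plus the number of n_i divisible by p, which the following short Python sketch (an illustration only, not part of the paper) computes.

```python
def d_p_abelian(p, free_rank, torsion_orders):
    """dim over F_p of A/pA for A = Z^free_rank x prod Z/n_i.

    Each Z factor contributes 1, and a finite cyclic factor Z/n
    contributes 1 exactly when p divides n.
    """
    return free_rank + sum(1 for n in torsion_orders if n % p == 0)

# Example: A = Z^2 x Z/4 x Z/6 x Z/9 has d_2(A) = 4 and d_3(A) = 4.
print(d_p_abelian(2, 2, [4, 6, 9]), d_p_abelian(3, 2, [4, 6, 9]))
```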
We will prove that a group and its pro-p completion have the same pgradient, which will then be used to show the p-gradient of a group equals the rank gradient of its pro-p completion. To get this result, some facts about profinite groups must be presented.
Let G be a finitely generated group. The pro-p completion of G for some prime p will be denoted by G p . Let d(G) denote the minimal number of abstract generators of a group G if the group is not profinite and the minimal number of topological generators if the group is profinite. If a group is profinite, the term "finitely generated" will be used to mean topologically finitely generated. The reader is referred to any standard text in profinite groups for the basic results used in this section [4], [11].
When dealing with pro-p completions of a group, it is often convenient to assume that the group is residually-p since the group will imbed in its pro-p completion. To show why this type of assumption will not influence any result about the p-gradient, the following lemma is given.
Definition. Let G be a group and p a prime. Let N , the p-residual of G, be the intersection of all normal subgroups of p-power index in G. The p-residualization of G is the quotient G/N . Note that the p-residualization of G is isomorphic to the image of G in its pro-p completion G p and is residually-p.
Lemma 2.3. Let G be a group and p a prime number. Let
G be the p- residualization of G. Then RG p (G) = RG p ( G) and G p ≃ G p .
Proof. There is a bijective correspondence between normal subgroups of ppower index in G and normal subgroups of p-power index in G. Let the
correspondence be H ⇆ H with H ≤ G and H ≤ G. Then it is easy to show that [ G : H] = [G : H] and d p ( H) = d p (H). Thus, RG p ( G) = RG p (G).
By the inverse limit definition of pro-p completion and the fact that
G/ H ≃ (G/N )/(H/N ) ≃ G/H it follows that G p ≃ G p .
The following fact is well known. With the following proposition, we will be able to prove that a group and its pro-p completion have the same p-gradient. Parts of this proposition can be found in an exercise in [4].
Proposition 2.5. Let G be a finitely generated group and p a prime. Let ϕ : G → G p be the natural map from G to its pro-p completion. Let H be a normal subgroup of p-power index of G. The following hold: 4) For notational simplicity, assume G is residually-p and thus ϕ is injective. The case of G not residually-p is proved similarly. It is only necessary to show that the pro-p topology on G induces the pro-p topology on the subspace H of G. By Proposition 2.4, subnormal subgroups of p-power index in G form a base for the pro-p topology on G. If K is subnormal of p-power index in H it implies that K is subnormal of p-power index in G. This implies that the subspace topology and the pro-p topology on H are the same. Therefore, H ≃ H p as pro-p groups. (2) and (4)
1. ϕ(H) = ϕ(G) ∩ ϕ(H).
5) By
d_p(H) = d(H/Φ(H)) = d(H). Therefore, RG_p(G) = inf over normal subgroups H of p-power index of (d_p(H) − 1)/[G : H] = inf over finite index subgroups of (d(H) − 1)/[G : H] = RG(G).
It is now possible to prove the relationship between the p-gradient of a group and its pro-p completion.
Theorem 2.7. Let G be a finitely generated group and p a fixed prime. Let G p be the pro-p completion of G. Then RG p (G) = RG p (G p ) = RG(G p ).
Proof. Assume that G is residually-p. By Proposition 2.5 the natural injective map ϕ : G → G p induces an index preserving bijection H → H ≃ H p between the normal subgroups of p-power index in G and the normal subgroups of p-power index in G p . Proposition 2.5 also implies that d p (H) = d p (H p ) for all p-power index normal subgroups H G. It is not difficult to show that d p (K) = d p (K p ) holds for any finitely generated group K. Therefore, RG p (G) = RG p (G p ). If G is not residually-p, let G be the p-residualization of G. By Lemma 2.3, RG p (G) = RG p ( G) and G p ≃ G p . The first equality follows.
The fact that RG p (G) = RG(G p ), where G p is the pro-p completion of G, follows by the above remarks and Theorem 2.6.
Remark. Nikolov and Segal proved Serre's conjecture on finitely generated profinite groups. That is, in a finitely generated profinite group all finite index subgroups are open [6].
The above two theorems provide some useful corollaries.
Corollary 2.8. If G is a finite group, then RG p (G) = −1/|G p |.

Proof. If G is finite, then so is G p and thus RG p (G) = RG p (G p ) = RG(G p ) = −1/|G p | by Theorem 2.1.

Proof (of Theorem 2.9). If H is normal in G we are done by Proposition 2.5.5. If H is not normal, then induct on the subnormal length.
Groups With Arbitrary p-Gradient Values
In this section we will prove the main result, that is, we construct a finitely generated group Γ with RG p (Γ) = α for each α > 0 ∈ R. To prove this, we need some technical results.
The following lemma is similar to Lemma 2.3 of Osin [7] concerning deficiency of a finitely presented group. 1. This is a standard computation. m . A lower bound for the p-gradient when taking the quotient by the normal subgroup generated by an element raised to a p-power follows by the above lemma.
If T is a right transversal for
x H in G, then x m G = tx m t −1 | t ∈ T H . 2. If H = Y | R , then π(H) = Y | R ∪ {tx m t −1 | t ∈ T } .) p ) = Y | R, C, w p for all w ∈ F (Y ), tx m t −1 for all t ∈ T .
Therefore, a presentation for π(H)/([π(H), π(H)]π(H) p ) is obtained from a presentation for H/([H, H]H p ) by adding in
Theorem 3.2. Let G be a finitely generated group, p some fixed prime, and x ∈ G. Then RG p (G/⟨⟨x^{p^k}⟩⟩) ≥ RG p (G) − 1/p^k, where ⟨⟨x^{p^k}⟩⟩ denotes the normal subgroup of G generated by x^{p^k}. Proof. Case 1: There exists a normal subgroup H 0 of p-power index such that the order of x in G/H 0 is at least p^k.
Since H 0 is a normal subgroup of p-power index, then without loss of generality we may assume that the order of x in G/H 0 is exactly p k . Let H be a normal subgroup of p-power index in G = G/ x p k . Let H ≤ G be the full preimage of H. Then H is a p-power index normal subgroup in G which contains x p k . Let L H = H ∩ H 0 . Then L H is a normal subgroup in G such that x p k ∈ L H , L H ⊆ H, and the order of x in G/L H is p k . Note that L H is normal and of p-power index in G since both H and H 0 are normal and of p-power index. Thus by Lemma 3.1, q(H) ≥ q(L H ) ≥ q(L H ) − 1 p k , which by definition is greater than or equal to RG p (G) − 1 p k . Therefore, It will be shown that RG p (G/ x p k ) = RG p (G) in this case. There exists an ℓ < k such that x p ℓ ∈ H for every normal subgroup H of ppower index in G. Then x p ℓ is in the kernel of natural map from G to its pro-p completion ϕ : G → G p . Therefore,
q(H) ≥ RG p (G) − 1 p k . Thus RG p (G/ x p k ) ≥ RG p (G) − 1 p k .x p k = (x p ℓ ) p k−ℓ ∈ ker ϕ. Let M = x p k .(G/ x p k ) = RG p (G/M ) = RG p ((G/M ) p ) = RG p (G p ) = RG p (G).
Remark. The above theorem was independently stated and proved using slightly different language and a different method by Barnea and Schlage-Puchta (Theorem 3 in [3]). Corollary 3.3. Let G be a finitely generated group, p a fixed prime, and let x ∈ G. Then RG p (G/ x ) ≥ RG p (G) − 1.
p-Gradient and Direct Limits
Let (I, ≤) be a totally ordered set with smallest element 0 and let {G i | π ij } be a direct system of finitely generated groups with surjective homomorphisms π ij : G i → G j for every j ≥ i ∈ I.
Let G ∞ = lim − → G i be the direct limit of this direct system. Let π i : G i → G ∞ be the map obtained from the direct limit. Because all the maps in the direct system are surjective, then so are the π i . Let G = G 0 .
Another direct system {M i | µ ij } can be defined over the same indexing set I, where M i = G for each i and µ ij is the identity map. The direct limit of this set is clearly G = lim − → M i and the map obtained from the direct limit
µ i : M i → G ∞ is the identity map. A homomorphism Φ : {M i | µ ij } → {G i | π ij } is by definition a family of group homomorphisms ϕ i : M i → G i such that ϕ j • µ ij = π ij • ϕ i whenever i ≤ j. Then Φ defines a unique homomorphism ϕ = lim where M i = [H ′ , H ′ ](H ′ ) p ker ϕ i /[H ′ , H ′ ](H ′ ) p . Since ker ϕ i ⊆ ker ϕ j for each j ≥ i then M i ⊆ M j for each j ≥ i. Now, Q(H ′ )
is finitely generated abelian and torsion and therefore is finite. Thus Q(H ′ ) can only have finitely many non-isomorphic subgroups. Since {M i } is an ascending set of subgroups, there must exist an n ∈ I such that M i = M n for every i ≥ n. Since ker ϕ i ⊆ ker ϕ j for each j ≥ i and ker ϕ i = ker ϕ, we know that M i ⊆ M j for every j ≥ i and
M i = M . Therefore, M = M i = M n . Thus for each i ≥ n, M = M i . Therefore, Q(K) ≃ Q(H ′ i ) for each i ≥ n which implies d p (K) = d p (H ′ i ) for each i ≥ n. Thus, d p (K) = lim i∈I d p (H ′ i ).
The following lemma is similar to Pichot's related result for L 2 -Betti numbers where convergence is in the space of marked groups [8].
Lemma 3.5. For each prime p, lim sup RG p (G i ) ≤ RG p (G ∞ ).
Proof. Fix a prime p. Let K G ∞ be a normal subgroup of p-power index. By Lemma 3.4 we obtain the subgroups H ′ and H ′ i for each i. Now,
lim sup RG p (G i ) = lim sup inf N G i p-power d p (N ) − 1 [G i : N ] ≤ lim sup d p (H ′ i ) − 1 [G i : H ′ i ]
and by Lemma 3.4
lim sup d p (H ′ i ) − 1 [G i : H ′ i ] = lim i∈I d p (H ′ i ) − 1 [G i : H ′ i ] = d p (K) − 1 [G ∞ : K] .
Therefore, for each K G ∞ of p-power index, lim sup RG p (G i ) ≤ dp(K)−1
[G∞:K] . This implies lim sup RG p (G i ) ≤ RG p (G ∞ ).
The Main Result
It is now possible to prove the main result that every nonnegative real number is realized as the p-gradient of some finitely generated group. Proof. Fix a prime p and a real number α > 0. Let F be the free group on ⌈α⌉ + 1 generators. Let Λ = {G | F surjects onto G, G is residually-p, and RG p (G) ≥ α}.
Since for any free group d(F ) = d p (F ) it is clear that RG p (F ) = rank(F )− 1 and therefore, Λ is not empty since F is in Λ. Λ can be partially ordered by G 1 G 2 if there is an epimorphism from G 1 to G 2 , denoted G 1 ։ G 2 . This order is antisymmetric since each group in this set is Hopfian.
Let C = {G i } be a chain in Λ. Each chain forms a direct system of groups over a totally ordered indexing set. Any chain can be extended so that it starts with the element
F = G 0 . Let G ∞ = lim − → G i . By Lemma 3.5, RG p (G ∞ ) ≥ lim sup RG p (G i ) ≥ α. Let G ∞ be the p- residualization of G ∞ . By Lemma 2.3, RG p ( G ∞ ) = RG p (G ∞ ). Therefore, RG p ( G ∞ ) ≥ α and G ∞ is residually-p. Moreover, for each i, G i ։ G ∞ and in particular F ։ G ∞ ։ G ∞ .
Thus G ∞ ∈ Λ and G i G ∞ for each i. Thus, each chain C in Λ has a lower bound in Λ and therefore by Zorn's Lemma, Λ has a minimal element, call it Γ.
Since Γ and its p-residualization Γ have the same p-gradient and Γ surjects onto Γ, it implies that Γ ∈ Λ and Γ Γ. Thus Γ must be residually-p, otherwise Γ contradicts the minimality of Γ. Note: Γ does not have finite exponent.
If Γ had finite exponent then since Γ is finitely generated and residually finite it must be finite by the positive solution to the Restricted Burnside Problem [12]. This would imply RG p (Γ) < 0 by Corollary 2.8. This contradicts that Γ is in Λ.
Therefore, Γ is a finitely generated residually-p group with infinite exponent such that RG p (Γ) ≥ α. Claim: RG p (Γ) = α.
Assume not. Then there exists a k ∈ N such that RG p (Γ) − 1 p k ≥ α. Since Γ is residually-p, the order of every element is a power of p and since Γ has infinite exponent, there exists an x ∈ Γ whose order is greater than p k .
Consider Γ ′ = Γ/ x p k . Since x p k = 1 it implies that Γ ′ ≃ Γ. By Theorem 3.2, RG p (Γ ′ ) ≥ RG p (Γ) − 1 p k ≥ α.
If Γ ′ is not residually-p, replace it with its p-residualization, which will have the same p-gradient. Then Γ ′ ∈ Λ and Γ Γ ′ , which contradicts the minimality of Γ.
The result of Theorem 3.6 can be strengthened without much effort.
Theorem 3.7. Fix a prime p. For every real number α > 0 there exists a finitely generated residually-p torsion group Γ such that RG p (Γ) = α.
Proof. Barnea and Schlage-Puchta showed in Corollary 4 of [3], that for any α > 0 there exists a torsion group G with RG p (G) ≥ α. Applying the construction in Theorem 3.6, replacing the free group F with the presidualization of G, will result in a group Γ that is torsion, residually-p, and RG p (Γ) = α.
Y. Barnea and J.C. Schlage-Puchta [3] proved a result similar to Theorem 3.7 (inequality instead of equality) albeit in a slightly different way.
Applications
The construction given in Theorem 3.6 has a few immediate applications. First, it is noted that Theorem 3.7 gives a known counter example to the General Burnside Problem. The second application is more general and shows that there exist uncountably many pairwise non-commensurable groups that are finitely generated, infinite, torsion, non-amenable, and residuallyp.
This paper has been concerned with the (absolute) rank gradient and pgradient. There is, however, a related notion of rank gradient and p-gradient of a group with respect to a lattice of subgroups. A set of subgroups, {H i }, is called a lattice if the intersection of any two subgroups in the set is also in the set. In particular, a descending chain of subgroups is a lattice.
Definition.
1. The rank gradient relative to a lattice {H i } of finite index subgroups is defined as RG(G, {H i }) = inf_i (d(H i ) − 1)/[G : H i ].
2. The p-gradient relative to a lattice {H i } of normal subgroups of p-power index is defined as RG p (G, {H i }) = inf_i (d p (H i ) − 1)/[G : H i ].
The following theorem was proved by Abert, Jaikin-Zapirain, and Nikolov in [1]. Lackenby first proved the result for finitely presented groups in [5]. As a simple corollary, we provide a corresponding, albeit weaker, result concerning p-gradient. Proof. Let G be a finitely generated group with RG p (G) > 0. Let G be the p-residualization of G. Then 0 < RG p (G) = RG p ( G). Let {H i } be a descending chain of normal subgroups of p-power index in G which intersect in the identity. Then,
0 < RG p ( G) ≤ inf i d p (H i ) − 1 [ G : H i ] ≤ inf i d(H i ) − 1 [ G : H i ] = RG( G, {H i }).
Therefore, G is not amenable by Theorem 4.1. This implies that G is not amenable since a quotient of an amenable group is amenable.
The application of the construction used in Theorem 3.6 concerning commensurable groups is given below.
Definition. Two groups are called commensurable if they have isomorphic subgroups of finite index.
The following lemma is straightforward. Theorem 4.4. There exist uncountably many pairwise non-commensurable groups that are finitely generated, infinite, torsion, non-amenable, and residuallyp.
Proof. Let p be a fixed prime number. By Theorem 3.7 it is known that for every real number α > 0 there exists a finitely generated residually-p infinite torsion group, Γ, such that RG p (Γ) = α. By Corollary 4.2 these groups are all non-amenable. Since each of these groups is residually-p and torsion, they are all p-torsion. Thus, every subgroup of finite index in these groups is subnormal of p-power index.
By Theorem 2.9 if any two of these groups are commensurable, then the p-gradient of each group is a rational multiple of the other. Since there are uncountably many positive real numbers that are not rational multiples of each other, the result can be concluded.
Then d(H)−1 ≤ (d(G)−1)[G : H] and if G is free of finite rank, then H is free and d(H)−1 = (d(G)−1)[G : H].
Theorem 2 . 1 .
21Let G be a finitely generated group and let H be a finite index subgroup. Then RG(G) = RG(H) [G:H] . If G is finite, then RG(G) = − 1 |G| . Proof. By the Schreier index formula, d(K)−1 [G:K] ≥ d(L)−1 [G:L] for any subgroups L ≤ K of G such that L is finite index in G. This fact is used in the first equality given below. Fix a finite index subgroup H ≤ G. Since any finite index subgroup K of G contains the finite index subgroup K ∩ H of H and any finite index subgroup L of H is L = K ∩ H for some finite index subgroup K of G, then RG(G)
F m+1 ≃ H and [G : H] = n. By the Schreier index formula for free groups RG(H) = m. Therefore, RG(G) = RG(H) [G:H] = m n by Theorem 2.1.
Proposition 2 . 4 .
24Let G be a group and p a prime number. The set of subnormal subgroups of p-power index form a base of neighborhoods of the identity for the pro-p topology on G.
2. ϕ : G/H → G p /ϕ(H) given by ϕ(xH) = ϕ(x)ϕ(H) is an isomorphism. 3. There exists an index preserving bijection between normal subgroups of p-power index in G and open normal subgroups of G p . 4. ϕ(H) ≃ H p as pro-p groups. 5. RG(G p ) = RG(H p ) [G : H] .Proof. Parts (1)-(3) are proved in Proposition 3.2.2 of Ribes and Zalesskii[9].
Theorem 2. 9 .
9Fix a prime p and let G be a finitely generated group. Assume H ≤ G is a p-power index subnormal subgroup. Then RG p (G) = RGp(H) [G:H] .
Lemma 3. 1 .
1Let G be a finitely generated group and fix a prime p. Let x be some non-trivial element of G. Let H be a finite index normal subgroup of G such that x m ∈ H, but no smaller power of x is in H. Let π : G → G/ x m G be the standard projection homomorphism.
3 ..
3|T If q(H) = d p (H) [G : H] , then q(π(H)) ≥ q(H) − 1 m . Proof. Since x m is in H, then [π(G) : π(H)] = [G : H].
3 .
3Since H ⊆ x H ⊆ G, then [G : H] = [G : x H][ x H : H]. Therefore, |T | = [G : x H] = [G:H] [ x H:H] . Since x m ∈ H but no smaller power of x is in H, then V = {1, x, x 2 , . . . , x m−1 } is a transversal for H in x H and thus [ x H : H] = m. Therefore, |T | = [G:H] m . 4. First, note that (2) and (3) imply that a presentation for π(H) is obtained from a presentation for H by adding in [G:H] m relations. Now, q(π(H)) ≥ q(H) − 1 m if and only if d p (π(H)) ≥ d p (H) − [G:H] m . If H has presentation H = Y | R then π(H) has presentation π(H) = Y | R ∪ {tx m t −1 for all t ∈ T } . For notational simplicity let C = {[y 1 , y 2 ] | y 1 , y 2 ∈ Y }. Then, H/([H, H]H p ) = Y | R, C, w p for all w ∈ F (Y ) where F (Y ) is the free group on Y and π(H)/([π(H), π(H)]π(H
[G:H] m relations. Note: For any group G, G/([G, G]G p ) can be considered as a vector space over F p and therefore d p (G) is the dimension of this vector space. Therefore, π(H)/([π(H), π(H)]π(H) p ) is a vector space satisfying [G:H] m more equations than the vector space H/([H, H]H p ). Thus d p (π(H)) ≥ d p (H) − [G:H]
Case 2 :
2For every normal subgroup H of p-power index, the order of x in G/H is less than p k .
Then M ⊆ ker ϕ. This implies that there is a bijective correspondence between all normal subgroup of p-power index in G and G/M given by N → N/M . Since G/N ≃ (G/M )/(N/M ) for all such N , then by the inverse limit definition of pro-p completions G p ≃ (G/M ) p as prop groups. Therefore, RG p
Theorem 3.6. (Main Result) For every real number α > 0 and any prime p, there exists a finitely generated group Γ such that RG p (Γ) = α.
Theorem 4.1. (Abert, Jaikin-Zapirain, Nikolov) Finitely generated infinite amenable groups have rank gradient zero with respect to any normal chain with trivial intersection.
Corollary 4 . 2 .
42If RG p (G) > 0 for some prime p, then G is not amenable.
Lemma 4 . 3 .
43Fix a prime p. Let G be a p-torsion group (every element has order a power of p). Then every finite index subgroup H ≤ G is subnormal of p-power index.
we know G/H ≃ G p /H p and therefore [G : H] = [G p : H p ]. Proof. In a finitely generated pro-p group all finite index normal subgroups are open normal subgroups and have index a power of p [4]. Moreover, if H is a finite index subgroup of G, then H is also a finitely generated pro-p group. The Frattini subgroup of a finitely generated pro-p group H is Φ(H) = [H, H]H p and by standard facts about finitely generated pro-p groups, d pThus, by Theorem 2.1 RG(G p ) =
RG(H p )
[G p :H p ] =
RG(H p )
[G:H] .
Theorem 2.6. If G is a (topologically) finitely generated pro-p group, then
RG p (G) = RG(G).
. This holds by (1) and the fact that π(H) = H/(H∩ x m G ) = H/ x m G , since x m ∈ H and H is normal in G.
Acknowledgments. The author would like to thank his advisor Mikhail Ershov for his help with the present material and earlier drafts of this paper. The author would also like to thank the anonymous referee for pointing out a minor mathematical issue and helpful comments which improved the exposition.The surjection ϕ i : G → G i is the map π 0i in this case. It is clear that ϕ = lim − → ϕ i . Since each ϕ i is surjective, it implies that ker ϕ i ⊆ ker ϕ j for every j ≥ i. In this situation,Lemma 3.4. Keep the notation defined above. Fix a prime p. For each K G ∞ of p-power index, there exists an H ′ G of p-power index such that:Proof. Let K G ∞ be a p-power index normal subgroup. Since ϕ : G → G ∞ is surjective then G ∞ ≃ G/ ker ϕ. Let H ′ = ϕ −1 (K). Then H ′ is normal in G and since K ≃ H ′ / ker ϕ then [G ∞ : K] = [G : H ′ ] and so H ′ is of p-power index.2. Since each ϕ i : G → G i is surjective, G i ≃ G/ ker ϕ i and since H ′ contains ker ϕ, then H ′ contains ker ϕ i for each i. Thus, H ′ i ≃ H ′ / ker ϕ i . Therefore for every i,For any group
The rank gradient from a combinatorial viewpoint. M Abert, A Jaikin-Zapirain, N Nikolov, Groups Geom. Dyn. 5M. Abert, A. Jaikin-Zapirain, and N. Nikolov, The rank gradient from a combinatorial viewpoint, Groups Geom. Dyn. 5 (2011), 213-230.
Introduction to commuative algebra. M Atiyah, I Macdonald, Westview PressM. Atiyah and I. MacDonald, Introduction to commuative algebra, Westview Press, 1969.
Y Barnea, J C Schlage-Puchta, arxiv.org/abs/1106On p-deficieny in groups. Y. Barnea and J.C. Schlage-Puchta, On p-deficieny in groups, arxiv.org/abs/1106.3255v1 (2012).
J D Dixon, M P Du Sautoy, A Mann, D Segal, Analytic pro-p groups. Cambridge University PressJ.D. Dixon, M.P.F du Sautoy, A. Mann, and D. Segal, Analytic pro-p groups, Cambridge University Press, 1991.
Expanders, rank and graphs of groups. M Lackenby, Israel J. Math. 1461M. Lackenby, Expanders, rank and graphs of groups, Israel J. Math. 146 (2005), no. 1, 357-370.
Finite index subgroups in profinite groups. N Nikolov, D Segal, C. R. Math. Acad. Sci. Paris. 3375N. Nikolov and D. Segal, Finite index subgroups in profinite groups, C. R. Math. Acad. Sci. Paris 337 (2003), no. 5, 303-308.
Rank gradient and torsion groups. D Osin, Bull. Lond. Math. Soc. D. Osin, Rank gradient and torsion groups, Bull. Lond. Math. Soc. (2010).
Semi-continuity of the first ℓ 2 -betti number on the space of finitely generated groups. M Pichot, Comment. Math. Helv. 81M. Pichot, Semi-continuity of the first ℓ 2 -betti number on the space of finitely generated groups, Comment. Math. Helv. 81 (2006), 643-652.
L Ribes, P Zalesskii, A series of modern surveys in mathematics. Springer40Profinite groupsL. Ribes and P. Zalesskii, Profinite groups, 2 ed., A series of modern surveys in mathematics, vol. 40, Springer, 2010.
A p-group with positive rank gradient. J-C Schlage-Puchta, J. Group Theory. 152J-C. Schlage-Puchta, A p-group with positive rank gradient, J. Group Theory 15 (2012), no. 2, 261-270.
Profinite groups. J S Wilson, Oxford Science PublicationsJ. S. Wilson, Profinite groups, Oxford Science Publications, 1998.
Zel'manov, Solution of the restricted burnside problem for groups of odd exponent. E I , Izv. Math. 541E. I. Zel'manov, Solution of the restricted burnside problem for groups of odd exponent, Izv. Math. 54 (1990), no. 1, 42-59.
Numerical and experimental studies of resonators with reduced resonant frequencies and small electrical sizes

T. Hao, J. Zhu, D. J. Edwards, C. J. Stevens
Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3DR, UK

arXiv:0810.2490

Methods on reducing resonant frequencies and electrical sizes of resonators are reported in this paper. Theoretical and numerical analysis has been used and the results for the broadside-coupled resonators from both studies exhibit good agreement. Initial fabrication techniques are proposed and measurement results are compared with simulations. Further high resolution techniques have been envisaged to enhance the performance of the resonators. This class of small resonators with low resonant frequencies indicates a variety of applications in the design of microwave devices.
Introduction
Since the late 1990s, intensive research has been carried out throughout the world after Pendry's work [1][2] on metamaterials which have both negative permittivity and permeability, where split ring resonators (SRR) were selected as resonant atoms to achieve negative permeabilities. More recent research is focused on increasing resonant frequencies into the optical region, and one of the highest frequencies achieved to date (~200THz) using the C-shaped ring structure is by Enkrich [3], where the current fabrication techniques reached their limit (50nm).
Heading in the opposite direction, our research has been focused on reducing resonant frequencies whilst also achieving small sizes by investigating different variations of C-shaped structures [4], following Marqués's work on the broadside-coupled SRR (BC-SRR) [5]. It has been verified that the BC-SRR has a smaller electrical size than the traditional edge-coupled SRR (EC-SRR) (which is ~1/10), whilst the VS-SRR can reduce it further. As is well known [2], a small electrical size is critical and must be satisfied before applying a continuous medium approach to metamaterial media; thus metamaterial devices with smaller physical sizes (on the order of millimetres) and lower frequencies (less than 1GHz) are of interest.
BC-SRR model
A model of BC-SRR is shown in Fig. 1. The upper ring and lower 180° oriented ring have identical dimensions, which are the inner radius r, the width of the rings c, the thickness of the rings h, the thickness of the insulator layer between rings t, the thickness of the substrate d, and the width of the gap g. Unlike edge-coupled resonators, the capacitance of the BC-SRR is mainly contributed by the series capacitance of the upper and lower rings, and relatively little E field is concentrated in the split gap; thus the width of the gap is not the most important factor in determining the resonances. When fixing the thickness and dielectric constant of the insulator layer, the resonant frequency of the BC-SRR structure is mainly determined by the inner radius (r) and width (c) of the ring. Although not pointed out in [5], it can be calculated using [6] that, when fixing the overall size of the structure and varying c, one can achieve the lowest resonance if
c/r = 2/3.
The normalised electrical size of the BC-SRR can be written as 2(r + c)/λ0 [5], where λ0 is the free space wavelength at resonance. Marqués et al showed that the normalised electrical size of the EC-SRR remains almost constant (~1/10) for small spacing between inner and outer rings. However, by tuning the thickness t of the insulator between rings and the dielectric constant ε of the insulator, a smaller normalised electrical size can be achieved, of the order of 1/300 at a resonance below 400MHz [7].
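As a quick numerical illustration of this figure of merit (using only the quoted dimensions r = 0.9 mm and c = 0.6 mm and the 390 MHz resonance from Table 1; everything else in the snippet is an assumption for illustration), the normalised electrical size 2(r + c)/λ0 can be evaluated directly from the resonant frequency:

```python
C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def electrical_size(r_m, c_m, f0_hz):
    """Normalised electrical size 2*(r + c)/lambda0 of a BC-SRR at resonance f0."""
    lambda0 = C0 / f0_hz
    return 2.0 * (r_m + c_m) / lambda0

# r = 0.9 mm, c = 0.6 mm, resonance ~390 MHz (Table 1, eps_r = 2.4, t = 0.5 um)
print(electrical_size(0.9e-3, 0.6e-3, 390e6))  # roughly 1/256, i.e. of order 1/300
```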
Results and discussion
A full-wave electromagnetic field analysis method is used in our simulation analysis. The commercial electromagnetic package used is MicroStripes® [8], which uses a 3D transmission line method (TLM) and takes as input a spatially discretised model of the object in order to determine the electromagnetic response of the model, in both time and frequency domains. A series of simulations has been conducted varying the thickness and dielectric constant of the insulator layer for a relatively small resonator (r=0.9mm, c=0.6mm, and g=0.2mm). The results are summarised and compared with analytical calculations [5] in Table 1. The simulation results are generally larger than the calculations since the theory doesn't take account of the small amount of capacitance in the split gaps. As illustrated in Fig. 1(c), several resonator prototypes have been fabricated, measured and reported in [7]. The techniques adopted are the standard PCB etching technique, and cleanroom photolithography techniques employing S1805 photoresist and electroplating. Fig. 2 gives the resonances of several prototypes with various thicknesses of insulator layers.
It can be seen in Fig. 2
Conclusions
Numerical and theoretical calculations on BC-SRR structures with the insulator layer thickness below microns show a way to reduce the resonant frequencies, whilst increasing the dielectric constant of the layer will further lower the resonances. The reported techniques in fabricating BC-SRR with insulator layer thicknesses below 1um are not satisfactory, although a resonance below 400MHz has been achieved with the layer thickness of 0.5um and the normalised electrical size around 1/300. Different fabrication techniques with different insulator materials (e.g., nanolithography and pure PMMA [9]) can make the layer smoother and more uniform for the fabrication of 1D and 2D arrays of BC-SRR. And materials with higher dielectric constants (e.g., TiO 2 and LiNiO 3 ) are promising in making resonators at even lower resonances.
Fig. 1: A model of BC-SRR. (a) Marqués's BC-SRR model with thick substrate (~mm); (b) BC-SRR model with a very thin insulator layer (~um) for simulations; (c) plan view of BC-SRR (the gap part) made in the clean room; (d) cross-section of BC-SRR.
Table 1: Comparison between simulated and theoretically predicted resonant frequencies in MHz (BC-SRR dimensions: r = 0.9 mm, c = 0.6 mm, g = 0.2 mm, h = 30 um, d = 1.5 mm).

t (um)   Method        εr = 2.4   εr = 4   εr = 5   εr = 6
0.5      Theory        390.0      302.0    -        -
0.5      Simulation    391.6      322.1    N/A      N/A
5        Theory        1227       951.0    851.0    777.0
5        Simulation    1364       1028     920.0    873.1
10       Theory        1726       1340     -        -
10       Simulation    1902       1465     N/A      N/A
Extremely low frequency plasmons in metallic mesostructures. J B Pendry, A J Holden, W J Stewart, I Youngs, Physical Review Letters. 76J. B. Pendry, A. J. Holden, W. J. Stewart, and I. Youngs, "Extremely low frequency plasmons in metallic mesostructures," Physical Review Letters, vol. 76, pp. 4773-6, 1996.
Magnetism from conductors and enhanced nonlinear phenomena. J B Pendry, A J Holden, D J Robbins, W J Stewart, IEEE Transactions on Microwave Theory and Techniques. 47J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena," IEEE Transactions on Microwave Theory and Techniques, vol. 47, pp. 2075-84, 1999.
Magnetic metamaterials at Telecommunication and visible frequencies. -C Enkrich, -M Wegener, -S Linden, -S Burger, -L Zschiedrich, -F Schmidt, Zhou-Jf, -T Koschny, Soukoulis-Cm , Physical Review Letters. 95Enkrich-C, Wegener-M, Linden-S, Burger-S, Zschiedrich-L, Schmidt-F, Zhou-Jf, Koschny-T, and Soukoulis-Cm, "Magnetic metamaterials at Telecommunication and visible frequencies," Physical Review Letters, vol. 95, pp. 203901/1-4, 2005.
Optimisation of metamaterials by Q factor. T Hao, C J Stevens, D J Edwards, Electronics Letters. 41T. Hao, C. J. Stevens, and D. J. Edwards, "Optimisation of metamaterials by Q factor," Electronics Letters, vol. 41, pp. 653-4, 2005.
Comparative analysis of edge-and broadside-coupled split ring resonators for metamaterial design -theory and experiments. R Marques, F Mesa, J Martel, F Medina, IEEE Transactions on Antennas and Propagation. 5110R. Marques, F. Mesa, J. Martel, and F. Medina, "Comparative analysis of edge-and broadside-coupled split ring resonators for metamaterial design -theory and experiments," IEEE Transactions on Antennas and Propagation, vol. 51(10) pt. 2, pp. 2572-81, 2003.
A realisation of resonators with small physical and electrical sizes. T Hao, C J Stevens, D J Edwards, Electronics Letters. submittedT. Hao, C. J. Stevens, and D. J. Edwards, "A realisation of resonators with small physical and electrical sizes," Electronics Letters (submitted).
Nanolithography on thin layers of PMMA using atomic force microscopy. -C Martin, -G Rius, -X Borrise, Perez-Murano-F , Nanotechnology. 16Martin-C, Rius-G, Borrise-X, and Perez-Murano-F, "Nanolithography on thin layers of PMMA using atomic force microscopy," Nanotechnology, vol. 16, pp. 1016-22, 2005.
| []
|
[
"On the spectral characterization of pineapple graphs",
"On the spectral characterization of pineapple graphs"
]
| [
"Hatice Topcu [email protected] ",
"Sezer Sorgun ",
"Willem H Haemers [email protected] "
]
| []
| []
| The pineapple graph K q p is obtained by appending q pendant edges to a vertex of a complete graph Kp (q ≥ 1, p ≥ 3). Zhang and Zhang [Some graphs determined by their spectra, Linear Algebra and its Applications, 431(2009) 1443-1454] claim that the pineapple graphs are determined by their adjacency spectrum. We show that their claim is false by constructing graphs which are cospectral and non-isomorphic with K q p for every p ≥ 4 and various values of q. In addition we prove that the claim is true if q = 2, and refer to the literature for q = 1, p = 3, and (p, q) = (4, 3). | 10.1016/j.laa.2016.06.018 | [
"https://arxiv.org/pdf/1511.08674v3.pdf"
]
| 119,178,571 | 1511.08674 | f7b536d145be434c8a3dc366e9c16c7114d79b0b |
On the spectral characterization of pineapple graphs
10 Jun 2016
Hatice Topcu [email protected]
Sezer Sorgun
Willem H Haemers [email protected]
On the spectral characterization of pineapple graphs
Keywords: cospectral graphs, spectral characterization. AMS subject classification: 05C50.
The pineapple graph K q p is obtained by appending q pendant edges to a vertex of a complete graph Kp (q ≥ 1, p ≥ 3). Zhang and Zhang [Some graphs determined by their spectra, Linear Algebra and its Applications, 431(2009) 1443-1454] claim that the pineapple graphs are determined by their adjacency spectrum. We show that their claim is false by constructing graphs which are cospectral and non-isomorphic with K q p for every p ≥ 4 and various values of q. In addition we prove that the claim is true if q = 2, and refer to the literature for q = 1, p = 3, and (p, q) = (4, 3).
Introduction
The pineapple graph $K_p^q$ is the coalescence of the complete graph $K_p$ (at any vertex) with the star $K_{1,q}$ at the vertex of degree $q$. Thus $K_p^q$ can be obtained from $K_p$ by appending $q$ pendant edges to a vertex of $K_p$. Clearly $K_p^q$ has $n = p + q$ vertices, $\binom{p}{2} + q$ edges and $\binom{p}{3}$ triangles. In order to exclude the complete graphs and the stars, we assume $p \geq 3$ and $q \geq 1$. See Figure 1 for a drawing of $K_4^4$. Alternatively, $K_p^q$ can be defined by its adjacency matrix
$$A = \begin{pmatrix} 0 & \mathbf{1}^{\top} & \mathbf{1}^{\top} \\ \mathbf{1} & J_{p-1} - I & O \\ \mathbf{1} & O & O \end{pmatrix},$$
where 1 is the all-ones vector (of appropriate size), and J ℓ denotes the ℓ × ℓ all-ones matrix.
Proposition 1.1. The characteristic polynomial $p(x) = \det(xI - A)$ of the pineapple graph $K_p^q$ equals
$$p(x) = x^{q-1}(x+1)^{p-2}\left(x^3 - (p-2)x^2 - (p+q-1)x + q(p-2)\right).$$
Proof: The adjacency matrix $A$ has $q$ identical rows, so rank$(A)$ is at most $p+1$, and therefore $p(x)$ has a factor $x^{q-1}$. Similarly, $A + I$ has $p-1$ identical rows and so $(x+1)^{p-2}$ is another factor of $p(x)$. The given partition of $A$ is equitable with quotient matrix
$$Q = \begin{pmatrix} 0 & p-1 & q \\ 1 & p-2 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$
(this means that each block of $A$ has constant row sums, which are equal to the corresponding entry of $Q$). The characteristic polynomial of $Q$ equals $q(x) = \det(xI - Q) = x^3 - (p-2)x^2 - (p+q-1)x + q(p-2)$, and it is well known (see for example [3], or [1]) that $q(x)$ is a divisor of $p(x)$. ✷
In this paper we deal with the question whether $K_p^q$ is the only graph with characteristic polynomial $p(x)$. In other words, is $K_p^q$ determined by its spectrum? Note that the complete graph $K_n$ is determined by its spectrum, but for the star $K_{1,n-1}$ this is only the case when $n = 2$, or $n-1$ is a prime. In [7] it is stated that every pineapple graph is determined by its spectrum. The presented proof, however, is incorrect. Even worse, the result is false. In the next section we shall construct graphs with the same spectrum as $K_p^q$ for every $p \geq 4$ and several values of $q$.
When $q = 1$, the pineapple graph $K_p^1$ can be obtained from the complete graph $K_{p+1}$ by deleting the edges of the complete bipartite graph $K_{1,p-1}$. Graphs constructed in this way are known to be determined by their spectra, see [2].
Zhang and Zhang [7] proved (correctly this time) that the graph obtained by adding $q$ pendant edges to a vertex of an odd circuit is determined by the spectrum of the adjacency matrix. When the odd circuit is a triangle we obtain that $K_3^q$ is determined by its spectrum. Godsil and McKay [6] generated by computer all pairs of non-isomorphic cospectral graphs with seven vertices. Since $K_4^3$ is not in their list, it is determined by its spectrum. In Section 3 we prove that the spectrum determines $K_p^q$ when $q = 2$. The proof uses the classification of graphs with least eigenvalue greater than $-2$. Graphs with the same spectrum are called cospectral. An important property of a pair of cospectral graphs is that they have the same number of closed walks of any given length. In particular, they have the same number of vertices, edges and triangles.
For a graph $G$ with $n$ vertices the eigenvalues are denoted by $\lambda_1(G) \geq \cdots \geq \lambda_n(G)$. If $H$ is an induced subgraph of $G$ with $m$ vertices, then the eigenvalues of $H$ interlace those of $G$, which means that $\lambda_i(G) \geq \lambda_i(H) \geq \lambda_{n-m+i}(G)$ for $i = 1, \ldots, m$. For these and other results on graph spectra we refer to [3] or [1].
Graphs cospectral with the pineapple graphs
Proposition 2.1. Let $G$ be the graph of order $3k$ ($k \geq 2$) with adjacency matrix
$$B = \begin{pmatrix} O & J_k & J_k \\ J_k & J_k - I & O \\ J_k & O & J_k - I \end{pmatrix}.$$
Then the characteristic polynomial of $G$ is equal to
$$x^{k-1}(x+1)^{2k-2}(x-k+1)\left(x^2 - (k-1)x - 2k^2\right).$$
Proof: Similar to the proof of Proposition 1.1, we find that $B$ has at least $k-1$ times the eigenvalue $0$, and at least $2k-2$ times the eigenvalue $-1$. The remaining three eigenvalues are the roots of the characteristic polynomial $q(x)$ of the quotient matrix with respect to the given partition. It is straightforward that
$$q(x) = (x-k+1)\left(x^2 - (k-1)x - 2k^2\right). \qquad ✷$$
By Proposition 1.1, we have that the characteristic polynomial of $K_{2k}^{k^2}$ equals
$$x^{k^2-1}(x+1)^{2k-2}(x-k+1)\left(x^2 - (k-1)x - 2k^2\right).$$
So, if we add $k(k-1)$ isolated vertices to $G$ we obtain a graph with the same characteristic polynomial, and therefore with the same spectrum, as $K_{2k}^{k^2}$. For $k = 2$, the two cospectral graphs are drawn in Figure 1.
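For $k = 2$ the construction is also easy to check numerically. The sketch below (our own illustration) compares the spectra of $K_4^4$ and of the graph of Proposition 2.1 augmented with two isolated vertices, and compares their degree sequences to confirm that the two graphs are not isomorphic.

```python
# Numerical check of the k = 2 instance of Proposition 2.1 (illustrative sketch).
import numpy as np

k = 2
O, J, I = np.zeros((k, k)), np.ones((k, k)), np.eye(k)
B = np.block([[O, J, J],
              [J, J - I, O],
              [J, O, J - I]])
# G plus k(k-1) isolated vertices (8 vertices in total for k = 2)
n_tot = 3 * k + k * (k - 1)
A1 = np.zeros((n_tot, n_tot))
A1[:3 * k, :3 * k] = B

# Pineapple graph K_{2k}^{k^2}: here K_4^4 on 8 vertices
p, q = 2 * k, k * k
A2 = np.zeros((p + q, p + q))
A2[:p, :p] = np.ones((p, p)) - np.eye(p)   # the clique K_p
A2[0, p:] = A2[p:, 0] = 1                  # q pendant edges at vertex 0

eig1 = np.sort(np.linalg.eigvalsh(A1))
eig2 = np.sort(np.linalg.eigvalsh(A2))
print("cospectral:", np.allclose(eig1, eig2))
print("degree sequences:", sorted(A1.sum(axis=0)), "vs", sorted(A2.sum(axis=0)))
```

The eigenvalues agree to machine precision, while the degree sequences differ, so the two graphs are cospectral but non-isomorphic.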
In the next result we obtain more graphs which are non-isomorphic but cospectral with the pineapple graph. It makes use of the graph $K_n \setminus K_m$ ($m < n$), which can be obtained from the complete graph $K_n$ by deleting the edges of a subgraph $K_m$ (in other words, $K_n \setminus K_m$ is the complete multipartite graph $K_{m,1,\ldots,1}$ with $n$ vertices). The disjoint union of two graphs $G$ and $H$ is denoted by $G + H$.
Proposition 2.2. Let $k \geq 2$ and $p$ be integers such that $r = k(k-1)/(p-k-1)$ is a positive integer. Then $K_{p+r} \setminus K_{k+r} + K_k + k(k-2)K_1$ and $K_p^{r(p-k)}$ are cospectral. The common characteristic polynomial equals
$$x^{r(p-k)-1}(x+1)^{p-2}(x-k+1)\left(x^2 - (p-k-1)x - (k+r)(p-k)\right).$$
Proof: It is well known (see for example [2]) and easily proved that the characteristic polynomial of $K_n \setminus K_m$ is equal to $x^{m-1}(x+1)^{n-m-1}\left(x^2 - (n-m-1)x - m(n-m)\right)$. From this and Proposition 1.1 we find that $K_{p+r} \setminus K_{k+r} + K_k + k(k-2)K_1$ and $K_p^{r(p-k)}$ both have the characteristic polynomial given above. ✷
For fixed $k \geq 2$ the above condition is fulfilled for several values of $p$. Every $k \geq 6$ gives at least eight values of $p$ for which the condition is satisfied. In particular it follows that $K_p^q$ is not determined by its spectrum when $q = 2(p-2)(p-3)$, $p \geq 4$, and when $q = 3(p-3)(p-4)/2$, $p \geq 5$ (including $K_5^3$). Moreover, $p = 2k \geq 4$, $q = k^2$ satisfies the requirements of both propositions above. Thus we have: Corollary 2.3. If $p$ is even, $p \geq 4$ and $q = (p/2)^2$, then there exist at least three non-isomorphic graphs with the spectrum of $K_p^q$.
3 The spectral characterization of $K_p^2$
The main tool in this section is the classification of graphs with least eigenvalue strictly greater than $-2$, due to Doob and Cvetković [5]. Below we present this classification. We inserted results from [4] that relate the different cases with the so-called discriminant $d_G$ of a graph $G$, defined by $d_G = |p(-2)|$, where $p(x)$ is the characteristic polynomial of $G$. The classification uses the not so well-known notion of a generalized line graph of type $(1, 0, \ldots, 0)$, so we will recall it here. Suppose $G$ is a graph, then the line graph $L(G)$ is the graph whose vertex set consists of the edges of $G$, where two edges are adjacent in $L(G)$ whenever they intersect in $G$. Assume the vertices of $G$ are labeled $1, 2, \ldots, n$, then the generalized line graph of $G$ of type $(1, 0, \ldots, 0)$ consists of $L(G)$ extended with two nonadjacent new vertices which are adjacent to all vertices of $L(G)$ that correspond to an edge of $G$ incident with the vertex with label 1. For example, $K_p^2$ is the generalized line graph of $K_{1,p}$ of type $(1, 0, \ldots, 0)$, provided a vertex of degree 1 has label 1.
Proof: Suppose that $G$ is a graph cospectral with $K_p^2$. Then $G$ has $p+2$ vertices, $\binom{p}{2} + 2$ edges and $\binom{p}{3}$ triangles, and the characteristic polynomial $p(x)$ of $G$ is equal to
$$x(x+1)^{p-2}\left(x^3 - (p-2)x^2 - (p+1)x + 2(p-2)\right).$$
From this it follows that $\lambda_2(G) < 1$ and $\lambda_{p+2}(G) > -2$.
Suppose $G$ is disconnected. If there exist two components with at least one edge, then $2K_2$ is an induced subgraph of $G$. Eigenvalue interlacing gives $1 = \lambda_2(2K_2) < \lambda_2(G) < 1$, a contradiction. So only one component of $G$ has edges. The multiplicity of the eigenvalue 0 equals 1, therefore $G$ has two components, one of which is an isolated vertex. The larger component $G'$ has characteristic polynomial $p(x)/x$, so $d_{G'} = |p(-2)/2| = 2$. Hence, by Lemma 3.1, $G'$ has seven vertices. We know that $G'$ has 17 edges and 20 triangles. One straightforwardly checks that there are exactly ten connected graphs on seven vertices with 17 edges, and none has 20 triangles. This proves that $G$ is connected and $d_G = p(-2) = 4$. So we are dealing with case (iv) or (v) of Lemma 3.1.
In case (iv), $G$ is the line graph of a unicyclic graph $H$ with $p+2$ vertices. Let $v$ be a vertex of $H$ of maximum degree $k$ and put $\ell = p + 2 - k$. When $v$ is not in a triangle of $H$, every edge not incident with $v$ is incident with at most one neighbor of $v$. Therefore there are at least $\ell(k-1)$ pairs of disjoint edges in $H$. If $v$ is in a triangle $\{u, v, w\}$ then the edge $\{u, w\}$ is incident with $u$ and $w$ and disjoint from $k-2$ edges through $v$. So we obtain at least $\ell(k-1) - 1$ pairs of disjoint edges in $H$. Disjoint edges in $H$ correspond to pairs of nonadjacent vertices in $G$. This implies that $G$ has at most $\binom{p+2}{2} - \ell(k-1) + 1$ edges. Thus we find $\binom{p}{2} + 2 \leq \binom{p+2}{2} - \ell(k-1) + 1$, which leads to $(\ell-2)(k-3) \leq 2$. This implies that $k \leq 3$, $k \geq p$, or $(p, k) \in \{(5,4), (6,4), (6,5)\}$.
If $k \leq 3$, then $H$ has at most $(p+2)/2$ vertices of degree 3 (indeed, the average degree equals 2, so the number of vertices of degree 1 equals the number of vertices of degree 3). Therefore $G$ has at most $1 + (p+2)/2$ triangles, and therefore $\binom{p}{3} \leq 1 + (p+2)/2$, which gives $p \leq 4$. The case $p = 3$ is solved in [7], and when $p = 4$, $G$ has 8 edges and 4 triangles and no $K_4$ (since $H$ has maximum degree 3), which is impossible.
If $k = p+1$, then $H = K_3^{p-1}$, hence $G$ has $\binom{p+1}{2} + 2$ edges, which is false. If $k = p$, then there are just two ways to create an odd cycle in $H$ by adding two edges. In both cases we obtain $K_3^{p-2}$ with one pendant edge attached to a vertex of degree at most 2, which leads to more than $\binom{p}{3}$ triangles in $G$. Suppose $(p, k) \in \{(5,4), (6,4), (6,5)\}$. We get the maximum possible number of triangles in $G$ if $H$ consists of a triangle with $k-2$ pendant edges at one vertex of the triangle and $p-k+1$ pendant edges at another vertex. In all three cases $G$ has fewer triangles than $K_p^2$. So we can conclude that there are no graphs cospectral with $K_p^2$ in case (iv).
Finally, we consider case (v) of Lemma 3.1. Then $G$ is a generalized line graph of a tree $T$ with $p+1$ vertices and $p$ edges, of type $(1, 0, \ldots, 0)$. Let $H$ be the line graph of $T$, let $v$ be a vertex of $T$ with maximum degree $k$, and define $\ell = p + 2 - k$. Then $H$ has at most $\binom{k}{2} + \binom{\ell-1}{2}$ edges, and $G$ has at most $\binom{k}{2} + \binom{\ell-1}{2} + 2k$ edges, hence $\binom{k}{2} + \binom{\ell-1}{2} + 2k \geq \binom{p}{2} + 2$, which leads to $(\ell-4)(k-1) \leq 0$. So $p = k+2$, $p = k+1$, or $p = k$.
If $p = k+2$, then equality holds in the above inequality, which means the two edges of $T$ which are not incident with $v$ intersect. This implies that $H$ is the coalescence of $K_k$ and a triangle, or the coalescence of $K_k$ with the path $P_3$ at a pendant vertex of $P_3$. This leads to seven possible generalized line graphs of type $(1, 0, \ldots, 0)$ (depending on which vertex has label 1). None of these has $\binom{p}{3}$ triangles. If $p = k+1$, then $H$ consists of the complete graph $K_k$ with one pendant edge. Now there are only three possible generalized line graphs of the required type, and none has $\binom{p}{3}$ triangles.
If $p = k$, then $T = K_{1,k}$, and $H = K_k$. There are two generalized line graphs possible, but only $K_p^2$ has $\binom{p}{3}$ triangles. So $G = K_p^2$. ✷
Concluding remarks
For q ≥ 3, the smallest eigenvalue of K q p is less than −2, so the proof of Theorem 3.2 does not generalize to larger q. Moreover, since K 4 4 and K 3 5 are not determined by their spectrum, it will already be difficult to settle the spectral characterization of K q p when p = 4 and when q = 3.
The graphs cospectral with K q p presented in Propositions 2.1 and 2.2 have an integral eigenvalue k − 1. So it is conceivable that the pineapple graphs with three nonintegral eigenvalues are determined by their spectrum. For such pineapple graphs the three nonintegral eigenvalues belong to one component, so the graph consists of one large component and a number of isolated vertices. But we doubt if this observation is useful, since the examples of Proposition 2.1 also have this structure.
This work is supported by TUBITAK (the scientific and technological research council of Turkey) 2214-A doctorate research fellowship program.
Figure 1: Two graphs with characteristic polynomial $x^3(x+1)^2(x-1)(x^2-x-8)$.
Lemma 3.1. Let $G$ be a connected graph with least eigenvalue greater than $-2$. Then one of the following statements holds:
(i) $G$ has eight vertices, and $d_G = 1$.
(ii) $G$ has seven vertices, and $d_G = 2$.
(iii) $G$ has six vertices, and $d_G = 3$.
(iv) $G$ is the line graph of an unicyclic graph with a cycle of odd length, and $d_G = 4$.
(v) $G$ is a generalized line graph of a tree of type $(1, 0, \ldots, 0)$, and $d_G = 4$.
(vi) $G$ is the line graph of a tree with $n \geq 5$ vertices, and $d_G = n$.
Theorem 3.2. The pineapple graph $K_p^2$ is determined by the spectrum of its adjacency matrix.
[1] A. E. Brouwer, W. H. Haemers, Spectra of Graphs, Springer Universitext, 2012.
[2] M. Camara, W. H. Haemers, Spectral characterizations of almost complete graphs, Discrete Appl. Math. 176 (2014), 19-23.
[3] D. M. Cvetković, M. Doob, H. Sachs, Spectra of Graphs, third edition, Johann Ambrosius Barth Verlag, 1995 (first edition: Deutscher Verlag der Wissenschaften, Berlin, and Academic Press, New York, 1980).
[4] D. M. Cvetković, M. Lepović, Cospectral graphs with least eigenvalue at least −2, Publ. Inst. Math. (Belgrad) 78(92) (2005), 51-63.
[5] M. Doob, D. M. Cvetković, On spectral characterizations and embeddings of graphs, Linear Algebra and its Applications 27 (1979), 17-26.
[6] C. D. Godsil, B. D. McKay, Constructing cospectral graphs, Aequationes Math. 25 (1982), 257-268.
[7] X. Zhang, H. Zhang, Some graphs determined by their spectra, Linear Algebra and its Applications 431 (2009), 1443-1454.
| []
|
[
"GENERALIZATION OF THE WIENER-IKEHARA THEOREM",
"GENERALIZATION OF THE WIENER-IKEHARA THEOREM"
]
| [
"Gregory Debruyne ",
"Jasson Vindas "
]
| []
| []
| We study the Wiener-Ikehara theorem under the so-called log-linearly slowly decreasing condition. Moreover, we clarify the connection between two different hypotheses on the Laplace transform occurring in exact forms of the Wiener-Ikehara theorem, that is, in "if and only if" versions of this theorem.2010 Mathematics Subject Classification. 11M45, 40E05. | 10.1215/ijm/1499760025 | [
"https://arxiv.org/pdf/1611.09765v2.pdf"
]
| 119,178,287 | 1611.09765 | c6599d88e4ad87ae001f66c676ead42758fa3e83 |
GENERALIZATION OF THE WIENER-IKEHARA THEOREM
30 Jan 2017
Gregory Debruyne
Jasson Vindas
GENERALIZATION OF THE WIENER-IKEHARA THEOREM
30 Jan 2017
We study the Wiener-Ikehara theorem under the so-called log-linearly slowly decreasing condition. Moreover, we clarify the connection between two different hypotheses on the Laplace transform occurring in exact forms of the Wiener-Ikehara theorem, that is, in "if and only if" versions of this theorem.2010 Mathematics Subject Classification. 11M45, 40E05.
Introduction
The Wiener-Ikehara theorem plays a central role in Tauberian theory [12]. Since its publication [10,19], there have been numerous applications and generalizations of this theorem, see, e.g., [1,5,6,9,13,15,20].
Recently, Zhang has relaxed the non-decreasing Tauberian condition in the Wiener-Ikehara theorem to so-called log-linear slow decrease. Following Zhang, we shall call a function f linearly slowly decreasing if for each ε > 0 there is a > 1 such that
$$\liminf_{x\to\infty} \inf_{y\in[x,ax]} \frac{f(y)-f(x)}{x} \geq -\varepsilon,$$
and we call a function $S$ log-linearly slowly decreasing if $S(\log x)$ is linearly slowly decreasing, i.e., if for each $\varepsilon > 0$ there exist $\delta > 0$ and $x_0$ such that
$$\frac{S(x+h)-S(x)}{e^{x}} \geq -\varepsilon, \quad \text{for } 0 \leq h \leq \delta \text{ and } x \geq x_0. \tag{1.1}$$
Using the latter condition, Zhang was able to obtain an exact form of the Wiener-Ikehara theorem. His theorem 1 reads as follows,
Note that the hypotheses (1.3) and (1.4) in Zhang's result cover as particular instances the cases when $\mathcal{L}\{S; s\} - a/(s-1)$ has an analytic or even $L^1_{\mathrm{loc}}$-extension to $\Re e\, s = 1$, as follows from the Riemann-Lebesgue lemma.
About a decade ago, Korevaar [13] also obtained an exact form of the Wiener-Ikehara theorem for non-decreasing functions. His exact hypothesis on the Laplace transform was the so-called local pseudofunction boundary behavior. The authors have recently established [5] local pseudofunction behavior as a minimal boundary assumption in other complex Tauberian theorems for Laplace transforms. It should be pointed out that Tauberian theorems with mild boundary hypotheses have relevant applications in the theory of Beurling generalized numbers (cf. [4,7,8,17,20]); in fact, in that setting one must work with zeta functions whose boundary values typically display very low regularity properties.
In this article we show that local pseudofunction boundary behavior is also able to deliver an exact form of the Wiener-Ikehara theorem if one works with log-linear slow decrease. Moreover, we clarify the connection between local pseudofunction boundary behavior and the exact conditions of Zhang, giving a form of the Wiener-Ikehara theorem that contains both versions (Theorem 3.8).
We thank H. G. Diamond and W.-B. Zhang for useful discussions on the subject.
Pseudofunctions and pseudomeasures
We present in this section some background material on pseudofunctions and pseudomeasures.
We begin with Fourier transforms, which we shall interpret in the distributional sense. The standard Schwartz test function spaces of compactly supported smooth functions (on an open subset $U \subseteq \mathbb{R}$) and rapidly decreasing functions are denoted by $\mathcal{D}(U)$ and $\mathcal{S}(\mathbb{R})$, while $\mathcal{D}'(U)$ and $\mathcal{S}'(\mathbb{R})$ stand for their topological duals, the spaces of distributions and tempered distributions. The Fourier transform, normalized as $\hat{\varphi}(t) = \mathcal{F}\{\varphi; t\} = \int_{-\infty}^{\infty} e^{-itx}\varphi(x)\,dx$, is a topological automorphism on the Schwartz space $\mathcal{S}(\mathbb{R})$. One can then extend it to $\mathcal{S}'(\mathbb{R})$ via duality, namely, the Fourier transform of $f \in \mathcal{S}'(\mathbb{R})$ is the tempered distribution $\hat{f} \in \mathcal{S}'(\mathbb{R})$ determined by $\langle \hat{f}(t), \varphi(t)\rangle = \langle f(x), \hat{\varphi}(x)\rangle$, for each test function $\varphi \in \mathcal{S}(\mathbb{R})$. As usual, locally integrable functions are regarded as distributions via $\langle f(x), \varphi(x)\rangle = \int_{-\infty}^{\infty} f(x)\varphi(x)\,dx$.
Note that if $f \in \mathcal{S}'(\mathbb{R})$ has support in $[0, \infty)$, its Laplace transform $\mathcal{L}\{f; s\} = \langle f(u), e^{-su}\rangle$ is well-defined, analytic on $\Re e\, s > 0$, and one has $\lim_{\sigma\to 0^+} \mathcal{L}\{f; \sigma+it\} = \hat{f}(t)$ in the distributional sense. See the textbooks [3,18] for further details on distribution theory.
Pseudofunctions and pseudomeasures are special kinds of Schwartz distributions that arise in harmonic analysis [2,11] and are defined via the Fourier transform. A tempered distribution $f \in \mathcal{S}'(\mathbb{R})$ is called a (global) pseudomeasure if $\hat{f} \in L^{\infty}(\mathbb{R})$. If we additionally have $\lim_{|x|\to\infty}\hat{f}(x) = 0$, we call $f$ a (global) pseudofunction. We denote the spaces of pseudofunctions and pseudomeasures by $PF(\mathbb{R})$ and $PM(\mathbb{R})$, respectively.
We say that a distribution $g$ is a pseudofunction (pseudomeasure) at $t_0 \in \mathbb{R}$ if the point possesses an open neighborhood where $g$ coincides with a pseudofunction (pseudomeasure). We then say that $g \in \mathcal{D}'(U)$ is a local pseudofunction (local pseudomeasure) on an open set $U \subseteq \mathbb{R}$ if $g$ is a pseudofunction (pseudomeasure) at every $t_0 \in U$; we write $g \in PF_{\mathrm{loc}}(U)$ ($g \in PM_{\mathrm{loc}}(U)$). Using a partition of unity, one easily checks that $g \in PF_{\mathrm{loc}}(U)$ if and only if $\varphi g \in PF(\mathbb{R})$ for each $\varphi \in \mathcal{D}(U)$, or, which amounts to the same, it satisfies [13]
$$\langle g(t), e^{iht}\varphi(t)\rangle = o(1), \quad \text{as } |h| \to \infty, \text{ for each } \varphi \in \mathcal{D}(U). \tag{2.1}$$
The property (2.1) can be regarded as a generalized Riemann-Lebesgue lemma. In particular, $L^1_{\mathrm{loc}}(U) \subset PF_{\mathrm{loc}}(U)$. Likewise, if we replace $o(1)$ by $O(1)$ in (2.1), namely,
$$\langle g(t), e^{iht}\varphi(t)\rangle = O(1), \quad \text{as } |h| \to \infty, \tag{2.2}$$
we obtain a characterization of local pseudomeasures. Hence, any Radon measure on $U$ is an instance of a local pseudomeasure. We mention that smooth functions are multipliers for local pseudofunctions and pseudomeasures, as follows from (2.1) and (2.2). Let $G(s)$ be analytic on the half-plane $\Re e\, s > \alpha$. We say that $G$ has local pseudofunction (local pseudomeasure) boundary behavior on the boundary open set $\alpha + iU$ if there is $g \in PF_{\mathrm{loc}}(U)$ ($g \in PM_{\mathrm{loc}}(U)$) such that
$$\lim_{\sigma\to\alpha^+} \int_{-\infty}^{\infty} G(\sigma+it)\varphi(t)\,dt = \langle g(t), \varphi(t)\rangle, \quad \text{for each } \varphi \in \mathcal{D}(U). \tag{2.3}$$
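A quick numerical illustration of the generalized Riemann-Lebesgue property (2.1): for an unbounded but locally integrable density $g$ and a smooth compactly supported bump $\varphi$ (both chosen by us purely for illustration), the pairings $\langle g, e^{iht}\varphi\rangle$ decay as $h$ grows.

```python
# Numerical illustration of (2.1) for a locally integrable (non-smooth) density g.
# The choice of g and phi below is our own and serves only as an example.
import numpy as np

t = np.linspace(-0.999, 0.999, 400000)   # even number of symmetric nodes, so t = 0 is excluded
g = -np.log(np.abs(t))                   # unbounded near t = 0, but locally integrable
phi = np.exp(-1.0 / (1.0 - t**2))        # smooth bump, compactly supported in (-1, 1)

for h in [10, 100, 1000, 10000]:
    val = np.trapz(g * phi * np.exp(1j * h * t), t)
    print(f"h = {h:6d}:  |<g, e^{{iht}} phi>| = {abs(val):.3e}")
```

The printed values shrink roughly like $1/h$, as the Riemann-Lebesgue-type statement suggests.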
The meaning of having pseudofunction (pseudomeasure) boundary behavior at a boundary point α + it 0 should be clear. We emphasize that L 1 loc , continuous, or analytic extension are very special cases of local pseudofunction boundary behavior. Interestingly, if g ∈ D ′ (U) is the distributional boundary value of an analytic function, just having (2.1) ((2.2), resp.) as h → ∞ suffices to conclude that g ∈ P F loc (U) (g ∈ P M loc (U)), as shown by the following proposition.
Proposition 2.1. Suppose that g ∈ D ′ (U) is the boundary distribution on α + iU of an analytic function G on the half-plane ℜe s > α, that is, that (2.3) holds for every test function ϕ ∈ D(U). Then, for each ϕ ∈ D(U) and n ∈ N,
$$\langle g(t), e^{iht}\varphi(t)\rangle = O\!\left(\frac{1}{|h|^{n}}\right), \quad h \to -\infty.$$
In particular, g is a local pseudofunction (local pseudomeasure) on U if and only if (2.1) ( (2.2), resp.) holds as h → ∞ for each ϕ ∈ D(U).
Proof. Fix $\varphi \in \mathcal{D}(U)$ and let $V$ be an open neighborhood of $\operatorname{supp}\varphi$ with compact closure in $U$. Pick a distribution $f \in \mathcal{S}'(\mathbb{R})$ such that $\hat{f}$ has compact support and $\hat{f} = g$ on $V$. The Paley-Wiener-Schwartz theorem tells us that $f$ is an entire function with at most polynomial growth on the real axis, so, find $m > 0$ such that $f(x) = O(|x|^{m})$, $|x| \to \infty$. Let $f_{\pm}(x) = f(x)H(\pm x)$, where $H$ is the Heaviside function, i.e., the characteristic function of the interval $[0, \infty)$. Observe that [3] $\hat{f}_{\pm}(t) = \lim_{\sigma\to 0^+}\mathcal{L}\{f_{\pm}; \pm\sigma+it\}$, where the limit is taken in $\mathcal{S}'(\mathbb{R})$. We also have $g = \hat{f}_{-} + \hat{f}_{+}$ on $V$. Consider the analytic function, defined off the imaginary axis,
$$F(s) = \begin{cases} G(s+\alpha) - \mathcal{L}\{f_{+}; s\} & \text{if } \Re e\, s > 0, \\ \mathcal{L}\{f_{-}; s\} & \text{if } \Re e\, s < 0. \end{cases}$$
The function $F$ has zero distributional jump across the subset $iV$ of the imaginary axis, namely, $\lim_{\sigma\to 0^+}\bigl(F(\sigma+it) - F(-\sigma+it)\bigr) = 0$ in $\mathcal{D}'(V)$. The edge-of-the-wedge theorem [16, Thm. B] gives that $F$ has analytic continuation through $iV$. We then conclude that $\hat{f}_{-}$ must be a real analytic function on $V$. Integration by parts then yields
$$\langle \hat{f}_{-}(t), e^{iht}\varphi(t)\rangle = \int_{-\infty}^{\infty}\hat{f}_{-}(t)\varphi(t)e^{iht}\,dt = O_{n}\!\left(\frac{1}{|h|^{n}}\right), \quad |h| \to \infty.$$
On the other hand, as $h \to -\infty$,
$$\langle \hat{f}_{+}(t), e^{iht}\varphi(t)\rangle = \langle f_{+}(x), \hat{\varphi}(x-h)\rangle = \int_{0}^{\infty} f(x)\hat{\varphi}(x+|h|)\,dx \ll_{n,m} \int_{0}^{\infty}\frac{(x+1)^{m}}{(x+|h|)^{n+m+1}}\,dx \leq \frac{1}{|h|^{n}}\int_{0}^{\infty}\frac{du}{(u+1)^{n+1}},$$
because $\hat{\varphi}$ is rapidly decreasing.
Generalizations of the Wiener-Ikehara theorem
We begin our investigation with a boundedness result. We call a function $S$ log-linearly boundedly decreasing if there is $\delta > 0$ such that
$$\liminf_{x\to\infty}\inf_{h\in[0,\delta]}\frac{S(x+h)-S(x)}{e^{x}} > -\infty,$$
that is, if there are $\delta, x_0, M > 0$ such that
$$S(x+h) - S(x) \geq -Me^{x}, \quad \text{for } 0 \leq h \leq \delta \text{ and } x \geq x_0. \tag{3.1}$$
Functions defined on [0, ∞) are always tacitly extended to (−∞, 0) as 0 for x < 0.
Proposition 3.1. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$. Then,
$$S(x) = O(e^{x}), \quad x \to \infty, \tag{3.2}$$
if and only if $S$ is log-linearly boundedly decreasing and its Laplace transform
$$\mathcal{L}\{S; s\} = \int_{0}^{\infty} e^{-sx}S(x)\,dx \tag{3.3}$$
converges for $\Re e\, s > 1$ and admits pseudomeasure boundary behavior at the point $s = 1$.
Proof. Suppose (3.2) holds. It is obvious that $S$ must be log-linearly boundedly decreasing and that (3.3) is convergent for $\Re e\, s > 1$. Set $\Delta(x) = e^{-x}S(x)$ and decompose it as $\Delta = \Delta_1 + \Delta_2$, where $\Delta_2 \in L^{\infty}(\mathbb{R})$ and $\Delta_1$ is compactly supported. The boundary value of (3.3) on $\Re e\, s = 1$ is the Fourier transform of $\Delta$, that is, the distribution $\hat{\Delta}_1 + \hat{\Delta}_2$. By definition $\hat{\Delta}_2 \in PM(\mathbb{R})$, while $\hat{\Delta}_1 \in C^{\infty}(\mathbb{R}) \subset PF_{\mathrm{loc}}(\mathbb{R})$ because it is in fact the restriction of an entire function to the real line. So, actually $\hat{\Delta} \in PM_{\mathrm{loc}}(\mathbb{R})$.
Let us now prove that the conditions are sufficient for (3.2). Since changing a function on a finite interval does not violate the local pseudomeasure behavior of the Laplace transform, we may assume that (3.1) holds for all $x \geq 0$. Iterating the inequality (3.1), one finds that there is $C$ such that
$$S(u) - S(y) \geq -Ce^{u} \quad \text{for all } u \geq y \geq 0. \tag{3.4}$$
We may thus assume without loss of generality that $S$ is positive. In fact, if necessary, one may replace $S$ by $\tilde{S}(u) = S(u) + S(0) + Ce^{u}$, whose Laplace transform also admits local pseudomeasure boundary behavior at $s = 1$. We set again $\Delta(x) = e^{-x}S(x)$; its Laplace transform is $\mathcal{L}\{S; s+1\}$, so that $\mathcal{L}\{\Delta; s\}$ has pseudomeasure boundary behavior at $s = 0$. There are then a sufficiently small $\lambda > 0$ and a local pseudomeasure $g$ on $(-\lambda, \lambda)$ such that $\lim_{\sigma\to 0^+}\mathcal{L}\{\Delta; \sigma+it\} = g(t)$ in $\mathcal{D}'(-\lambda,\lambda)$. Let $\varphi$ be an arbitrary (non-identically zero) smooth function with support in $(-\lambda, \lambda)$ such that its Fourier transform $\hat{\varphi}$ is non-negative. By the monotone convergence theorem and the equality $\mathcal{L}\{\Delta; \sigma+it\} = \mathcal{F}\{\Delta(x)e^{-\sigma x}; t\}$ in $\mathcal{S}'(\mathbb{R})$,
$$\int_{0}^{\infty}\Delta(x)\hat{\varphi}(x-h)\,dx = \lim_{\sigma\to 0^+}\int_{0}^{\infty}\Delta(x)e^{-\sigma x}\hat{\varphi}(x-h)\,dx = \lim_{\sigma\to 0^+}\int_{-\infty}^{\infty}\mathcal{L}\{\Delta; \sigma+it\}e^{iht}\varphi(t)\,dt = \langle g(t), e^{iht}\varphi(t)\rangle = O(1),$$
as $h \to \infty$. Set now $B = \int_{0}^{\infty} e^{-x}\hat{\varphi}(x)\,dx > 0$. Appealing to (3.4) once again, we obtain
$$e^{-h}S(h) = \frac{1}{B}\int_{0}^{\infty} e^{-x-h}S(h)\hat{\varphi}(x)\,dx \leq \frac{1}{B}\int_{0}^{\infty} e^{-x-h}S(x+h)\hat{\varphi}(x)\,dx + \frac{C}{B}\int_{0}^{\infty}\hat{\varphi}(x)\,dx \leq \frac{1}{B}\int_{0}^{\infty}\Delta(x)\hat{\varphi}(x-h)\,dx + \frac{C}{B}\int_{0}^{\infty}\hat{\varphi}(x)\,dx = O(1).$$
If one reads the above proof carefully, one realizes that we do not have to ask the existence of λ > 0 such that
$$\langle g(t), e^{iht}\varphi(t)\rangle = O(1), \quad h \to \infty, \text{ for all } \varphi \in \mathcal{D}(-\lambda, \lambda),$$
where g is as in the proof of Proposition 3.1. Indeed, one only needs one appropriate test function in this relation. To generalize Proposition 3.1, we introduce the ensuing terminology. The Wiener algebra is A(R) = F (L 1 (R)). We write A c (R) for the subspace of A(R) consisting of compactly supported functions.
$\varphi \in A_c(\mathbb{R})$ with $\operatorname{supp}\varphi \subset U$.
Proof. Fix $\varphi \in A_c(\mathbb{R})$ with $\operatorname{supp}\varphi \subset U$. Let $f \in L^{\infty}(\mathbb{R})$ be such that $\lim_{\sigma\to\alpha^+} G(\sigma+it) = \hat{f}(t)$, distributionally, on a neighborhood $V \subset U$ of $\operatorname{supp}\varphi$. As in the proof of Proposition 2.1, one deduces from the edge-of-the-wedge theorem that $G_1(s) = G(s) - \mathcal{L}\{f_{+}; s-\alpha\}$ has analytic continuation through $\alpha + iV$, where $f_{+}(x) = f(x)H(x)$. Thus,
$$I_{\varphi}(h) = \int_{-\infty}^{\infty} G_1(\alpha+it)\varphi(t)e^{iht}\,dt + \lim_{\sigma\to 0^+}\int_{-\infty}^{\infty}\mathcal{L}\{f_{+}; \sigma+it\}\varphi(t)e^{iht}\,dt = o(1) + \lim_{\sigma\to 0^+}\int_{0}^{\infty} e^{-\sigma x}f_{+}(x)\hat{\varphi}(x-h)\,dx = o(1) + \int_{-\infty}^{\infty} f(x+h)\hat{\varphi}(x)\,dx,$$
which is $O(1)$.
In the pseudofunction case we may additionally require lim |x|→∞ f (x) = 0, so that I ϕ (h) = o(1).
Exactly the same argument given in the proof of Proposition 3.1 would work when pseudomeasure boundary behavior of $\mathcal{L}\{S; s\}$ at $s = 1$ is replaced by pseudomeasure boundary behavior on $\Re e\, s = 1$ with respect to a single $\varphi \in A_c(\mathbb{R}) \setminus \{0\}$ with non-negative Fourier transform (which implies $\varphi(0) \neq 0$), if one is able to justify the Parseval relation
$$\int_{-\infty}^{\infty}\Delta(x)e^{-\sigma x}\hat{\varphi}(x-h)\,dx = \int_{-\infty}^{\infty}\mathcal{L}\{\Delta; \sigma+it\}e^{iht}\varphi(t)\,dt.$$
But this holds in the $L^2$-sense as follows from the next simple lemma.
Proof. As in the proof of Proposition 3.1, we may assume that (3.4) holds and $S$ is positive. For fixed $\sigma > 1$,
$$0 < e^{-\sigma h}S(h) = \frac{\sigma}{1-e^{-\sigma}}\int_{h}^{h+1}\bigl(S(h)-S(x)\bigr)e^{-\sigma x}\,dx + o_{\sigma}(1) \leq \frac{\sigma C e^{-(\sigma-1)h}\bigl(1-e^{1-\sigma}\bigr)}{(\sigma-1)\bigl(1-e^{-\sigma}\bigr)} + o_{\sigma}(1) = o_{\sigma}(1), \quad h \to \infty.$$
The following alternative version of Proposition 3.1 should now be clear. Next, we proceed to extend the actual Wiener-Ikehara theorem. Proof. The direct implication is straightforward. Let us show the converse. We may assume again that S is positive. As before, we set ∆(x) = e −x S(x). Applying Proposition 3.1, we obtain ∆(x) = O(1), because 1/(s − 1) is actually a global pseudomeasure on ℜe s = 1. In particular, we now know that ∆ ∈ S ′ (R). Let H be the Heaviside function. Note that the Laplace transform of H is 1/s, ℜe s > 0. We then have that the Fourier transform of ∆−aH is the boundary value of L{S; s+1}−a/s on ℜes = 0, and thus a local pseudofunction on the whole real line; but this just means that for each φ ∈ F (D(R))
$$\langle \Delta(x) - aH(x), \phi(x-h)\rangle = \frac{1}{2\pi}\langle \hat{\Delta}(t) - a\hat{H}(t), \hat{\phi}(-t)e^{ith}\rangle = o(1), \quad h \to \infty,$$
i.e.,
$$\int_{-\infty}^{\infty}\Delta(x+h)\phi(x)\,dx = a\int_{-\infty}^{\infty}\phi(x)\,dx + o(1), \quad h \to \infty. \tag{3.7}$$
Since $\Delta$ is bounded for large arguments, its set of translates $\Delta(x+h)$ is weakly bounded in $\mathcal{S}'(\mathbb{R})$. Also, $\mathcal{F}(\mathcal{D}(\mathbb{R}))$ is dense in $\mathcal{S}(\mathbb{R})$. We can thus apply the Banach-Steinhaus theorem to conclude that (3.7) remains valid for all $\phi \in \mathcal{S}(\mathbb{R})$. Now, let $\varepsilon > 0$ and choose $\delta$ and $x_0$ such that (1.1) is fulfilled. Pick a non-negative test function $\phi \in \mathcal{D}(0,\delta)$ such that $\int_{0}^{\delta}\phi(x)\,dx = 1$. Then,
$$\Delta(h) = \int_{0}^{\delta}\Delta(h)\phi(x)\,dx \leq \varepsilon + \int_{0}^{\delta} e^{x}\Delta(x+h)\phi(x)\,dx \leq \varepsilon + e^{\delta}\int_{0}^{\delta}\Delta(x+h)\phi(x)\,dx = \varepsilon + e^{\delta}\bigl(a + o(1)\bigr), \quad h \geq x_0,$$
where we have used (3.7). Taking first the limit superior as h → ∞, and then letting δ → 0 + and ε → 0 + , we obtain lim sup h→∞ ∆(h) ≤ a. The reverse inequality with the limit inferior follows from a similar argument, but now choosing the test function φ with support in (−δ, 0). Hence, (3.5) has been established.
We can further generalize Theorem 3.6 by using the following simple consequence of Wiener's local division lemma.
Lemma 3.7. Let $\varphi_1, \varphi_2 \in L^1(\mathbb{R})$ be such that $\operatorname{supp}\hat{\varphi}_2$ is compact and $\hat{\varphi}_1 \neq 0$ on $\operatorname{supp}\hat{\varphi}_2$. Let $\tau \in L^{\infty}(\mathbb{R})$ satisfy $(\tau * \varphi_1)(h) = o(1)$; then $(\tau * \varphi_2)(h) = o(1)$.
Proof. By Wiener's division lemma [12, Chap. II, Thm. 7.3], there is $\psi \in L^1(\mathbb{R})$ such that $\hat{\psi} = \hat{\varphi}_2/\hat{\varphi}_1$, or $\psi * \varphi_1 = \varphi_2$. Since convolving an $o(1)$-function with an $L^1$-function remains $o(1)$, we obtain $(\tau * \varphi_2)(h) = ((\tau * \varphi_1) * \psi)(h) = o(1)$.
Theorem 3.8. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$ and let $\{\varphi_{\lambda}\}_{\lambda\in J}$ be a family of functions such that $\varphi_{\lambda} \in A_c(\mathbb{R})$ for each $\lambda \in J$ and the following property holds: For any $t \in \mathbb{R}$, there exists some $\lambda_t \in J$ such that $\varphi_{\lambda_t}(t) \neq 0$. Moreover, when $t = 0$, the Fourier transform of the corresponding $\varphi_{\lambda_0}$ is non-negative as well. Then, $S(x) \sim ae^{x}$ if and only if $S$ is log-linearly slowly decreasing, (3.3) holds, and the analytic function (3.6) has pseudofunction boundary behavior on $\Re e\, s = 1$ with respect to every $\varphi_{\lambda}$.
Proof. Once again the direct implication is straightforward, so we only prove the converse. By Corollary 3.5, it follows that ∆(x) := e −x S(x) = O(1). Modifying ∆ on a finite interval, we may assume that ∆ ∈ L ∞ (R). The usual calculations done above (cf. the proof of Proposition 3.1) show that
$$\int_{-\infty}^{\infty}\bigl(\Delta(x+h) - aH(x+h)\bigr)\hat{\varphi}_{\lambda}(x)\,dx = o(1), \quad h \to \infty, \text{ for each } \lambda \in J,$$
where again $H$ denotes the Heaviside function. (We may now apply dominated convergence to interchange limit and integral because $\Delta \in L^{\infty}(\mathbb{R})$.) Pick $t_0 \in \mathbb{R}$. Lemma 3.7 then ensures $\langle \hat{\Delta}(t) - a\hat{H}(t), \varphi(t)e^{iht}\rangle = \langle \Delta(x+h) - aH(x+h), \hat{\varphi}(x)\rangle = o(1)$ for all $\varphi \in \mathcal{D}(\mathbb{R})$ with support in a sufficiently small (but fixed) neighborhood of $t_0$. This shows that $\hat{\Delta} - a\hat{H} \in PF_{\mathrm{loc}}(\mathbb{R})$. Since this distribution is the boundary value of (3.6) on $\Re e\, s = 1$, Theorem 3.6 yields $S(x) \sim ae^{x}$.
Observe that Zhang's theorem (Theorem 1.1) follows at once from Theorem 3.8 upon setting $\varphi_{\lambda}(t) = \chi_{[-\lambda,\lambda]}(t)\,(1 - |t|/\lambda)$. Here one has $\hat{\varphi}_{\lambda}(x) = 4\sin^{2}(\lambda x/2)/(x^{2}\lambda)$. More generally, Corollary 3.9. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$ and let $\varphi \in A_c(\mathbb{R})$ be non-identically zero such that $\hat{\varphi}$ is non-negative. Then, $S(x) \sim ae^{x}$ if and only if $S$ is log-linearly slowly decreasing, (3.3) holds, and the analytic function $G(s) = \mathcal{L}\{S; s\} - a/(s-1)$ satisfies: There is $\lambda_0 > 0$ such that for each $\lambda \geq \lambda_0$
$$I_{\lambda}(h) = \lim_{\sigma\to 1^+}\int_{-\infty}^{\infty} G(\sigma+it)\,e^{iht}\,\varphi\!\left(\frac{t}{\lambda}\right)dt$$
exists for all sufficiently large h > h λ and lim h→∞ I λ (h) = 0.
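The Fourier-transform identity quoted above for the triangle weight, $\hat{\varphi}_{\lambda}(x) = 4\sin^{2}(\lambda x/2)/(\lambda x^{2})$, is easy to confirm numerically; the following sketch (our own check, with an arbitrary value of $\lambda$) compares a direct quadrature of $\int \varphi_{\lambda}(t)e^{-ixt}\,dt$ with the closed form.

```python
# Numerical check (our own, for illustration) of the quoted Fourier transform of the
# triangle function phi_lambda(t) = (1 - |t|/lambda) supported on [-lambda, lambda].
import numpy as np

lam = 3.0                                   # arbitrary illustrative value
t = np.linspace(-lam, lam, 200001)
phi = 1.0 - np.abs(t) / lam

for x in [0.7, 2.0, 5.3, 11.0]:
    lhs = np.trapz(phi * np.exp(-1j * x * t), t)        # hat(phi_lambda)(x), paper's convention
    rhs = 4.0 * np.sin(lam * x / 2.0)**2 / (x**2 * lam)
    print(f"x = {x:5.1f}:  quadrature = {lhs.real:+.6f},  closed form = {rhs:+.6f}")
```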
We conclude the article with two remarks.
Remark 3.10. Suppose that $S$ is of local bounded variation on $[0,\infty)$ so that $\mathcal{L}\{S; s\} = s^{-1}\mathcal{L}\{dS; s\} = s^{-1}\int_{0^{-}}^{\infty} e^{-sx}\,dS(x)$. Then, the pseudomeasure boundary behavior of $\mathcal{L}\{S; s\}$ at $s = 1$ in Proposition 3.1 becomes equivalent to that of $\mathcal{L}\{dS; s\}$ because the boundary value of $s$ is the invertible smooth function $1 + it$ and smooth functions are multipliers for local pseudomeasures (and pseudofunctions). Likewise, the local pseudofunction boundary behavior of (3.6) in Theorem 3.6 is equivalent to that of
$$\mathcal{L}\{dS; s\} - \frac{a}{s-1}. \tag{3.8}$$
On the other hand, we do not know whether the pseudomeasure (pseudofunction) boundary behavior of L{S; s} (of (3.6)) with respect to ϕ (with respect to every ϕ λ ) can be replaced by that of L{dS; s} (of (3.8)) in Corollary 3.5 (in Theorem 3.8). The same comment applies to Corollary 3.9.
Remark 3.11. Let $G(s)$ be analytic on the half-plane $\Re e\, s > \alpha$ and suppose it has pseudomeasure (pseudofunction) boundary behavior on $\Re e\, s = \alpha$ with respect to some $\varphi \in A_c(\mathbb{R})$. If $\varphi(t_0) = 0$, then $G$ does not necessarily have pseudomeasure (pseudofunction) boundary behavior at $\alpha + it_0$. For example, if $G$ has meromorphic continuation to a neighborhood of $\alpha + it_0$ with a pole of order say $n \geq 2$ at the point $\alpha + it_0$ and if $\varphi$ is such that $\varphi^{(j)}(t_0) = 0$ for $j = 0, 1, \ldots, n$ and is supported in a sufficiently small neighborhood of $t_0$, we would have that $\varphi(t)G(\alpha+it)$ is continuous and hence a pseudofunction, without $G(s)$ having itself pseudomeasure boundary behavior at $\alpha + it_0$. If $\varphi(t_0) \neq 0$, however, it is unclear to us whether $G$ should have local pseudomeasure (pseudofunction) boundary behavior at $\alpha + it_0$. It would be interesting to establish whether the latter is true or false. Observe this question is closely related to the one raised in Remark 3.10.
Theorem 1.1. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$ be log-linearly slowly decreasing. Assume that
$$\mathcal{L}\{S; s\} = \int_{0}^{\infty} e^{-sx}S(x)\,dx \tag{1.2}$$
is absolutely convergent for $\Re e\, s > 1$ and that there is a constant $a$ for which $G(s) = \mathcal{L}\{S; s\} - \frac{a}{s-1}$ satisfies: there is $\lambda_0 > 0$ such that for each $\lambda \geq \lambda_0$ the conditions (1.3) and (1.4) hold. Then
$$S(x) \sim ae^{x}. \tag{1.5}$$
Theorem 1.1 is exact in the sense that if (1.5) holds, then $S$ is log-linearly slowly decreasing and (1.2)-(1.4) hold as well.
Definition 3.2. An analytic function $G(s)$ on the half-plane $\Re e\, s > \alpha$ is said to have pseudomeasure boundary behavior (pseudofunction boundary behavior) on $\Re e\, s = \alpha$ with respect to $\varphi \in A_c(\mathbb{R})$ if there is $N > 0$ such that
$$I_{\varphi}(h) = \lim_{\sigma\to\alpha^+}\int_{-\infty}^{\infty} G(\sigma+it)\,e^{iht}\,\varphi(t)\,dt$$
exists for every $h \geq N$ and $I_{\varphi}(h) = O(1)$ ($I_{\varphi}(h) = o(1)$, resp.) as $h \to \infty$. Let us check that the notions from Definition 3.2 generalize those of local pseudomeasures and pseudofunctions.
Proposition 3.3. Let $G(s)$ be analytic on the half-plane $\Re e\, s > \alpha$ and have local pseudomeasure (local pseudofunction) boundary behavior on $\alpha + iU$. Then, $G$ has pseudomeasure (pseudofunction) boundary behavior on $\Re e\, s = \alpha$ with respect to every
Lemma 3.4. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$ be log-linearly boundedly decreasing with convergent Laplace transform for $\Re e\, s > 1$. Then, $S(x) = o(e^{\sigma x})$, $x \to \infty$, for each $\sigma > 1$.
Corollary 3.5. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$ and let $\varphi \in A_c(\mathbb{R})$ be non-identically zero and have non-negative Fourier transform. Then, (3.2) holds if and only if $S$ is log-linearly boundedly decreasing, (3.3) holds, and $\mathcal{L}\{S; s\}$ has pseudomeasure boundary behavior on $\Re e\, s = 1$ with respect to $\varphi$.
Theorem 3.6. Let $S \in L^1_{\mathrm{loc}}[0,\infty)$. Then,
$$S(x) \sim ae^{x} \tag{3.5}$$
holds if and only if $S$ is log-linearly slowly decreasing, (3.3) holds, and
$$\mathcal{L}\{S; s\} - \frac{a}{s-1} \tag{3.6}$$
admits local pseudofunction boundary behavior on the whole line $\Re e\, s = 1$.
More precisely, we first apply Lemma 3.4 and then modify S in a finite interval so that we may assume that ∆(x)e −σx belongs to L 2 (R) for each σ > 0. Clearly, ϕ ∈ L 2 (R) as well.
In the terminology of[14], this means that ∆ has the S-limit a at infinity.
[1] J. Aramaki, An extension of the Ikehara Tauberian theorem and its application, Acta Math. Hungar. 71 (1996), 297-326.
[2] J. J. Benedetto, Spectral synthesis, Academic Press, Inc., New York-London, 1975.
[3] H. Bremermann, Distributions, complex variables and Fourier transforms, Addison-Wesley, Reading, Massachusetts, 1965.
[4] G. Debruyne, J. Vindas, On PNT equivalences for Beurling numbers, Monatsh. Math., in press, doi:10.1007/s00605-016-0979-9.
[5] G. Debruyne, J. Vindas, Complex Tauberian theorems for Laplace transforms with local pseudofunction boundary behavior, J. Anal. Math., to appear (preprint: arXiv:1604.05069).
[6] H. Delange, Généralisation du théorème de Ikehara, Ann. Sci. Ecole Norm. Sup. 71 (1954), 213-242.
[7] H. G. Diamond, W.-B. Zhang, Chebyshev bounds for Beurling numbers, Acta Arith. 160 (2013), 143-157.
[8] H. G. Diamond, W.-B. Zhang, Beurling generalized numbers, Mathematical Surveys and Monographs series, American Mathematical Society, Providence, RI, 2016.
[9] S. W. Graham, J. D. Vaaler, A class of extremal functions for the Fourier transform, Trans. Amer. Math. Soc. 265 (1981), 283-302.
[10] S. Ikehara, An extension of Landau's theorem in the analytic theory of numbers, J. Math. and Phys. M.I.T. 10 (1931), 1-12.
[11] J.-P. Kahane, R. Salem, Ensembles parfaits et séries trigonométriques, second edition, Hermann, Paris, 1994.
[12] J. Korevaar, Tauberian theory. A century of developments, Grundlehren der Mathematischen Wissenschaften, 329, Springer-Verlag, Berlin, 2004.
[13] J. Korevaar, Distributional Wiener-Ikehara theorem and twin primes, Indag. Math. (N.S.) 16 (2005), 37-49.
[14] S. Pilipović, B. Stanković, J. Vindas, Asymptotic behavior of generalized functions, Series on Analysis, Applications and Computation, 5, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2012.
[15] Sz. Gy. Révész, A. de Roton, Generalization of the effective Wiener-Ikehara theorem, Int. J. Number Theory 9 (2013), 2091-2128.
[16] W. Rudin, Lectures on the edge-of-the-wedge theorem, Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics, No. 6, AMS, Providence, RI, 1971.
[17] J.-C. Schlage-Puchta, J. Vindas, The prime number theorem for Beurling's generalized numbers. New cases, Acta Arith. 153 (2012), 299-324.
[18] V. S. Vladimirov, Methods of the theory of generalized functions, Analytical Methods and Special Functions, 6, Taylor & Francis, London, 2002.
[19] N. Wiener, The Fourier integral and certain of its applications, reprint of the 1933 edition, Cambridge University Press, Cambridge, 1988.
[20] W.-B. Zhang, Wiener-Ikehara theorems and the Beurling generalized primes, Monatsh. Math. 174 (2014), 627-652.
G. Debruyne, Department of Mathematics, Ghent University, Krijgslaan 281, B 9000 Ghent, Belgium. E-mail address: [email protected]
J. Vindas, Department of Mathematics, Ghent University, Krijgslaan 281, B 9000 Ghent, Belgium. E-mail address: [email protected]
| []
|
[
"The massive end of the luminosity and stellar mass functions: Dependence on the fit to the light profile",
"The massive end of the luminosity and stellar mass functions: Dependence on the fit to the light profile"
]
| [
"M Bernardi \nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA\n",
"A Meert \nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA\n",
"R K Sheth \nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA\n\nThe Abdus Salam International Center for Theoretical Physics\nStrada Costiera 1134151TriesteItaly\n",
"V Vikram \nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA\n",
"M Huertas-Company \nGEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance\n",
"S Mei \nGEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance\n",
"F Shankar \nGEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance\n"
]
| [
"Department of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA",
"Department of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA",
"Department of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA",
"The Abdus Salam International Center for Theoretical Physics\nStrada Costiera 1134151TriesteItaly",
"Department of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPAUSA",
"GEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance",
"GEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance",
"GEPI\nObservatoire de Paris\nCNRS\nUniv. Paris Diderot\nPlace Jules Janssen92190MeudonFrance"
]
| [
"Mon. Not. R. Astron. Soc"
]
| In addition to the large systematic differences arising from assumptions about the stellar mass-to-light ratio, the massive end of the stellar mass function is rather sensitive to how one fits the light profiles of the most luminous galaxies. We quantify this by comparing the luminosity and stellar mass functions based on SDSS cmodel magnitudes, and PyMorph single-Sersic and Sersic-Exponential fits to the surface brightness profiles of galaxies in the SDSS. The PyMorph fits return more light, so that the predicted masses are larger than when cmodel magnitudes are used. As a result, the total stellar mass density at z ∼ 0.1 is about 1.2× larger than in our previous analysis of the SDSS. The differences are most pronounced at the massive end, where the measured number density of objects having M * ≥ 6 × 10 11 M ⊙ is ∼ 5× larger. Alternatively, at number densities of 10 −6 Mpc −3 , the limiting stellar mass is 2× larger. The differences with respect to fits by other authors, typically based on Petrosian-like magnitudes, are even more dramatic, although some of these differences are due to sky-subtraction problems, and are sometimes masked by large differences in the assumed M * /L (even after scaling to the same IMF). Our results impact studies of the growth and assembly of stellar mass in galaxies, and of the relation between stellar and halo mass, so we provide simple analytic fits to these new luminosity and stellar mass functions and quantify how they depend on morphology, as well as the binned counts in electronic format. While these allow one to quantify the differences which arise because of the assumed light profile, and we believe our Sersic-Exponential based results to be the most realistic of the models we have tested, we caution that which profile is the most appropriate at the high mass end is still debated. | 10.1093/mnras/stt1607 | [
"https://arxiv.org/pdf/1304.7778v2.pdf"
]
| 118,610,575 | 1304.7778 | 399419f37e27470163790dbab04821f11b009daf |
The massive end of the luminosity and stellar mass functions: Dependence on the fit to the light profile
May 2014
M Bernardi
Department of Physics and Astronomy
University of Pennsylvania
19104PhiladelphiaPAUSA
A Meert
Department of Physics and Astronomy
University of Pennsylvania
19104PhiladelphiaPAUSA
R K Sheth
Department of Physics and Astronomy
University of Pennsylvania
19104PhiladelphiaPAUSA
The Abdus Salam International Center for Theoretical Physics
Strada Costiera 1134151TriesteItaly
V Vikram
Department of Physics and Astronomy
University of Pennsylvania
19104PhiladelphiaPAUSA
M Huertas-Company
GEPI
Observatoire de Paris
CNRS
Univ. Paris Diderot
Place Jules Janssen92190MeudonFrance
S Mei
GEPI
Observatoire de Paris
CNRS
Univ. Paris Diderot
Place Jules Janssen92190MeudonFrance
F Shankar
GEPI
Observatoire de Paris
CNRS
Univ. Paris Diderot
Place Jules Janssen92190MeudonFrance
The massive end of the luminosity and stellar mass functions: Dependence on the fit to the light profile
Mon. Not. R. Astron. Soc
May 2014. Accepted; received; in original form. arXiv:1304.7778v2 [astro-ph.CO] 9 Sep 2013 (MN LaTeX style file v2.2). Keywords: galaxies: fundamental parameters - galaxies: luminosity function, mass function - galaxies: photometry
In addition to the large systematic differences arising from assumptions about the stellar mass-to-light ratio, the massive end of the stellar mass function is rather sensitive to how one fits the light profiles of the most luminous galaxies. We quantify this by comparing the luminosity and stellar mass functions based on SDSS cmodel magnitudes, and PyMorph single-Sersic and Sersic-Exponential fits to the surface brightness profiles of galaxies in the SDSS. The PyMorph fits return more light, so that the predicted masses are larger than when cmodel magnitudes are used. As a result, the total stellar mass density at z ∼ 0.1 is about 1.2× larger than in our previous analysis of the SDSS. The differences are most pronounced at the massive end, where the measured number density of objects having M * ≥ 6 × 10 11 M ⊙ is ∼ 5× larger. Alternatively, at number densities of 10 −6 Mpc −3 , the limiting stellar mass is 2× larger. The differences with respect to fits by other authors, typically based on Petrosian-like magnitudes, are even more dramatic, although some of these differences are due to sky-subtraction problems, and are sometimes masked by large differences in the assumed M * /L (even after scaling to the same IMF). Our results impact studies of the growth and assembly of stellar mass in galaxies, and of the relation between stellar and halo mass, so we provide simple analytic fits to these new luminosity and stellar mass functions and quantify how they depend on morphology, as well as the binned counts in electronic format. While these allow one to quantify the differences which arise because of the assumed light profile, and we believe our Sersic-Exponential based results to be the most realistic of the models we have tested, we caution that which profile is the most appropriate at the high mass end is still debated.
INTRODUCTION
The brightest, most massive galaxies have been the object of much study. Recent work has emphasized the importance of using a good parametrization of the abundance at the bright, massive end if one is interested in using Halo Model based abundance matching techniques, or extreme value statistics, to understand their origin (e.g. Paranjape & Sheth 2012). A few years ago Bernardi et al. (2010) noted that the most luminous galaxies were more abundant than expected from the most commonly used parametrizations of the luminosity function. They also pointed out that, when converted to ⋆ E-mail: [email protected] a stellar mass function, this mis-match was important for models which use the observed abundance and its evolution to constrain the issue of whether these objects were assembled via major or minor mergers. However, they also showed that the conversion from φ(L) to φ(M * ) is rather sensitive to the assumed stellar mass-to-light ratio, for which, as we show below, there is still no consensus. Bernardi et al. (2010) used luminosities estimated from the cmodel magnitudes output by the Sloan Digital Sky Survey (hereafter SDSS, Abazajian et al. 2009). These tended to return more light than the more commonly used estimates based on the Petrosian radius defined by the SDSS, especially for the brightest objects, although some of this difference was due to sky subtraction problems in the SDSS. Bernardi et al. (2010) applied a crude correction for this to the cmodel magnitudes, but not to the Petrosian magnitudes output by the SDSS pipelines, primarily because essentially all previous work with Petrosian magnitudes made no such correction.
The cmodel magnitudes are a poor-man's best guesstimate for the total light if the surface brightness distribution of the objects follows neither a pure exponential disk nor a deVaucouleur's profile (Bernardi et al. 2007). Recently, Meert et al. (2013a,b) have performed more careful Sersicbulge + exponential disk (+ sky) decompositions of these objects. These typically return even more light than the cmodel magnitudes (e.g. Bernardi et al. 2013), in part because of the improved treatment of the sky, but also because differences in the model which is fitted to the observed light profile matter.
The main purpose of the present note is to show how these differences impact estimates of the luminosity and stellar mass functions at the bright end. As one might expect, the effect is at least as dramatic as the choice of M * /L. Therefore, a related goal of the present work is to separate out the effect on φ(M * ) of how the luminosity was estimated from that of M * /L.
Section 2 describes our sample, shows the luminosity and stellar mass functions, quantifies how they depend on the fit to the light profile and provides simple fitting formulae which quantify our results as well as the binned counts in electronic format. While these results allow one to easily account for the dependence on the light profile (e.g. using Sersic instead of SDSS cmodel or Petrosian magnitudes), the question of which M * /L estimate is most appropriate is beyond the scope of this work, and deserves further study. For reasons described in Bernardi et al (2010), all our M * /L estimates assume a Chabrier (2003) IMF. In Section 3 we show that, even though a number of recent works have made this same choice for the IMF (Baldry et al. 2012;Moustakas et al. 2013), they still have M * /L values which are very different from ours (i.e., Bernardi et al. 2010), from one another, and from earlier work (Bell et al. 2003). That is to say, differences in M * /L arise even when the same IMF is assumed: this is not generally appreciated. In Section 4 we show how the luminosity and stellar mass functions depend on morphological type, where the type is determined by the Bayesian Automated Classification scheme of Huertas-Company et al. (2011). A final section summarizes.
When converting from apparent brightnesses to luminosities, we assume a spatially flat background cosmology dominated by a cosmological constant, with parameters (Ωm, ΩΛ) = (0.3, 0.7), and a Hubble constant at the present time of H0 = 70 km s −1 Mpc −1 .
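For concreteness, the apparent-to-absolute magnitude conversion under these parameters can be sketched as below (our own minimal illustration; the k- and evolution corrections applied in the actual analysis are omitted here, and the magnitude/redshift values are only examples).

```python
# Minimal sketch of the apparent -> absolute magnitude conversion under the
# cosmology quoted above (flat, Omega_m = 0.3, Omega_L = 0.7, H0 = 70 km/s/Mpc).
# k- and evolution corrections are omitted; values are purely illustrative.
import numpy as np
from scipy.integrate import quad

H0, Om, OL = 70.0, 0.3, 0.7
c = 299792.458                     # speed of light in km/s
D_H = c / H0                       # Hubble distance in Mpc

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def lum_dist(z):
    """Luminosity distance in Mpc for a spatially flat universe."""
    D_C, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * D_H * D_C

def abs_mag(m_app, z):
    """Absolute magnitude from apparent magnitude via the distance modulus."""
    return m_app - 5.0 * np.log10(lum_dist(z) * 1e6 / 10.0)

for m_app, z in [(17.7, 0.1), (14.5, 0.1)]:
    print(f"m = {m_app}, z = {z}:  M = {abs_mag(m_app, z):.2f}")
```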
LUMINOSITY AND STELLAR MASS FUNCTIONS
The sample
To provide a direct comparison with previous work, we have selected the same sample as Bernardi et al. (2010); i.e., about 260,000 SDSS galaxies having 14.5 ≤ mrPet ≤ 17.7. We obtained the Petrosian and cmodel estimates of the total light for each of these objects from the SDSS DR7
Figure 1. Difference between PyMorph Sersic fits and SDSS DR7 Petrosian, SDSS cmodel, PyMorph SerExp, and Sersic fits from Simard et al. (2011) (bottom to top), for galaxies in the sample selected by Bernardi et al. (2010). Petrosian magnitudes are always the faintest, whereas single Sersic-based magnitudes tend to be the brightest. Dotted lines around PyMorph (Ser)- Simard (Ser) show the 16th and 84th percentiles of the distribution; these are similar to the scatter around the median for the other curves.
database. These are known to suffer from sky-subtraction and crowded-field/masking problems (Bernardi et al. 2010;Meert et al. 2013a,b). In what follows, the cmodel magnitudes we use are crudely corrected for the SDSS sky subtraction problems as described in Bernardi et al. (2010). On the other hand, analogous corrections to the Petrosian magnitudes are rarely made, so, for ease of comparison with previous work, we apply no such correction here (we discuss this further in the context of Figure 1). We then ran PyMorph (Vikram et al. 2010;Meert et al. 2013a) on these objects. This is an algorithm which uses GALFIT (Peng et al. 2002) to fit seeing-convolved 2dimensional Sersic + exponential models to the observed surface brightness profiles of galaxy images. Results from extensive tests indicate that the algorithm is accurate (Meert et al. 2013a); it does not suffer from the sky-subtraction problems which plague the simpler SDSS reductions especially in crowded fields. PyMorph sometimes fails to converge to an answer; this happens about 2% of the time, but because this fraction is independent of magnitude, it does not affect our completeness, other than by a small overall scaling. Finally, we computed k-and evolution corrections for each object following Bernardi et al. (2010), and hence, luminosities.
Dependence on assumed surface brightness profile
The magnitudes and half-light radii output by PyMorph depend on the model which is fit. E.g., fitting what is really a two-component image with a single deVaucouleurs profile will generally underestimate the total light. On the other hand, the total light associated with the best-fit single Sersic or a two-component Sersic-bulge + exponential-disk model, is less-biased from its true value (e.g. Bernardi et al. 2007;Bernardi et al. 2013). Meert et al. (2013a) and have also shown that in objects brighter than L * , fitting a two-component Sersic + exponential model to what is really just a single Sersic results in a noisier recovery of the input parameters, but these are not biased. On the other hand, fitting a single Sersic to what is truly a two-component system results in significant biases. Although the Sersic + Exponential model is more accurate, the Sersic fit is often performed on real data when it is believed that the resolution and S/N are such that it is unlikely to recover a robust two-component fit. Therefore, since either of these models are expected to be more realistic than a single deVaucouleurs model, we will use both in what follows.
The estimates of the total light from a Sersic or Sersic + exponential model are generally larger than those based on the cmodel magnitudes output by the SDSS pipelines (e.g. Bernardi et al. 2007;Hill et al. 2011;Bernardi et al. 2013; also see Mosleh, Williams & Franx 2013), and both are larger than the SDSS DR7 Petrosian magnitudes. Figure 1 illustrates that this difference can be large. Some of this is due to the difference in the treatment of the sky, and some to the differences between the fitted models. For example, the offset at the faint end between the Petrosian and the other models is almost entirely due to the fact that the SDSS DR7 pipeline tended to overestimate the contribution from the sky, thus making the SDSS Petrosian magnitudes about 0.05 mags too faint.
After we had completed our study, He et al. (2013) quantified the effects of sky-subtraction and masking problems on the SDSS DR7 Petrosian values: accounting for these makes their Petrosian magnitudes 0.05 mags brighter at the faint end, and 0.2 mags brighter at the bright end. (Our own reanalysis, based on PyMorph sky-estimates, suggests this difference is slightly smaller: about 0.1 mags at Mr ≤ −23.5.) As a result, at the bright end, He et al.'s Petrosian magnitudes are slightly brighter than our cmodel magnitudes (recall that the cmodel magnitudes include only a crude correction for the SDSS sky subtraction problems), but they are generally fainter than our Sersic or SerExp values at the bright end. Fundamentally, this can be traced to the well-known facts that (a) Petrosian magnitudes underestimate the total light when the light profile has extended wings, and (b) this is particularly an issue at the bright end (e.g. Binggeli & Cameron 1991; Blanton et al. 2001; Trujillo et al. 2001; Andreon 2002; Brown et al. 2003; Graham et al. 2005). At the bright end, this leads to an underestimate of order ∼ 0.3 mags or more (i.e., this matters more than the sky-subtraction problems), which is similar to the difference between the cmodel and SerExp magnitudes. This expected difference is consistent with He et al.'s finding that even their revised Petrosian magnitudes are systematically fainter than aperture fluxes based on deeper photometry which reaches to 1% of the sky.
Although the dependence on the assumed light profile is what has motivated our study, it is reasonable to ask if these differences are indeed larger than those associated with different pipelines which fit the same model. We address this in the next subsection.
Dependence on pipeline
As a check of our reductions, we have also used luminosities from the single-Sersic-based photometric reductions of Simard et al. (2011). Figure 1 shows that these are in good agreement with PyMorph except for a small offset (∼ 0.05 mags), although the differences become large at the bright end. See Figures A1 and A2 in Bernardi et al. (2013) and discussion on sky estimates in Meert et al. (2013a,b) for why we believe our estimates are less biased. In any case, these differences are small compared to PyMorph-cmodel.
The PyMorph and Simard et al. luminosities come from integrating the fitted profile to infinity. Other authors truncate, typically at some multiple of the half-light radius. For example, the analysis of galaxies in the GAMA survey (Galaxy And Mass Assembly survey - Kelvin et al. 2012) truncates the fits at 10Re. Using the GAMA DR1 data release (Driver et al. 2011), we compare PyMorph, Simard, and GAMA values for the 7335 galaxies for which all three reductions are available. (This sample is set by the fact that the GAMA DR1 covers 100 sq.deg. of the SDSS. GAMA has 10750 matches with the DR7 SDSS spectroscopic galaxy sample, of which 7335 galaxies are in the Bernardi et al. 2010 sample we study here.) Figure 2 compares PyMorph, Simard, and GAMA values for the single-Sersic magnitude. The bottom right panel shows that the truncation matters at the level of 0.05 mags only at Mr < −22. But otherwise, if truncated similarly, then GAMA and PyMorph are in good agreement at Mr > −22, whereas PyMorph returns significantly more light than the other two at the bright end. We believe the differences at the bright end are similar in origin (i.e. sky subtraction issues) to those with respect to Simard et al. (see Meert et al. 2013a,b;Bernardi et al. 2013).
The luminosity function
For each of the estimates of the total light shown in Figure 1, we estimated the luminosity function as in Bernardi et al. (2010) using the Vmax method of Schmidt (1968). (I.e., we weighted each galaxy using 1/Vmax(LrPet), where Vmax is the maximum comoving volume within which the object could have been included in the sample, accounting for both the bright and faint magnitude limits.) Figure 3 shows the luminosity functions for the SDSS Petrosian and cmodel magnitudes (corrected for the SDSS sky subtraction problems as described in Bernardi et al. 2010), and SerExp and Sersic magnitudes (from PyMorph). Although the difference between the Petrosian and cmodel magnitudes has been known for some time, the fact that single-Sersic based counts lie substantially above those based on the SDSS outputs has only recently begun to attract attention. For example, the GAMA based results of Hill et al. (2011) point to this difference, but because GAMA covers a substantially smaller volume than the SDSS, it does not probe the high luminosity end which is of most interest here. Our PyMorph reductions, which are in good agreement with Hill et al. at Mr > −23, show that at Mr < −23 the difference with respect to cmodel counts is dramatic indeed.
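A minimal sketch of the 1/Vmax weighting described above is given below; it is not the authors' code, and it assumes the stated cosmology and flux limits while ignoring k- and evolution corrections. The sky-coverage fraction is a placeholder value.

```python
# A minimal 1/Vmax sketch of the estimator described above (not the authors' code).
# Assumptions: flat LCDM with (0.3, 0.7) and H0 = 70, survey limits 14.5 <= m_r <= 17.7,
# a placeholder sky-coverage fraction F_SKY, and no k- or evolution corrections.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
M_BRIGHT, M_FAINT, F_SKY = 14.5, 17.7, 0.2  # F_SKY is a hypothetical value

def vmax(abs_mag):
    """Comoving volume (Mpc^3) over which an object of this absolute magnitude
    satisfies both apparent-magnitude limits."""
    z_lo = float(z_at_value(cosmo.distmod, (M_BRIGHT - abs_mag) * u.mag))
    z_hi = float(z_at_value(cosmo.distmod, (M_FAINT - abs_mag) * u.mag))
    vol = cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)
    return F_SKY * vol.to(u.Mpc ** 3).value

def luminosity_function(abs_mags, bin_edges):
    """Sum of 1/Vmax weights per magnitude bin, normalised per unit magnitude."""
    weights = np.array([1.0 / vmax(M) for M in abs_mags])
    counts, edges = np.histogram(abs_mags, bins=bin_edges, weights=weights)
    return counts / np.diff(edges)
```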
The PyMorph-based counts are in good agreement with those which use the Simard et al. (2011) single-Sersic reductions, except at luminosities brighter than Mr ∼ −24, where PyMorph tends to be brighter (c.f. Figure 1), so the PyMorph luminosity function shows more high luminosity objects. This agreement illustrates that our finding that single-Sersic fits return substantially more objects in the high luminosity tail than do cmodel magnitudes is robust to changes in the reduction pipeline. The solid curves show the result of fitting
X φ(X) = φ* β (X/X*)^α e^{−(X/X*)^β} / Γ(α/β) + φγ (X/Xγ)^γ e^{−(X/Xγ)}    (1)

with X = L to the counts. The associated luminosity density is

ρX = φ* X* Γ[(1+α)/β] / Γ[α/β] + φγ Xγ Γ[1+γ].

The first term in equation (1) is the same functional form as that used by Bernardi et al. (2010); the second is required to fit the slight bump at the faint end. The parameters which yield the best fit are given in Table 1. Note that the value of X* is not as intuitive as is its mean value X* Γ[(1+α)/β]/Γ[α/β].
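As a sketch (not the authors' fitting code), equation (1) and the associated density can be evaluated directly; with the cmodel luminosity-function parameters of Table 1 the integrated density reproduces the tabulated ρL.

```python
# Sketch (not the authors' fitting code) of equation (1) and the associated
# density rho_X, evaluated with the cmodel luminosity-function row of Table 1.
import numpy as np
from scipy.special import gamma

def x_phi(X, phi_s, X_s, alpha, beta, phi_g, X_g, gam):
    """X*phi(X) as in equation (1); X, X_s, X_g in the same (luminosity or mass) units."""
    term1 = phi_s * beta * (X / X_s) ** alpha * np.exp(-(X / X_s) ** beta) / gamma(alpha / beta)
    term2 = phi_g * (X / X_g) ** gam * np.exp(-(X / X_g))
    return term1 + term2

def rho_x(phi_s, X_s, alpha, beta, phi_g, X_g, gam):
    """Integrated density rho_X implied by equation (1)."""
    return (phi_s * X_s * gamma((1 + alpha) / beta) / gamma(alpha / beta)
            + phi_g * X_g * gamma(1 + gam))

# cmodel row of Table 1: phi's in Mpc^-3, X's in units of 10^9 Lsun
pars = dict(phi_s=0.928e-2, X_s=0.3077, alpha=1.918, beta=0.433,
            phi_g=0.964e-2, X_g=1.8763, gam=0.470)
print(rho_x(**pars))  # ~0.136, i.e. rho_L ~ 0.136 x 10^9 Lsun Mpc^-3, as in Table 1
```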
The observed distributions shown here have been broadened slightly by measurement errors. Bernardi et al. (2010) showed how to modify the analog of equation (1) so as to estimate the parameters of the intrinsic distribution, but, in practice, the difference between the intrinsic and observed broadened distributions is small, much smaller than the difference between the PyMorph and cmodel counts, so here we show the results for the observed distribution, not the intrinsic one.

The stellar mass function

Figure 4 shows the associated stellar mass functions. In all cases, M* was estimated from the luminosity and the cmodel g − r color assuming the Chabrier (2003) IMF as described in Bernardi et al. (2010). We use the cmodel color because the main goal of this paper is to study the effect on φ(M*) from changes in L. By using cmodel colors, we are ensuring that our M*/L estimates for each object are the same as in Bernardi et al. (2010); however, the L estimate for each object differs (Petrosian ≠ cmodel ≠ PyMorph). Notice again that the PyMorph-based estimates (as well as those from Simard et al.) lie well above the Petrosian and cmodel ones, although some of the difference, especially with respect to Petrosian, is due to sky-subtraction issues. (Of course, if the stellar population models used to estimate M*/L are incorrect, or if the IMF is mass-dependent, then this will modify the results. See Section 3 for comparison with other work.) The estimate from Baldry et al. (2012) lies below all the others. This is remarkable because it is based on the GAMA-Sersic reductions, and we have already seen that the associated φ(L) (from Hill et al. 2011) is in good agreement with that based on PyMorph. Therefore, the difference in φ(M*) must be entirely due to M*/L, even though Baldry et al. also assume a Chabrier IMF. We discuss this more in Section 3.
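For concreteness, a colour-based M*/L estimate has the shape sketched below; the coefficients are illustrative placeholders and are not the Bernardi et al. (2010) calibration.

```python
# Sketch only: stellar mass from the r-band luminosity and cmodel g-r colour via a
# colour-based M*/L relation. The coefficients a and b are illustrative placeholders,
# not the Bernardi et al. (2010) calibration; a Chabrier IMF is assumed to be folded in.
def log_stellar_mass(log_L_r, g_minus_r, a=-0.3, b=1.1):
    log_ml = a + b * g_minus_r        # log10(M*/L_r) as a linear function of colour
    return log_L_r + log_ml           # log10(M*) with L_r in solar units

print(log_stellar_mass(log_L_r=10.8, g_minus_r=0.75))  # ~11.3 with these placeholder values
```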
We think it is interesting to present our results in a format which highlights just how much the PyMorph-based values differ from other work (we use Bell et al. 2003 for comparison). Figure 5 shows cumulative (rather than differential) counts, both for number and stellar mass-weighted density. The number counts at the mass scale above which the number density of objects is 10 −6 Mpc −3 is larger by a factor of ∼ 2 compared to the cmodel-based counts (a factor of ∼ 3 compared to Bell et al.). Alternatively, at M* = 6 × 10 11 M⊙, the PyMorph counts lie a factor of ∼ 8 above those based on cmodel magnitudes (a much larger factor above Bell et al.).
For the mass-weighted counts the corresponding discrepancies at 10 6 M⊙Mpc −3 or 6 × 10 11 M⊙ are similar or slightly larger.
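A simple sketch of how the cumulative number and mass-weighted densities of Figure 5 follow from binned differential counts is given below; the input arrays are hypothetical.

```python
# Sketch: cumulative number and stellar-mass densities (as plotted in Figure 5)
# from binned dn/dlogM values; log_m (bin centres, assumed uniform) and phi
# (Mpc^-3 dex^-1) are hypothetical input arrays.
import numpy as np

def cumulative_densities(log_m, phi):
    dlogm = np.diff(log_m).mean()                   # assumes uniform bins
    order = np.argsort(log_m)[::-1]                 # from most to least massive
    n_gt = np.cumsum(phi[order] * dlogm)            # number density above each mass
    rho_gt = np.cumsum(phi[order] * 10.0 ** log_m[order] * dlogm)  # mass density above
    return log_m[order], n_gt, rho_gt
```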
To make our results simple to use, in addition to Table 1, which reports the parameter values associated with the best-fits, we have made the binned counts available in the electronic versions of Tables 2 and 3.
COMPARISON WITH PREVIOUS WORK
Recently, it has become fashionable to concentrate more on φ(M*) than φ(L). Unfortunately, this combines two very different types of uncertainty: that associated with the total light, and the other associated with the stellar mass-to-light ratio M*/L (e.g. mismatched stellar templates, mass-dependence of the IMF, etc.). To illustrate this, we compare our results with two of the most recent determinations of φ(M*): those of Baldry et al. (2012) and Moustakas et al. (2013). Although we all assume a Chabrier IMF, their estimates of φ(M*) are more similar to one another than they are to ours. However, as we argue below, this implies large differences in their M*/L values since they used different luminosities to get M*.
The Baldry et al. analysis is based on single Sersic fits to the light profiles from galaxies with z < 0.06 in the GAMA survey reported by Kelvin et al. (2012). Of the 7335 GAMA matches in the Bernardi et al. (2010) sample which we are studying here, only 1612 have z < 0.06. The red triangles in Figure 2 show that, for these 1612 galaxies, the single Sersic PyMorph estimates of the total light are in good agreement with those derived by Kelvin et al., and used by Baldry et al. However, the redshift cut eliminates most of the high luminosity objects which are of most interest to our paper.
We have checked explicitly and found that the GAMA-based luminosity function for these z < 0.06 objects is actually in good agreement with that based on PyMorph single Sersic reductions when restricted to z < 0.06, and this is in good agreement with that for the full GAMA sample of Hill et al. (2011) shown in Figure 3. (Of course, the smaller volume means the error bars are large, and the comparison is effectively limited to abundances greater than about 10 −4 Mpc −3 dex −1.) However, our PyMorph-based φ(M*) estimate lies well above that of Baldry et al., so we conclude that our M*/L values must be larger than theirs. The Baldry et al. estimate is much closer to, though slightly below, the estimate of Bell et al. (2003, scaled to a Chabrier IMF). However, since the Baldry et al. Sersic-based φ(L) estimate agrees with ours, and this last lies well above the φ(L) associated with the SDSS Petrosian magnitudes which were used by Bell et al. (see our Figure 3), we conclude that the Baldry et al. M*/L values must also be smaller than those of Bell et al.

Moustakas et al. use cmodel magnitudes. Since their sample is essentially the same SDSS sample as ours, we expect their luminosity function to agree with that of Bernardi et al. (2010). However, Figure 4 shows that although their stellar mass function is similar to our cmodel-derived φ(M*) at low masses, it is different at the high end, indicating that the Moustakas et al. M*/L ratios are smaller than ours. (We suspect that this must be related to the choice of template used to estimate M*/L at higher masses, for reasons given in Fig. 22 and associated discussion of Bernardi et al. 2010.) On the other hand, despite differences at lower masses, the Moustakas et al. φ(M*) is reasonably well approximated by the Bell et al. estimate at higher masses. Since the cmodel luminosity function is different from the Petrosian one at these high masses, we conclude that the Moustakas et al. M*/L values must be smaller than those of Bell et al., and different again from those of Baldry et al. (who used Sersic rather than cmodel magnitudes). Therefore,
• by comparing the PyMorph Sersic-based φ(M*) with that based on Simard et al., our Figure 4 quantifies how differences due to the uncertainties in a given light profile fit (mainly due to sky subtraction issues, see Meert et al. 2013b) affect φ(M*);
• by comparing our SerExp and Sersic-based φ(M * ) fits, our Figure 4 quantifies the effect of fitting different light profile models;
• by comparing our Petrosian based φ(M*) to the Bell et al. fit, our cmodel-based estimate to that from Moustakas et al., or our Sersic (or Simard et al.) based φ(M*) to Baldry et al., our Figure 4 quantifies how systematic differences in M*/L affect φ(M*).

This shows that the effects on φ(M*) of using the total luminosity computed from different fits to the light profile are dramatic; it is important to specify how the light profile was fit when reporting a luminosity or stellar mass function.
DEPENDENCE ON MORPHOLOGY
We have combined our PyMorph SerExp reductions with the Bayesian Automated morphological Classifier of Huertas-Company et al. (2011) to determine how the luminosity and stellar mass functions depend on morphology. This algorithm returns the probability that an object is one of four types (for each object, the sum of the four probabilities is unity). Therefore, to estimate the luminosity function, we simply weighted object j by pj(type)/Vmax(L Pet j ). In practice, there are a number of faint objects for which pj(E) ≤ 0.15. These can dominate over the counts from similarly faint objects for which pj(E) > 0.85. If these low values of p are simply the result of errors in the BAC algorithm, then these will wrongly boost the luminosity function at the faint end. To check the magnitude of the effect, we have set to zero all values of p ≤ 0.15, and reassigned the weight to the types which remain with contribution proportional to the nonzero remaining values of p such that the sum over the four p values (some of which are now zero) is still unity. This reduces the counts of faint ellipticals and luminous spirals by a factor of about two: it is these modified counts which we show in Figure 6.
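A sketch of this reweighting scheme is given below; it is not the authors' code, but it follows the procedure described above of clipping low probabilities and renormalising before weighting by 1/Vmax.

```python
# Sketch of the weighting scheme described above (not the authors' code): BAC
# probabilities p <= 0.15 are zeroed, the rest renormalised to sum to one, and each
# object then contributes p(type)/Vmax to the counts of that type.
import numpy as np

def clipped_probabilities(p, threshold=0.15):
    """p: array of shape (n_galaxies, 4) with probabilities for E, S0, Sab, Scd."""
    p = np.where(p <= threshold, 0.0, p)
    norm = p.sum(axis=1, keepdims=True)
    norm[norm == 0.0] = 1.0            # guard: leave all-clipped rows at zero weight
    return p / norm

def type_luminosity_function(abs_mag, vmax, p_types, bin_edges, itype):
    w = clipped_probabilities(p_types)[:, itype] / vmax
    counts, edges = np.histogram(abs_mag, bins=bin_edges, weights=w)
    return counts / np.diff(edges)
```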
The binned counts are given in Tables 2 and 3, which are provided in convenient electronic format in the online version of the journal. (Note that summing up the luminosity functions of the different morphological types does not quite give the luminosity function of the full sample. The small discrepancy arises because we were unable to find BAC classifications for a small fraction of the objects in our sample.) Figure 6 confirms the well-known trend for early-types (E and S0) to dominate the high-mass end, and later-types (Sa to Sd) to dominate at lower masses.
We have compared these estimates with those of Nair & Abraham (2010), who used the T-Type classification using the modified RC3 (Third Reference Catalogue of Bright Galaxies) classifiers. We set E (T= −5 and T= −4), S0 (T= −3, T= −2 and T= −1), Sa (T= 0, T= 1 and T= 2), Sb (T= 3 and T= 4), and Scd (T= 5, T= 6 and T= 7). Our E and S0 counts are in quite good agreement, though, at the bright end, they tend to classify objects as E rather than S0 when BAC divides its weights between E and S0. Differences are slightly more pronounced for later types: our Scd counts are in good agreement at the faint end, but lie substantially above theirs at high luminosities (where the counts are falling exponentially). On the other hand, our Sab counts lie about a factor of two below theirs at high L, suggesting that BAC assigns some weight to the extreme Scd classification when Nair et al. choose the extremes.
DISCUSSION
PyMorph Sersic or Sersic+exponential based estimates of the total light of a galaxy are larger than those based on SDSS pipeline outputs (Petrosian or cmodel; Figure 1). As a result our PyMorph-based luminosity and stellar mass functions are rather different from previous work: they have more light and mass at the bright, massive end (Figures 3 and 4).
Petrosian magnitudes have been popular in recent years. However, our SDSS Petrosian-based luminosity functions, which are similar to those in the literature, lie well below any of the others. This is consistent with a number of recent studies which agree that they, and cmodel magnitudes, underestimate the total light (Simard et al. 2011; Hill et al. 2011; Kelvin et al. 2012; Bernardi et al. 2013; Meert et al. 2013b; Hall et al. 2012). Some of this is due to sky-subtraction problems which affect the SDSS pipelines (Bernardi et al. 2010) more than PyMorph (Meert et al. 2013b). While Petrosian magnitudes with better sky-subtraction are brighter (e.g. He et al. 2013, which appeared after the initial submission of our work), thus reducing the difference with respect to our PyMorph magnitudes, our results suggest that they should only be used after applying the sorts of corrections advocated by Graham et al. (2005) which partially account for the fact that Petrosian magnitudes underestimate the total light when the light profile has extended wings (e.g. Blanton et al. 2001; Trujillo et al. 2001; Andreon 2002). For the brightest galaxies, even such corrected Petrosian magnitudes are fainter than our PyMorph magnitudes. In addition, Petrosian-derived quantities are not seeing corrected; this impacts studies of scaling relations (e.g. see Appendix A in Hyde & Bernardi 2009) more than our current study of the mass function. For these reasons, we believe that fits to Sersic or SerExp profiles are to be preferred, especially in higher redshift datasets where seeing is an issue.
Our Sersic-based luminosity functions, from both PyMorph and Simard et al. (2011), are very different from the Sersic-based analysis in Blanton et al. (2003). However, the Sersic parameters used by Blanton et al. were estimated from a 1-dimensional radial surface brightness profile, measured in ∼ 5 − 10 azimuthally averaged annuli. This procedure is expected to be significantly less accurate than the 2-dimensional fits to the whole galaxy image performed by PyMorph and Simard et al. In addition, these more recent analyses include a more careful treatment of the background sky, especially in crowded fields, so we believe that these more recent Sersic reductions (as well as those of Kelvin et al. 2012 for the subset of objects in the GAMA survey) supersede those of Blanton et al. Our fits indicate that the luminosity density at z ∼ 0.1 is about 10% larger than our previous work with cmodel magnitudes (Table 1), and about a factor of two larger than when based on Petrosian SDSS DR7 magnitudes (Figure 5). This difference is driven by the most luminous objects which are predominantly quiescent early-type galaxies (Figure 6). Since a number of authors now agree that SDSS pipeline magnitudes underestimate the true luminosity, and these more recent algorithms are in reasonably good agreement with the PyMorph reductions used by us (Figure 2), we conclude that there is now good agreement that the bright end of the luminosity function may be substantially brighter than the SDSS DR7 Petrosian magnitudes suggest.
As one might expect given our analysis of the luminosity function, our (Chabrier IMF-based) stellar mass densities at z ∼ 0.1 are about 25% larger than our previous work (Bernardi et al. 2010) with cmodel magnitudes (Table 1), which was itself considerably larger than when based on the SDSS DR7 Petrosian magnitudes which are often used for this purpose ( Figure 5). As a result, our estimates have implications for studies of the evolution of the star formation rate, the growth of stellar mass in galaxies, the processes by which this mass was assembled, and Halo Model analyses of the M * − M halo relation. For example:
• Our higher stellar mass density at z ∼ 0 resolves the tension with respect to the total mass density inferred from the integrated star formation rate (SFR), as noted in Bernardi et al. (2010).
• A higher number density of massive galaxies in the local Universe allows for a higher incidence of major (in addition to minor) mergers in driving the stellar mass growth of the most massive central galaxies at late times (e.g., Bernardi et al. 2011a,b;Shankar et al. 2013). This conclusion rests, of course, on the quality of the determination of the high redshift stellar mass function, a task which we expect to be even more challenging than in the local Universe. In this respect, it is interesting that the z ∼ 1 counts at M * ≥ 10 11.5 M⊙ of Carollo et al. (2013) are in rather good agreement with our z ∼ 0 (Sersic-based) estimate, strongly limiting merger rates at the high mass end.
• A higher number density at high masses would better match the stellar and dynamical mass functions, possibly reducing the need for a strong mass-dependent variation of the IMF (see, e.g., Fig. 23 in Bernardi et al. 2010), although the true extent of the latter statement relies on accurate dynamical mass measurements with appropriate effective radii and structure constants. We plan to address this separately.
• The stellar mass function in the local Universe is one of the fundamental ingredients in popular semi-empirical models for populating haloes with galaxies, such as the halo occupation and abundance matching techniques (e.g., Cooray & Sheth 2002;Berlind & Weinberg 2002;Vale & Ostriker 2004;Shankar et al. 2006;Zehavi et al. 2011;Leauthaud et al. 2012;Moster et al. 2013). If the more massive galaxies are more abundant, then Halo Model analyses will assign them to lower mass halos. Since lower mass halos are less strongly clustered, we expect the most massive galaxies to be less strongly clustered than current models assume.
• This would also imply that the median baryon fraction at the high mass end may be significantly higher than previously thought. This may pose serious questions about the impact of feedback from active galactic nuclei, in the quasar and/or radio modes (e.g., Granato et al. 2004;Croton et al. 2006;Silk & Mamon 2012); even more so, if one considers that the baryon fraction of ellipticals at their formation epoch (z > 1) must have been even higher.
• More stellar mass at the high mass end directly impacts studies of how the stellar fraction compares with that in the gas detected by X-ray and Sunyaev-Zeldovich experiments. E.g., by decreasing the halo mass one should associate with a given stellar mass, it potentially reduces the discrepancy shown in Figure 9 of Ade et al. (2013).
• These effects on the stellar to halo mass mapping go in the same direction as those from plausible changes to the IMF at the massive end (Bernardi et al. 2010 and references therein), so the data may be indicating that a major revision of the results on the galaxy-halo mapping at the high mass end is called for.
Since a number of groups are currently engaged in such studies, we have provided the luminosity and stellar mass functions shown in Figures 3, 4 and 6 in a convenient electronic tabular format in the online version of the journal (see Tables 2 and 3).
We caution that our estimates of the amount of stellar mass in the most massive objects are at least 2× larger than other recent determinations (e.g. Baldry et al. 2012 andMoustakas et al. 2013), even though we argued that our estimates of the luminosity function are in much better agreement than this φ(M * )-based number suggests. Section 3 shows that their M * values are similar only because they assume very different L and M * /L values from one another. Therefore, until the field converges on what it believes to be reliable M * /L estimates for the highest mass objects (see Mitchell et al. 2013 for ongoing discussion of this point), we believe our results argue against calibrating models to published stellar mass functions: calibrating to the luminosity function is more robust, as it does not combine differences in L and M * /L into a single number M * .
When using our results, it is important to bear in mind that the PyMorph estimates assume that the galaxy is either a single Sersic, or a two-component Sersic+Exponential system. These both allow for substantial light beyond the core of the image, so there is some question as to just what it is that the profiles are fitting. The most massive objects tend to be the brightest cluster galaxies (BCGs), so some of the excess light returned by Sersic and/or SerExp fits may contain intracluster light (ICL). If so, then our estimates of the total stellar mass are appropriate for models which associate the ICL with the central galaxy. This may even be physically reasonable, since most of the accretion and stripping which occurred as the cluster assembled likely happened during accretion onto what is now the central object.
Figure 2. Comparison of single Sersic reductions for the SDSS galaxies in common to Simard et al. (2011), Kelvin et al. (2012) and PyMorph (our notation M Ser−10Re reflects the fact that Kelvin et al. truncate the profile at 10Re). Red triangles show a similar analysis if one restricts to objects with z < 0.06 as done in Baldry et al. (2012); this shallower volume does not probe the highest luminosities that are of most interest here. Dotted lines show the 16th and 84th percentiles of the distribution.
Figure 3. SDSS Main galaxy luminosity function based on Petrosian, cmodel, single Sersic from Simard et al. (2011) and PyMorph SerExp and Sersic magnitudes (bottom to top at Mr = −24). Smooth curves show the result of fitting equation (1) to the counts; associated best-fit parameter values are given in Table 1. For the Petrosian and cmodel magnitudes, the curve shown is that reported by Bernardi et al. (2010) on the basis of fitting to Mr < −20. The Petrosian and Sersic based fits of Bell et al. (2003) and Hill et al. (2011), respectively, are also shown for comparison.
Figure 4. Same as previous figure, but now for the associated stellar mass functions. Recent stellar mass functions from Baldry et al. (2012; based on Sersic magnitudes) and Moustakas et al. (2013; based on cmodel magnitudes) are also shown. All stellar masses assume a Chabrier IMF.
Figure 5. Similar to previous figure, but now showing cumulative rather than differential counts. Top and bottom panels show number and stellar mass density respectively. To facilitate comparison with previous work we show the fit of Bell et al. (2003).
Figure 6. Morphological dependence of the luminosity (top) and stellar mass (bottom) functions. In all cases, the luminosities are based on Sersic magnitudes, stellar masses assume a Chabrier IMF, and morphologies are based on the BAC method of Huertas-Company et al. (2011).
Table 1. Parameters of φ(Lr) (top rows) and φ(M*) (bottom rows) derived from fitting equation (1) to the observed counts based on different magnitudes.

Fit               φ*                L*           α       β       φγ                Lγ           γ       ρL
                  [10^-2 Mpc^-3]    [10^9 L⊙]                    [10^-2 Mpc^-3]    [10^9 L⊙]            [10^9 L⊙ Mpc^-3]
cmodel            0.928             0.3077       1.918   0.433   0.964             1.8763       0.470   0.136
Sersic            1.343             0.0187       1.678   0.300   0.843             0.8722       1.058   0.150
SerExp            1.348             0.3223       1.297   0.398   0.820             0.9081       1.131   0.146
Sersic (Simard)   1.920             6.2456       0.497   0.589   0.530             0.8263       1.260   0.152

Fit               φ*                M*           α       β       φγ                Mγ           γ       ρM*
                  [10^-2 Mpc^-3]    [10^9 M⊙]                    [10^-2 Mpc^-3]    [10^9 M⊙]            [10^9 M⊙ Mpc^-3]
cmodel            0.766             0.4103       1.764   0.384   0.557             4.7802       0.053   0.276
Sersic            1.040             0.0094       1.665   0.255   0.675             2.7031       0.296   0.344
SerExp            0.892             0.0014       2.330   0.239   0.738             3.2324       0.305   0.330
Sersic (Simard)   0.820             0.0847       1.755   0.310   0.539             5.2204       0.072   0.349
Table 2. The binned φ(Mr) counts for the full sample and when weighted by the probability of a given morphological type. X = Mr [mag] and Y = Log10 φ(Mr) [Mpc^-3 dex^-1]. Four electronic tables are provided in this format based on the type of magnitude: PyMorph Sersic (LF-Ser.dat), PyMorph SerExp (LF-SerExp.dat), Simard Sersic (LF-Ser-Simard.dat) and cmodel from Bernardi et al. (2010) (LF-cmodel.dat).

X          Y (All)         Y wP(Ell)       Y wP(S0)        Y wP(Sab)       Y wP(Scd)
-17.700    -2.350±0.065    -4.030±0.065    -3.209±0.065    -2.708±0.065    -2.706±0.065
Table 3. The stellar mass function φ(M*) of the full sample, and when weighted by the probability of being a given morphological type. X = Log10 M* [M⊙] and Y = Log10[(ln 10) M* φ(M*)] [Mpc^-3 dex^-1]. Four electronic tables are provided in this format based on the type of magnitude: PyMorph Sersic (MsF-Ser.dat), PyMorph SerExp (MsF-SerExp.dat), Simard Sersic (MsF-Ser-Simard.dat) and cmodel from Bernardi et al. (2010) (MsF-cmodel.dat).

X        Y (All)         Y wP(Ell)       Y wP(S0)        Y wP(Sab)       Y wP(Scd)
9.050    -2.012±0.053    -3.884±0.053    -2.933±0.053    -2.376±0.053    -2.339±0.053
ACKNOWLEDGEMENTS

We are grateful to L. Simard for sharing information about sky levels, S. Andreon, S. Courteau and A. Graham for helpful comments about previous work, and A. Kravtsov for urging us to complete this work which is supported in part by ADP/NNX09AD02G and NSF-AST 0908241.
REFERENCES

Abazajian et al., 2009, ApJS, 182, 543
Ade P. A. R. et al. (Planck collaboration), 2013, A&A, submitted (arXiv:1212.4131)
Andreon S., 2002, A&A, 382, 495
Baldry I. K. et al., 2012, MNRAS, 421, 621
Bell E. F., McIntosh D. H., Katz N., Weinberg M. D., 2003, ApJS, 149, 289
Berlind A. A., Weinberg D. H., 2002, ApJ, 575, 587
Bernardi M., Hyde J. B., Sheth R. K., Miller C. J., Nichol R. C., 2007, AJ, 133, 1741
Bernardi M., Shankar F., Hyde J. B., Mei S., Marulli F., Sheth R. K., 2010, MNRAS, 404, 2087
Bernardi M., Roche N., Shankar F., Sheth R. K., 2011a, MNRAS, 412, L6
Bernardi M., Roche N., Shankar F., Sheth R. K., 2011b, MNRAS, 412, 684
Bernardi M., Meert A., Vikram V., Huertas-Company M., Mei S., Shankar F., Sheth R. K., 2013, MNRAS, submitted (arXiv:1211.6122)
Binggeli B., Cameron L. M., 1991, A&A, 252, 27
Blanton M. et al., 2001, AJ, 121, 2358
Blanton M. R. et al., 2003, ApJ, 594, 186
Brown et al., 2003, MNRAS, 341, 747
Carollo C. M. et al., 2013, ApJ, submitted (arXiv:1302.5115)
Chabrier G., 2003, ApJL, 586, 133
Cooray A., Sheth R. K., 2002, Phys. Rep., 372, 1
Croton D. J., Springel V., White S. D. M. et al., 2006, MNRAS, 365, 11
Driver S. P. et al., 2011, MNRAS, 413, 971
Graham A. et al., 2005, AJ, 130, 1535
Granato G. L., De Zotti G., Silva L., Bressan A., Danese L., 2004, ApJ, 600, 580
Hall M., Courteau S., Dutton A. A., McDonald M., Zhu Y., 2012, MNRAS, 425, 2741
He Y. Q., Xia X. Y., Hao C. N., Jing Y. P., Mao S., Li C., 2013, ApJ, 773, 37
Hilz M., Naab T., Ostriker J. P., Thomas J., Burkert A., Jesseit R., 2012, MNRAS, 425, 3119
Huertas-Company M., Aguerri J. A. L., Bernardi M., Mei S., Sánchez Almeida J., 2011, A&A, 525, 157
Hyde J. B., Bernardi M., 2009, MNRAS, 394, 1978
Kelvin L. S. et al., 2012, MNRAS, 421, 1007
Leauthaud A., George M. R., Behroozi P. S., Bundy K., Tinker J., Wechsler R. H., Conroy C., Finoguenov A., Tanaka M., 2012, ApJ, 746, 95
Meert A., Vikram V., Bernardi M., 2013a, MNRAS, 433, 1344
Meert A., Vikram V., Bernardi M., 2013b, MNRAS, submitted
Mitchell P. D., Lacey C. G., Baugh C. M., Cole S., 2013, MNRAS, submitted (arXiv:1303.7228)
Mosleh M., Williams R. J., Franx M., 2013, ApJ, submitted (arXiv:1302.6240)
Moster B. P., Naab T., White S. D. M., 2013, MNRAS, 428, 3121
Moustakas J. et al., 2013, ApJ, submitted (arXiv:1301.1688)
Nair P., Abraham R. G., 2010, ApJS, 186, 427
Paranjape A., Sheth R. K., 2012, MNRAS, 423, 1845
Peng C. Y., Ho L. C., Impey C. D., Rix H., 2002, AJ, 124, 266
Shankar F., Lapi A., Salucci P., De Zotti G., Danese L., 2006, ApJ, 643, 14
Shankar F., Marulli F., Bernardi M., Mei S., Meert A., Vikram V., 2013, MNRAS, 428, 109
Schmidt M., 1968, ApJ, 151, 393
Silk J., Mamon G. A., 2012, Research in Astronomy and Astrophysics, 12, 917
Simard L., Mendel J. T., Patton D. R., Ellison S. L., McConnachie A. W., 2011, ApJS, 196, 11
Trujillo I. et al., 2001, MNRAS, 326, 869
Vikram V., Wadadekar Y., Kembhavi A. K., Vijayagovindan G. V., 2010, MNRAS, 409, 1379
Vale A., Ostriker J. P., 2004, MNRAS, 353, 189
Zehavi I. et al., 2011, ApJ, 736, 59